WITH OVER 60% OF OLDER ADULTS OWNING SMARTPHONES, NOW IS THE BEST TIME TO MAKE THE PHONE A DOCTOR’S VIRTUAL OFFICE

According to Dr. Andrew Gostine, CEO of the sensor network startup Artisight, “it’s very hard to put devices into the patient’s home or in the hospital, but everyone is carrying a cell phone with a network connection.” The percentage of U.S. adults who own a smartphone has risen steadily over the past decade, with the proportion of those 65 and older now at over 60% (up from 13% a decade ago). People’s acceptance of remote medical assistance has also been boosted by the COVID-19 pandemic.
Selfie-taking smartphones and Twitter-composing tablets are being repurposed and marketed to provide easy, on-the-go access to vital health data. Placing a fingertip over a smartphone’s camera lens can reveal your heart rate. Keeping the microphone next to your bed can help detect sleep apnea. Even the speaker can monitor breathing, using sonar technology.
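
To make the fingertip trick concrete, here is a minimal sketch of the signal processing behind camera-based pulse measurement: with a finger pressed over the lens, the average brightness of each video frame rises and falls slightly with every heartbeat, and counting those peaks gives the rate. This is an illustration rather than any vendor’s implementation; the function name and the assumption that frames arrive as a NumPy array are ours.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(frames: np.ndarray, fps: float) -> float:
    """Estimate beats per minute from fingertip-over-camera video.

    frames: array of shape (n_frames, height, width, 3), RGB values 0-255.
    fps:    frames per second of the recording.
    """
    # Mean red-channel brightness per frame; blood absorbs more light with each pulse.
    signal = frames[..., 0].mean(axis=(1, 2))

    # Band-pass filter to the plausible heart-rate range (0.7-3.5 Hz, i.e. 42-210 bpm).
    low, high = 0.7, 3.5
    b, a = butter(3, [low / (fps / 2), high / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, signal - signal.mean())

    # Count peaks, enforcing a minimum spacing of ~0.3 s between beats.
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fps))
    duration_s = len(signal) / fps
    return 60.0 * len(peaks) / duration_s
```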

In the idealized future of medicine, the data is transmitted wirelessly to a healthcare provider, sparing the patient unnecessary discomfort or, in some situations, assisting a clinician without the need for expensive technology. Experts agree, though, that using smartphones as diagnostic tools is still very new, and even where doctors and patients have had some success using phones for medical purposes, the full potential of the field is still unknown. Smartphones contain numerous sensors that can track physiological parameters in real time. Among the many emerging uses for these devices are concussion assessments, atrial fibrillation monitoring, and mental health wellness checks.


Researchers and businesses are using smartphone cameras, light sensors, microphones, accelerometers (which detect body motion), gyroscopes, and speakers to develop new medical applications for the devices. Once the data is collected, the apps use AI to evaluate it and present the results in a form patients and doctors can easily understand. According to a survey of digital health trends by the IQVIA Institute for Human Data Science, more than 350,000 digital health products are now available in app stores, a sign of both the profit potential and the marketability of such tools.


Medical device approval from the FDA has been sought for several of these products. That approval increases the likelihood that health insurance providers will cover the full cost if patients are required to pay for access. Some items, including Band-Aids, are exempt from this regulatory process because of their low risk to patients. The FDA’s policies for medical devices that use artificial intelligence or machine learning are still being updated to reflect the dynamic nature of software.

Accuracy and clinical validation are key to winning over doctors and other medical professionals. Nauman Jaffar, CEO and founder of SenSights.AI, remarked that many of these technologies still need to be fine-tuned. His team is currently experimenting with a method for monitoring a patient’s blood pressure, pulse rate, and oxygen saturation without ever touching the patient’s skin, using footage of the patient’s face captured by any smartphone camera.

Evaluating these new technologies is hard because they don’t use traditional medical equipment to collect data; instead, they rely on algorithms built with machine learning and artificial intelligence. As a result, Nauman argues, scientists cannot “compare apples to apples” using the criteria established by the medical industry. And if a doctor still has to check every result, skipping these safeguards undermines the technology’s ultimate goals of lowering costs and increasing access.

When asked about the rising cost of healthcare, he stated, “False positives and false negatives lead to additional testing.”

Tech giants like Google have poured resources into developing and studying such tools, with the intention of selling them to healthcare professionals, home health aides, and consumers. In the current version of the Google Fit app, users can either place a finger on the back camera lens to get their heart rate or use the front camera to monitor their breathing rate.

According to Shwetak Patel, director of health technologies at Google and professor of electrical and computer engineering at the University of Washington, “if you removed the sensor from the phone and from a clinical device, they are probably the same thing.”

Google uses computer vision, an area of artificial intelligence that focuses on processing data from visual sources like photos and videos. So, instead of using a blood pressure cuff, for example, the algorithm may be able to pick up on small changes in the way a patient’s body looks that act as biosignals for their blood pressure, Patel said.
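
As a rough illustration of how computer vision turns ordinary video into a biosignal (this is not Google’s pipeline, just a sketch using OpenCV’s stock face detector), the snippet below tracks a patient’s forehead and records its average green-channel value per frame. That slowly varying trace is the raw remote-photoplethysmography signal that downstream models can map to heart rate or, experimentally, to blood pressure estimates.

```python
import cv2
import numpy as np

# Stock OpenCV face detector; any detector that returns a face box would do.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def pulse_signal_from_video(path: str) -> np.ndarray:
    """Return a per-frame green-channel trace from the forehead region.

    The trace is the raw contactless biosignal; separate models map it
    (and features derived from it) to heart rate or blood pressure estimates.
    """
    cap = cv2.VideoCapture(path)
    trace = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        # The upper third of the face box roughly covers the forehead.
        roi = frame[y : y + h // 3, x : x + w]
        trace.append(roi[..., 1].mean())   # green channel (BGR index 1)
    cap.release()
    return np.asarray(trace)
```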


According to data the company shared last year, Google is also exploring whether the camera can help preserve eyesight by screening for diabetic eye disease, and whether the microphone can detect heartbeats and murmurs.

In a recent deal, the tech giant acquired Sound Life Sciences, a Seattle-based business whose sonar technology app is approved by the Food and Drug Administration. The app uses a smart device’s speaker to bounce inaudible pulses off a patient’s body in order to detect movement and track breathing.
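
The general principle behind such sonar apps can be sketched as follows; this is a simplified illustration, not Sound Life Sciences’ method. The speaker plays a steady near-ultrasonic tone, the microphone records the room, and the energy reflected back at that frequency rises and falls slowly as the chest moves, tracing out a breathing waveform.

```python
import numpy as np

FS = 48_000        # sample rate (Hz)
TONE_HZ = 18_000   # near-ultrasonic carrier, inaudible to most adults

def breathing_trace(recording: np.ndarray, window_s: float = 0.1) -> np.ndarray:
    """Demodulate the echo of an 18 kHz tone into a slow breathing waveform.

    recording: mono microphone samples captured while the speaker plays the tone.
    Returns the reflected-energy envelope, sampled every `window_s` seconds;
    chest motion modulates this envelope at the breathing rate (~0.1-0.5 Hz).
    """
    n = int(window_s * FS)
    t = np.arange(n) / FS
    # Quadrature reference signals at the carrier frequency.
    ref_i = np.cos(2 * np.pi * TONE_HZ * t)
    ref_q = np.sin(2 * np.pi * TONE_HZ * t)

    envelope = []
    for start in range(0, len(recording) - n, n):
        chunk = recording[start : start + n]
        i, q = np.dot(chunk, ref_i), np.dot(chunk, ref_q)
        envelope.append(np.hypot(i, q))   # energy at 18 kHz in this window
    return np.asarray(envelope)
```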


Our Canadian-based startup, Veyetals.com, has developed software that uses smartphone cameras to measure physiological parameters via a selfie or a scan of the finger or palm. The program examines the thin skin around the eyes and analyzes the light reflected from blood vessels back to the lens.


With the microphone, Canary Speech employs the same underlying technology as Amazon’s Alexa to analyze patients’ voices for mental health issues. The CEO of the company, Henry O’Connell, said that the program could work with telemedicine consultations and let clinicians use a library of vocal biomarkers and predictive analytics to check for anxiety and depression.
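
For a sense of what “vocal biomarkers” look like in practice, the sketch below computes a few generic acoustic features (pitch, pitch variability, pauses, loudness) from a speech recording using the open-source librosa library. It is purely illustrative and unrelated to Canary Speech’s proprietary models, which would feed features like these into trained classifiers.

```python
import librosa
import numpy as np

def voice_features(wav_path: str) -> dict:
    """Compute a few generic acoustic features of the kind vocal-biomarker
    systems feed into downstream classifiers (the classifier itself is the
    proprietary part and is not shown here)."""
    y, sr = librosa.load(wav_path, sr=16_000)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)    # pitch track per frame
    rms = librosa.feature.rms(y=y)[0]                # loudness per frame
    voiced = rms > 0.02                              # crude speech/pause mask
    return {
        "pitch_mean_hz": float(np.nanmean(f0)),
        "pitch_variability": float(np.nanstd(f0)),
        "pause_fraction": float(1.0 - voiced.mean()),
        "loudness_mean": float(rms.mean()),
    }
```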


Australia-based ResApp Health obtained FDA clearance last year for its iPhone software that tests for moderate to severe obstructive sleep apnea by listening to breathing and snoring. SleepCheckRx, which will require a prescription, is minimally intrusive compared with the sleep studies now used to diagnose sleep apnea, which can cost thousands of dollars and necessitate a slew of tests.
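
Purely as an illustration, and not ResApp’s algorithm, the sketch below shows one crude way an overnight audio recording can be screened: measure the sound energy in short frames and count long silent gaps between breathing or snoring sounds, since pauses of ten seconds or more are one classic marker of apnea events.

```python
import numpy as np

def apnea_like_gaps(audio: np.ndarray, sr: int, gap_s: float = 10.0) -> int:
    """Count silent gaps of at least `gap_s` seconds between breathing or
    snoring sounds in an overnight recording. Long pauses are one crude
    proxy for apnea events; real screening software uses far richer models.
    """
    frame = sr // 10                                    # 100 ms frames
    n_frames = len(audio) // frame
    energy = np.array([
        np.mean(audio[i * frame : (i + 1) * frame] ** 2) for i in range(n_frames)
    ])
    quiet = energy < np.percentile(energy, 20)          # quietest 20% of frames

    gaps, run = 0, 0
    for is_quiet in quiet:
        run = run + 1 if is_quiet else 0
        if run == int(gap_s * 10):                      # 10 frames per second
            gaps += 1
    return gaps
```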


The Reflex app from Brightlamp is a clinical decision support tool for managing concussions and other vision-related issues. Using an iPad’s or iPhone’s camera, the app detects how a person’s pupils react to changes in light. Processed with machine learning, the imagery gives practitioners data points for evaluating patients. Brightlamp sells directly to healthcare practitioners and is being used in more than 230 clinics. Clinicians pay a standard $400 yearly charge per account, which is currently not covered by insurance. The Department of Defense has an ongoing clinical investigation employing Reflex.
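
A simplified sketch of the underlying measurement (not Brightlamp’s algorithm) is shown below: in a close-up eye image, the pupil is the darkest large blob, so thresholding the frame and measuring the biggest dark contour gives an approximate pupil diameter, and tracking that value across frames while the screen brightness changes shows how quickly the pupil reacts.

```python
import cv2
import numpy as np

def pupil_diameter_px(eye_frame: np.ndarray) -> float:
    """Rough per-frame pupil diameter, in pixels, from a close-up eye image.

    The pupil is found by thresholding for the darkest pixels and taking the
    largest contour; repeating this across frames during a light stimulus
    yields a pupil-response curve for downstream analysis.
    """
    gray = cv2.cvtColor(eye_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    # Keep only the darkest pixels (pupil); the threshold value is illustrative.
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    largest = max(contours, key=cv2.contourArea)
    (_, _), radius = cv2.minEnclosingCircle(largest)
    return 2.0 * radius
```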


In some circumstances, such as with the Reflex app, the data is processed directly on the phone rather than in the cloud, Brightlamp CEO Kurtis Sluss explained. By doing everything on the device, the software avoids the privacy issues that arise when data leaves the phone, since the patient would have to give permission for that transfer. But algorithms still need to be trained and validated by gathering large amounts of data, and that is an ongoing process.


Researchers, for example, have shown that some computer vision applications, such as heart rate or blood pressure monitoring, can be less accurate for people with darker skin. Studies are underway to identify better solutions.


Small algorithm flaws can also trigger false alerts and alarm patients enough to keep widespread adoption out of reach. “We’re not there yet, but it is coming,” says Nauman Jaffar, founder of SenSights.AI and Veyetals.