Sound sick? New AI technology could shed light on whether it’s COVID

September 19, 2022 – Imagine this: You think you may have COVID. You speak a few sentences into your phone, and an app gives you a reliable result in less than a minute.

“You sound sick,” we might say to a friend. Artificial intelligence (AI) could take that everyday observation to new frontiers by analyzing your voice to detect COVID infection.

An inexpensive and simple app could be used in low-income countries or to screen crowds at concerts and other large gatherings, researchers say.

This is just the latest example in an emerging trend that is exploring voice as a diagnostic tool to detect or predict disease.

Over the past decade, AI speech analysis has been shown to help detect Parkinson’s disease, post-traumatic stress disorder, dementia, and heart disease. The research has been so promising that the National Institutes of Health has just launched a new initiative to develop AI to use speech to diagnose a variety of medical conditions. These range from respiratory diseases such as pneumonia and COPD to throat cancer, stroke, ALS and psychiatric diseases such as depression and schizophrenia. Software can detect nuances that the human ear can’t, researchers say.

At least half a dozen studies have taken this approach to COVID detection. In the latest advancement, researchers at Maastricht University in the Netherlands report that their AI model was accurate 89% of the time, compared to an average of 56% across different lateral flow tests. The voice test was also more accurate at detecting infection in people who showed no symptoms.

One problem: Lateral flow tests produce false positives less than 1% of the time, compared with 17% for the voice test. Still, because the voice test is “virtually free,” it would be practical to screen widely and refer only those who test positive for further testing, said researcher Wafaa Aljbawi, who presented the preliminary results at the European Respiratory Society’s international congress in Barcelona, Spain.
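To see why a 17% false-positive rate still matters at scale, consider a back-of-the-envelope calculation using the reported rates. The crowd size and infection prevalence below are illustrative assumptions, not figures from the study, and the 89% accuracy figure is treated here as a detection rate for simplicity:

```python
# Illustrative screening arithmetic with the rates reported in the article.
# Crowd size and prevalence are hypothetical; the 89% figure is treated
# as the voice model's detection rate for this sketch.

crowd = 10_000          # hypothetical concert crowd
prevalence = 0.01       # assume 1% of attendees are actually infected
infected = int(crowd * prevalence)
healthy = crowd - infected

detection_rate = 0.89   # reported accuracy of the voice model
false_pos_rate = 0.17   # reported false-positive rate of the voice test

true_positives = round(infected * detection_rate)
false_positives = round(healthy * false_pos_rate)

print(f"Flagged for follow-up testing: {true_positives + false_positives}")
print(f"  of which truly infected:    {true_positives}")
```

Under these assumptions, most people flagged by the voice test would in fact be healthy, which is why the researchers frame it as a cheap first-pass filter before more specific follow-up testing, not a replacement for it.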

“Personally, I’m excited about the possible medical implications,” says Visara Urovi, PhD, researcher on the project and associate professor at the Institute of Data Science at Maastricht University. “If we better understand how the voice changes with different medical conditions, we could potentially know when we are getting sick or when we need to seek further testing and/or treatment.”

Development of AI

COVID infection can change your voice. It affects the airways, “resulting in a lack of speech energy and loss of voice due to shortness of breath and upper airway congestion,” according to the preprint paper, which has not yet been peer-reviewed. The typical dry cough of a COVID patient also causes changes in the vocal cords. And previous research found that lung and larynx dysfunction from COVID alters the acoustic properties of a voice.

Part of what makes the latest research remarkable is the size of the dataset. The researchers used a crowdsourced database from the University of Cambridge that contained 893 audio samples from 4,352 people, 308 of whom tested positive for COVID.

You can contribute to this database – all anonymously – via Cambridge’s COVID-19 Sounds app, which asks you to cough three times, breathe deeply through your mouth three to five times and read a short sentence three times.

For their study, the researchers at Maastricht University “only focused on the spoken sentences,” explains Urovi. The “signal parameters” of the sound “provide some information about the energy of the speech,” she says. “It’s these numbers that are used in the algorithm to make a decision.”

Audiophiles may find it interesting that the researchers used Mel spectrogram analysis to identify characteristics of the sound wave (or timbre). Artificial intelligence enthusiasts will note that the study found that long short-term memory (LSTM) was the type of AI model that worked best. LSTMs are a form of recurrent neural network, loosely inspired by the human brain and particularly good at modeling signals that unfold over time, such as speech.
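For readers curious what a Mel spectrogram actually is, the sketch below computes one from raw audio using only NumPy. This is a generic, textbook-style illustration of the technique, not the study's actual pipeline, and every parameter (sample rate, frame size, number of mel bands) is an arbitrary choice:

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale conversion: compresses high frequencies,
    # roughly matching how the human ear perceives pitch.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    # 1. Short-time Fourier transform: slice the signal into
    #    overlapping windows and take the power spectrum of each.
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

    # 2. Build a triangular mel filterbank spanning 0 Hz .. sr/2,
    #    with band centers spaced evenly on the mel scale.
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, center, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, center):
            fbank[m - 1, k] = (k - lo) / max(center - lo, 1)
        for k in range(center, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - center, 1)

    # 3. Apply the filterbank and take the log, yielding one
    #    n_mels-dimensional feature vector per time frame.
    return np.log(power @ fbank.T + 1e-10)

# One second of a synthetic 440 Hz tone as a stand-in for speech.
sr = 16000
t = np.arange(sr) / sr
spec = mel_spectrogram(np.sin(2 * np.pi * 440 * t), sr=sr)
print(spec.shape)  # (time frames, mel bands)
```

The resulting grid of numbers (one row per moment in time, one column per frequency band) is exactly the kind of input an LSTM can process frame by frame to learn how a voice changes over the course of a sentence.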

Suffice it for the layperson to know that advances in this field could lead to “reliable, efficient, affordable, convenient and easy-to-use” technologies for disease detection and prediction, the paper states.

What’s next?

Turning this research into a meaningful app requires a successful validation phase, says Urovi. Such “external validation” – testing how the model works with a different data set of sounds – can be a slow process.

“A validation phase can take years before the app can be made available to the general public,” says Urovi.

Urovi emphasizes that even with the large Cambridge dataset, “it is difficult to predict how well this model might work in the general population.” If voice testing turns out to work better than a rapid antigen test, “people might prefer the cheap non-invasive option.”

“However, more research is needed to examine which voice features are most useful in picking out COVID cases and to ensure models can tell the difference between COVID and other respiratory diseases,” the paper said.

So are app tests before concerts our future? That depends on cost-benefit analyses and many other considerations, says Urovi.

Nevertheless, “it can still be advantageous if the test is used in support of, or in addition to, other established screening tools such as a PCR test.”

