Andrei Perez
The continuous innovation and development of sensing technologies, image processing techniques, and deep learning frameworks have caught the attention of many researchers, who are developing cutting-edge systems for medical applications. This article discusses the recent use of camera-based biomedical signal processing and the basic concepts you should understand, or reinforce, if you are planning to combine engineering hardware and software for the analysis of biosignals.
Accurate clinical decision making is fundamental for health-care providers. This skill allows them to arrive at quick conclusions about what is inducing a patient’s symptoms. With current technologies, physicians can be presented with excessive amounts of data containing relevant information about a person’s health status, but this volume can sometimes interfere with their capacity to process all those data and arrive at an accurate diagnosis. Specifically, this can cause problems in high-stress settings, like hospital and clinic emergency departments.
For that reason, clinical decision support tools have entered the picture. Clinical decision support systems (CDSSs) are widely becoming indispensable tools to assist physicians in their interpretation of biomedical data. These tools are built to process large amounts of information to help clinicians in their decision making. Today, these tools range from electronic health record and electronic medical record systems to deep learning frameworks that can analyze genetic, environmental, and lifestyle factors for personalized medicine.
Many professionals in the medical field have reported that, thanks to the implementation of a CDSS in their workflow, enormous amounts of digital data can be processed. Once the CDSS has summarized the most relevant information for the health-care professional, an analytic approach can be efficiently achieved. Nonetheless, not every physician shares this view.
In a survey published by Medscape, in which 1,500 doctors were interviewed, only one in five acknowledged that artificial intelligence (AI) and engineered systems have changed the way they practice medicine. Moreover, half of the respondents mentioned that they are uncomfortable with the role of AI in health care.
Although that study never analyzed the reasons for such distrust, many other studies have shown that this opinion is influenced by a lack of understanding about how these new technologies work. Medical professionals are usually comfortable using diagnostic tools that they understand. One important problem reported when doctors are introduced to current technologies is the understanding of the term AI and the mind-bending mathematical algorithms used for signal processing in health-care applications.
Therefore, another purpose of this article is to raise awareness about the potential use of innovative technologies to assist physicians in their clinical decisions by demystifying the engineering side of data processing and the current image processing tools that are already revolutionizing patient vital sign monitoring, point-of-care diagnostics, and the early detection of diseases.
For a better understanding of this technology, readers can look at Fig. 1, which summarizes some of the current applications of existing commercial CDSSs. These applications are under continuous development and are related to the topics addressed in this article.
Fig 1 The current commercial applications of CDSSs.
Signal processing can be defined as the ability to manipulate a signal coming from the object under analysis to extract information of interest. In 1948, Claude Shannon published “A Mathematical Theory of Communication” in the Bell System Technical Journal. This marks the annus mirabilis of signal processing. Although Shannon’s article was aimed at communication systems and the processing of transmitted signals, many researchers in the medical field reviewed this article and similar numerical analysis techniques to discover solutions for interpreting the signals emitted from the human body. As a result, biomedical signal processing was born.
The human body consists of multiple subsystems simultaneously performing different physiological processes. Consequently, scientists studied various mathematical techniques that could be employed to extract information from the signals and reveal if a physiological abnormality could be detected. One remarkable example of a biological signal of particular interest is the electrical activity in the heart, which can be measured through an electrocardiogram. This test uses signal processing to translate the electrical impulse of a heartbeat into valuable medical information for the physician.
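As a toy illustration of how signal processing turns a raw trace into clinical numbers, the sketch below detects R-peak-like spikes in a synthetic waveform by simple thresholding and converts their spacing into a heart-rate estimate. The threshold, sampling rate, and signal here are all invented for illustration; real electrocardiogram analysis relies on filtering and dedicated QRS detectors rather than a fixed threshold.

```python
# Toy sketch: estimate heart rate from an ECG-like trace by
# thresholding R-peaks. Illustrative only; not a clinical algorithm.
fs = 250  # sampling rate in Hz (assumed)

def detect_r_peaks(signal, threshold=0.5):
    """Return sample indices of local maxima above a fixed threshold."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            peaks.append(i)
    return peaks

def heart_rate_bpm(peaks, fs):
    """Mean heart rate computed from consecutive R-R intervals."""
    if len(peaks) < 2:
        return None
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]  # seconds
    return 60.0 / (sum(rr) / len(rr))

# Synthetic trace: one spike every 200 samples (0.8 s at fs = 250)
ecg = [0.0] * 2000
for k in range(0, 2000, 200):
    ecg[k] = 1.0
print(heart_rate_bpm(detect_r_peaks(ecg), fs))  # ≈ 75 bpm
```

The same translation principle, from raw electrical activity to an interpretable number, underlies the electrocardiogram described above, only with far more robust detection stages.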
In the medical field, signal processing plays an important role in modern patient diagnosis. A case in point is that it has transformed the methods used for imaging and monitoring. Physiological factors can influence the signals coming from the human body, thereby causing variations in the readings detected. A straightforward example of this principle is a temperature gun, which can detect thermal signals and translate them into electrical impulses, which are then processed with a microcontroller to output the body temperature on a display. Thanks to the evaluation of a physician and the information coming from the temperature measurement device, a reliable diagnosis can be delivered.
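The temperature-gun pipeline just described can be sketched in a few lines: a thermal sensor yields a raw analog-to-digital converter (ADC) count, which firmware converts to degrees Celsius. Every constant below (ADC resolution, reference voltage, and the calibration slope and offset) is invented for illustration and does not come from any real device.

```python
# Hypothetical sketch of a temperature gun's firmware conversion step:
# raw ADC count -> sensor voltage -> calibrated temperature reading.
ADC_MAX = 1023   # 10-bit ADC (assumed)
V_REF = 3.3      # ADC reference voltage in volts (assumed)
GAIN = 20.0      # degrees C per volt (hypothetical calibration slope)
OFFSET = 10.0    # degrees C at 0 V (hypothetical calibration offset)

def adc_to_celsius(adc_count):
    """Convert a raw ADC count to a temperature in degrees Celsius."""
    voltage = adc_count / ADC_MAX * V_REF
    return OFFSET + GAIN * voltage

print(f"Body temperature: {adc_to_celsius(420):.1f} °C")
```

A real device would calibrate these constants against reference temperatures; the point of the sketch is only the signal path from physical quantity to displayed number.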
As shown in Fig. 2, there are different types of biological signals that can be used to recognize the underlying physiological mechanisms of a system in the body. Bio-optical, biochemical, and biomechanical signals have been some of the most difficult to reliably analyze. This is mainly caused by the noise that these signals can carry. Noise is defined as the presence of distorted information influenced by unwanted signals from various sources. Picture yourself playing “Where’s Waldo?” In this scenario, your eyes capture unnecessary data coming from sources that can distract you and influence the time you take to find Waldo. Distracting data are a problem that sensors can have when measuring bio-optical and biomechanical signals. This represents a major issue when dealing with accuracy and medical data estimation. Despite that, developments in image sensors, electronics, and computational regression models over the last decade appear to be a potential solution for efficiently processing these signals and innovating in patient diagnosis.
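A first line of defense against such noise is smoothing. The sketch below uses a simple moving-average filter, one of many possible choices (real biosignal pipelines typically use band-pass filters tuned to the signal of interest), and checks that it brings a noisy trace closer to the underlying clean signal.

```python
# Sketch: suppress high-frequency noise with a moving-average filter.
import math
import random

def moving_average(signal, window=5):
    """Smooth a signal by averaging each sample with its neighbors."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

random.seed(0)
clean = [math.sin(2 * math.pi * i / 50) for i in range(200)]   # slow signal
noisy = [s + random.gauss(0, 0.3) for s in clean]              # add noise
smooth = moving_average(noisy, window=9)

# Compare mean-squared error against the clean signal before and after.
err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / len(clean)
err_smooth = sum((a - b) ** 2 for a, b in zip(smooth, clean)) / len(clean)
print(err_smooth < err_noisy)  # smoothing brings us closer to the signal
```

The trade-off is that averaging also attenuates fast features of the true signal, which is why clinical pipelines prefer filters matched to the frequency band of the physiology being measured.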
Fig 2 The classification of biosignals. ECG: electrocardiogram; Spo2: oxygen saturation.
Overall, biomedical signal processing aims to benefit physicians when patient illness monitoring is needed. The processing used for the signals coming from the human body utilizes statistical characterizations of the incoming signals to extract their components. The revolution of CDSSs is influenced by emerging techniques that can better separate the signal of interest from the noise to classify the data. Among these techniques we can find motion tracking, sequence analysis, segmentation, and statistical processing.
As mentioned before, bio-optical, biomechanical, and biochemical signals coming from the human body contain essential medical information. That is why, for CDSSs, the most common algorithms derived from signal processing methods used are the following:
For several years, researchers have worked on replicating the complexity of the human vision capability on computers. Computer vision (CV) is focused on studying computational methods to analyze images and videos. This has been possible thanks to the study of digital image processing. At the early stages of image processing, there were many limitations since the processing of a digital image relies on several factors: the processor of a computer, the efficiency of a controller to solve numerical analysis, the image sensor that gathers the digital information, and the software techniques employed for image enhancement and image analysis. One of the main barriers to the correct extraction of important data from images was the fact that most of the hardware utilized to obtain accurate outputs was expensive. This represented high costs for scientists exploring this field.
However, in recent years, the combination of innovative camera lenses, cost-effective computers, and computational methods that reduce memory and processing requirements has dramatically improved the capacity of computers to analyze digital inputs, such as images, for various applications, including the analysis of important medical data. Now, cost-effective systems can be manufactured, allowing research groups to pioneer the innovation of tools for clinical analysis. Therefore, one of the areas that benefited from these advancements was the medical field.
As mentioned at the beginning of the article, health-care providers encounter, on a daily basis, patient information that can influence clinical decisions. However, much of this information is not structurally organized. The large existing amount of medical data is not being efficiently used since it involves a deep analysis of extensive information for only one patient. In simpler words, there are critical data that can help clinicians make good decisions, but this advantage has not been fully harnessed. This problem has been addressed since the emergence of problem-solving programs like Dendral, released in the 1960s. Originally, the system was intended to solve problems in organic chemistry by analyzing complex data, but other systems adopted those algorithms to create information analyzers for health care. As a result, other scientists saw the potential of experimenting with technological advancements like AI to replicate physicians’ perceptual processes. Since then, many specialties in medicine have increased their efforts in finding solutions for clinical diagnosis through the application of intelligent computing systems specialized in health-care diagnosis.
Consequently, AI in health care can be defined as the field of study to design machines that could perform challenging tasks that typically require human intervention, centered on the interpretation of complex sets of data previously gathered.
Now that AI and image processing have been explained, readers can dive more into the applications of combining these technologies in computational frameworks for medical situations.
One of the many applications of AI is machine learning (ML). ML can be described as AI that not only performs human-like analysis but also autonomously learns over time by processing information coming from real-world interactions. In short, these systems no longer need to be explicitly reprogrammed: they possess “artificial neural networks” whose parameters can be rewritten and fine-tuned to strengthen processing accuracy. Therefore, ML has become the preferred field of study for new CDSSs.
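The difference between explicit programming and learning can be illustrated with a single-parameter model. In the sketch below (the toy data and learning rate are invented), the parameter w is never hard-coded; gradient descent tunes it from example data, which is, in miniature, how larger networks fine-tune millions of parameters.

```python
# Minimal illustration of "learning" instead of explicit programming:
# a one-parameter linear model y = w * x is fit by gradient descent.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]  # roughly y = 2x

w = 0.0    # the parameter starts uninformed
lr = 0.02  # learning rate (assumed)
for _ in range(500):
    # Gradient of the mean-squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill on the error surface

print(round(w, 2))  # close to 2.0, learned from the data alone
```

Replacing the single parameter with millions of weights, and the hand-written gradient with automatic differentiation, gives the training loop used by the neural networks discussed above.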
The health-care system has reported an increase in the demand for medical imaging services, such as ultrasound, magnetic resonance imaging, computed tomography, and so forth. These images are often time-consuming and challenging to analyze carefully in a short period of time. Because of the emerging demand, radiologists can feel overwhelmed by the number of tests that need to be done. This situation can lead to inaccurate medical diagnoses and long waits before the physician and the patient get the results back. Therefore, AI solutions have caught the eye of medical experts in the imaging field.
This shows how practical and strategic it could be to implement ML-driven devices as a potential solution. For that reason, many biomedical device companies have invested a significant amount of their resources in engineering systems capable of performing advanced medical image processing. As described in Fig. 3, ML in medicine uses a myriad of frameworks that vary according to the application of the system.
Fig 3 ML algorithms used in health care.
This can be evidenced by the number of research papers and patents released in the last five years. From 2017 until the moment this article was written, more than 6,000 patents have reportedly been filed for devices using ML for health-care purposes. Based on recent publications from research groups at the Massachusetts Institute of Technology, Google, and Amazon, as well as AI researchers at important universities worldwide, the application of ML to medical analysis may eventually surpass the ability of humans to prevent diseases. Techniques like the ones presented in Fig. 3 are continuously growing in mathematical complexity. Experienced practitioners have already shown how the implementation of these new technologies is having an impact on the financial side of medicine. For example, in 2012, the “Choosing Wisely” program was launched in many organizations to limit the number of unnecessary tests on patients based on a deep learning analysis using AI technologies.
Moreover, data analytics has demonstrated extensive capabilities for cancer risk assessment. It is well known that determining the clinical significance of genetic variants is a major challenge for oncologists. Medical practitioners can now access recently developed genetic databases that catalog both benign and malignant carcinogenic genetic variants. Therefore, clinicians can quickly make well-informed decisions when interpreting a genetic test result.
Fig. 3 describes learning algorithms in health care that are now the basis of much more complicated programs at the forefront of CDSS innovation. One example is the creation of graph neural networks (GNNs), which are being implemented to model the human body for medical diagnosis.
A GNN is a state-of-the-art type of neural network. Contrary to simple neural networks, a GNN can take a graph as input to model, structure, and process data; it uses this structural information to train the machine. As with other neural networks, the objective of a GNN is to leverage labeled nodes to predict and classify unlabeled elements. Because the node updates are formulated as a contraction, the Banach fixed-point theorem guarantees a unique solution for the messages passed between nodes. This framework has enabled researchers to use GNNs to explore potential abnormalities with the guidance of not only semantic information but also graphs from previous cases. Therefore, CDSSs will be able to retrieve the data analyzed by the AI algorithm and provide essential clinical information to medical practitioners.
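The fixed-point idea can be sketched concretely. In this toy example (the graph, node features, and damping factor are all invented for illustration), each node’s state is repeatedly recomputed from its neighbors’ states; because the damped update is a contraction, the iteration converges to the unique fixed point promised by the Banach theorem.

```python
# Sketch of fixed-point message passing on a small graph, the idea
# behind the original GNN formulation: node states are updated from
# neighbor states until the updates stop changing.
adjacency = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
features = {0: 1.0, 1: 0.5, 2: -0.5, 3: 0.0}  # per-node input features
alpha = 0.5  # damping < 1 makes the update a contraction (assumed)

state = dict(features)
for _ in range(100):
    new_state = {}
    for node, neighbors in adjacency.items():
        # Message: average of neighbor states, damped and added to features
        msg = sum(state[n] for n in neighbors) / len(neighbors)
        new_state[node] = features[node] + alpha * msg
    delta = max(abs(new_state[n] - state[n]) for n in state)
    state = new_state
    if delta < 1e-9:  # converged to the unique fixed point
        break

print({n: round(v, 3) for n, v in sorted(state.items())})
```

Modern GNN variants replace the averaging step with learned transformations, but the message-passing skeleton is the same.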
As described previously, image processing and AI frameworks are allowing physicians to provide more reliable diagnoses while reducing clinical misdiagnoses and rapidly organizing large amounts of relevant medical data. Nonetheless, this article would also like to introduce readers to the future applications of camera-based systems for noncontact continuous vital sign monitoring because these systems combine AI, CV, and image processing for clinical decision support.
The emergent interest in reinventing clinical decision support through biomedical tools has led to a number of projects to detect vital signs through contactless and faster methods. Some of these projects propose video-based analysis technologies to measure heart rate (HR), blood pressure, and oxygen saturation (Spo2). Image analysis plays an important role in many of the reported studies about noninvasive camera-based diagnostics. Over the next 10 years, we are likely to see the first results of high-end bio-optical signal processing techniques for estimating vital signs.
A particular field of interest in vital sign monitoring is the detection and interpretation of photoplethysmography (PPG) signals coming from the body. According to many studies, these signals contain valuable information to estimate HR and Spo2 values.
By understanding how this estimation is achieved, readers should be capable of identifying the principles of PPG analysis. PPG uses light-generated signals to detect volume changes in living tissue. In medical devices, PPG can be applied to detect blood volume and analyze the behavior of blood flow. PPG applications can be observed in common pulse oximeters; in fact, advancements in microcomputers have enabled companies to implement PPG technology to rapidly estimate vital signs for wellness purposes through common wearables, such as smart watches and rings.
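A common way to turn a PPG trace into an HR estimate is to pick the dominant frequency within a physiologically plausible band. The sketch below is a simplified assumption of that idea, with an invented sampling rate and a plain discrete Fourier transform rather than an optimized FFT.

```python
# Sketch: estimate pulse rate as the dominant frequency of a PPG
# waveform, restricted to a plausible heart-rate band (0.7-3 Hz,
# i.e., 42-180 bpm).
import cmath
import math

fs = 30.0  # samples per second (assumed sensor/camera rate)

def dominant_bpm(signal, fs, lo_hz=0.7, hi_hz=3.0):
    """Return the strongest frequency in the heart-rate band, in bpm."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]  # remove the DC component
    best_f, best_mag = None, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo_hz <= f <= hi_hz:
            # Discrete Fourier transform coefficient at bin k
            coef = sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            if abs(coef) > best_mag:
                best_f, best_mag = f, abs(coef)
    return best_f * 60.0

# Synthetic 1.2-Hz (72-bpm) pulse riding on a slow baseline drift
ppg = [math.sin(2 * math.pi * 1.2 * t / fs) + 0.3 * t / 300
       for t in range(300)]
print(round(dominant_bpm(ppg, fs)))  # ≈ 72
```

Restricting the search to the heart-rate band is what lets the estimate ignore slow baseline drift and other out-of-band noise.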
In 2005, Wieringa et al. proposed a camera-based Spo2 detection system. This research group exposed the potential applications of image sensors and computational methods to create the first contactless PPG detector. Since then, many other studies have been published. Those studies propose interesting models to analyze the red, green, and blue light channels captured by cameras. Moreover, similar applications have been proposed to detect signals that could estimate HR, breathing rate, HR variability, and other unique vital signs that currently require contact sensors to be measured.
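The first step in these camera-based approaches can be sketched as follows: spatially averaging one color channel over a skin region in each video frame yields a one-dimensional trace whose small fluctuations follow the pulse. The green channel is chosen here as a common convention in remote-PPG studies; the frames and region below are tiny synthetic stand-ins for real video data.

```python
# Sketch of the first step of camera-based (remote) PPG: average the
# green channel over a skin region in each frame to form a pulse trace.
def green_trace(frames, region):
    """Mean green value inside `region` ((x0, y0, x1, y1)) per frame."""
    x0, y0, x1, y1 = region
    trace = []
    for frame in frames:  # frame[y][x] is an (R, G, B) tuple
        greens = [frame[y][x][1] for y in range(y0, y1) for x in range(x0, x1)]
        trace.append(sum(greens) / len(greens))
    return trace

# Two tiny 2x2 "frames" whose green channel brightens between frames,
# as changing blood perfusion would cause in real skin video.
frames = [
    [[(10, 100, 20), (10, 102, 20)], [(10, 98, 20), (10, 100, 20)]],
    [[(10, 104, 20), (10, 106, 20)], [(10, 102, 20), (10, 104, 20)]],
]
print(green_trace(frames, (0, 0, 2, 2)))  # [100.0, 104.0]
```

The resulting trace is then filtered and analyzed in the frequency domain, as in the HR-estimation studies cited above; published methods additionally combine the red, green, and blue channels to reject motion and illumination artifacts.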
The work from innovative companies does not end here. Many companies in “MedTech” are ceaselessly seeking the help of visionary engineers to start combining signal processing with AI frameworks to speed up the diagnosis process. This sounds promising but futuristic; some people may even argue that existing technologies do not have the capability to bring these theories into the real world.
However, the leading companies in health-care analytics, such as IBM Watson and Lumiata, are constantly pushing AI-led medical decisions. Their recent developments already indicate that it will be possible to make screening recommendations more precise and personalized due to prediction models and high-end deep learning techniques. Consequently, R&D departments will—if they are not doing this already—combine data analytics with image processing to accurately estimate vital signs through noninvasive methodologies.
The innovations described aim to create more targeted preventive care for use in routine and critical clinical applications.
These advancements are having an impact on specialties such as radiology, dermatology, emergency care, pathology, and ophthalmology. However, most of the tools built and tested have been categorized as screening tools, not diagnostic tools. This is caused by the struggle of medical technology companies to create an onboarding experience for physicians to use machines when making clinical decisions.
Engineered medical equipment researchers face a controversial challenge: the acceptance of and resistance to emergent pioneering machinery in medical practice.
Health-care workers want to deeply understand how their tools function. Generally, they are familiar with technology that is consistent with their knowledge about human physiology and biochemistry. Therefore, it is important for future biomedical tool providers to address their stakeholders by clearly explaining, in a nontechnical fashion, mind-bending AI/signal processing concepts. Moreover, companies can work together with doctors to set real expectations for the capabilities of next-generation tools.
Earlier in this article, readers were introduced to basic descriptions of engineering methods that are reshaping CDSSs. Although image processing helps convince physicians that computer programs can analyze images to find relevant clinical information, systems that are not based on image analysis still face doubt and rejection. The acceptance of AI solutions highly depends on simplifying concepts for health-care workers. This process will take time, and researchers should keep that in mind without letting it hinder their work in the innovation of health-care equipment.
Fortunately, this paradigm is evolving. New university curricula for engineering and medical schools are being introduced. A case in point is the curricula of biomedical engineering programs, where educators are using their industry experience to teach effective communication to bridge the gap between the two professional careers. Another case is the introduction of medical technology courses for medical students so they can understand the basics of the tools to be used in clinical settings.
The field of modernization in medical care is extensive, and, as described previously, it entails maintaining an organized development effort with clear expectations. Medical workers should be aware that “smart” technologies will not hamper their critical thinking or their independent diagnosis. It is understandable why AI can make clinicians uncomfortable, but there is little doubt that a physician using the upcoming CDSSs will soon supersede the physician who overlooks them.
In this scientific review, the implementation of biomedical signal processing and data analytics in forthcoming CDSSs was discussed in detail. Specifically, topics like image processing and AI in health-care applications were introduced. Additionally, a high-level overview of CDSSs and of understanding the needs of clinicians to set real expectations for the yet-to-come health-care tools was given. Therefore, readers should now be aware of the responsibility of product development teams to consider the factors that could establish a methodical revolution of medical technologies.
This analysis clearly shows that AI frameworks, CV, and image processing hold promise; however, there are still many obstacles hindering the employment of AI-led devices. Therefore, physicians and engineering experts need to overcome those obstacles if they want to fully integrate those applications into a health-care ecosystem.
One topic that was not addressed in this article but that should be considered for further research is the analysis of patient acceptance and resistance to engineered machines in clinical settings. Nevertheless, pioneering companies in medical technologies should first address physicians’ resistance to new instruments, thereby resulting in the approval of health-care workers who will communicate with patients about the tools that can provide them with better health care.
The author would like to acknowledge the great support from Dr. George Shaker and other members of the Wireless and Sensors Devices Lab at the University of Waterloo. Their constant feedback and mentorship have been valuable for the analysis of this article.
The author would also like to acknowledge the financial support received from Sterasure, Mitacs, and Senescyt. Their sponsorship is helping the author in the development of projects related to the topic addressed in this scientific article.
J. D. Enderle and J. D. Bronzino, Eds., Introduction to Biomedical Engineering. New York, NY, USA: Elsevier, 2005.
A. T. M. Wasylewicz and A. M. J. W. Scheepers-Hoeks, “Clinical decision support systems,” in Fundamentals of Clinical Data Science. Cham, Switzerland: Springer, 2018.
M. Puttagunta and S. Ravi, “Medical image analysis based on deep learning approach,” Multimedia Tools, vol. 80, no. 16, pp. 24,365–24,398, Apr. 2021, doi: 10.1007/s11042-021-10707-4.
T. Davenport and R. Kalakota, “The potential for artificial intelligence in healthcare,” Future Healthcare J., vol. 6, no. 2, pp. 94–98, Jun. 2019, doi: 10.7861/futurehosp.6-2-94.
E. Montagnon et al., “Deep learning workflow in radiology: A primer,” Insights Imag., vol. 11, no. 1, Feb. 2020, Art. no. 22, doi: 10.1186/s13244-019-0832-5.
Z. Z. Islam, S. M. Tazwar, M. Z. Islam, S. Serikawa, and M. A. R. Ahad, “Automatic fall detection system of unsupervised elderly people using smartphone,” in Proc. 5th IIAE Int. Conf. Intell. Syst. Image Process., 2017, pp. 418–424, doi: 10.12792/icisip2017.077.
R. Chaudhry, A. Ravichandran, G. Hager, and R. Vidal, “Histograms of oriented optical flow and Binet-Cauchy kernels on nonlinear dynamical systems for the recognition of human actions,” in Proc. IEEE Conf. Comput. Vision Pattern Recognit., 2009, pp. 1932–1939, doi: 10.1109/CVPR.2009.5206821.
F. P. Wieringa, F. Mastik, and A. F. W. van der Steen, “Contactless multiple wavelength photoplethysmographic imaging: A first step toward ‘SpO2 camera’ technology,” Ann. Biomed. Eng., vol. 33, no. 8, pp. 1034–1041, Aug. 2005, doi: 10.1007/s10439-005-5763-2.
D. Ahmedt-Aristizabal, M. A. Armin, S. Denman, C. Fookes, and L. Petersson, “Graph-based deep learning for medical diagnosis and analysis: Past, present and future,” Sensors, vol. 21, no. 14, Jul. 2021, Art. no. 4758, doi: 10.3390/s21144758.
S. Reddy, Artificial Intelligence: Applications in Healthcare Delivery. Evanston, IL, USA: Routledge, 2021.
M. Chang, Artificial Intelligence for Drug Development, Precision Medicine, and Healthcare. Boca Raton, FL, USA: CRC Press, 2020.
Andrei Perez (af2perez@uwaterloo.ca) attends the University of Waterloo, Waterloo, ON, N2L3G1, Canada, working in an undergraduate research assistant position at the Wireless and Sensors Devices Lab. He is currently pursuing his studies at the Nanotechnology Engineering Department thanks to a scholarship given by the Ecuadorian Ministry of Higher Education, Science, Technology and Innovation in 2019. In 2020, Mitacs funded his first research project about contactless biomedical technologies. His current research interests include theranostics equipment, noncontact vital sign monitoring, and image processing in cost-effective embedded systems.
Digital Object Identifier 10.1109/MPOT.2023.3284207