A Six-Step Framework on Biomedical Signal Analysis for Tackling Noncommunicable Diseases: Current and Future Perspectives

Low- and middle-income countries (LMICs) continue to face major challenges in providing high-quality and universally accessible health care. Researchers, policy makers, donors, and program implementers consistently strive to develop and provide innovative approaches to eliminate geographical and financial barriers to health care access. Recently, interest has increased in using mobile health (mHealth) as a potential solution to overcome barriers to improving health care in LMICs. Moreover, with use increasing and cost decreasing for mobile phones and the Internet, mHealth solutions are becoming considerably more promising and efficient. As part of mHealth solutions, biomedical signal collection and processing may play a major role in improving global health care. Information extracted from biomedical signals might increase diagnostic precision while augmenting the robustness of health care workers’ clinical decision making. This paper presents a high-level framework using biomedical signal processing (BSP) for tackling diagnosis of noncommunicable diseases, especially in LMICs. Researchers can consider each of these elements during the research and design of BSP-based devices, enabling them to elevate their work to a level that extends beyond the scope of a particular application and use. This paper includes technical examples to emphasize the applicability of the proposed framework, which is relevant to a wide variety of stakeholders, including researchers, policy makers, clinicians, computer scientists, and engineers.


Introduction
According to 2012 World Health Organization (WHO) estimates, noncommunicable diseases (NCDs) contributed to 38 million deaths globally, accounting for 68% of 56 million total deaths. Meanwhile, nearly 80% of NCD deaths (28 million) occurred in low- and middle-income countries (LMICs) [1]. The leading cause of NCD deaths in 2012 was cardiovascular disease (17.5 million deaths, or 46% of all NCD deaths). In the proposed BSP-based framework, we discuss six essential steps that guide scientists from scientific hypotheses and analysis to practical application: simplicity, mining, connecting, reliability, affordability, and scalability (SMCRAS). By considering each of these elements during the research and design of BSP-based devices, researchers are able to extend their work beyond the scope of local application and use.
To our knowledge, this is the first paper of its kind to elucidate a framework for BSP-based point-of-care (POC) devices. In 2014, researchers began to propose and discuss the idea that biomedical engineering can improve global health [2]. In 2015, researchers also suggested a framework for POC devices, without BSP, based only on affordability [3], which is one element of our framework.
Specific technical examples will be discussed using four recent BSP-based case studies in order to illustrate real-world applications for the proposed framework, including how it will aid in the development of global health care BSP-based technologies.

Overview
The nature of a biomedical signal is like that of any other signal: it is information bearing. Signals play a major role in our daily communication, whether they be verbal, social, mental, or physical. Measuring a biomedical signal from a certain body part reveals the state of that specific part as well as the whole body. Like any communication system, there is a sender, a receiver, and a medium through which the signal is sent. With current technological advances (science, computers, etc), we can use algorithms to detect abnormalities and understand the information from the sender (ie, body part or organ). It is becoming increasingly common to use mobile phones to log biomedical signals, thus assisting physicians and health care practitioners in their decisions related to disease prediction, diagnosis, and treatment.
Nonetheless, there remain many challenges in collecting data from mobile devices, such as inconsistent measurement, unreliable signal quality and training, limited computational resources, finite power, time constraints for clinical staff, varying user interface designs, and uncontrolled environments. The guiding principles proposed in this paper address these challenges and create a framework for developing BSP algorithms that can be used for disease classification and prediction. We propose six main objectives: (1) aim for simplicity, (2) mine through noise based on information detection theory, (3) reveal hidden connections, (4) assess robustness, (5) plan for scalability, and (6) strive for affordability.

Simplicity
Einstein famously said, "Any intelligent fool can make things bigger and more complex. It takes a touch of genius and a lot of courage to move in the opposite direction." Simplicity is particularly effective when it comes to mobile computation. Simple methods that achieve high detection accuracy require less storage and power while remaining more suitable for wireless and online processing [4]. For example, Figure 1 shows two algorithms that both detect QRS complexes in electrocardiogram (ECG) signals [5,6]. One algorithm is simpler than the other and requires fewer execution steps, thus lowering the complexity. BSP-based POC devices collect biomedical signals wirelessly (or wired) and send them to a central monitoring station using Global System for Mobile communications (GSM) or the Internet for further analysis [7,8]. In such cases, some analyses are executed locally on the POC device before transmission; however, this process is not always recommended, as the transmission can consume more power than the ECG analysis itself [9]. Undoubtedly, it is essential that any algorithm used for real-time analysis retain simplicity so long as this simplicity does not decrease accuracy significantly. The simpler the algorithm, the faster it will be in processing large-scale biomedical signals; it also will consume less power in battery-operated POC devices [4]. For example, Figure 1 shows that the simple method is more sensitive and specific while outperforming the complex approach. Data reliability is discussed in detail in the "Reliability" section.
Note that there can be a trade-off between algorithm simplicity and accuracy. At times, a more complex algorithm can achieve higher accuracy than a simple algorithm. However, the aim is to develop a simpler algorithm that can achieve the same or even higher accuracy than the complex algorithm. There is currently an unmet need to develop relatively simple but reliable and accurate algorithms for tackling large data [10]. In terms of time, application, and long-term use, it will be beneficial to investigate NCDs, global health care issues, and BSP-based POC devices using simple but efficient algorithms.

Figure 1. Comparison between simple and complex QRS detectors for ECG signal analysis (SE=sensitivity; +P=positive predictivity; simple QRS detector refers to Elgendi's algorithm [5]; complex QRS detector refers to the Pan-Tompkins algorithm [6]). Here, N/A stands for Not Applicable.
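As a concrete sketch of the kind of simple detector discussed above, the following Python code implements a two-moving-average QRS detector in the spirit of Elgendi's algorithm [5]. The filter band, window lengths, threshold offset, and minimum block width are illustrative assumptions, not the published parameter values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_qrs(ecg, fs):
    """Sketch of a two-moving-average QRS detector (inspired by [5]).

    All numeric parameters here are illustrative assumptions.
    """
    # Bandpass filter to emphasize QRS energy (the band is an assumption)
    b, a = butter(2, [8 / (fs / 2), 20 / (fs / 2)], btype="band")
    squared = filtfilt(b, a, ecg) ** 2  # squaring emphasizes large deflections

    def moving_average(x, w):
        return np.convolve(x, np.ones(w) / w, mode="same")

    ma_qrs = moving_average(squared, int(0.097 * fs))   # ~QRS-width window
    ma_beat = moving_average(squared, int(0.611 * fs))  # ~beat-width window
    threshold = ma_beat + 0.08 * squared.mean()         # small offset (assumed)
    blocks = ma_qrs > threshold                         # blocks of interest

    # A QRS fiducial point is the maximum inside each wide-enough block
    peaks, start = [], None
    for i, on in enumerate(np.append(blocks, False)):
        if on and start is None:
            start = i
        elif not on and start is not None:
            if i - start >= int(0.05 * fs):  # reject narrow noise blocks
                peaks.append(start + int(np.argmax(squared[start:i])))
            start = None
    return peaks
```

Because the detector is just a bandpass filter, two moving averages, and a threshold, it runs in linear time and is straightforward to port to a battery-operated POC device.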

Mining
The extraction of the most informative patterns in a given dataset is commonly referred to using different terms depending on the study field (eg, data mining is used as information/knowledge extraction, information/knowledge discovery, information/knowledge harvesting, or data analysis/processing) [11]. "Data mining" is typically the term of preference used by biomedical engineers and computer scientists. In this paper, we refer to the "mining" step as a combination of filtering and feature extraction phases.
When mining noisy biomedical signals, it can be tempting to give up before obtaining an informative waveform. Many studies in the literature used filters that clean the signal at the expense of the waveform; instead, we need to filter the signal with techniques that preserve the main waveforms of the processed signal, which hold valuable data and information.
When it comes to mining data collected from BSP-based POC devices, the data are very noisy (as mentioned in the Overview section). The accurate detection of the main waveforms within the biomedical signal will increase the accuracy of disease diagnosis and prediction. Figure 2 demonstrates that a mining algorithm was able to successfully demarcate the first (S1) and second (S2) heart sounds, outperforming other mining algorithms (compare [b-e] in Figure 2). For example, if we apply fixed thresholds to the output of the mining algorithms shown in Figure 2, the S1 events would not be detected, as they have lower amplitudes compared to S2 events; therefore, the overall detection rate will decrease [11].
It is critical to provide health care workers with feedback on the quality of the data collected in order to allow a real-time recollection of biomedical signals if needed. Therefore, signal quality assessment algorithms are necessary to distinguish between signals that are clinically acceptable and those that are uninformative. The user can be informed in real time accordingly.

Figure 2. (b) Mining using second-order Shannon energy of the D5 wavelet, (c) mining using second-order Shannon energy of the D6 wavelet, (d) mining using third-order Shannon energy, (e) mining using wavelet approximation A6, (f) mining using two moving averages (black and purple dotted lines) to generate blocks of interest. Here, S1 refers to the first heart sound while S2 refers to the second heart sound.

Connecting
To reveal relationships that would otherwise remain hidden, it is imperative to extract multiple features from biomedical signals and find correlations/causalities between these features and the abnormality (disease of interest). For example, Figure 3 shows the extraction of relative power features (f1, f2, f3) in three nonoverlapping frequency bands (4-10 Hz, 10-20 Hz, and 20-30 Hz) in healthy subjects and subjects with Alzheimer's disease (AD) [12,13]. It is clear that the relative power of the first frequency band, 4-10 Hz, is more strongly associated with AD than that of the 10-20 Hz and 20-30 Hz bands [12,13]. The AD subject had a higher relative power in the 4-10 Hz band, while the healthy subject had a lower relative power over the same frequency band. Figure 3 uses topoplots of electroencephalography (EEG) signals. These topoplots were generated using the free PyMVPA software [14].
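As a sketch of how such relative power features might be computed from a single EEG channel, the following Python snippet estimates f1, f2, and f3 over the three bands; the Welch spectral-estimation settings are assumptions.

```python
import numpy as np
from scipy.signal import welch

def relative_band_power(eeg, fs, bands=((4, 10), (10, 20), (20, 30))):
    """Relative power (f1, f2, f3) in nonoverlapping frequency bands.

    The Welch settings (segment length) are illustrative assumptions.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    in_range = (freqs >= 4) & (freqs < 30)
    total = psd[in_range].sum()  # power over the full 4-30 Hz range
    # Each band's power divided by the total gives its relative power
    return [psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for lo, hi in bands]
```

A subject whose EEG power is concentrated in the 4-10 Hz band, as reported for AD above, would show f1 close to 1 and small f2 and f3.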
There are several important features to investigate first when analyzing the statistical and deterministic properties of biomedical signals: kurtosis, skewness, energy, entropy, line length/curve length, minima/maxima, activity (1st Hjorth parameter), mobility (2nd Hjorth parameter), complexity (3rd Hjorth parameter), root mean square amplitude, zero crossings, and relative power.
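To make the list above concrete, the following Python sketch computes several of these features for a single signal window. The Hjorth parameters follow their standard definitions; using discrete differences to approximate derivatives is an assumption.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def extract_features(x):
    """Illustrative implementations of several features listed above."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)    # first-difference approximation of the derivative
    ddx = np.diff(dx)  # second difference
    activity = np.var(x)                         # 1st Hjorth parameter
    mobility = np.sqrt(np.var(dx) / np.var(x))   # 2nd Hjorth parameter
    complexity = (np.sqrt(np.var(ddx) / np.var(dx))
                  / mobility)                    # 3rd Hjorth parameter
    return {
        "kurtosis": kurtosis(x),
        "skewness": skew(x),
        "energy": np.sum(x ** 2),
        "line_length": np.sum(np.abs(dx)),
        "rms": np.sqrt(np.mean(x ** 2)),
        "zero_crossings": int(np.sum(np.diff(np.sign(x)) != 0)),
        "activity": activity,
        "mobility": mobility,
        "complexity": complexity,
    }
```

Computed over short windows, these features form the feature vector that the connecting step correlates with the disease of interest.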
The features extracted from biomedical signals and correlated with an NCD are used as biomarkers.Once the biomarkers are tested rigorously, we can either train practitioners to identify them, or we can develop machine-learning algorithms to identify and report them automatically to the user.

Reliability
As Dijkstra said, "Simplicity is a prerequisite for reliability." Reliability goes hand-in-hand with simplicity: the simplicity step cannot be considered achieved unless reliability is also achieved, and the two must be established in conjunction.
After applying the simplicity, mining, and connecting steps, the accuracy of the developed solution needs to be verified. This assessment is needed to check if the algorithm/device meets current international standards. Moreover, it is important that the BSP-based POC solution performs as well as the current diagnostic tools, if not better.
The reliability of a BSP-based algorithm is mainly assessed using four distinct results: true positives, true negatives, false positives, and false negatives. Based on these four results, several statistical measures can be used to assess the reliability of simple algorithms, such as sensitivity, specificity, and positive predictivity.
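For instance, the three measures named above can be computed directly from the four counts; this is a generic sketch rather than code from any of the cited studies.

```python
def reliability_metrics(tp, tn, fp, fn):
    """Sensitivity (SE), specificity, and positive predictivity (+P)
    from the four outcome counts."""
    sensitivity = tp / (tp + fn)            # how many true events were found
    specificity = tn / (tn + fp)            # how many non-events were rejected
    positive_predictivity = tp / (tp + fp)  # how many detections were correct
    return sensitivity, specificity, positive_predictivity
```

For example, 90 true positives with 10 false negatives and 10 false positives give a sensitivity and a positive predictivity of 90%.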
Quality control needs to be implemented in order to detect and prevent errors before deployment of the BSP-based POC device. It is worth noting that BSP-based sensors typically undergo quality control and risk management reviews during the manufacturing process. Beyond this, device quality also needs to be assessed by the initial health care professional using the device, to ensure it is appropriate for the intended purpose before mass usage. Moreover, the systematic management of device quality and reliability needs to be maintained through institutional policies and procedures, user training, and regular evaluation of these components to ensure and maintain reliability and quality.

Affordability
The difference in health care quality between high-income and low-income countries primarily results from the lack of trained health care professionals, poor infrastructure, limited physical accessibility to health care, and the relative cost of health care delivery. As mobile broadband network penetration has reached 89% in LMICs, the use of mobile devices to collect biomedical signals has increased to address some of the NCD challenges. This increase allows for the easy conversion of mobile phones into BSP-based POC devices. Sensors, which are low-cost items, can be connected to these mobile devices to collect the needed biomedical signals.
Affordability also applies to high-income developed countries because they are increasingly facing shortages of funds and health care professionals.
Once an affordable BSP-based POC device has been implemented, it is important to continue exploring alternative affordable methods to ensure utilization of the most cost-efficient option. For example, instead of using ECG to detect heart rate variability, we could use photoplethysmogram (PPG) signals. We also can create an inexpensive solution, such as a digital stethoscope.
The use of BSP-based POC devices, including wearable sensors, as diagnostic tools is a feasible and affordable way to reach larger populations for improved health care outcomes. In developed countries, BSP-based POC devices are already being used by affluent populations (eg, heart rate monitoring using wearable watches and mobile phones). Most of the current apps focus on heart rate monitoring and the number of steps walked in a day; we can predict that the same technology can be modified and used to tackle more serious problems, such as diagnosis of NCDs. In rural/remote areas and developing countries, there is a paradigm shift towards using devices related to affordable global health care for preventative, diagnostic, and treatment purposes using smart and mobile technology. The proposed framework therefore blends well with this paradigm shift and seeks to help address current needs in rural/remote areas and in developing and developed countries with vulnerable populations [3].

Scalability
It is intuitive to think that simplicity ensures scalability for both algorithm development and BSP-based POC device use (ie, simpler algorithms require less processing time, and simpler devices are more likely to be used by nonspecialized individuals; therefore, scalability is a certainty). This assumption is partially correct and accounts for only part of the meaning of scalability here. For example, simple algorithms/devices can be patient-specific or environment-specific (ie, the simplicity of the algorithm can be applied to a specific patient subset in a specific environment). Scalability shifts disease diagnosis to the community level by developing simpler algorithms/devices that can be used outside a formal clinic setting and on different patient subsets.
Scalability ties the previous five elements of the framework together for the purpose of mass implementation of technology in the real world. Moreover, scalability must include a user-friendly approach with clear directive instructions. In developing algorithms/solutions, it is essential that they have the capacity to be used with different applications and devices. For instance, we need to provide algorithms that can work with different sampling frequencies and do not require adjustment for a particular sampling frequency, parameter, or condition. When reaching this final step of the framework, we begin to reap the benefits of knowledge sharing, having reached the point where we can impact global health outcomes meaningfully.
Successful scalability occurs when users with limited experience/knowledge are able to use the provided BSP-based mHealth technologies successfully in real time with minimal complications and in multiple environments (eg, in a clinical setting, offsite in remote areas, in a patient's home community, on-the-go in areas of need).

Case I: Detection of Pulmonary Arterial Hypertension Using Heart Sounds
Pulmonary arterial hypertension (PAH) is progressive and fatal [15]. Complicating other conditions, it is estimated to affect 100 million people worldwide [16,17]. PAH is difficult to diagnose because symptoms appear late in the disease, and signs in clinical examination are easily missed. Despite the remarkable advances in cardiac catheterization, which is the gold standard for measuring pulmonary artery pressure [18], there is a pressing need for alternative techniques to diagnose pulmonary hypertension noninvasively. Traditional stethoscope-based auscultation remains a valuable noninvasive tool for diagnosing PAH; however, physicians require years of training to become adept at diagnosing PAH. Auscultation lacks hemodynamic accuracy and is insufficient for monitoring the effects of therapy or indicating an abnormality. Although the clinical significance of heart sounds has been investigated thoroughly, there remains a lack of research focusing on the automatic detection of PAH in heart sounds. Prior to developing any automated algorithm, it is important to investigate the optimal features for detecting PAH. However, there have been few attempts to extract features from the heart sound in PAH subjects [19-22]. One complete normal heart sound cycle primarily consists of the first heart sound (S1) followed by the second heart sound (S2). The interval between the S1 and S2 is the systole, and the interval between the S2 and S1 is the diastole. The components of the S1 are M1 and T1, due to the closure of the mitral and tricuspid valves [23,24]. The second sound (S2) has two components (A2 and P2), due to the closure of the aortic and pulmonary valves [23,24]. It is well known that the A2-P2 interval increases during inspiration in PAH; meanwhile, during expiration, this interval decreases [24,25]. However, measuring the A2-P2 interval is not easy because of the relatively short duration of these components and their significant overlap with each other in the time domain [22]. The relative intensities of A2 and P2 in PAH have been well studied [19,26]; specifically, Sutton et al [19] found that the A2 was less than the P2 in all PAH subjects. However, this feature has not been validated statistically for developing algorithms to detect PAH.

Simplicity
Catheterization is the gold standard for diagnosing PAH; however, it is a very complex, risky, and costly operation, requiring a highly experienced clinician. The diagnosis process can be simplified by using the digital stethoscope (noninvasive) instead of catheterization (invasive).
In the past, clinicians have also used standard stethoscopes to diagnose PAH; however, this method faces limitations in detecting PAH. With current advances in science and technology, we now have digital stethoscopes that can be used to detect PAH by using machine-learning algorithms that support clinicians in their decision making.

Mining
It is necessary to filter heart sounds while preserving the waveforms of the S1 and S2 heart sounds [11]. In the literature, wavelet-based Shannon energy was the most-used method for emphasizing the S1 and S2 [11]. This method emphasizes medium-intensity signals and attenuates the effect of low-energy signals much more than that of high-intensity signals. Liang et al [27] first recommended the use of Shannon energy (Method I) after comparing its performance to the Shannon entropy, absolute value, and energy on heart sounds. Kumar et al [28] tried to improve Method I by introducing multiple wavelet coefficients and a duration-based threshold (Method II). Wang et al [29] found that Method I was sensitive to noise and heart murmurs, which easily leads to false segmentation. Therefore, they investigated different wavelet features and introduced a higher-order Shannon energy, specifically third order, to emphasize the S1 and S2 and to suppress the noise and murmurs (Method III).
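The Shannon-energy step itself can be sketched as follows; order=2 corresponds to the classic Shannon energy and order=3 to the higher-order variant. The normalization and smoothing-window length are assumptions, and the wavelet decomposition used in Methods I-III is omitted for brevity.

```python
import numpy as np

def shannon_energy_envelope(pcg, fs, order=2, win=0.02):
    """Shannon-energy envelope for emphasizing S1/S2 (a sketch).

    order=2 gives the classic Shannon energy -x^2 log(x^2);
    order=3 uses |x|^3 as in the higher-order variant.
    The smoothing window length (win, seconds) is an assumption.
    """
    x = pcg / np.max(np.abs(pcg))       # normalize to [-1, 1]
    mag = np.abs(x) ** order
    se = -mag * np.log(mag + 1e-12)     # Shannon energy (epsilon avoids log 0)
    w = int(win * fs)
    # Moving-average smoothing yields the envelope
    return np.convolve(se, np.ones(w) / w, mode="same")
```

Because the -m log m mapping peaks for medium magnitudes, the envelope emphasizes the medium-intensity S1/S2 deflections while attenuating low-level noise, as described above.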
Another mining method that can be used is based on the sequential wavelet analysis introduced by Zhong and Scalzo [30]. They developed algorithms based on the Daubechies db5 wavelet, not the db6 used in Methods I, II, and III.

Connecting
Features extracted from heart sounds, such as relative power and sinusoids, have been investigated recently, and the literature has reported on the correlations between them and the detection of PAH [31,32].
The algorithm in [32] used the entropy of the first sinusoid formant and achieved a sensitivity of 93% and a specificity of 92% on the same data used in [31]. The results of [32] were more reliable than the results in [31].
Note that the data in [31,32] were collected in clinical settings and analyzed offline on a laptop computer, not a mobile phone. Having said that, these two studies can be considered proof-of-concept studies for a BSP-based POC implementation.

Affordability
Digital stethoscopes can be quite expensive, costing at least US $450 per unit in developed countries, and the cost is even higher in LMICs. Making it even more costly is the additional expense of a computer to analyze the heart sounds recorded by the digital stethoscope.
Locally made digital stethoscopes are an inexpensive alternative that are manufactured with available parts. Designing an inexpensive digital stethoscope involves three components. The first component is the chest piece, which is placed on the skin to capture the heart sounds. The second component is the electret microphone, which records the heart sounds captured by the chest piece. The third component is the transmitter that sends the recorded heart sounds to a device where heart sounds can be visualized and played back for diagnosis. The following are three examples of inexpensive digital stethoscopes:

1. Hands-free kit, eggcup, and rubber O-ring. In 2010, Kuan [33] developed an inexpensive digital stethoscope that can be made for a maximum of US $40 in low volumes. The first component is a combination of a rubber gasket (US $1), a soup ladle (US $1), a plastic folder (US $1), and a hand towel (US $1). The second and third components (the electret microphone and the transmitter) are a hands-free headset (ranging from US $20-$35 per device), which captures the heart sounds and then transmits them via the headset cable to the mobile phone.

2. Traditional chest piece with hands-free kit. The authors of [34] developed a simple mobile stethoscope in 2015 that can be made for a maximum of US $50 in low volumes. The first component of this system is the chest piece of a traditional nondigital stethoscope (US $22). The second and third components (the electret microphone and the transmitter) are a hands-free headset (US $27) that captures and transmits heart sounds to a mobile phone.

3. Microphone with wireless kit. In 2012, Sangasoongsong et al [35] developed a wireless sensor platform that can be made for US $13 in large volumes. The first and second components of this digital stethoscope are the phonocardiography sensor and its wire (US $4). About 70% of the unit cost comes from the third component, which consists of a microprocessor (US $5) and a Zigbee transceiver chip (US $4).
Creating alternatives to the expensive digital stethoscope to capture heart sounds is a plausible and sustainable solution. Moreover, the development of these inexpensive digital stethoscopes utilizes already-existing parts, making them easily accessible and relatively more affordable.

Scalability
The digital stethoscope is widely used by clinicians all over the globe. One of the main advantages of the digital stethoscope is that it can be used inside and outside any clinical setting.
In developing a robust algorithm to detect PAH that accompanies the digital stethoscope, a graphical user interface is needed that allows clinicians with varying backgrounds and knowledge levels to interact with it easily.

Case II: Detection of Heat Stress in a Changing Climate
According to the Intergovernmental Panel on Climate Change, accelerated global warming will result from increasing anthropogenic greenhouse gas emissions. Global warming will manifest itself in higher mean ambient temperatures and an increased frequency and intensity of heatwaves [36]. Both factors increase the likelihood of heat stress, defined as the net heat load to which an individual is exposed. Warmer environments limit the gradient for body heat dissipation. Meanwhile, due to the physical nature of their tasks and resultant body heat production, workers in labor-intensive industries are a cohort susceptible to heat stress [37] due to the impact of global warming [38]. Sustained heat stress can result in heat illness, with potential for permanent harm and even death [39]. In this regard, worker heat illness is approximately 4-7 times more likely during heatwave periods [40], and worker injury claims are positively related to ambient temperature [41].
Given the heat-stress risk profile of labor-intensive workers, screening for heat tolerance [42] and physiological monitoring during work shifts could be part of the mitigation strategy [37]. The literature identifies two primary physiological heat stress indices during physical activity: body core temperature (BCT) and heart rate (HR). The standard site for BCT assessment is the rectum [43]; however, due to the invasive nature of this measurement, alternative sites have been assessed as BCT surrogates. Despite their ease of measurement, forehead, temporal, oral, aural, and axilla temperatures do not provide accurate indices of BCT during physical activity [44]. Therefore, BCT assessment remains problematic in most occupational settings.
Heart rate measurement, on the other hand, is less complex and provides insight into heat stress due to significantly higher HR during both seated rest [45,46] and standardized physical activity in hotter climates [47]. Higher HRs are attributed to the aforementioned narrow body heat dissipation gradient in hotter climates, with the resultant increase in BCT and skin temperature triggering augmented blood flow to the cutaneous circuit in order to permit greater heat dissipation [48]. In turn, stroke volume (the amount of blood pumped by the heart per beat) decreases and requires a compensatory HR increase in an attempt to maintain cardiac output [49]. Higher HRs in the heat also may reflect the perfusion of warmer blood on the sinoatrial node [50]. Part of the HR increase from rest values is achieved through a significant reduction in parasympathetic tone, reflected in heart rate variability (HRV) analysis as a reduced root mean square of successive differences (RMSSD). Hence, HRV indices have been proposed as objective indicators of heat stress [46]. Application of BCT-based and HRV-based heat stress indices can be problematic owing to the invasiveness of BCT assessment. Meanwhile, calculating HRV typically requires long, recorded electrocardiogram signals. Therefore, there remains a need for a simple, noninvasive, in-the-field method to assess heat stress. Such a method would allow monitoring of workers during their shifts to prevent heat stress symptoms and, ultimately, heat-related illnesses and deaths.
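The RMSSD index mentioned above is simple to compute from a series of interbeat (RR) intervals; the following is a generic sketch of the standard definition.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))  # successive differences
    return float(np.sqrt(np.mean(diffs ** 2)))
```

A drop in RMSSD between measurements would be consistent with the reduced parasympathetic tone described above, although clinical thresholds are outside the scope of this sketch.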

Simplicity
Diagnostic tests to predict individuals susceptible to heat stress include assessment of maximal aerobic power and/or heat tolerance in controlled settings [42,51]. These tests require specialized equipment and a controlled climate while inducing high levels of physiological strain. Physiological monitoring of BCT and HRV also can require specialized equipment and may suffer from invasiveness of measurement and complex analysis, respectively. Alternatively, PPG signal collection is a simple-to-measure and noninvasive test that can be conducted on a fingertip during scheduled breaks at work. Recent improvements in wearable sensor technology allow for the continuous measurement of PPG for the purpose of measuring HR. Information derived from the PPG signal can be analyzed to provide additional insight into physiological strain, heat stress, and autonomic arousal. While HRV standards of heat stress have yet to be developed, heat stress analysis techniques have been determined [52].

Mining
The bandpass filter is used as an essential mining step that preserves the saliency of the systolic and diastolic waves as well as the dicrotic notch. Researchers have recommended a zero-phase second-order Butterworth filter, with a 0.5-8 Hz passband, to remove the baseline wander and high frequencies [53].
Recent investigations to detect systolic peaks in PPG signals measured after exercise reflected challenges due to motion artifacts, sweat, and nonstationary effects [53]. Studies have examined several filters and algorithms to analyze the PPG wave contour; however, they continue to lack accuracy and reproducibility [54]. As a result of these challenges, researchers have started to apply the second derivative to emphasize and easily quantify the delicate changes in the PPG contour [55]. For these reasons, a second derivative is used to improve the mining and increase accuracy.
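Both steps, the zero-phase 0.5-8 Hz second-order Butterworth filter recommended in [53] and the second derivative that yields the acceleration photoplethysmogram, can be sketched in a few lines; the derivative scaling is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_ppg(ppg, fs):
    """Zero-phase 2nd-order Butterworth bandpass (0.5-8 Hz), as
    recommended in [53], followed by the second derivative."""
    b, a = butter(2, [0.5 / (fs / 2), 8 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ppg)           # zero-phase: no waveform delay
    apg = np.diff(filtered, n=2) * fs ** 2   # discrete second derivative
    return filtered, apg
```

Forward-backward filtering (filtfilt) keeps the systolic and diastolic waves in place rather than delaying them, which matters when fiducial-point timing carries the diagnostic information.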

Connecting
There has been a recent attempt to connect PPG features to the effect of heat stress while investigating global warming [52]. We have examined existing PPG features used in the literature to diagnose different diseases, such as the b/a index, the amplitude of the a wave, and the amplitude of the b wave in the acceleration photoplethysmogram (APG). Furthermore, we tested new features to determine the optimal PPG feature for heat stress detection, such as the energy of the aa area, the energy of the ab area, the energy of the ba area, and the slope of the ab segment. In total, we tested 14 time-domain features: seven extracted from the PPG signals and seven extracted from the APG signals [52].

Reliability
The algorithm in [52] used the combination of entropy and an HRV index and achieved a sensitivity of 95% and a positive predictivity of 90.48% on 40 healthy, heat-acclimatized emergency responders (30 males and 10 females) with a median age of 34 years. The participants were normotensive (mean systolic blood pressure of 129.3 mmHg, range 110-165 mmHg, although the upper end of this range exceeds the usual normotensive range) and had no known cardiovascular, neurological, or respiratory disease. The results were considered reliable and more robust than those of the existing method.
Note that the data in [52] were collected in Australia as part of the National Critical Care and Trauma Response Centre project to assess the physiological and perceptual responses of emergency responders to simulated chemical, biological, and radiological incidents in tropical environmental conditions and to compare the efficacy of various cooling methods.
The PPG signals were collected in a very noisy setting and were analyzed offline on a laptop computer, not a mobile phone. Given the challenging environment, the work is considered promising for BSP-based POC implementation.

Affordability
Current PPG devices in developing countries are quite costly at around US $300 per unit [56]. An affordable alternative solution is converting a mobile phone into a BSP-based POC device. The comparative cost for a mobile phone in some developing countries is approximately US $15, and the addition of the PPG sensor costs only approximately US $3 more. Clearly, the option of using the mobile phone is very promising in terms of affordability when compared to stand-alone PPG devices.

Scalability
In the developed world, the use of the PPG signal for anesthesia monitoring during surgery has been the standard of care for more than 20 years. The WHO is now leading the Global Pulse Oximetry Project, which aims to make the PPG component available in every operating room in the world [57].
Minimal knowledge is needed to use the device, as the PPG probe is very intuitive. The user simply places the clip on the patient's finger to collect the PPG signal. The software then shows the PPG signal in real time along with the automatic diagnosis.

Case III: Predict Adverse Outcomes Related to Hypertension and Preeclampsia
Preeclampsia (PE) is a disorder of pregnancy characterized by high blood pressure and proteinuria. It affects approximately 3-8% of all pregnancies worldwide and accounts for 18.5% of maternal deaths each year [58]. Although PE threatens the lives of pregnant women around the world, the burden is disproportionately felt in LMICs, where it is believed that 99% of the estimated 70,000-80,000 annual maternal and 500,000 annual perinatal PE-related deaths occur [59].

Simplicity
To diagnose preeclampsia, both hypertension and proteinuria must be present [60]. Therefore, two items are needed. First, a blood pressure cuff is needed to check whether blood pressure is ≥140 mmHg (systolic) or ≥90 mmHg (diastolic) after 20 weeks of gestation in a woman with previously normal blood pressure. The second component is a urine test to check whether proteinuria is ≥0.3 g of protein in a 24-hour urine collection [60]. These two tests are usually unavailable in developing countries; therefore, there is a need to improve (or replace) current PE diagnosis with a simpler method, such as one based on PPG signals.
Oxygen saturation (SpO2) is related to hypertension [61]; therefore, the use of a pulse oximeter can be a simpler way to improve the diagnosis and detection of PE and its related complications.
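The conventional two-test diagnostic rule from [60] can be expressed directly in code. The function below is a minimal sketch; the parameter names and the boolean structure are illustrative choices, not a clinical decision tool.

```python
def meets_pe_criteria(sbp_mmhg, dbp_mmhg, proteinuria_g_24h,
                      gestation_weeks, previously_normotensive=True):
    """Preeclampsia screen per [60]: hypertension (SBP >= 140 or
    DBP >= 90 mmHg after 20 weeks of gestation in a previously
    normotensive woman) AND proteinuria >= 0.3 g per 24-hour collection.
    Illustrative sketch only, not a clinical decision tool."""
    hypertension = (previously_normotensive
                    and gestation_weeks > 20
                    and (sbp_mmhg >= 140 or dbp_mmhg >= 90))
    return hypertension and proteinuria_g_24h >= 0.3
```

Encoding the rule this way makes explicit that both components must be present, which is exactly why a single-sensor surrogate such as SpO2 is attractive where either test is unavailable.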

Mining
This mining step is similar to the mining step discussed in Case II.

Connecting
Oxygen saturation (SpO2) is related to hypertension [61]; therefore, we use it to calculate the risk prediction index. The light transmitted from the light-emitting diode (LED) in the PPG probe can be detected on the same side (reflectance mode) or the opposite side (transmittance mode) of the tissue by a photodetector. The output from the photodetector is converted into a voltage and then further processed to produce the PPG [62]. The signal can be divided into a pulsatile (alternating current [ac]) component and a relatively constant (direct current [dc]) component. The SpO2 is then calculated using the ratios of the ac and dc components of the red and infrared PPG signals along with a calibration curve [63]. SpO2 values are computed every 10 seconds: the ac and dc PPG amplitudes are determined and substituted into the empirically calibrated equation [63,64].
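The ratio-of-ratios computation described above can be sketched as follows. The linear calibration SpO2 = 110 − 25R is a commonly quoted textbook approximation used here as a placeholder for the device-specific empirical curve [63,64], and the ac/dc extraction (peak-to-peak and mean per window) is deliberately simplified.

```python
import numpy as np

def spo2_estimates(red, infrared, fs, window_s=10):
    """Estimate SpO2 over successive windows via the ratio of ratios.

    ac is approximated by the peak-to-peak amplitude and dc by the mean
    of each window; SpO2 = 110 - 25*R stands in for the empirically
    calibrated curve [63,64].
    """
    n = int(fs * window_s)
    out = []
    for start in range(0, len(red) - n + 1, n):
        r = red[start:start + n]
        ir = infrared[start:start + n]
        ratio_red = np.ptp(r) / np.mean(r)    # ac/dc, red channel
        ratio_ir = np.ptp(ir) / np.mean(ir)   # ac/dc, infrared channel
        R = ratio_red / ratio_ir              # ratio of ratios
        out.append(110.0 - 25.0 * R)
    return out
```

In a deployed device the calibration curve would be determined empirically against a reference oximeter rather than assumed linear.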

Reliability
Recently, the use of SpO2 has been tested as part of the miniPIERS prediction model [65] in a proof-of-concept study including a cohort of 726 women (118 of whom had adverse pregnancy outcomes) in South Africa and Pakistan. These women were admitted to hospitals with suspected or confirmed PE (ie, with any hypertensive disorder of pregnancy). Interestingly, the preliminary results showed that adding oxygen saturation derived from the PPG signals improved prediction accuracy from 81% to 84% [61].
The PPG signals were collected using a mobile phone with real-time analysis capability; this can be viewed as a real-world practical implementation of BSP-based POC devices.

Affordability
The affordability step is similar to the affordability step discussed in Case II.

Scalability
The scalability step is similar to the scalability step discussed in Case II.

Case IV: Early Detection of Alzheimer's Disease
Alzheimer's disease (AD) is the most common form of dementia, eventually leading to death. AD is one of the most costly diseases worldwide; the health care cost associated with the disease is estimated to have been US $604 billion in 2010 [66,67]. As the world population ages, we truly face a looming global epidemic with AD. Epidemiological studies indicate that the number of AD cases will nearly double every 20 years, to 65.7 million in 2030 and 115.4 million in 2050, affecting 1 in 85 people globally [68]. With this in mind, it becomes clear that AD is a global problem with a dramatic impact on the health of the population. New approaches need to be considered in terms of prevention, diagnosis, and treatment.

Simplicity
Researchers have put forward EEG as a potential low-cost diagnostic tool for the early stages of AD. Compared to other systems, such as functional magnetic resonance imaging or positron emission tomography, EEG systems are simple, easy to use, and cost efficient.
Until now, most technological solutions addressing AD have focused on the satisfaction of a specific need, such as position tracking, memory and skill enhancement, and daily needs reminders [69,70]. Neural feedback may improve the user's (or patient's) ability to control brain activity, help with the diagnosis of medical conditions, and assist in the rehabilitation of neurological or psychiatric disorders. Several psychological and medical studies have confirmed that neurofeedback activity is enjoyable, stimulating, and potentially healing. Neurofeedback is generated from the EEG signals of AD patients and healthy subjects. The auditory and visual representations of AD EEG differ substantially from healthy EEGs, potentially yielding novel diagnostic tools. Moreover, such alternative representations of AD EEG are natural and intuitive, making them easily accessible to laypeople (AD patients and family members) while providing insight into the abnormal brainwaves associated with AD.
Researchers recently developed a simple neurofeedback methodology that uses real-time collection of EEG signals with a wireless EEG headset, specifically the Emotiv EPOC wireless headset, with a sampling frequency of 128 Hz. The headset has 14 data-collecting electrodes and two reference electrodes. The electrodes are placed at the 10-20 system locations AF3/4, F3/4, FC5/6, F7/8, T7/8, P7/8, and O1/2. The BCI2000 software package [71] was used to interface with the Emotiv EPOC wireless headset, which transmits encrypted data wirelessly to a laptop computer.

Mining
Low-cost EEG headsets, such as the Emotiv (14 electrodes) and OpenBCI (16 electrodes), were originally designed for entertainment purposes (eg, video games) [72]; however, these devices are prone to various artifacts, such as eye blinking, ECG, electromyogram (EMG), body movements, and power sources. These artifacts easily obscure the EEG signal and make analysis difficult. Recently, a study has been proposed that combines gyroscope data with EEG signals to optimally remove artifacts from EEG signals collected using wireless EEG headsets [73].
EEG signals are corrupted by noise and artifacts: 50/60 Hz powerline interference, motion and eye-blinking artifacts, EMG signals from muscles, and artifacts due to changes in the electrode-skin interface [74]. The gamma range (30-100 Hz) has a particularly low signal-to-noise ratio, so researchers exclude it from further analysis. Therefore, the frequency range of investigation is 4-30 Hz [75].
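A minimal numpy-only sketch of this mining step is a frequency-domain mask that keeps only the 4-30 Hz band, which simultaneously rejects the 50/60 Hz powerline interference and the noisy gamma range. This brick-wall masking is a crude illustration; real pipelines would use proper notch and band-pass filters plus dedicated artifact-removal methods.

```python
import numpy as np

def bandlimit_eeg(x, fs=128, low=4.0, high=30.0):
    """Keep only the 4-30 Hz band of an EEG epoch by zeroing all other
    frequency bins. A crude brick-wall sketch of the mining step;
    production pipelines use proper filter designs."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(freqs < low) | (freqs > high)] = 0.0   # zero out-of-band bins
    return np.fft.irfft(spec, n=len(x))
```

Note that brick-wall masking causes ringing at band edges, one reason practical systems prefer smooth filter responses.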

Connecting
The literature reports a strong relationship between the slowing of EEG and AD: researchers have consistently found that AD is associated with a frequency reduction in the power spectral density [12], along with reduced overall synchrony between EEG leads compared to healthy subjects [76]. The results presented in [12] demonstrate that relative power, specifically within the 4-10 Hz band, holds discriminative features to detect AD.
Connecting the relative power features with the classification of AD using sonification is proposed in [13]. The system computes the relative power features (f1, f2, f3) in three nonoverlapping frequency bands (4-10 Hz, 10-20 Hz, and 20-30 Hz). The EEG sonification system then generates melody notes from the computed values depending on whether they are above or below a predetermined threshold. To prove the concept, we used notes from only one octave (MIDI Octave -1) with the pentatonic scale (five notes per octave), and the study was limited to a single instrument (acoustic bass). It is possible to incorporate additional musical instruments and multiple octaves; however, the resulting sound easily becomes cacophonic and difficult to parse. In the future, there is a need to explore alternative schemes to generate music from EEG relative power and other EEG patterns in the time-frequency domain.
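The relative power features (f1, f2, f3) that drive the sonification can be sketched as below, using the band edges given in [13]; windowing, averaging across epochs, and artifact handling are omitted for brevity.

```python
import numpy as np

BANDS = [(4.0, 10.0), (10.0, 20.0), (20.0, 30.0)]  # Hz, as in [13]

def relative_band_powers(eeg, fs=128):
    """Return (f1, f2, f3): the power in each of the three nonoverlapping
    bands divided by the total 4-30 Hz power of the epoch."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    powers = [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS]
    total = sum(powers)
    return [p / total for p in powers]
```

Because AD is associated with EEG slowing, an elevated f1 relative to f2 and f3 is the kind of pattern the thresholding step would translate into distinct melody notes.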

Reliability
As a proof-of-concept study [77], two databases were used. One contained mild cognitive impairment (MCI) patients and healthy subjects (patient age 71.9, SD 10.2 years; healthy subject age 71.7, SD 8.3 years), and the other contained mild AD patients and healthy subjects (patient age 77.6, SD 10.0 years; healthy subject age 69.4, SD 11.5 years). The use of a single feature achieved a detection rate of 78.33% for the MCI dataset and 97.56% for the mild AD dataset. When multiple features were used, the detection rate improved: 11 features achieved 95% in the MCI dataset, and four features achieved 100% in the mild AD dataset. The results were very promising and were considered reliable.
The EEG signals were collected in a clinical setting; however, the data analysis was applied in real time using a portable laptop, not a mobile phone. This work is considered promising for BSP-based POC devices.

Affordability
EEG systems are relatively inexpensive, and with suitable signal processing they may become useful for research and clinical purposes. In developing countries, several EEG headsets are affordable, while the computer or phone accompanying the headset remains relatively costly.

Scalability
EEG systems are easy to use and are commonly utilized by neurologists worldwide. Minimal knowledge is needed to operate the EEG device.

Conclusion
Biomedical signal analysis and processing could play a major role in early detection of disease. With the current barriers to accessing health care in LMICs (eg, lack of resources, lack of funding, and environmental factors), advances in technology offer promise based on the proposed BSP-based SMCRAS framework. Moreover, there is a need to find inexpensive alternative tools that are diagnostic and noninvasive, relying on signal processing to reduce the occurrence of death, disease, and disability, particularly in developing countries. This paper proposes a new framework as a roadmap to biomedical signal analysis and implementation. The six key objectives of the proposed SMCRAS framework are simplicity, mining, connecting, reliability, affordability, and scalability. We have presented and discussed four case studies relevant to biomedical signal analysis and the application of the SMCRAS framework. The proposed framework represents a promising method when considering these six crucial objectives. It may increase our capability to develop BSP-based POC technologies that significantly impact mortality and morbidity rates, especially for those living in LMICs.

Figure 2 .
Figure 2. Example of mining heart sounds: (a) original heart sound signal from a subject with mean pulmonary arterial pressure of 20 mmHg; (b) mining using the second-order Shannon energy of the D5 wavelet; (c) mining using the second-order Shannon energy of the D6 wavelet; (d) mining using the third-order Shannon energy; (e) mining using wavelet approximation A6; (f) mining using two moving averages (black and purple dotted lines) to generate blocks of interest. Here, S1 refers to the first heart sound and S2 to the second heart sound.

Figure 3 .
Figure 3. Connecting the hidden relationship between frequency bands and early diagnosis of patients with Alzheimer's disease: (a) topoplot of EEG signals in a healthy subject; (b) topoplot of EEG signals in a patient with Alzheimer's disease (the color scale from blue to red represents the relative EEG power value from 0 to 1, respectively).