As the doctor-patient relationship continues to evolve, there is increased enthusiasm for the role of mHealth technologies (smartphone apps or wearable sensors that monitor/log health-related functions) in the promotion of precision medicine. But numerous challenges have also emerged in the collection, transmission, and storage of personal health data. This paper addresses the ethical, legal, and public policy implications of mHealth use in the just allocation of health care resources. First, it utilizes Beauchamp and Childress' four principles of medical ethics (autonomy, beneficence, nonmaleficence, and justice) to weigh ethical considerations; next, it reviews privacy issues (e.g. informed consent, use of personal health information), and marketing influences, all of which raise legal concerns; and, finally, it looks at practical implementation of mHealth data/analysis, discussing evidence of current and future impacts upon health policy.
The doctor-patient relationship has undergone a substantial shift since the penning of the Hippocratic Oath, most notably in its transition from a paternalistic decision-making model to a shared one. As the doctor-patient relationship continues to evolve, the medical profession must now determine how, and to what extent, it is prudent to incorporate mobile health technologies into healthcare. Mobile health (mHealth) technology refers to the use of smartphone apps or wearable sensors to monitor and log health-related functions and activities. Over the past decade, there has been increasing enthusiasm for the role of mHealth in promoting precision medicine, but numerous challenges have also emerged in the collection, transmission, and storage of personal health data. This paper addresses the ethical, legal, and public policy implications of mHealth use in the just allocation of health care resources. First, it utilizes Beauchamp and Childress' four principles of medical ethics (autonomy, beneficence, nonmaleficence, and justice) to weigh ethical considerations (Beauchamp & Childress, 2013); next, it reviews privacy issues (e.g. informed consent, use of personal health information) and marketing influences, all of which raise legal concerns; and, finally, it looks at practical implementation of mHealth data and analysis, discussing evidence of current and future impacts upon health policy.
In less than a decade, a steady stream of new smartphone applications and wearable mobile sensors has allowed users to monitor sleep, food intake, exercise, blood sugar, mood, and a host of other physiological states and behaviors. These technologies have helped advance a paradigm shift from total reliance on physician healthcare provision and responsibility to a more patient-engaged view that "My health is my responsibility, and I have the tools to manage it" (Sharon, 2017). Self-tracking for health is expected to play a key role in the move toward personalized healthcare, a model of preventive, targeted, patient-participatory healthcare. mHealth is also viewed as an avenue for ameliorating crises of public healthcare access and provision, not only in industrialized nations but in developing ones as well.
mHealth and telehealth have some overlap, but they are not the same; mHealth refers to mobile means of self-tracking (via apps or sensors), whereas telehealth refers to the broader use of technology for healthcare provision (e.g. a doctor's appointment conducted via video conference). There are thousands of health-related apps for smartphones and tablets that track information about food consumption, sleep patterns, blood chemistry, moods, menstrual cycles, heart rate, and stress levels, among other biological indicators. There are apps directed at managing various chronic conditions, including those for diabetes, Crohn's disease, asthma, heart conditions, and pain; they act like digital journals, allowing users to log symptoms, the effectiveness of treatment and medications, and reactions to different foods and environments (Sharon, 2017). Analysis is typically offered in graph format, so users can visualize trends, such as when asthma symptoms peak or what triggers high blood pressure. Mental well-being is another popular area for self-tracking; the Moodscope app, for example, allows users to score and track their daily moods and share these with a nominated buddy (Sharon, 2017). Some devices are designed to be worn directly on the body, automatically collecting data and wirelessly syncing it to the user's computer or smartphone. Currently, there are estimates of as many as 97,000 mobile health apps and 485 million wearable devices (Sharon, 2017).
Self-monitoring applications have great potential to aid and modify people's lifestyle habits and to encourage self-management of chronic conditions, uses that serve to increase patient autonomy. Interactive technological systems designed to change the behavior of their users are known as persuasive technologies. Utilizing strategies from behavioral psychology, "nudging" applications employ tactical choice patterns intended to facilitate or promote certain behaviors while individuals retain freedom of self-determination (Paldan, Sauer, & Wagner, 2018). Accessed via smartphones or smartwatches, these technologies are marketed as empowerment strategies for taking charge of one's own health. Nudging practices have long been utilized within the advertising and marketing industry, and while they are met with criticism by some in the healthcare field, they are heartily embraced by others.
The use of technology for good, or medical beneficence, has been demonstrated in multiple studies reporting outcomes ranging from improved self-management of illness to improved access to care. In particular, rural residents report improved health care access, as barriers such as transportation, or provider distance, are mitigated (Francis, 2017). One specific example of mHealth beneficence is offered by Larry Smarr, a member of the Quantified Self movement, who discovered he had Crohn's disease before his doctors, due to his extensive self-tracking. He maintains that one cannot necessarily trust how one "feels"; certainty lies only in what can be measured (Sharon, 2017).
Issues of nonmaleficence arise through potential harms incurred as a result of patient/consumer actions taken due to lack of understanding, erroneous interpretation of presented information, technology failures, or privacy and confidentiality breaches (Francis, 2017). (Privacy issues will be addressed in more detail in the section on legal concerns.) There is also potential for harm in patient over-reliance on app use in lieu of doctor visits or evidence-based treatments. Self-tracking exemplifies valuing "one's choices and the need to be responsible for them while, at the same time, relieving oneself of responsibility by delegating it to external technology" (Schüll, 2016).
Justice issues arise from the unequal opportunities that people from lower socioeconomic groups face in accessing and utilizing such technologies. People require the financial resources to buy these (often expensive) devices, the time to fit their use into daily routines often constrained by multiple jobs and tight schedules, and the knowledge and education to use these devices properly. Maintaining one's health through the adoption and maintenance of a healthy lifestyle requires financial, personal, environmental, and educational resources.
Individuals of higher socioeconomic means are therefore most likely to benefit from available "knowledge, efficacy, and resources in adopting innovative health-related behaviors" (Francis, 2017). The phenomenon referred to as the "digital divide" occurs when there is a divide in access to digital technology or information due to socioeconomic and demographic differences among groups. People age 65 and older, those who did not attend college, those living in households earning less than $30,000, and those in rural areas have less access to smartphones and thus to self-monitoring applications (Francis, 2017).
Health and computer literacy also factor into health-related information searches and self-monitoring behaviors. People of migrant background, as well as those with low levels of education, low economic means, chronic illness, or older age, have comparatively limited health literacy. These inequities serve to compound healthcare disparities through the "differential distribution of technologies that simultaneously enhance and impede literacy, motivation, and ability of different groups (and individuals) in the population" (Francis, 2017).
Corporate wellness programs offer lower health insurance premiums to employees based on activity measures (e.g. number of steps counted) or on physiological indicators of worker productivity, such as respiration or stress-level measures. Employers promote these programs as ways to optimize employee health, improve productivity, and increase morale through friendly competition, but critics argue that the line between voluntary and compulsory participation is not always clear-cut, and that those who do not participate (or succeed) in these initiatives are penalized via more expensive premiums. Financial incentives, such as lower health insurance premiums offered to those using self-monitoring applications, can result in financial discrimination against those who are unable to use such applications, thus exacerbating existing inequalities. Additionally, when health is construed as a choice, those who do not, or cannot, "choose" health-promoting activities risk scorn or stigmatization for added burdens placed on the public health system, and may incur penalties or the withholding of treatments based on lifestyle (Sharon, 2017).
In addition to ethical issues surrounding mHealth use, there are legal concerns that have emerged regarding trust, privacy, and sharing of health data outside of clinical settings. Studies have shown that "members of the general public expressed little concern" (Ostherr et al., 2017) about sharing health data with the companies that sold the devices or apps they used, indicating that they rarely read the "terms and conditions" they signed or agreed to detailing how their data may be used by the company or shared with third-party affiliates (Ostherr et al., 2017). In contrast, interviews with researchers revealed significant resistance among potential research participants in sharing their user-generated health data (data captured through devices or software, e.g. via wearable heart rate monitors, step-counters, and sleep trackers) for purposes of scientific study.
Data generated from wearable technologies and mobile apps may serve to uncover new indicators of health and illness outside of traditional clinical settings. On the flip side, there is also potential risk involved when data that emerges is subsequently shared, or utilized, in ways that may not serve user/participant best interests. Questions surrounding who benefits from big health data are further entangled with uncertainties about data ownership, sharing, trust, and privacy (Ostherr et al., 2017). Contemporary practices of health datafication within and outside clinical settings pose new "challenges to traditional understandings of agency and ownership of medical data" (Health Information Law Project, 2015; Ostherr et al., 2017).
Special privacy and confidentiality issues, such as those involving family planning services and treatment for sexually transmitted diseases, can raise important mHealth implications. Hospitals need to be sensitive to the issues occasioned by the link between a patient's electronic health record (EHR) and personal health record (PHR), such as what might be added to a PHR, how, and by whom (Petersen & DeMuro, 2015). How might social media sites interface with an individual's EHR or PHR? Although a topic of debate in the United States, it is generally accepted that "an individual's medical record is owned by the provider who retains the record, not the individual whose medical information resides in the record" (Petersen & DeMuro, 2015). As providers move to EHRs, information from an individual's record may reside in more disparate locations, but providers still take the position that they own a patient's medical record. This ownership, however, does not take all rights away from patients, as they are still covered by privacy protections included in the Health Insurance Portability and Accountability Act of 1996 (Petersen & DeMuro, 2015). Patient-generated health data (PGHD) is health-related data created, recorded, or gathered by patients (or their designees) from multiple sources, including patient registries, research networks, social media, remote sensors, smart wearable devices, and mHealth apps. Patients may believe that they are in control of their PGHD when, in fact, due to the multiple venues utilized, each with its own privacy and exclusion criteria, this may not be the case.
Medical contexts, or traditional clinical settings, are sites where formal doctor–patient interaction is governed by health law such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and by the U.S. Food and Drug Administration (FDA), which regulates the use of medical devices, including some digital health tools (Ostherr et al., 2017). When considering how user-generated health data travel through social and information networks, regulatory boundaries between the non-clinical and the clinical must be defined; specifically, boundaries between consumer-facing software applications and devices (which do not require FDA approval and are not governed by HIPAA) and clinical-facing apps and devices (regulated by the FDA and HIPAA). Although the proliferation of PGHD may seem a natural extension of consumer reliance on technology and online information-sharing, healthcare is not like other service industries; it is a unique realm composed of "histories, demands, and stakes that do not…apply to rideshare networks, real estate tourism, or romantic match-making services" (Ostherr et al., 2017). This problematic boundary-blurring manifests as the "net of surveillance" is extended: it begins with users' self-surveillance; users in turn invite peers to participate in monitoring practices (by sharing personal data on social media and other digital platforms); and the resulting data may then be used for other purposes (e.g. marketing, some of it potentially discriminatory). Marketing practices are already noticeable in the "gamification" of self-tracking, where self-trackers can compare their data against others'; intrusive surveillance practices are thus normalized, at the expense of unsuspecting participants having fun (Sharon, 2017).
Large corporations that manufacture self-tracking devices, such as Nike and Fitbit, transform the data their users generate into commercial value by sharing this information with various third parties. Some view these practices as free "digital labor," forms of unpaid labor that people carry out online under the guise of fun and leisure that end up being highly profitable for these corporations (Sharon, 2017). News about the amount of data shared or sold by health technology companies and by platforms like Facebook, the lack of transparency about some of these activities, and the possible malicious uses of these data by third parties have sparked a "techlash" reflecting public unease about many technologies central to mHealth research and clinical care (Schairer, Rubanovich, & Bloss, 2018).
Health-related device and software companies operating outside of hospitals, clinics, and other HIPAA-protected zones "face few restrictions on their exploitation of users' data" as consumers must agree to the "terms and conditions" to activate and use the app. In many cases, these lengthy, complexly written terms of use permit the parent company to sell users' health data to third parties, such as marketers, advertisers, and other types of data brokers (Ostherr et al., 2017). Paradoxically, some users are more willing to share their health data on an app than with their healthcare provider. This may result from the device's sociotechnical infrastructure; the social networking capacity that enables users to share their health data is often a key feature in product design and a central marketing component for many health-related apps on mobile devices. While users seem generally aware that consenting to a company's terms of use constitutes a legal contract, very few report reading those agreements before consenting to them; "research suggests that the concept of privacy itself is undergoing change in the public consciousness, and the legal system has not kept pace" (Ostherr et al., 2017).
The person who generates data controls it only until he or she posts that data on a social media site; once posted, social media sites take the position that the data is theirs to use as they wish (a position incorporated into the legalese of their consent agreements). Due to the many complications created by a social media site owning an individual's data, in 2012 the European Union (EU) introduced the "right to be forgotten," implemented by requirements that search engines, like Google, remove links to personal information at the request of the individual (Petersen & DeMuro, 2015).
Questions related to provider licensure pose another potential barrier to the routine use of mHealth. Many current US state laws require that a patient's radiograph be read by a physician licensed in the state where the patient is located and where the radiograph was taken (Petersen & DeMuro, 2015). If, for example, a patient residing in New York has an mHealth device that continues to monitor aspects of her health as she crosses three other states, does her physician need to be licensed in each of those states where data is transmitted? Some states issue a telemedicine license to facilitate practice across state lines when the physician holds an unrestricted license in another state (Petersen & DeMuro, 2015). But for practitioners without a telemedicine license, mHealth transmissions and readings across state lines, outside a provider's state(s) of licensure, can pose potential liability issues.
mHealth tools pose challenges for the process of obtaining meaningful informed consent from users. The risks are wide-ranging: insurance discrimination based on data from mHealth technologies integrated into workplace wellness programs; invasion of the privacy of family members and bystanders when data is collected in home environments; "compromising community safety (as in military presence recently revealed by the Strava app), and political manipulation through profiling based on health data" (Schairer et al., 2018). The unique obstacle for mHealth with respect to informed consent is that users, whether research participants, patients, or "life-loggers" (people digitally recording all aspects of their lives), are nearly always required to agree to the terms of use of the under-regulated commercial entities supplying mHealth devices and services (Schairer et al., 2018). Typical terms of use for commercially developed apps and devices include lengthy legalese covering the release or selling of personally identifiable data; such complicated terms of agreement do not comply with principles of informed consent (Schairer et al., 2018). In medical settings, institutional review boards (IRBs) require clear and explicit language stating risks, such as risks to privacy, and how patient confidentiality will be protected. But "there is a challenge in reconciling IRB-approved informed consent documents with the terms of use set forth by commercial entities" (Schairer et al., 2018).
Collaborative private and public initiatives are well underway; examples include one between Apple and leading research centers using the "ResearchKit" platform (which allows clinicians to develop apps for carrying out medical studies on iPhones), Google's "Baseline Study" in partnership with Duke and Stanford Universities, and the Institute for Systems Biology's "Hundred Person Wellness Project" (Sharon, 2017). Research scientists and policy-makers must exercise caution around assumptions of data neutrality. Algorithms and big data are not necessarily objective; study designs, meanings, and interpretations are in the hands of human beings, which means they cannot be completely free of bias and embedded value judgments. Ideals of wellness, health, and even happiness may be configured in ways that perpetuate normative stereotypes, influencing mHealth users to think about their own behaviors in accordance with predetermined (societal) standards, rather than evidence-based medical ones (Sharon, 2017).
In an attempt to facilitate research, recent changes to the Common Rule have expanded exemptions for informed consent. But expanding exemptions may become problematic for mHealth research as the public becomes more concerned about privacy issues related to consumer devices and apps (Schairer et al., 2018). Some have proposed opt-out consent in situations where "the potential benefits of mHealth research for collective health" may be deemed significant enough to outweigh individual autonomy (Schairer et al., 2018), or the establishment of a "health data commons," a system of governance that would allow individuals to lend their health data to research as part of a collective that would set its own terms for data use (wherein participants negotiate the terms of consent through democratic means) (Schairer et al., 2018). All these approaches are ways to reinvent informed consent, which raises the question: Should we be experimenting with informed consent (for medical and research purposes)? And, if so, how do we define vulnerable populations in the context of mHealth users?
Public policy supports greater use of PGHD (Petersen & DeMuro, 2015). The Office of the National Coordinator for Health Information Technology has indicated that bringing patient-reported data into certified electronic health records is a high priority and expected to "stimulate greater patient engagement in stage 3 of the meaningful use electronic health record incentive program" (Petersen & DeMuro, 2015). The FDA has also acknowledged the need to use patient-generated information in pharmacovigilance. The growing use of self-tracking devices means that a significant amount of data can be generated beyond the clinic by patients themselves, thus "reconfiguring them as knowledge producers, not just knowledge recipients" (Sharon, 2017). In this way, citizens and patients are effectively helping to improve population health, by actively contributing to medical decision-making and research in ways that were previously inconceivable (Sharon, 2017).
Federal agencies such as the Agency for Healthcare Research and Quality, the National Institutes of Health, and the Department of Health and Human Services Office of Minority Health fund grants for research in mHealth. It is not yet clear which agencies will ultimately have oversight of specific components of this evolving field, nor is the eventual extent of US regulation and guidance known. The complexity of the issues, along with legislation pertaining to various aspects of the mHealth continuum, highlights an increasing need to adequately sort out these regulations (Bloomrosen, 2014). There are also evolving tensions between government regulators and the private-sector business community in balancing the need for patient safety with the desire to promote technological innovation.
As the field of medicine embraces big data, a "truism has taken hold: more data equals more knowledge equals better health outcomes" (Ostherr et al., 2017). But if more data consists of erroneous data, then more is not better, whether for research, treatment, or health outcome purposes. Are data generated from mHealth technologies as reliable as data obtained through gold-standard equipment and technologies? Studies conducted so far have yielded mixed results. mHealth app accuracy can be negatively impacted by patient movements and positional variability, differences in smartphone technologies, variations in app software (e.g. algorithms), and environmental effects (e.g. uncontrolled ambient lighting) (Li et al., 2019). One study comparing iPhone app-based heart rate measurements with in-hospital electrocardiograms and pulse oximetry measurements showed that only one app measured heart rate with accuracy comparable to pulse oximetry, while three other apps did not perform as well (Coppetti et al., 2017). It was also found that contact (fingertip-based) PPG apps performed better than non-contact PPG apps (PPG refers to photoplethysmography, a photoelectric technique that uses smartphone camera technology to detect changes in blood volume (Elgendi et al., 2019)), with "differences of 20 bpm or more in 20%" (Coppetti et al., 2017) of non-contact app transmissions. Another study, comparing the diagnostic accuracy of a smartphone PPG app for atrial fibrillation (AF) against the gold-standard 12-lead ECG, found that the FibriCheck app accurately detected AF in 192 of 196 subjects, a 98% agreement (Proesmans et al., 2019). Discrepancies in PPG app data serve as an important reminder that people should not be overly reliant on such apps as standalone monitoring devices. The more serious the medical condition being monitored by app or remote sensor, the greater the need for clinical and empirical substantiation that the data and transmissions are equivalent to those of gold-standard medical equipment.
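The agreement metrics reported in these validation studies (mean error against a gold standard, and the fraction of readings off by 20 bpm or more) can be sketched in a few lines of code. The following is a minimal illustration using entirely hypothetical readings, not data from the cited studies:

```python
# Minimal sketch (hypothetical data): quantifying agreement between
# smartphone-app heart-rate readings and paired gold-standard readings,
# in the spirit of the validation studies discussed above.

def agreement_stats(app_bpm, reference_bpm, threshold=20):
    """Return (mean absolute error, fraction of readings differing by >= threshold bpm)."""
    diffs = [abs(a - r) for a, r in zip(app_bpm, reference_bpm)]
    mae = sum(diffs) / len(diffs)
    frac_large = sum(d >= threshold for d in diffs) / len(diffs)
    return mae, frac_large

# Hypothetical paired measurements (beats per minute)
app = [72, 80, 65, 110, 95]
ref = [70, 78, 88, 108, 96]

mae, frac = agreement_stats(app, ref)
print(f"MAE: {mae:.1f} bpm; readings off by >=20 bpm: {frac:.0%}")
# prints "MAE: 6.0 bpm; readings off by >=20 bpm: 20%"
```

A clinical validation would of course require simultaneous measurements under controlled conditions and formal agreement analysis (e.g. Bland–Altman limits), but even this simple tally shows how a single large discrepancy can matter more than an acceptable average error.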
In-depth examination and synthesis of what works and what does not will require rigorous, ongoing assessment. While there is a growing literature documenting the promise of mHealth, the current evidence base is not yet sufficient to adequately inform public policy. Due to the evolving, overlapping, and increasingly blurred boundaries of the healthcare, information technology, and telecommunications industries, "public policy challenges especially those related to cybersecurity, privacy, and standards and interoperability must be considered" (Bloomrosen, 2014). There are some encouraging mHealth results from a report by the MEASURE Evaluation project, a project funded by the U.S. Agency for International Development (USAID) that works to strengthen health information systems in developing countries, which indicate that "mobile technologies have helped to improve training and service quality of healthcare workers; lower the cost of services by reducing redundancy and duplication; and enhance access to reliable data to facilitate decision making" (MEASURE Evaluation of mHealth, 2018).
Utilizing mHealth technologies for the just allocation of healthcare, by reaching underserved populations, is certainly an exciting prospect, but it is still too early to tell whether this technology will yield the improvements projected. Privacy and data-use issues need to be addressed from a legal perspective, and technology accessibility and literacy levels from a socioeconomic perspective; in addition, the MEASURE Evaluation report notes that cultural sensitivity issues (e.g. distrust of medical practices) carry over from in-person medical relationships to technology-enabled ones. Efforts are being made to foster increased trust in, and receptivity to, mHealth technology use through local churches and community centers. In one such collaborative effort in Minnesota, academic medical hospitals are working closely with five predominately African-American churches to recruit congregants into a pilot study investigating mHealth technology use in the promotion of cardiovascular health (Brewer et al., 2018).
mHealth researchers and policy developers must be cognizant that most apps are developed for the general population, and not focused on the health needs of underserved communities (Anderson-Lewis, Darville, Mercado, Howell, & Di Maggio, 2018). As a result, research regarding the use of mHealth interventions for the populations that may need it the most remains sparse. Positive results are reported for the text4baby program dedicated to improving maternal and child health outcomes among low-income, minority, and underserved women. The program applies "culturally tailored health messages and uses in depth strategies to survey and identify the optimal methods for delivery of service and care among the target population" (Anderson-Lewis et al., 2018). Such lessons of cultural tailoring are important for mHealth projects currently underway.
Mobile phones and other portable health information technologies offer unprecedented opportunities to improve the health of the U.S. population and reach traditionally underserved populations. As this paper demonstrates, there remain numerous ethical, legal, and public policy challenges yet to be adequately addressed. But there are many reasons to remain optimistic about the potential for mHealth to improve healthcare access across demographics (socioeconomic, ethnic, cultural, age, gender) and terrains (urban, remote, resource- or access-poor). mHealth technology is indeed well positioned to help bridge some of the gap in healthcare provision; in so doing, it can improve equity and the just allocation of health care resources across the United States, and globally.
Harvard Medical Student Review Issue 5 | January 2020