Personalised Mental Health Care in the Digital Age

Good Data Initiative
12 min read · Aug 11, 2021


We live in a digitised world.

The year is 2021. Digitalisation has a tremendous impact on how we live and interact with others in our daily lives. At their best, digital innovations herald the promise of a better future by offering alternative solutions to everyday problems. By optimising time and resources, such innovations make everyday living more efficient — think of the worlds opened up by your smartphone that would not have been accessible prior to the mid-2000s.

Healthcare specifically has benefited enormously from digitalisation. As one illustrative example: there now exist wearable medical devices that provide a comfortable and non-invasive way to track glucose levels via sweat. These devices measure and transmit your glucose levels to a smartphone app, creating a valuable tracking resource for users who are managing diabetes. Integrating digital solutions such as these with healthcare both boosts convenience and enables greater personalisation of patient care.

Common mental health disorders, such as depression and anxiety, are the first and sixth largest contributors (respectively) to global disability rates.¹ The resulting demand for hospital resources currently far outstrips supply: in Europe alone there is a notable shortage of healthcare personnel, with an average of only 19 psychiatrists per 100,000 inhabitants. Consequently, one in four patients has to wait three months or more for their second appointment, and in extreme cases, up to four years. When the delivery of mental disorder treatment remains inadequate over long periods, patients may opt out of the service entirely, leading to increased instances of self-harm and suicide. This raises the question:

Can the digitalisation of hospital resources remedy the discrepancy between supply and demand for mental health care?

One proposed solution being explored in greater depth by healthcare experts is that of using smartphone apps targeting distressing mental health symptoms (such as sleep issues, anxiety, and low moods) to reach potential patients² faster and more consistently.

Since using smartphone apps is now commonplace, applications providing mental healthcare services can easily be incorporated into one’s daily routine. Such mobile applications can also improve the reach of healthcare service delivery to geographically remote areas where such services are not as readily accessible.³ Long waits to receive treatment are also further minimised since these self-help tools⁴ deliver care on demand for patients. This helps to reserve available hospital services for emergencies, as routine non-emergency mental health maintenance care can be done by the patient through their app. Finally, since data on patient mood and sleeping behaviour is tracked systematically through these apps, they also create a useful reference for the clinician during diagnosis to supplement patients’ self-reported symptoms.

Sharing Patient Information through Apps

Although digital apps can augment the way diagnosis is performed and tracked, we must also note that there are significant pitfalls to consider. For instance, as sensitive patient health information is being uploaded to these apps, proper regulations and channels for data storing and processing must be enforced to respect patient privacy. Unfortunately, current support is marred by opaque data privacy practices⁵ which in turn have hindered the widespread use of these apps as a potentially powerful healthcare aid.

Moreover, since patients generally hold high trust and regard for healthcare providers and their services, it is vital to ensure patient safety is prioritised. A lack of clear privacy policies, buried in verbose legal documents, prevents patients from making informed choices about what happens to their data. To understand the extent of this, it is worth noting that some mental health-focused apps have even been found to sell patient information to third parties (including Facebook and Google) without individual users’ knowledge. Although the sold information was in most cases specifically tied to a person’s digital behaviour,⁶ in a few instances medically sensitive information was also revealed.

While shared patient information can be de-identified to protect an individual’s identity, current protection measures are still not robustly secure and data may be re-identified with minimal effort. For example, a 2019 study published in Nature Communications created a model that could correctly re-identify 99.98% of Americans in any data set using only 15 demographic attributes. Given the sensitivity of medical data, a breach of a mental health care app can create further distress for users and worsen an existing mental health condition.
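As a toy illustration of how such re-identification (a so-called linkage attack) can work — all names, records, and field choices below are hypothetical, not drawn from the study above — a handful of quasi-identifiers can be enough to match a “de-identified” health record back to a named individual in a public dataset:

```python
# Illustrative linkage attack (hypothetical data): a "de-identified" health
# record can often be re-identified by joining it with a public dataset on
# shared quasi-identifiers such as postcode, birth year, and sex.

deidentified_health_records = [
    {"zip": "02138", "birth_year": 1985, "sex": "F", "diagnosis": "anxiety"},
    {"zip": "02139", "birth_year": 1990, "sex": "M", "diagnosis": "depression"},
]

# A public dataset (e.g. a voter roll) that does include names.
public_records = [
    {"name": "Alice", "zip": "02138", "birth_year": 1985, "sex": "F"},
    {"name": "Bob", "zip": "02139", "birth_year": 1990, "sex": "M"},
    {"name": "Carol", "zip": "02139", "birth_year": 1972, "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(health_rows, public_rows):
    """Return (name, diagnosis) pairs where the quasi-identifiers match
    exactly one person in the public dataset."""
    matches = []
    for h in health_rows:
        candidates = [p for p in public_rows
                      if all(p[k] == h[k] for k in QUASI_IDENTIFIERS)]
        if len(candidates) == 1:  # a unique match means re-identification
            matches.append((candidates[0]["name"], h["diagnosis"]))
    return matches

print(reidentify(deidentified_health_records, public_records))
# -> [('Alice', 'anxiety'), ('Bob', 'depression')]
```

With only three attributes and three people, both “anonymous” records resolve to a unique named individual — which is precisely why studies find that a dozen or so attributes suffice at national scale.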

Further, the simple fact that a smartphone user has a mental health app (and by extension, is likely to have a mental health condition they are addressing) could be leveraged by employers or insurance agencies to deny that individual services and limit the opportunities they are offered. Health-based discrimination, including and extending to genetic-based discrimination, is an area of serious ethical and legal concern around the world and a space where legislative protections are rushing to keep up with technological advances.

Making this situation even more dire is the fact that many people simply don’t know what happens to their personal data. A 2017 study by Deloitte found that 91% of users (out of 2,000 participants) accepted legal terms and conditions for an app without reading them, with that number climbing as high as 97% among users aged 18 to 34.

Further academic studies have backed up these findings, including a 2018 experiment by Obar and Oeldorf-Hirsch that found approximately 98% of 543 participants missed fake clauses included in a fictitious app’s terms that permitted “data sharing with the NSA and employers… and providing a first-born child as payment for SNS [Social Network Service] access.” In the context of sensitive health care information, it is clear that legal regulation is necessary as a protective guiding framework, and that apps like those providing mental health care services need to clearly state exactly what information is collected, whom it is shared with, and — if shared with third parties — how it is utilised.

Patient Privacy and Data Protection

To address the first of these concerns, data protection laws need to be in place to protect patient privacy. The federal Health Insurance Portability and Accountability Act of 1996 (HIPAA)⁷ in the USA, General Data Protection Regulation (GDPR) in the European Union, and the California Consumer Privacy Act (CCPA) in California, USA, are a few examples of existing frameworks used to protect consumer privacy and that dictate which entities can control and access users’ data.

As mentioned previously, though, data can be re-identified with increasing ease. If a patient uploads data onto a platform not encompassed within the prescribed ‘entities’, no data protection is offered. In this regard, the EU’s GDPR provides more robust data protection, as it protects the data itself and defines entities by their relationship to the data. Stipulated rules alone do not chart a course for app development, but additional studies have examined best practices for how app developers can build privacy policy-compliant mobile apps.⁸

In this discussion, data ownership remains a crucial point regarding data accessibility and usage. Within the medical community, it is widely believed that the patient — the source of the medical data — should undoubtedly hold the foremost right. Sole ownership, however, also poses restrictions, as constant interruptions for permission requests can hamper effective treatment.

One possible remedy to this could be extending ownership rights to a given medical facility appointed with treating the patient. When it comes to using third parties for handling or processing patient data, seeking permission or (at the least) informing the patient of the underlying process in a manner they can understand is warranted. Additionally, secure channels should be in place for data sharing and processing, and patient information must be de-identified using higher levels of encryption than are currently practised. Blockchains offer great promise as a valuable tool in ensuring secure data storage and transfer — and more so than traditional forms of electronic health records (EHRs), or even the faxing of patients’ physical health records (a practice still in use today).
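As a minimal sketch of one such de-identification step — the key, identifier format, and field names below are hypothetical illustrations, not a production design — keyed hashing can replace patient identifiers with pseudonyms that remain linkable across records but cannot be reversed without a key held by the medical facility:

```python
import hmac
import hashlib

# Hypothetical secret key held only by the treating medical facility.
# Without it, pseudonyms cannot be linked back to patient identifiers.
SECRET_KEY = b"facility-held-secret-key"

def pseudonymise(patient_id: str) -> str:
    """Replace a patient identifier with a keyed HMAC-SHA256 pseudonym.

    The same patient always maps to the same pseudonym, so their records
    can still be linked longitudinally, but a third party processing the
    data cannot recover the original identifier without the key.
    """
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical app record before sharing with a third-party processor.
record = {"patient_id": "NHS-1234567", "mood_score": 4, "sleep_hours": 6.5}
shared_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
```

Note that, as the linkage-attack example above shows, pseudonymising the identifier alone is not sufficient: the remaining quasi-identifiers must also be generalised or suppressed before sharing.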

Digitising Hospital Resources: The Case of Mental Health Apps

Even though — theoretically — these privacy and regulation considerations rightly establish that patient ownership of data and transparency in data processing are essential to prevent data misuse, in practice this is not enough. Mental health patients can rarely be expected to make a fully informed choice while they are seeking treatment, even with increased levels of data use transparency. Moreover, expecting people in need of care to sift through an arsenal of apps and their associated privacy policies before selecting a contender is unrealistic.

Patients seeking mental health care need access to simple and effective solutions and services with minimal barriers, wherein they (as consumers) need only provide the bare minimum of input necessary to receive appropriate treatment.

To this end, credible marks signalling mental health apps’ data use standards could simplify their search. Such credible marks might assure potential patients that required confidentiality measures and acceptable levels of efficacy are guaranteed by the app — simply put, that the app is trustworthy. The current research landscape, however, precludes the deployment of credibility marks for mental health apps. Diagnosis and therapeutic strategies are complicated by cross-disorder symptom overlap, unclear molecular targets, and disease heterogeneity (i.e., when the same disease has multiple underlying causes).

Despite these limitations, multiple treatment options are available to target the known symptoms of mental health concerns, including mood-stabilising drugs and talk therapy to understand the origins of these moods. With a multitude of diagnostic scales to choose from — yet no clear-cut choice to measure improvements in symptoms — the field could benefit from future collaborations between academic research and digital platform providers. Together, these collaborations could re-design frameworks for measuring health care app efficacy while also enabling app recommendations to users.

The validity and applicability of these mental health apps are, of course, based on the assumption that decent mobile networks, data protection and hospital records are in place. There are still areas globally where mental health remains a taboo subject and hospitals, if present, lack adequate care facilities. In such scenarios, these apps could provide users with a safe self-help tool to diagnose, assure, and offer advice to alleviate their symptoms — arguably a better option than offering no help at all. Finally, these apps are not yet platforms that should stipulate a treatment plan (for which in-person care is instead advisable). But for such facilities to be available and the full, positive power of digitalisation explored, increased government funding paired with consistent political support for mental health assistance are both necessary foundations.

Mental Health Apps: Looking Forward

Despite these limitations, evidence-based mental health apps developed by medical professionals are arguably an important and valuable self-help tool for addressing distressing mental health symptoms. Their scope as an aid can be further enhanced if used alongside hospital care.

Doing this will require the integration of hospital records with app data. The personalised nature of mental health apps, although excellent, can still be hard to sustain alone. Linking these services with hospitals could allow practitioners to track and even check in with their patients, as we have seen occur with virtual consultations throughout the COVID-19 pandemic. When app data is combined with EHRs, clinicians gain access to a wealth of information that can better inform patients’ diagnoses.

Linking mental health apps with hospitals ensures hospital control and the application of relevant regulatory procedures over the patient’s medical data — extra measures towards ensuring safe data handling. Thinking even more broadly, the creation of global patient IDs could help in this regard by facilitating the integration of patient data across multiple platforms. Within the UK, several mental health apps are already recommended by the country’s NHS (National Health Service). It is not unthinkable that this practice might be expanded globally.

Incorporating AI further expands the potential use of these mental health apps for patient diagnosis. AI built into mental health apps could analyse input data to monitor patients’ moods and, if self-harm is deemed likely, recommend a formal diagnosis or necessary interventions through the patient’s physician. A range of interventions is possible.

Where an individual’s primary care service is linked to the app, appropriate authorities can also be alerted in situations of serious risk. At a minimum, the app could provide recommendations such as meditation or another calming activity if a patient is flagged as needing a milder intervention. For example, Ada Health, a global health company focused on end-user self-assessment apps, has designed a free app to do exactly this. With almost 10 million users, the app uses AI to support patient health, assess reported symptoms, and suggest potential diagnoses and treatment measures the user can seek.
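Such a tiered flow could be sketched as follows (a toy illustration with made-up thresholds and routing labels, emphatically not a clinical algorithm):

```python
# Toy triage sketch (hypothetical thresholds): route a patient's recent
# self-reported mood scores (0 = very low, 10 = very good) to one of
# three tiers of intervention described in the text.

def triage(mood_scores, low_threshold=3, crisis_threshold=1):
    """Return an intervention tier for a window of recent mood scores."""
    average = sum(mood_scores) / len(mood_scores)
    if min(mood_scores) <= crisis_threshold:
        return "alert_care_team"    # serious risk: notify linked clinician
    if average <= low_threshold:
        return "suggest_check_in"   # persistent low mood: prompt appointment
    return "suggest_self_help"      # mild dip: recommend e.g. meditation

print(triage([6, 5, 7]))  # -> suggest_self_help
print(triage([3, 2, 3]))  # -> suggest_check_in
print(triage([5, 1, 6]))  # -> alert_care_team
```

In a real system the thresholds, the scoring instrument, and the escalation path would of course need clinical validation and regulatory sign-off; the point here is only the shape of the decision logic.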

Digitalisation has significantly impacted and will continue to re-shape our daily lives, with healthcare being no exception. Despite reasonable data-related concerns that must first be addressed, professionally developed mental health apps offer a growing resource of self-help and collaborative tools that allow people to take ownership of their health whilst relieving pressure on hospital services, allowing those services to be reserved for more immediate or critical care. To this end, however, the most rigorous safety measures absolutely must be in place to prevent data misuse and safeguard patient information while still maximising these apps’ potential benefits.

TL;DR

  • Digitalisation has greatly facilitated the advancement of many fields, including healthcare. In this blog post, we take a closer look at the use of mobile phone applications in healthcare delivery, with particular emphasis on mental health.
  • Using mobile phones for the delivery of mental health care services can enable more people to get the help they are seeking, and to access care sooner, avoiding the often unreasonably long wait times encountered when seeking help through national health care systems.
  • The use of such apps in healthcare provides opportunities for furthering research, particularly if combined with patients’ Electronic Health Data.
  • However: before the usage of such applications can become widespread, stricter measures must be put into place both to protect patient privacy and to improve transparency of how user data is collected and used.

About the Author: Chaitanya Erady

Chaitanya Erady is a PhD student at the University of Cambridge’s Department of Psychiatry, and is currently investigating the genomic basis for neuropsychiatric disorders. She is passionate about how health-related data is handled, and patient privacy protected, as upcoming technologies revolutionize the healthcare industry. Chaitanya has published peer-reviewed research in journals including npj Genomic Medicine. She is a senior analyst at GDI, and is currently working with GDI’s Healthcare cluster team on an upcoming report on digital healthcare in Africa (Winter 2022).

Additional Notes

[1] For additional sources regarding the scope and scale of depression alone: Dr. Alex J. Mitchell has written a compelling commentary piece detailing why depression is difficult to diagnose, while Greenberg et al. (2015) have published a peer-reviewed study valuing the overall economic burden of major depressive disorder (MDD) in US-based adults at approx. USD $210.5 billion.

[2] These include both those already admitted into care and those in need of, but not yet receiving, care.

[3] This largely depends on the specific geographic region and accompanying infrastructure available. For example, in many regions of Sub-Saharan Africa, mobile penetration is quite good (as also discussed in GDI’s April 2021 report, Digital Finance in Africa) yet many healthcare facilities remain in need of support and further development.

[4] An issue frequently raised within this space is that of a general lack of medical standards for these types of health apps, which would allow medical professionals to vouch for their safety and efficacy. Although an important distinction, the line separating general self-help vs. professionally-vetted healthcare apps unfortunately remains quite blurred.

[5] Referring to how important details are often either not conveyed, or are lost in lengthy terms and conditions that must be accepted prior to app usage.

[6] Digital behaviour here can refer to (but is not exclusive to) the types of apps a user has on their phone as well as things they frequently search for. By tracking a person’s digital behavior, organisations can potentially personalise ads — which, in the case of healthcare, can sway patient decisions beyond the patient/personal healthcare provider relationship.

This personalisation technique is known as microtargeting and is the topic of an upcoming GDI Assembly event in September 2021. To find out more (including how you can participate), visit the GDI Assembly page.

[7] Interestingly, HIPAA protects data covered by entities such as healthcare providers and their business associates but not the data itself, nor de-identified data.

[8] Similarly, setting up subscription services to finance mental health-focused mobile apps could offer an alternative, commercially viable route to curtail trafficking of sensitive patient information. However, such practices raise concerns around accessibility for lower-income individuals living in countries where healthcare is neither free nor universally available.


Good Data Initiative

Think tank led by students from the Univ. of Cambridge. Building the leading platform for intergenerational and interdisciplinary debate on the #dataeconomy