Dædalus

An open access publication of the American Academy of Arts & Sciences
Fall 2023

Can Mental Health Care Become More Human by Becoming More Digital?

Authors
Isaac R. Galatzer-Levy, Gabriel J. Aranovich, and Thomas R. Insel
Abstract

Over the past two decades, advances in digital technologies have begun to transform three aspects of mental health care. The use of sensors and artificial intelligence (AI) has provided new, objective measures of how we think, feel, and behave. The ease of connecting and communicating remotely has transformed the brick-and-mortar practice of mental health care into a telehealth service, increasing access and convenience for both patients and providers. And the advent of digital therapeutics, from virtual reality for treating phobias to conversational agents for delivering structured therapies, promises to alter how treatments will be delivered in the future. These digital transformations can help to solve many of the key challenges facing mental health care, including access, quality, and accountability. But digital technology introduces a new set of challenges around trust, privacy, and equity. Despite high levels of investment and promotion, there remain profound questions about the efficacy and safety of digital mental health technologies. We share our experiences from the front lines of creating digital innovations for mental health, with a focus on what a digital transformation of care could deliver for millions with a serious mental illness.

Isaac R. Galatzer-Levy is Adjunct Assistant Professor in the Department of Psychiatry at the NYU Grossman School of Medicine and a Senior Staff Research Scientist at Google. He has recently published in such journals as JAMA Psychiatry, General Hospital Psychiatry, and Annual Review of Clinical Psychology.

Gabriel J. Aranovich is a practicing psychiatrist and the Clinical Expert for the Health Tech Program at Cornell Tech. He has recently published in such journals as Frontiers in Digital Health, Journal of Psychiatric Research, and Forum for Health Economics and Policy.

Thomas R. Insel is the Co-Founder and Executive Chair of Vanna Health, an organization focused on serving the needs of people with serious mental illness. He previously served as Director of the National Institute of Mental Health. He is the author of Healing: Our Path from Mental Illness to Mental Health (2022) and recently launched MindSite News, a nonprofit digital publication focused on mental health issues.

Anna was a high school history teacher arrested while buying heroin late one night in a rough part of town, not far from the school where she had been teaching. After a very difficult night of unrelenting withdrawal symptoms in a holding cell under the court, she was finally arraigned before the judge.

I felt so ashamed and disgusted. I was standing before a judge, trying my best to look put-together at 9:30 am on no sleep and serious dope sickness setting in. I knew my principal and students would be wondering where I am. I knew my husband would be worried I was dead. I couldn’t help having the morbid thought that he would be relieved to learn I was in jail and hadn’t overdosed. I felt so ashamed. But most of all, I felt ashamed that my most constant thought was fixing.1

Anna was able to negotiate for court-mandated detox and outpatient treatment. As her husband drove her to the hospital to be admitted for detox, she resolved to him and to herself that this was the time she was going to stick with it. But Anna had made similar resolutions in the past. Her addiction began as self-medication for the pain of depression. And her depression, with its deadening sense of dread and despair, had dogged her since childhood.

Anna grew up as a lonely kid with few friends and meager attention from her parents. As a teen, she gravitated toward alcohol to help her cope with her growing self-consciousness. When depression became a crushing problem in college, she found little relief from antidepressants. She found that opiates helped her relax and even make friends. By the time Anna got her first job, opiates had become a constant companion. She budgeted part of her salary for drugs. Over time, she transitioned from ingesting pain pills to injecting heroin, with its more rapid effects and lower price.

America faces a mental health crisis. This crisis was apparent before the COVID-19 pandemic, but the months of lockdown, job loss, and uncertainty exacerbated the trend, especially for young people. Outcomes for those with serious mental illness, like Anna, are dire, with high levels of incarceration, homelessness, addiction, and unemployment. Americans with serious mental illness (including schizophrenia, bipolar disorder, and severe depression) die twenty-three years earlier than those without, not just from the sorts of causes we associate with mental illness, such as suicide, but from untreated common medical illnesses like pulmonary disease and diabetes.2 In fact, as of September 2022, more than ten times as many Americans under age thirty have died “deaths of despair” (suicide and overdose deaths) as have died from COVID-19 since January 2020.3 If we consider the 14.2 million Americans with serious mental illness as a minority group, their rates of mortality, unemployment, homelessness, incarceration, and violent interactions with the criminal justice system would place them as our lowest caste, our “untouchables.” Tragically, there has been little recognition or reckoning of their needs, leading one of us to call this the Jim Crow era for serious mental illness.4

A challenge in addressing this mental health crisis is the limited number of well-trained clinicians. The gap between the demand for services and the supply of clinicians is a global problem. Even in the developed world, where we are spending unprecedented sums of money for mental health care, less than half of those who would benefit from care are in treatment. And for those who are lucky enough to receive care, the treatments they receive are often of poor quality, yielding disappointing results. Of course, the combination of high costs and bad outcomes points to a profound injustice, but it also reveals space for innovation.

The multimillion-dollar lifetime cost of treatment for a patient like Anna is merely one of many signs that improved care could unlock significant value for her, for society, and for those financially responsible for her care.5 Other domains of medicine are experiencing transformational trends in response to technological breakthroughs (such as the shift to personalized, or precision, medicine). Is there likewise an opportunity to innovate in mental health by leveraging technological advances to improve outcomes at lower costs? Where is the opportunity, and what tools could make a difference for Anna?

There are no laboratory tests for bipolar disorder, schizophrenia, or depression. In contrast to other areas of medicine that rely on invasive diagnostics, mental health diagnosis mostly relies on pattern recognition by an experienced clinician based on communication and observation. The disorders are defined by canonical collections of psychological and behavioral signs and symptoms. The diagnostic process is subjective, and the diagnostic labels represent clinical consensus of how symptoms and signs clump together.

Just as the field lacks objective tests for diagnosis, the treatments have neither the surgical interventions nor the curative medications found in other areas of medicine. Mental health treatment typically aims for changes in behaviors and improvements in well-being via skill building, psychological insight, and, often, medication. Most clinicians believe the healing relationship is key to treatment, but relatively few psychiatric treatments require that the patient and the provider sit in the same room or even on the same continent. In that sense, mental health care should be the most scalable of health treatments.

The co-occurring revolutions in digital connectivity, data science, and mobile technology have provided fertile ground for innovation in mental health care. We are still in the early phases of this digital mental health revolution, but some of the promises and some of the challenges have already become apparent. The first major transformation has been the shift of mental health care from brick-and-mortar offices to an online service, where a “consumer” can purchase medication or psychotherapy with a click and receive timely treatment delivered right to their home. Some of the nation’s largest providers of mental health care are online companies that did not exist five years ago. For people who live in remote areas or cannot take time off work to visit a clinic, this shift to remote care has democratized access, increased convenience, and often lowered costs and treatment delays compared with the traditional clinic-based model.

In a sense, the shift from brick-and-mortar offices to providing the same care via a tablet or laptop is hardly revolutionary. Yet this shift, which might be considered Telehealth 1.0, introduces a realm of possibilities for transforming care by analyzing the audio and visual signals from the sessions themselves. For example, artificial intelligence (AI) now allows for automated, real-time analysis of the most subtle aspects of facial expressions, speech, voice, and movement, enabling the sort of pattern recognition required to accurately assess anxiety and depressed mood, blunted emotional expression and impaired cognitive functioning, and even acute suicidal risk.6 These computational models provide the possibility of objective and precise measurements of the symptoms and signs that clinicians traditionally assessed subjectively by observation in an office. Given the economic advantages of software-based solutions over expensive clinician time, it is no surprise that the largest tech companies in the world, along with well-funded venture-backed tech start-ups, are building Telehealth 2.0.

But technology has also introduced a set of challenges for privacy, data protection, and quality of care. For people who were previously unable to access care or may have only received treatment while incarcerated or through emergency psychiatric services, these issues may seem to be acceptable costs of progress. But during this first phase of innovation and disruption, much more needs to be done to ensure trust in digital mental health care interventions, especially in the absence of a regulatory framework or widely accepted industry standards for privacy or data protection. A high-profile case or two of lax security (let alone deliberate malfeasance) may be all that it will take to derail the progress represented by so many well-intentioned efforts to leverage technological breakthroughs for the benefit of patients and health care providers.

Concerns about privacy and data protection may ultimately be addressed through better technology that can, for instance, encrypt communications with a therapist or analyze data within a device rather than sharing across a network.7 But the third concern, quality, will require more than a technological fix. Improving quality, not just increasing access, will be essential if the digital mental health revolution is going to improve outcomes and resolve the mental health crisis.

There are then two steps to addressing the mental health crisis with technology. One step focuses on increasing quality by improving measurement. Better data can lead to better care. The second step innovates on treatment itself, using digital tools to create new interventions that improve quality and ensure better outcomes.

Existing measurements of psychiatric illnesses are designed both to capture broad trends in symptoms and functioning over weeks or months and to put people into diagnostic categories. There has historically been no mental health equivalent to continuous glucose monitoring for diabetes or arrhythmia detection in cardiovascular disease. Not only do we lack biological diagnostic tests for mental illness, but mental health clinicians have largely failed to use existing measures (such as validated clinical surveys) to assess mood, cognition, and behavior, the basic components of mental health that are adversely affected by mental disorders. One study found that fewer than 20 percent of clinicians measure treatment progress with validated rating scales of symptoms.8 The lack of quality metrics reinforces a culture that has historically relied more on intuition than data.

One innovative approach to improving measurement has come to be known as digital phenotyping, a method that applies machine learning algorithms to data obtained from connected devices, such as smartphones or “wearables” (like the Apple Watch or Oura Ring), to measure psychological health in a continuous, objective, ecologically valid manner.9 In this framework, computational algorithms infer the signs and symptoms of mental illness, which a clinician would traditionally assess using patient self-report or direct observation in a clinical setting. For example, while a clinician may ask a patient about social isolation, data from a smartphone may reveal aspects of social activity directly through the record of calls, messages, or social media engagements. Likewise, smartphone and wearable data may serve as a more accurate and ecological measure of sleep or activity than a person’s own recollections.
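
To make the idea concrete, the digital phenotyping pipeline can be sketched in a few lines. The following is a toy illustration, not any product's actual algorithm: the log fields, the derived features, and the deviation threshold are all hypothetical, and real systems would use far richer signals and learned models.

```python
# Toy sketch of digital phenotyping: summarize raw (synthetic) smartphone
# logs into daily behavioral features, then flag days that deviate sharply
# from the person's own baseline. All field names and thresholds here are
# hypothetical.
from statistics import mean, stdev

def daily_features(day_log):
    """Reduce one day of raw events to simple behavioral features."""
    return {
        "calls": len(day_log.get("call_timestamps", [])),
        "messages": len(day_log.get("message_timestamps", [])),
        "sleep_hours": day_log.get("sleep_hours", 0.0),
    }

def zscore(value, history):
    """Deviation of today's value from this individual's own baseline."""
    if len(history) < 2 or stdev(history) == 0:
        return 0.0
    return (value - mean(history)) / stdev(history)

def flag_social_withdrawal(today, prior_days, threshold=-1.5):
    """Flag a day on which calls and messages both fall well below the
    personal baseline: a 'check engine light' style signal."""
    feats = daily_features(today)
    scores = {
        key: zscore(feats[key], [daily_features(d)[key] for d in prior_days])
        for key in feats
    }
    return scores["calls"] < threshold and scores["messages"] < threshold

# Two weeks of ordinary activity, then an abruptly quiet day.
baseline = [
    {"call_timestamps": [0] * (5 + i % 3),
     "message_timestamps": [0] * (18 + i % 5),
     "sleep_hours": 7.5}
    for i in range(14)
]
quiet_day = {"call_timestamps": [], "message_timestamps": [0],
             "sleep_hours": 11.0}
print(flag_social_withdrawal(quiet_day, baseline))  # → True
```

The key design choice, mirrored in the research literature, is that deviation is measured against the individual's own history rather than a population norm, since a "quiet day" for one person may be a typical day for another.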

Other examples of digital measurement include analysis of speech from voice samples for evidence of depression and anxiety, facial recognition software that infers mental status, eye-tracking software that detects PTSD, pupillometry for stress measurement, and analysis of social media content for relapse detection in youth with psychotic disorders.10

Digital phenotyping even shows promise for patients with serious mental illness, which is associated with characteristic departures from the basic daily patterns of life.11 For Anna, relapses were characterized by increasing social withdrawal and increased sleep, which lend themselves to the sort of pattern recognition for which machine learning algorithms have proven to be effective.12 This pattern recognition suggests the potential for a mental health “check engine” light that can identify the earliest signs of decompensation or relapse.

The insights produced by digital phenotyping can be useful to patients like Anna who are trying to understand connections between their rapidly changing mental states and their self-destructive behaviors. Anna noticed, for example, that poor nights of sleep and reduced social interaction were often followed by worsening depression and an increased urge to use opiates. When used in this way, digital phenotyping data can increase patient agency and prevent relapses.

Telehealth, including the digital delivery of psychotherapy, was one of the first technological innovations in mental health care. And evidence-based psychotherapy, such as cognitive behavior therapy, delivered via videoconferencing technology as well as text messaging, has been shown to deliver results comparable to in-person treatment.13 Although fully digitized versions of psychotherapy (“digital therapeutics”), in which a chatbot or video game delivers psychotherapy, represent a massively scalable opportunity to provide access to treatment in the remotest of areas, many studies have shown limited engagement unless there is a “human in the loop.”14

Since the time of Sigmund Freud, psychotherapy has been delivered in hour-long sessions (the so-called fifty-minute hour), most often once or twice per week (or five times per week in classical psychoanalysis). However, there has been little research to evaluate whether fifty minutes once a week is better than ten minutes five times a week or twenty-five minutes twice a week. A patient receives a course of treatment in fixed doses on a fixed frequency: you see your therapist on Thursday at 2 p.m. because that is the scheduled time, not because that is when you need help most or when an intervention is most likely to be of benefit. This is an example of a tradition-based, rather than an evidence-based, approach.

Digital mental health tools can in principle be deployed in any dose quantity and frequency, including on-demand. This presents the possibility that digital innovation might increase the efficiency of treatment: that is, the right treatment at the right time for every patient. For example, nightmares and restlessness are not only cardinal symptoms of post-traumatic stress disorder (PTSD), but are also thought to reinforce and exacerbate the disorder. Smartwatch technology that uses sensor algorithms to detect circadian disturbance in patients with PTSD has been developed. When a disruption in deep sleep is detected, such as during a nightmare, the watch gently vibrates to wake the patient, resulting in a reduction in PTSD symptoms and severity.15

In one recent study (coauthored by Aranovich), smartphone-based continuous measurement of mental health status powered a “precision digital therapeutic” for depression. That is, evidence-based psychotherapeutic content was sent to patients via a smartphone in response to real-time behavioral sensing in an attempt to match the therapeutic content to the patients’ context at that moment. The same behavioral sensing was then used to measure the impact of each behavioral suggestion, such that this “closed-loop” digital therapeutic became better and more tailored to the individual user with time. This led to significant improvements in outcomes compared with treatment as usual in a randomized controlled trial.16
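
The closed-loop idea echoes a familiar pattern from reinforcement learning: choose, observe, update. The sketch below, with invented content names and a simple running-average update, illustrates that mechanism under stated assumptions; it is not the cited study's actual algorithm.

```python
# Simplified sketch of a closed-loop digital therapeutic: send the content
# with the highest estimated benefit for the sensed context, then fold the
# next sensor reading back into that estimate. Content names, contexts,
# and the epsilon-greedy update rule are all illustrative.
import random

class ClosedLoopSelector:
    def __init__(self, contents, epsilon=0.1, seed=0):
        self.contents = contents
        self.epsilon = epsilon          # fraction of sends used to explore
        self.rng = random.Random(seed)
        self.estimates = {}             # (context, content) -> mean benefit
        self.counts = {}                # (context, content) -> observations

    def choose(self, context):
        """Mostly exploit the best-known content; occasionally explore."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.contents)
        return max(self.contents,
                   key=lambda c: self.estimates.get((context, c), 0.0))

    def update(self, context, content, benefit):
        """Close the loop: fold the sensed change (e.g., improved sleep or
        activity) back into the running estimate for this pairing."""
        key = (context, content)
        n = self.counts.get(key, 0) + 1
        prev = self.estimates.get(key, 0.0)
        self.counts[key] = n
        self.estimates[key] = prev + (benefit - prev) / n

CONTENTS = ["behavioral_activation", "breathing_exercise", "sleep_hygiene_tip"]
selector = ClosedLoopSelector(CONTENTS)

# Simulate a "low_sleep" context in which sleep hygiene tips help most.
def simulated_benefit(content):
    return 1.0 if content == "sleep_hygiene_tip" else 0.2

for content in CONTENTS:                      # one observation of each
    selector.update("low_sleep", content, simulated_benefit(content))
for _ in range(200):                          # then learn from feedback
    content = selector.choose("low_sleep")
    selector.update("low_sleep", content, simulated_benefit(content))

best = max(CONTENTS,
           key=lambda c: selector.estimates.get(("low_sleep", c), 0.0))
print(best)  # → sleep_hygiene_tip
```

The same sensing that triggers a suggestion also scores its effect, which is what makes the loop "closed": the system becomes better tailored to the individual with each interaction.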

Other examples of effective digital interventions include conversation bots that deliver psychotherapy interactively, virtual reality software that enables exposure treatment for phobias such as acrophobia, and cognitive games that treat attention-deficit hyperactivity disorder (ADHD).17

The technology-driven transformations of how we communicate and interact enable more effective information sharing between all parties involved, including the patient, family members, and members of a patient’s treatment team, leading to better integration of care. And for providers, better integration and tracking, when combined with better measurement, yield the feedback needed to improve the quality of care. The combination of mobile interventions, improved care management, and digital phenotyping can help create a learning health system in which care improves continuously based on outcomes.

While the potential is great, clinicians have historically been slow to adopt new technologies. In part, this may reflect the conservative guild culture of medicine, which is understandably wary of innovation that lacks adequate evidence of benefit documented in reputable sources. But this is also a result of technologists' inability to understand and adapt to the nature of the health care industry and their end users: patients and clinicians. As an example, when we first launched our text message-based clinical service, Anna's clinical team was quick to point out that they could not bill insurance companies for interactions with patients that are carried out via text message, rather than in person or even by telephone. Innovators who have worked to understand and adapt to the complexity of the health care industry, rather than attempt to supplant it, have generally been the most successful in integrating into the care of patients.

There are currently multiple digital mental health “unicorns,” privately held companies worth over $1 billion. Investment in this space has grown rapidly, including more than $5.1 billion invested in 2021 alone. Technologies for remote digital measurement and care delivery are beginning to integrate into all levels of mental health care. Both patients and clinicians are beginning to expect convenient, tech-enabled care. The pandemic has led to a surge in both the prevalence of mental illness and demands for treatment. People are increasingly engaging in mental illness prevention and seeking care across the severity spectrum, from personal daily well-being via meditation apps to direct clinical care. In many cases, lay people rather than clinicians have found novel applications of new technology for mental health. The proliferation of online support groups offers one compelling example. Communities that have long had limited access to traditional mental health services have consistently harnessed technology to build networks of peer-to-peer support across the geographic boundaries that once left such individuals isolated and lonely.18 The emergence of social virtual reality is a good example: this innovation has led to the proliferation of peer-to-peer virtual reality groups ranging from mindfulness meditation and LGBTQ meetups to Alcoholics/Narcotics Anonymous.

Have the hype and the unprecedented investment in innovation had an impact on population health? Our answer: not yet. As detailed above, technological advances have led to a significant increase in access to mental health treatment via telehealth. And digital interventions represent a real opportunity to address the quality crisis via improved measurement. However, adoption has been limited, and very little of the enormous investment in mental health technology has targeted the treatment of severe mental illness.19 There may be many reasons for this. The nature of venture-backed technology development may reward easy wins over solving large, entrenched problems. Automated guided meditation apps, sleep apps, or therapy chatbots have certainly received more support than clinical services that target people with serious mental illness. However, such innovations may have limited effect, even in relatively healthy people, without the integration of human relationships to create accountability and drive behavior change.

There has been both a wish and a worry that novel health care technologies will replace health care workers. Since labor is the most expensive and least scalable input to care, replacing clinicians with apps means more efficient, cheaper care, ceteris paribus. In certain areas of medicine, particularly those involving analysis of images (like radiology and pathology), for which advances in computer vision are naturally suited and machines may be more efficient than physicians, the concern among workers may be justified. But efficiency itself does not always lead to improved care or outcomes. Indeed, technological innovations that are introduced to clinical care to reduce costs and increase efficiency may negatively impact patient care rather than improve it.20

In the case of mental health, there is little chance of technology replacing humans in care delivery anytime soon. As detailed above, there are already hundreds of apps that deliver computerized versions of psychotherapies, such as cognitive behavior therapy, that have traditionally been delivered by humans. And there are chatbots and therapeutic video games that deliver care without another human in the loop. But digital therapeutics have yet to gain widespread adoption, and the demand for therapeutic apps has mostly been focused on filling gaps in the existing system, such as providing a care option for patients stuck on long waitlists. And though we view improved measurement as a promising use case for digital solutions, the accurate diagnosis of a complex mental illness is likely to remain the territory of trained clinicians with access to digital data.

Further, the rote, manualized parts of care that are most amenable to digitization may only account for a small portion of the variance in clinical outcomes. Across populations, numerous studies have shown that the factors that most influence outcomes are grounded in human relationships characterized by empathy, warmth, accountability, congruence, and therapeutic alliance, all of which are difficult to digitize.21 Even simple but profound aspects of care like medication adherence are largely influenced by the quality of the relationship between the clinician and their patient.22

Some of the most exciting technological innovations of the early twenty-first century are attempts to facilitate new types of human connection by removing geographic barriers. Internet-based peer-to-peer support groups, for example, have been shown to provide meaningful clinical care for diverse populations that are limited in their mobility or resources, including patients with cancer, new parents, LGBTQ youth, and people with serious mental illness.23 Which parts of the clinical interaction can be automated through improved conversational artificial intelligence and which require human interaction remains an open and important question.

While many of the digital mental health tools of the past decade have intentionally removed human therapeutic connections in favor of apps, many of the innovations that have emerged in areas as diverse as conversational AI, digital monetization, video conferencing, virtual and augmented reality, and wearable sensors can be similarly utilized to enhance the human elements of therapy and connect the disparate groups and individuals involved in care. They need only be put in the right hands to improve human connections rather than attempt to supplant them.

Every surge in innovation introduces new risks as well as new opportunities. Entrepreneurs understand that start-ups are high-risk ventures, with most failing or pivoting from their original mission. But when a mental health start-up fails or changes course, the consequences can be dire: for patients like Anna who may be abandoned, for patients’ privacy if data are breached, and for providers who may lose their livelihoods. While “move fast and break things” and youthful risk-taking have been endemic in tech culture, these features are unambiguous hazards for mental health tech culture, where trust and safety are essential.

Trust and safety may be difficult to bake into fully automated approaches, which usually lack the flexibility to manage the needs of a patient as complex as Anna. The relationship between the clinician and the patient is usually necessary for treatment engagement and improved outcomes, and as such, a bot-delivered treatment may never be as effective as a person-to-person connection, no matter how much the technology advances.24 But even human-to-human connection over the internet introduces significant risks. Unregulated and unmoderated social platforms are as much a risk for mental harm as an opportunity for mental health, if not more. As an example, the live social role-playing platform Second Life was both widely used for LGBTQ peer-to-peer support and simultaneously notorious for trolling and communities organized around self-harm. While telehealth stands in a separate class, in which bullying and harassment are lesser concerns, telehealth platforms that aim to connect patients to either medications or therapy have struggled to provide reliable services. The Department of Justice is currently investigating at least one telehealth company that allegedly overprescribed a controlled stimulant drug to adults with ADHD.

Another significant risk of mental health care technology is that there is no regulatory framework for defining safety and efficacy. The Food and Drug Administration oversees drug development, but there is no agency that regulates psychotherapy, whether administered remotely or face-to-face. As a result, neither the apps for therapy nor the telehealth companies have been reviewed against rigorous, widely accepted standards. This lack of regulation makes it hard to know the potential and pitfalls of emergent technologies. While technologies used by health care systems are usually vetted, there is no such requirement and no set of external standards for technologies sold directly to consumers, or those provided through employers as a “wellness benefit.”

The COVID-19 pandemic revealed striking inequities in health care and health outcomes. Inequities are no less apparent in mental health: many communities do not have access to high-quality mental health care. While telehealth ostensibly can overcome some of the barriers to access, many families who lack access to a clinic may also lack access to the internet and, thus, in the era of digital mental health, could find themselves on the wrong side of the digital divide. The dissemination of broadband access may erode this digital divide, but in the short run, the move to digital mental health care risks perpetuating inequities from the brick-and-mortar era.

Finally, we note one other risk in mental health care technology development. Many of the advances in diagnosis and treatment of mental disorders have traditionally come from academia, with high standards of rigor and vetting through peer-review processes. But the tools needed to meet the best ambitions of mental health care technology developers today live in the tech industry, not academia. While innovation certainly occurs in academic science, only industry is capable of the design, engineering, and scale needed to provide the kinds of solutions we need to resolve the mental health crisis. This fact pulled all three authors from traditional academic settings into the mental health care technology industry. This move afforded us the opportunity to build solutions beyond the limitations of academic research. It also forced us to see limitations and risks of the industry first-hand.

In 2019, we each departed the start-up that had brought us together, where we had met Anna. While we saw the opportunity for innovation and impact, we worried that our efforts to detect a relapse or define a change in mental status were building a smoke alarm when Anna needed a fire extinguisher. At that early phase of digital mental health innovation, we felt uncertain of the impact our technology had on Anna's or anyone else's treatment. Few digital mental health care innovations have been subject to the kind of randomized controlled trials or peer-review processes we expected, as academics, of novel diagnostics and therapeutics. Indeed, the iterative changes in digital tools, with algorithms changing every few weeks or months, might make these tools difficult to evaluate by traditional clinical trials. There is a risk that innovations are being scaled for dissemination without the kind of intensive testing of safety and efficacy we expect with a new biomarker or drug treatment. Despite this, we feel hopeful. While we are far from demonstrating any impact on population health, we believe that digital tools can and, with further development, will improve access and quality. Yes, there are risks, but these are early days; we are still learning both the benefits and the risks.

As noted at the outset, one of the most urgent aspects of the current mental health crisis is the workforce crisis: the gap between the demand for services and the supply of clinicians. Though mental health care technology is nascent, a divided response to this workforce crisis is already visible. On one side are those technologies that focus on improving the delivery of mental health care through automation. In this framework, effective components of treatment such as guided meditation, psychoeducation, and medication management can be fully automated. The promise of this approach is that mental health care can be fully scaled for delivery anywhere at any time, at a greatly reduced cost. Indeed, mobile applications and virtual games have demonstrated efficacy for the treatment of psychiatric disorders including ADHD, PTSD, insomnia, and generalized anxiety disorder.25 This approach to mental health care technology aims to digitize the components of structured therapies to eliminate the need for a human in the loop. In this model, scale comes by removing the most expensive and least scalable component: the human therapist. Access to treatment means access to the manualized components of treatment as they are revalidated for digital surfaces, rather than as delivered by a licensed clinician in an office.

On the other side are technologies that see humans as central to achieving therapeutic goals. In this context, technology serves to scale rather than replace human connection, and technology development focuses on safe and scalable methods to connect patients to clinicians and their community. The most prominent example is the widespread shift to telepsychiatry and telepsychotherapy facilitated by large legal and cultural changes in the delivery of care in response to the COVID-19 pandemic.26 As noted above, this shift provided an enormous increase in access, rapidly decreasing geographic distance and time constraints as barriers to care. While telehealth has in many ways become synonymous with video conferencing, the concept of connecting clinicians to patients remotely may introduce whole new paradigms of treatment. For example, researchers have investigated the value of virtual reality for clinician-administered support groups in populations like caregivers for people with chronic illness, who have high psychological distress but low mobility.27

However, the proliferation of telehealth has not solved other fundamental limitations on access, including cost and the availability of licensed clinicians relative to the need. In many ways, fundamental problems of access reemerge regardless of the technological platform. Even if telehealth has reduced costs, access remains tied to larger structural issues embedded in the health care system. Indeed, the shortage of licensed and skilled mental health clinicians was not solved by the emergence of telehealth but rather reinforced by it.

One potential way to meet this unmet need is through nontraditional, community-based forms of mental health care. As opposed to the church basements and VFW halls of previous generations, peer-to-peer support communities have emerged organically across open social platforms like Second Life, Facebook, YouTube, and AltspaceVR. Communities ranging from LGBTQ youth to people with schizophrenia to people with addiction have utilized social platforms to disseminate information and receive support at a surprisingly high rate.28 While we have yet to see evidence that community-based social support will influence population health, for a generation of digital natives, this form of care may be preferable to office-based individual therapy.

Anna’s story is not over yet. She still struggles with periods of depression, sometimes followed by relapse. She has gotten better at recognizing the signs and, as such, is faster to call her psychiatrist, who now sees her via Zoom.

I’m better at noticing when I’m starting to slip. First of all, I always know I’m getting in trouble when my smartwatch notices that my sleep patterns are worse. I go to sleep later, I sleep in. This is always a great clue for me. I usually reach out to my psychiatrist. It’s easy to meet her on Zoom, but it still takes two weeks to get an appointment. I’ve found that Narcotics Anonymous [NA] is really helpful in the meantime. I always liked NA but it was such a pain to find a meeting that was at a convenient time and place. When I feel unmotivated, it’s just really hard to get there. Now I can literally find a meeting at any moment online. I’ve literally spent 3 hours on the couch trying to motivate myself to go. But then I go. It’s literally right there for me. I also use a lot of mindfulness and meditation techniques. There are all these apps out there now and some are really good. I like to use them while driving to work and even at lunch when I’ve had a stressful morning. All of these things help a bit on their own and seem to help a lot together.

And the field is starting to learn how the key pieces of digital measurement, real-time remote interventions, and comprehensive care management fit together to provide someone like Anna with the support she needs. We better understand that the role of machine learning is not simply to detect and report risk, but to be the engine that learns and meets the individual needs of Anna and all those invested in her care. Finally, and most important, we understand that the digital world is Anna’s milieu. It is where she goes both to find drugs and to find support for her sobriety. While new technologies emerge every day, Anna is left alone to navigate a sea of tools with unclear validity. Her therapist has the same experience. With the exception of Zoom, the clinical workflow has stayed much the same in the twenty-first century as in the twentieth. There remains little objective measurement and no consistent method to communicate between the many members of Anna’s care team.

Further, as in so many domains of her life in this new digital era, Anna is left feeling uneasy about her data privacy. Are federal HIPAA regulations and the EU’s General Data Protection Regulation adequate to protect patients in this age of data breaches and ransomware? At least Anna has access to technology—what of the millions who lack access to smartphones and reliable internet? Will access to technology further exacerbate the troubling trend toward greater inequality?

The greatest shortcoming in mental health technology to date is its siloed nature, developed away from the people, places, and sense of purpose that drive recovery and growth. Organically, people with serious mental illness have sought and found community in virtual spaces, greatly reducing the cost and effort associated with care. Similarly, clinicians have flocked to digital platforms as the opportunity emerged following COVID. The core challenge for the community of scientists, technology developers, and clinicians building the future of mental health care is how to scale those essential, recovery-supporting dimensions of treatment that have fallen by the wayside because they are resource-intensive, not because they are ineffective. Can key elements of community engagement return to prominence in mental health care through scalable technology? Can remote measurement improve feedback and accountability by moving the field from infrequent and inaccurate assessments of treatment needs to real-time, actionable information for both the patient and their clinical team? Can digital interventions and telehealth work together to support a larger patient treatment plan by embedding both automated and human care directly in the patient’s life?

In retrospect, the central focus on medication in the treatment of psychiatric illness may have been largely driven by the technological capabilities of the time. Community aspects of care have not scaled, making them shockingly expensive and inefficient, and even more shockingly hard to access for those most in need. As technology companies move into the era of Web3 and the Metaverse, where users immerse themselves in virtual spaces that travel with them across many platforms (for example, laptops, phones, virtual reality headsets, augmented reality glasses, watches), we are forced to ask how mental health care will be structured in this world. Will these virtual spaces be used to provide greater access to the communities of professionals, lay professionals, and loved ones involved in clinical care? What will the psychiatrist’s office on the main street of the metaverse look like? How will it be organized so that it is safe and effective? These remain unanswered questions, as virtual spaces have opened up a new frontier of opportunity and risk that we have not even begun to understand. Similarly, how will the digital signals between a patient like Anna and her care team be understood when these data sources represent a primary form of communication? If Anna were an avatar in a virtual space, how would she express deep emotion on her face or in her voice? How would a clinician “read” her expression or intent? Above all, will the digital representations of measurement and treatment that are built to replace the analog world reduce barriers to care for people with serious mental illness, or will they add another layer of alienation? The answer will ultimately depend on whether the efficiencies these technologies offer can be connected to the structures that bear the costs of care, in a manner that is effective for the patient and their team.

Endnotes

  • 1Anna is a composite of multiple patients whom the three authors met while they worked for a mental health care tech start-up. Identifying details have been left out for privacy.
  • 2Centers for Disease Control and Prevention, “” Preventing Chronic Disease 3 (2) (2006).
  • 3Anne Case and Angus Deaton, Deaths of Despair and the Future of Capitalism (Princeton, N.J.: Princeton University Press, 2020); and John Elflein, “” September 13, 2022.
  • 4Thomas Insel, “” Science 376 (6596) (2022): 899.
  • 5Daniel Vigo, Graham Thornicroft, and Rifat Atun, “” The Lancet Psychiatry 3 (2) (2016): 171–178.
  • 6Anzar Abbas, Colin Sauder, Vijay Yadav, et al., “” Frontiers in Digital Health 3 (2021); Katharina Schultebraucks, Vijay Yadav, Arieh Y. Shalev, et al., “Deep Learning-Based Classification of Posttraumatic Stress Disorder and Depression Following Trauma Utilizing Visual and Auditory Markers of Arousal and Mood,” Psychological Medicine 52 (5) (2022): 957–967; Anzar Abbas, Bryan J. Hansen, Vidya Koesmahargyo, et al., “” JMIR Formative Research; and Isaac Galatzer-Levy, Anzar Abbas, Anja Ries, et al., “” Journal of Medical Internet Research 23 (6) (2021).
  • 7Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, et al., “” IEEE Journal on Selected Areas in Communications 37 (6) (2019): 1205–1221; and Caroline Fontaine and Fabien Galand, “” EURASIP Journal on Multimedia and Information Security 1 (2007).
  • 8John Fortney, Rebecca Sladek, and Jürgen Unützer, “” (Brigantine, N.J.: The Kennedy Forum, 2015).
  • 9John Torous, J-P Onnela, and Matcheri Keshavan, “” Translational Psychiatry 7 (3) (2017): e1053.
  • 10Schultebraucks, Yadav, Shalev, et al., “Deep Learning-Based Classification of Posttraumatic Stress Disorder and Depression Following Trauma”; David Lin, Tahmida Nazreen, Tomasz Rutowski, et al., “” Frontiers in Psychology 13 (2022); Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency, “” in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV) (New York: Institute of Electrical and Electronics Engineers, 2016), 1–10; Amit Lazarov, Benjamin Suarez-Jimenez, Amanda Tamman, et al., “” Psychological Medicine 49 (5) (2019): 705–726; Bruno Laeng, Sylvain Sirois, and Gustaf Gredebäck, “” Perspectives on Psychological Science: A Journal of the Association for Psychological Science 7 (1) (2012): 18–27; and Michael L. Birnbaum, Sindhu Kiranmai Ernala, A. F. Rizvi, et al., “” NPJ Schizophrenia 5 (1) (2019): 17.
  • 11S. Malkoff-Schwartz, Barbara Anderson, Stefanie A. Hlastala, et al., “” Psychological Medicine 30 (5) (2000): 1005–1016.
  • 12Mark Matthews, Saeed Abdullah, Elizabeth Murnane, et al., “” Assessment 23 (4) (2016): 472–483.
  • 13Annaleis K. Giovanetti, Stephanie E. W. Punt, Eve-Lynn Nelson, and Stephen S. Ilardi, “” Telemedicine and e-Health 28 (8) (2022): 1077–1089.
  • 14Adaora Nwosu, Samantha Boardman, Mustafa M. Husain, and P. Murali Doraiswamy, “Digital Therapeutics for Mental Health: Is Attrition the Achilles Heel?” Frontiers in Psychiatry 13 (2022); John Torous, Jessica Lipschitz, Michelle Ng, and Joseph Firth, “” Journal of Affective Disorders 263 (4) (2020): 413–419; and Simon Makin, “” Nature 573 (7775) (2019).
  • 15Mary Chris Jaklevic, “” JAMA 324 (23) (2020): 2357.
  • 16Ellen Frank, Meredith L. Wallace, Mark L. Matthews, et al., “” Frontiers in Digital Health 4 (2022).
  • 17Siyu Chen, Ligang Yao, Yehliang Hsu, and Teddy Chen, “Development of an Interactive System for a Companion Robot Based on Telepresence Technology,” in Robotics and Mechatronics: Proceedings of the 6th IFToMM International Symposium on Robotics and Mechatronics, ed. Chin-Hsing Kuo, Pei-Chun Lin, Terence Essomba, and Guan-Chen Chen (Cham: Springer International Publishing, 2020), 219–226; Mark B. Powers and Paul M. G. Emmelkamp, “Virtual Reality Exposure Therapy for Anxiety Disorders: A Meta-Analysis,” Journal of Anxiety Disorders 22 (3) (2008): 561–569; and Jyoti Mishra, Rajesh Sagar, Angela Ann Joseph, et al., “” Translational Psychiatry 6 (4) (2016): e781.
  • 18Gunther Eysenbach, John Powell, Marina Englesakis, et al., “” BMJ 328 (7449) (2004): 1166.
  • 19Insel, “” 899.
  • 20James J. Cimino, “” JAMA 309 (10) (2013): 991–992.
  • 21Michael J. Lambert and Dean E. Barley, “” Psychotherapy: Theory, Research, Practice, Training 38 (4) (2001): 357–361; Stephen R. Shirk and Marc Karver, “Prediction of Treatment Outcome from Relationship Variables in Child and Adolescent Therapy: A Meta-Analytic Review,” Journal of Consulting and Clinical Psychology 71 (3) (2003): 452–464; Rosemarie McCabe and Stefan Priebe, “” The International Journal of Social Psychiatry 50 (2) (2004): 115–128; and John M. Kelley, Gordon Kraft-Todd, Lidia Schapira, et al., “” PLOS One 9 (4) (2014): e94207.
  • 22Ravi Lingam and Jan Scott, “” Acta Psychiatrica Scandinavica 105 (3) (2002): 164–172.
  • 23Mette Terp Høybye, Susanne O. Dalton, Isabelle Deltour, et al., “” British Journal of Cancer 102 (9) (2010): 1348–1354; Hannakaisa Niela-Vilén, Anna Axelin, Sanna Salanterä, and Hanna-Leena Melender, “” International Journal of Nursing Studies 51 (11) (2014): 1524–1537; Clare Wilson and Laura A. Cariola, “” Adolescent Research Review 5 (2) (2020): 187–211; and Kathina Ali, Louise Farrer, Amelia Gulliver, and Kathleen M. Griffiths, “” JMIR Mental Health 2 (2) (2015): e19.
  • 24Lambert and Barley, “Research Summary on the Therapeutic Relationship and Psychotherapy Outcome.”
  • 25Joaquin A. Anguera, J. Boccanfuso, Jean L. Rintoul, et al., “” Nature 501 (7465) (2013): 97–101; Raquel Gonçalves, Ana Lúcia Pedrozo, Evandro Silva Freire Coutinho, et al., “” PLOS One 7 (12) (2012): e48469; Colin A. Espie, Richard Emsley, Simon D. Kyle, et al., “” JAMA Psychiatry 76 (1) (2019): 21–30; and Jenna R. Carl, Christopher B. Miller, Alasdair L. Henry, et al., “” Depression and Anxiety 37 (12) (2020): 1168–1178.
  • 26Jay H. Shore, Christopher D. Schneck, and Matthew C. Mishkind, “” JAMA Psychiatry 77 (12) (2020): 1211–1212.
  • 27Mary-Frances O’Connor, Brian J. Arizmendi, and Alfred W. Kaszniak, “” Journal of Aging Studies 30 (2014): 87–93.
  • 28John A. Naslund, Kelly A. Aschbrenner, L. A. Marsch, and Stephen J. Bartels, “” Epidemiology and Psychiatric Sciences 25 (2) (2016): 113–122.