Can Algorithms Provide Mental Health Support?

If designed and deployed carefully, AI-based chatbots might be able to complement human experts and help with real-world issues at scale.

💖 This week's byte: An AI-based chatbot can help address mental health issues effectively and efficiently at scale, though it needs careful human intervention and stakeholder engagement.

📖 The Story

Who, When, and Where — Context

Woebot was founded in the United States in 2017 by clinical research psychologist Dr. Alison Darcy to provide mental health support at scale.

Why — Challenge

Many people in the US, despite the country's wealth, suffer from mental health challenges, and the consequences can be tragic. Yet there is not enough infrastructure to deliver appropriate support to those in need, owing to a shortage of psychologists, the complexity of engaging stakeholders such as medical regulatory bodies, and the uncertainty of delivering evidence-based, clinically tested solutions to the masses at scale.

What and How — Tech Solution

Woebot Health developed an AI-based chatbot that can efficiently support millions of people's mental health through a chat interface with AI-generated messages. The company took the challenge seriously, investing substantial effort in clinical trials and in communications with the Food and Drug Administration (FDA), though it clearly states that Woebot's products are not FDA-approved medical devices. Rigorous review and refinement of the AI models and machine-generated chat responses remain in place throughout development to ensure effective support.

💡 Key Insights

  • Technology makes a solution scalable. Real-world psychologists have limited capacity; the chatbot can be seen as a clone of their expertise that works 24/7 and delivers a service to anyone, anywhere, at any time.

  • AI is better used when deployed in a hybrid, human-in-the-loop fashion. Even though automated machine power is useful, it needs to be trained, tested, and monitored very carefully, because things can easily go wrong (e.g., recommending harmful practices to patients).

  • A multi-stakeholder approach is critical to introducing a tech solution into the real world safely and effectively, especially when it is something new to everyone, like AI. This can involve raising awareness and bringing various players into the discussion.

✅ Try This

Recall the last time you felt unwell, either physically or mentally. How would you seek helpful information online to ease the discomfort if medical experts were not readily available? How might an AI tool like ChatGPT be used in such a scenario? Try asking it hypothetical questions and see how it responds, for example with a small script like the one below.
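
If you prefer to experiment programmatically rather than in a chat window, here is a minimal sketch of how one might pose a hypothetical question to a general-purpose model through the OpenAI Python API. The model name, prompts, and cautious system message are illustrative assumptions on our part, not medical advice and not how Woebot works.

    # Minimal sketch: asking a general-purpose LLM a hypothetical wellness question.
    # Assumes the `openai` Python package is installed and OPENAI_API_KEY is set in
    # the environment; the model name below is only an example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = (
        "Hypothetically, if someone felt persistently anxious before work every day, "
        "what evidence-based self-care steps could they consider, and when should "
        "they seek professional help?"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; use whatever you have access to
        messages=[
            {
                "role": "system",
                "content": "You are a cautious assistant. You are not a medical "
                           "professional and should encourage consulting one.",
            },
            {"role": "user", "content": question},
        ],
    )

    print(response.choices[0].message.content)

Notice how much the answer depends on the framing of the prompt and the guardrails in the system message; that sensitivity is exactly why careful testing, review, and monitoring matter before deploying such a tool for real support.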

📊 Did You Know?

According to the National Institute of Mental Health, an estimated 23.1% of the U.S. adult population (59.3 million people in 2022) live with a mental illness. Although causes vary widely and severity ranges from mild to serious, it is clear that mental health issues touch many of us in modern life.

💭 Share your thoughts: What are the potential harms and problems if an AI system for healthcare support is deployed without testing, review, and monitoring?
