It seems like the real danger is the appeal of sacrificing nuanced treatment for convenient treatment. Your comment pointed out that for many people the choice might boil down to a supportive app or nothing. And as you noted, personal change can take years of dedicated personal effort.
To me, the biggest long-term risk isn't whether or not people get treated. The article demonstrates that the technology needed to make mental health treatment widely available is being explored and worked on. The danger in my mind comes afterwards, when the technology is performing acceptably but not comprehensively. At that stage, people may stop seeking more rigorous or advanced treatment from a professional, even once they have the opportunity to do so, because they already view the app as good enough and interpret any failure to improve beyond what the app delivered as a personal failure rather than a treatment failure.
The overwhelming majority of people who need psychotherapy cannot get access to adequate treatment. Access totally dwarfs every other issue in psychotherapy.
Here in the UK, most people with moderately severe depression or anxiety have to wait several months to get access to CBT. That will usually comprise six one-hour sessions, often with a mental health care practitioner rather than a clinical psychologist. Patients with less severe symptoms will be triaged to a telephone-based service, an online service or self-help books.
That's a perfectly typical story for a high-income country with an excellent healthcare system. Access to psychotherapy is routinely and severely rationed because of cost. In lower-income countries, the picture is far worse. For most people, the choice is not between nuanced treatment and convenient treatment, but between cheap treatment and no treatment.
In all honesty, I'm struggling to contain my anger at your comment - it's quite clear that you're from an extremely privileged background. $150 for a weekly session with a psychotherapist is an unimaginable luxury for the vast majority of people in the world.
I'd love for everyone to be able to access a clinical psychotherapist on demand, but it simply isn't going to happen. There isn't the money or the political will to make it happen. I welcome innovative approaches to psychotherapy, provided they're evidence-based and rigorously evaluated. We have a real opportunity to improve the wellbeing of hundreds of millions of people. I think that rejecting these possibilities because of a hypothetical risk is utterly churlish.
> 90.2 per cent waited less than 6 weeks and 98.7 per cent waited less than 18 weeks to enter treatment
Most people wait less than 6 weeks to start treatment.
I agree with the rest of your comment: the CBT model should be 12 to 14 weeks of one-hour, face-to-face sessions with a therapist (we're not sure whether the therapist's experience level makes much difference), and many people are getting a much reduced version of this: 6 to 8 weeks of 45-minute sessions, sometimes in groups or over the phone.
I think you either misunderstood my statement or I did a poor job communicating my points. Either way, it's clear from your response that the position I was trying to present was not the one that came across. I'm resigned to the assumptions you've made about my upbringing from a single comment, but I'm more annoyed that, instead of simply rebutting the points you thought I was making, which you did very effectively, you felt it necessary to let the discussion degenerate into personal attacks.
Now then, you're right about access to care. If I seemed dismissive of the idea that someone might lack the funds to get proper treatment for a mental disorder, then I apologize. My statement about the long term wasn't to suggest that the access problem is solved now, or that it will be solved within the next few decades, which in my mind represent the near term as far as timescales go. I simply meant that, assuming the app continues to show promise and eventually matures into a functional product, and that similar programs for treating other mental illnesses can be derived from it, the concern at that point becomes ensuring that these tools don't dissuade patients from pursuing other, more appropriate treatment paths once (or if) their situations improve enough to allow it. I didn't mean this as the only concern afterwards, or even the primary one. It was simply the concern I voiced, because it interested me when I thought about it, so I shared it.
I'm making assumptions, so I may of course be wrong, but my guess is that the latter part of what I just described is what upset you. I didn't mean to imply that patients could simply change their situation and pursue a different treatment path. I only meant that, in a world where the sole consideration is treatment appropriateness, it was worth considering the secondary effects app-based treatment could have on patients' treatment seeking. I didn't mean to discredit app-based treatments as a viable tool, and I see no reason they couldn't work in isolation, synergistically with other treatments, or in any other variation of "I'm sure they can work" that fits this discussion. That seems to be the other interpretation you drew from my post, that I was being dismissive of the app compared to other treatments, and it was not something I realized I was communicating. Since, as far as I can tell, this was why you accused me of being churlish, I thought it worth clarifying.
If remarking on your privilege constitutes an attack, then so be it. It felt necessary because of the desperate need that exists in the community, which your comment seemed not to recognise in the slightest. Globally, there is a suicide death every 40 seconds; the vast majority of those people were mentally ill but had received no treatment whatsoever. Suicide rates have spiked dramatically in many demographics since the Great Recession, with the greatest impact being felt in deprived communities.
Psychiatric hospitals across the developed world are discharging actively suicidal patients on a daily basis, because they're marginally less suicidal than the dozens of people who are waiting for a bed. Beyond suicide, there are vast increases in drug and alcohol related deaths in impoverished communities. We're in the midst of a mental health emergency and need all the help we can get.
You're speculating about whether an app might hypothetically dissuade people from seeking better treatment at some point in the distant future. I think my response is downright charitable.
Consider another hypothetical scenario. A great famine has struck. Someone is dying every 40 seconds. The UN start helicoptering in bulk quantities of freeze-dried rations. I remark that there is a great danger in providing food aid, because it might dissuade people from eating a balanced diet of fresh fruit and vegetables. What response would I expect?
I had a longer comment prepared but I deleted it because I realized this conversation isn't going to be productive. If you want to make assumptions about my upbringing and ignore that I'm not actually disagreeing with you then there's no real reason to continue and pretend like we're having a discussion. Your heart is in the right place and you seem aware of the relevant statistics in play so I wish you well.
Pamela from Woebot here. That's a great point. We actually plan to detect when users have not improved their mental state enough over time (based on a brief clinical screen every few weeks plus the daily moods), and in that case, we will suggest that we are not effective for them and point out other options. It would be up to the user to consider the options and act on them, of course.
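A minimal sketch of what that kind of check might look like (purely illustrative; the score scales, thresholds, and function names here are hypothetical, not Woebot's actual logic):

```python
# Purely illustrative sketch of a non-improvement check.
# Score scales, thresholds, and the window sizes are hypothetical assumptions,
# not Woebot's actual logic.

def should_suggest_other_options(screen_scores, mood_scores,
                                 min_screens=3, min_improvement=0.1):
    """Flag a user as not improving if neither the periodic clinical
    screens nor the daily moods show a meaningful upward trend.

    screen_scores: clinical screen results over time, higher = better
    mood_scores:   daily mood ratings over time, higher = better
    """
    if len(screen_scores) < min_screens:
        return False  # not enough data to judge yet

    # Relative change from the first screen to the latest one.
    baseline, latest = screen_scores[0], screen_scores[-1]
    screen_improved = (latest - baseline) >= min_improvement * max(baseline, 1)

    # Compare average mood in the first and last thirds of the history.
    third = max(len(mood_scores) // 3, 1)
    early = sum(mood_scores[:third]) / third
    recent = sum(mood_scores[-third:]) / third
    mood_improved = recent > early

    # Only escalate when both signals agree that nothing is improving.
    return not screen_improved and not mood_improved
```

When the function returns True, the app would present other treatment options; as noted above, acting on them remains the user's decision.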
Nearly any new method of treatment opens new doors for human failings, and there will inevitably be some nonzero number of people whose lives are made worse by what is, on balance, an overwhelmingly positive development for humanity and mental health. Let's first worry about making the apps "good enough" that people feel they're getting _any_ decent treatment, let alone so comprehensive that they eliminate the desire for additional human involvement, before we worry about this strange, potentially irrelevant rabbit hole.
What makes you think it's potentially irrelevant? Go to any mental health clinic in your city and ask whether there are patients whose conditions impair either their ability to come in for treatment or their ability to recognize that they need it. I agree with the majority of your rebuttal that, at the moment, it's more productive to focus on simply getting this new treatment to a point of reliable functionality. But that doesn't mean its potential secondary effects on patient treatment outcomes shouldn't also be considered or explored during experimental trials.
I agree that a wide range of potential effects and side-effects should be considered. I'm sure a key metric in this experiment involves collecting data on how it influences users' further treatment. I think this will either put your fears to rest (for now) or immediately draw the concerned attention of everyone conducting this experiment.
Edit: Maybe it _is_ all a scheme by Big Insurance to give the masses minimal treatment and doesn't actually solve their problems but provides the illusion of a solution while sucking them into a vile dependence on closed-source AI personalities to feel any sense of therapy. Technology enables so many insidious business practices I don't know what to believe anymore.