Jordan Conrad, PhD, LCSW

Therapy, Artificial Intelligence, and the “Hard Problem”: A Response to the New Yorker


A recent article in the New Yorker, "Can AI Treat Mental Illness?", explored the question of whether artificial intelligence (AI) therapists could ever replace human therapists.


The article adopts a usefully outcome-based view. In distinguishing genuine suicidality from mere linguistic convention (e.g., “Ugh, he was so boring I’d kill myself just to get out of the date”), certain programs perform about 85% as well as human clinicians; a program that assessed veterans’ behavior after they returned home and reached out to those who were struggling caused psychiatric admissions to fall by 8%; and a randomized trial of patients with an addiction found that interacting with an AI psychotherapy program led to less self-reported substance use.


These are inspiring outcomes, regardless of whether human clinicians do a better job, because there is an unfortunate scarcity of human mental health practitioners. As the article explains:


“Roughly one in five American adults has a mental illness. An estimated one in twenty has what’s considered a serious mental illness—major depression, bipolar disorder, schizophrenia—that profoundly impairs the ability to live, work, or relate to others. Decades-old drugs such as Prozac and Xanax, once billed as revolutionary antidotes to depression and anxiety, have proved less effective than many had hoped; care remains fragmented, belated, and inadequate; and the over-all burden of mental illness in the U.S., as measured by years lost to disability, seems to have increased. Suicide rates have fallen around the world since the nineteen-nineties, but in America they’ve risen by about a third.”

Unfortunately, there just isn’t an adequate supply of therapists to meet this demand, and so for many people seeking therapy from an AI program is a very real option.



“Books are no more threatened by Kindle than stairs by elevators”

– Stephen Fry


Many psychotherapists reject the possibility that AI programs could serve as effective therapists because of the degree of relationality, intuition, and empathy that therapy involves. While this is largely true, there are many ways to practice psychotherapy – cognitive behavior therapy (CBT), solution-focused therapy (SFT), dialectical behavior therapy (DBT), etc. – and not all of them require a human to deliver services. People have, after all, been helped to quit smoking or to become more organized and motivated simply by reading. Requiring that every person who wants to change their life participate in a deep exploration of their history and emotions is out of step with the needs and wants of some patients.


Nevertheless, far more often, patients (or their loved ones) identify an issue that they want to work on, only to find that it is attached to a complex network of beliefs, patterns of behavior, relationships, and value systems, and that altering the identified issue requires reconsidering the mental systems that sustain it. As any programmer or computer scientist knows, you can’t simply add or subtract lines of code without affecting the program as a whole; it is similar with humans. How can you learn to overcome your depression if the relationships you depend upon rely on you feeling bad or being in a submissive role? How can you stop drinking when the time you spend at the bar is the only part of your day you enjoy, despite it ruining the other aspects of your life? With issues such as these, it is important to have a person you can rely on, not just to understand you, but to care about you and be invested in your growth.


This is because people are invested in maintaining their current meaning systems even long after those systems have ceased to be adaptive responses to the problems in their lives. Therapists are trained to support patients while identifying contradictions or opportunities for change and inviting them to see a new, and more satisfying, way of living. But this requires appreciating what it is like to want, paradoxically, to hold onto the very state that is causing pain. A person grieving their spouse may not want to stop crying every night because their grief symbolically keeps their love alive; someone with intense PTSD may resist treatment because they have become so identified with their trauma that abandoning it feels overwhelming; a patient may acknowledge that they date cruel people but continue the pattern because, even though they hate the feeling, the cruelty feels like what they deserve. In these cases, therapists must share an emotional space with their patients and come to understand, and indeed value, both sides of the paradox. This is one reason why countless outcome studies report on the importance of the relationship between therapist and patient regardless of the methods used.[1]


The Hard Problem

However, the essential problem with AI therapy is that AI is fundamentally deficient in certain areas: it is not only that it lacks the emotions required to know what patients feel, it doesn’t actually “know” anything. Let me explain.


Within philosophy and cognitive neuroscience, one issue in particular has been dubbed “the Hard Problem.” Briefly put, it simply isn’t clear how experience can be derived from physical systems. When we think, perceive, and act there is, in addition to a complex integration of multiple information-processing systems, an experience of doing these things. There is something it is like to be a conscious organism – to read this sentence, to see a color, to fall in love, to be you. The problem is that your brain does not have experiences; your mind does. There is nothing it is like to be brain matter, electricity, neurons, and chemicals, and yet all of these physical structures mysteriously give rise to experience.


This point is helpfully illustrated by a famous thought experiment by Frank Jackson known as “Mary and the Black and White Room.”


Mary is a brilliant scientist who specializes in the neurophysiology of vision. Throughout her years researching the subject, she has acquired all the physical information there is about how perceptual processes operate in the brain; she understands, for example, “just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that result in the uttering of the sentence ‘the sky is blue’.”[2] However, Mary herself has never seen a color: she lives in a black and white room, receives information from the outside world via a black and white television, and for good measure her skin has been dyed black and white. One day Mary decides to leave the room and, upon opening the door, first sees a cherry-red fire engine driving past.

The question is: did Mary learn anything new by seeing a color? People have different intuitions about this, but I believe Mary does learn something: she learns what it is like to see the color red.


Mary and the Black and White Room highlights what is at stake in the hard problem. There seems to be something mental that is not reducible to the physical system. As Jackson summarizes: “It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.”[2]


AI and the Hard Problem


So, when I say that AI not only lacks emotions but does not “know” anything, what do I mean? AI can act as though it knows something, just as a Roomba can act as though it is “thinking” about where to go, but both are physical systems merely behaving as though they had inner experiences. This is helpfully illustrated by another famous thought experiment, this time by John Searle,[3] entitled “The Chinese Room.”


Imagine you are seated at a desk in a small room. On the left side of the room is a small opening through which questions written in Chinese are passed. Your job is to (1) receive these questions, (2) consult a book of instructions, such as “when you receive the card with X symbols on it, write Y symbols on a different card,” (3) follow the instructions in the book and write the new symbols on a new card, and (4) pass the new card through a small opening on the right side of the room. Importantly, you neither speak Chinese nor are aware that the symbols passed through are written in Chinese; you are merely receiving notes in an unknown language, consulting a book, writing notes in an unknown language, and passing them back out.

It is patently obvious that the person in this room does not know Chinese – they do not understand the input questions, what they are writing, or the effect it will have upon the person who receives the answer – though an external observer might conclude that whoever is in that room has an expert understanding of the language. This is precisely what a computer does when you ask it to perform even a rudimentary task. A calculator, for instance, does not know arithmetic: it receives an input (say, 1+1), consults a program that states “when condition 1+1 obtains, produce the ‘2’ symbol,” and then outputs 2. The calculator understands arithmetic just as much as the human in the Chinese Room understands the Chinese language.
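To make the point concrete, here is a minimal sketch, in Python, of the kind of lookup-and-output procedure the room (and the calculator) performs. The rule book below is invented for illustration; the only point is that nothing in the program represents meaning, just mappings from symbols to symbols.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" is a made-up lookup table: input symbols in, output symbols out,
# with no representation of meaning anywhere in the system.

RULE_BOOK = {
    "1+1": "2",                   # the calculator case: a condition paired with a symbol to emit
    "2+2": "4",
    "你好吗？": "我很好，谢谢。",    # a canned Chinese exchange the operator never understands
}

def room_operator(card: str) -> str:
    """Receive a card, look it up in the book, and pass back whatever the book dictates."""
    return RULE_BOOK.get(card, "???")  # no matching rule -> no answer

if __name__ == "__main__":
    print(room_operator("1+1"))       # "2", without "knowing" arithmetic
    print(room_operator("你好吗？"))    # a fluent reply, without "knowing" Chinese
```

From the outside, the outputs are indistinguishable from those of someone who understands; inside, there is only the table.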


Searle explains: “The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.”[3]


When AI Therapy Will Work... and When it Won't


For many, the fact that AI programs don’t actually “know” anything (just as a calculator doesn’t “know” arithmetic, though it acts as though it does) won’t be a problem. As the author of the New Yorker piece muses: “I knew that I was talking to a computer, but in a way I didn’t mind. The app became a vehicle for me to articulate and examine my own thoughts. I was talking to myself.”


However, for many others, “talking to myself” is not what they want or need from psychotherapy. Aside from those suffering from loneliness, for whom AI therapies might only advertise their isolation to themselves, the vast majority of patients – in particular teens and those seeking couples therapy – will benefit from the difficult process of learning to relate to another person who is devoted to their wellbeing.

 

[1] Flückiger, C., Del Re, A. C., Wampold, B. E., Symonds, D., & Horvath, A. O. (2012). How central is the alliance in psychotherapy? A multilevel longitudinal meta-analysis. Journal of Counseling Psychology, 59(1), 10–17; Horvath, A. O., Del Re, A. C., Flückiger, C., & Symonds, D. (2011). Alliance in individual psychotherapy. Psychotherapy, 48, 9–16.

[2] Jackson, F. (1982). Epiphenomenal Qualia. Philosophical Quarterly, 32(127), 127–136.

[3] Searle, J. (1999). The Chinese Room. In R. A. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.


 

Addendum: Functionalism

It must be noted that some prominent scholars do argue that humans are programmed in ways analogous to machines: just as a digital machine computes by following a program of instructions, where each instruction specifies a condition and an action to carry out if the condition occurs, so too do organic machines (such as our brains) compute by following an evolutionary/genetic program of instructions of the same form; and just as a digital machine’s functions are coded (at the most primitive level) in binary, human brains are coded (at the most primitive level) in neural activity.
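To see what this condition-action picture amounts to, here is a minimal sketch of a “machine table” in Python. The states, inputs, and actions are invented for illustration; the only point is that each instruction pairs a condition with an action, and the machine simply does whatever the table dictates.

```python
# A minimal sketch of computing by condition-action instructions (a "machine table").
# Every name here is invented for illustration; nothing is specific to silicon,
# cogs, water pipes, or neurons -- any medium that can realize the table will do.

# Each entry: (current state, input symbol) -> (action to perform, next state)
MACHINE_TABLE = {
    ("idle",  "ping"): ("acknowledge", "alert"),
    ("alert", "ping"): ("acknowledge", "alert"),
    ("alert", "rest"): ("stand_down",  "idle"),
}

def run(inputs, state="idle"):
    """Step through the inputs, performing whatever the table dictates for each condition."""
    for symbol in inputs:
        action, state = MACHINE_TABLE[(state, symbol)]
        print(f"input={symbol!r:<7} action={action!r:<15} next state={state!r}")

if __name__ == "__main__":
    run(["ping", "ping", "rest"])
```

On the view sketched in the next paragraph, what matters is only that some physical system realizes a table like this, not what the system is made of.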


According to this type of argument, known as “functionalism,” just as grandfather clocks, digital watches, and sundials all tell the time but have different hardware, so too can consciousness be realized in multiple physical systems. Though functionalism is a popular account of consciousness, particularly among technologists, I believe its appeal wanes rather quickly. To see this, one must appreciate that there is nothing particularly special about the 1s and 0s of binary code: anything that can function as a binary distinction would work just as well. As the philosopher of language Philip Johnson-Laird writes: "It could be made out of cogs and levers like an old fashioned mechanical calculator; it could be made out of a hydraulic system through which water flows; it could be made out of transistors etched into a silicon chip through which electric current flows; it could even be carried out by the brain. Each of these machines uses a different medium to represent binary symbols."[1*] The famed cognitive scientist Zenon Pylyshyn takes this even further, stating that computational sequences could be realized by "a group of pigeons trained to peck as a Turing Machine!"[2*]


Of course, because this would require a great number of cogs or pipes or pigeons, many people’s intuition that it is possible remains intact. But when we consider the specifics, these intuitions often change. Consider the following objection, provided by Ned Block.[3*]


Imagine a fictional country composed of 86 billion people (roughly the number of neurons in the human brain) connected to a body just like yours, with the intent of replicating your consciousness. All the same sensory inputs your body receives are transmitted to these 86 billion people via satellites observable by all. Each individual communicates through radio transmitters, and hyper-specialized tasks are assigned to different persons. Suppose an input is coded as a 'G': "This alerts the little men who implement G squares – 'G-men' they call themselves. Suppose the light representing input I17 goes on. One of the G-men has the following as his sole task: when the card reads 'G' and the I17 light goes on, he presses output button O191 and changes the state card to 'M'. This G-man is called upon to exercise his task only rarely. In spite of the low level of intelligence required of each little man, the system as a whole manages to simulate you because the functional organization they have been trained to realize is yours.... The system of [86 billion] people communicating with one another plus satellites plays the role of an external 'brain' connected to the artificial body by radio."
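To make vivid how little each participant contributes, here is the single G-man rule from Block’s description rendered as a sketch. The symbol names (G, I17, O191, M) come from the quoted example; the function itself is merely my framing of it.

```python
# One "little man" in Block's nation: a single machine-table entry and nothing more.
# Symbols G, I17, O191, and M are taken from Block's example; the framing is illustrative.

def g_man(state_card: str, lit_input: str):
    """Implement exactly one condition-action rule."""
    if state_card == "G" and lit_input == "I17":
        return ("press O191", "M")     # press the output button, change the state card to 'M'
    return (None, state_card)          # any other condition: not this man's job

print(g_man("G", "I17"))   # ('press O191', 'M')
print(g_man("G", "I42"))   # (None, 'G')
```

Your whole functional organization, on this picture, is nothing over and above a vast collection of rules like this one, suitably wired together.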

The question Ned Block asks is: do these 86 billion people communicating through radio and satellite and working to replicate your consciousness have an experience? The question is not whether each individual member of this country has an experience but whether there is something it is like to be the country itself.


"The force of the prima facie counter example can be made clearer as follows: Machine functionalism says that each mental state is identical to a machine-table state. For example, a particular qualitative state, Q, is identical to a machine-table state Sq. But if there is nothing it is like to be the [country of 86 billion people], it cannot be in Q even when it is in Sq. Thus, if there is prima facie doubt about the [country's] mentality, there is prima facie doubt that Q = Sq, i.e., doubt that the kind of functionalism under consideration is true." [3*]

Though this is certainly open to debate, I find it compelling. While each of the 86 billion people is individually conscious, they do not create a higher-order, unified consciousness, even when coordinated in the right way to function as though they do.


There are many other arguments against functionalism, and various defenses of it from its proponents; the matter is still unsettled. But this argument alone seems correct to me, and it weakens the plausibility of the functionalist position.

 

[1*] Johnson-Laird, P. N. (1988). The Computer and the Mind. Cambridge, MA: Harvard University Press.

[2*] Pylyshyn, Z. W. (1984). Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge, MA: MIT Press.

[3*] Block, N. (1980). Troubles with functionalism. In N. Block (Ed.), Readings in the Philosophy of Psychology (pp. 268–305). Cambridge, MA: Harvard University Press.


