Introduction
In the movie Her (2013), Joaquin Phoenix plays Theodore Twombly, a lonely writer on the verge of separating from his wife Catherine, played by Rooney Mara. To cope with his loneliness, he purchases an artificially intelligent operating system (OS) designed to adapt based on user interactions. He's fascinated by the new software, named Samantha, and starts conversing with it. Initially, the conversations start on an innocuous note, but eventually they bond over discussions about life and love. Gradually, he develops feelings for Samantha and starts treating her like a human. For Theodore, Samantha is as good as a "person" capable of loving and offering emotional comfort. A decade later, with AI companions floating around, the market is ripe for Samanthas, because it appears that the world is full of Theodores.
A few decades ago, the proposition that machines might think and feel belonged to science fiction, but now it's a mainstream idea, espoused by many engineers, philosophers, and scientists.
Here's what Ilya Sutskever, one of the co-founders and Chief Scientist of OpenAI, tweeted:
"it may be that today's large neural networks are slightly conscious."
If you've ever wondered why some of the smartest people in the world believe that AI could become conscious, or already is, you aren't alone. Some even call for AI rights and protections. At first glance, such beliefs can seem inexplicable. But in my view, it's important to examine the reasons behind them. Only then is it possible to contest or support that line of thinking, and that's my goal with this article.
As the title indicates, I contest that view: I believe the search for a conscious AI is a fad.
To buttress my view, I think it's important to flip the pages of history and trace the development of this thought (that machines can think or feel).
So, in this article, I intend to do the following:
1. Contextualize this thought by charting out its history.
2. Briefly discuss the popular theories that lend credence to such thinking.
3. Share my view on why it's a fad.
Without further ado, let me begin.
A little bit of history
a. A journey through the Ancient & Medieval world
Travel back to most parts of the ancient or medieval world and you would find the belief that the world was divided into two realms: the superlunar realm - everything above the moon, consisting of the stars and the heavens - and the sublunar realm - everything below the moon, consisting of the earth, humans, animals, and plants. The gods resided in the superlunar realm but also controlled forces in the sublunar realm, like thunder, lightning, and floods.
As part of this rubric of thinking, humans across different cultures believed in an immaterial soul (not made up of biological matter), made in the image of god (a particularly Christian view), as the animating principle of consciousness and life. While I've given the Christian example, most other cultures, be it the ancient Egyptians, Mesopotamians, Vedic peoples, or Greeks, had a conception of an immaterial soul that was distinct from the biological substrate (bones, tissues, nerves, and so on). Sure, there were other concurrent strands of thought, but this was the dominant thinking. It is reflected in the philosophical doctrine of Vitalism, which was popular in medieval times.
Here's a definition of Vitalism:
"There are many opinions about what vitalism actually is. In general, it is the doctrine that life originates in a vital principle, distinct from chemical and other forces. It is a belief that there is a vital force operating in the living organism and that this cannot be reduced or explained simply by physical or chemical factors."
In a nutshell, the dominant thinking through the Middle Ages was that god was the architect of both the external world - thunder, floods, lightning, nature in general - and the inner world - thoughts, feelings, and emotions - with the animating principle of life as the source of this inner world.
b. Scientific revolution, Enlightenment, & Empiricism
However, the advent of modern science, kick-started by the Scientific Revolution, fundamentally transformed our thinking about the external world. It paved the way for a more evidence-based, naturalistic understanding of it. Advances in astronomy, physics, chemistry, and geology offered a completely naturalistic explanation of the external world. What was once explained in terms of god was now explained with mathematical laws.
Advances in the sciences influenced philosophy, giving birth to the age of enlightenment, which celebrated reason and empirical thought. Historians refer to this period as the "Age of Enlightenment" to highlight the contrast between a perceived "ignorant" past and a future driven by reason. In this period, knowledge was parameterized, which meant that for something to be regarded as knowledge, it had to pass through the filters of reason, logic, and experimental observation. In a way, the new paradigm killed the necessity for a god and a soul, and required everything to be explained mechanistically, in terms of scientific theories and mathematical equations. [This is what I meant by 'knowledge was parameterized.' There was now just ONE way to understand the entire gamut of reality: through equations. If you want just one takeaway from this section, it's this one.]
But there was a catch.
The rigorous scientific framework applied to the external world did not readily lend itself to unpacking the workings of the inner world: the world of feelings, thoughts, and emotions. While one could mathematically describe the laws of motion or the behavior of electricity, describing human behavior mathematically remained elusive. So the science of the mind (psychology) was long regarded as a 'soft science.'
But scientists did not give up. Over the years, they came up with many theories and models to explain the human mind and behavior. It's beyond the scope of this article to cover them all, but some of the key ideas were structuralism, psychoanalysis, associationism, and behaviorism.
I’d like to touch upon behaviorism briefly - the precursor to today’s computational models - to set the stage and illustrate the scientific climate in the mind sciences before the advent of theoretical models that underpin AI.
c. Understanding human behavior: Behaviorism & its limitations
Behaviorism states that human behavior is a response to environmental stimuli. Developed and popularized by scientists like Pavlov, Watson, and Skinner, it holds that all behaviors are learned through conditioned interaction with the environment. This simply means that you could condition animals or humans to manifest certain behaviors (like salivating) in response to environmental stimuli (like the sound of a bell).
Pavlov's dog experiment is a must-read to understand classical conditioning.
While it took a scientific approach to studying behavior (observation, testing, replication), behaviorism treated the human mind like a black box and focused entirely on external stimuli. It also did not explain cognition, memory, language, and other higher-order functions. Chomsky, amongst others, criticized behaviorism for being too simplistic, believing it did not capture the workings of the human mind and behavior.
While mind scientists grappled with behaviorism, a new technology emerged: computers. As computers transformed the sciences, people started seeing an interesting parallel between computers and the mind. Computers worked based on rules, representations, symbol manipulation, and feedback loops. What if the human mind also worked the same way? This simple assumption paved the way for the Computational Theory of Mind (CTM), the theory that gave birth to Artificial Intelligence and Cognitive Science.
Models that undergird Artificial Intelligence
a. Computational theory of mind
Warren McCulloch and Walter Pitts, a pair of cyberneticists who pioneered neural networks in the 1940s, came up with the "Computational Theory of Mind." They were the first to suggest that the human mind functioned, at the neural level, much like a Turing machine: both manipulated symbols, utilized feedback loops, relied on internal representations, and so on. Essentially, they argued that human thought was just computation.
Quoting from Wikipedia:
"The computational theory of mind holds that the human mind is a computational system that is realized (i.e., physically implemented) by neural activity in the brain. The theory can be elaborated in many ways and varies largely based on how the term computation is understood. Computation is commonly understood in terms of Turing machines which manipulate symbols according to a rule, in combination with the internal state of the machine. The critical aspect of such a computational model is that we can abstract away from particular physical details of the machine that is implementing the computation. For example, the appropriate computation could be implemented either by silicon chips or biological neural networks, so long as there is a series of outputs based on manipulations of inputs and internal states, performed according to a rule. CTM therefore holds that the mind is not simply analogous to a computer program, but that it is literally a computational system."
They drew parallels between how a computer works and how a human works. I'll break it down the way I understand it.
Assume you want to add two numbers: 4 + 5. What do you do? You refer to certain concepts, representations, and rules. For instance, you need to be aware of the concept of numbers. You need to follow the rule that adding means combining. And you need to manipulate symbols (i.e., mentally transform representations).
Computers do just that. Representations in a computer are just binary patterns, so it represents the numbers 4 (0100) and 5 (0101) in binary form. Rules for what addition means are encoded in the software. Symbol manipulation is the transformation of those bit patterns, which is the actual work.
This does not apply only to numbers and other mathematical concepts. It works for language as well, where everything is broken into a series of tokens and operated on by mathematical rules and statistical patterns.
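To make the parallel concrete, here is a minimal sketch (the add_bits helper is my own, purely illustrative): addition carried out as nothing more than rule-governed manipulation of the symbols '0' and '1', with no appeal to what the numbers "mean".

```python
# A minimal, illustrative sketch of "thought as symbol manipulation" in the CTM spirit.
# Numbers are just bit patterns; "addition" is a rule for rewriting those patterns.

def add_bits(a: str, b: str) -> str:
    """Add two binary strings purely by manipulating the symbols '0' and '1'."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry      # rule: combine the digits
        result.append(str(total % 2))        # rule: write down the low bit
        carry = total // 2                   # rule: carry the high bit
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_bits("0100", "0101"))  # '1001', i.e. 9, obtained without any 'understanding'
```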
b. Information Theory
McCulloch and Pitts' work coincided with Claude Shannon's information theory, the mathematical study of the quantification, storage, and communication of information. In common usage, we intuitively treat information as something that carries meaning. That's because we use language to communicate information, and language consists of both syntax (rules and relationships) and semantics (meanings).
For instance, take the statement "I had breakfast."
a. There are rules that govern that statement: "I" denotes the subject; "had", the verb; and "breakfast", the object.
b. There are relationships that govern the structure, e.g., the verb must be placed after the subject.
Meaning arises from the interplay between rules, structure, and shared conventions.
As long as we agree on what "I", "had", and "breakfast" mean (individually and in relation to each other in a sentence), we can infer that the subject consumed a morning meal at some point, and this information carries meaning.
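Here is a toy sketch of that distinction (the tiny word lists and the check_svo helper are my own illustrative inventions): a check that enforces only the rules and relationships happily accepts sentences that are well-formed but meaningless, which is exactly the gap semantics fills.

```python
# A toy illustration of syntax without semantics: the word lists below are invented.
SUBJECTS = {"I", "She", "The moon"}
VERBS = {"had", "ate", "painted"}
OBJECTS = {"breakfast", "a sonata", "Tuesday"}

def check_svo(subject: str, verb: str, obj: str) -> bool:
    """Accept any sentence that follows the subject-verb-object rule."""
    return subject in SUBJECTS and verb in VERBS and obj in OBJECTS

print(check_svo("I", "had", "breakfast"))       # True, and meaningful to us
print(check_svo("The moon", "ate", "Tuesday"))  # True, well-formed yet meaningless
```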
But Shannon’s revolutionary idea was to strip meaning out of information entirely.
What matters is only the structure, the symbols, and the probabilities with which they appear. Information thereby became purely mathematical, devoid of meaning. In a way, Shannon redefined information to exclude the need for a conscious subject.
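A small sketch of that move (the entropy helper is my own illustration of Shannon's formula H = -Σ p·log2 p): the amount of information in a message is computed from symbol probabilities alone, so a sentence and a scrambled version of it carry exactly the same quantity of information, meaning notwithstanding.

```python
# Shannon entropy: information quantified from symbol statistics, not from meaning.
from collections import Counter
from math import log2

def entropy(message: str) -> float:
    """Average information per symbol, in bits: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Same characters, same probabilities, same entropy: semantics plays no role.
print(entropy("i had breakfast"))
print(entropy("breakfast had i"))
```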
c. Marriage between Computational & Information theory
For this section, I am quoting from "God, Human, Animal, Machine" by Meghan O'Gieblyn [pages 14-15]:
"Taken together, this early work in cybernetics had an odd circularity to it. Shannon removed the thinking mind from the concept of information. Meanwhile, McCulloch applied the logic of information processing to the mind itself. This resulted in a model of mind in which thought could be accounted for in purely abstract, mathematical terms, and opened up the possibility that computers could execute mental functions. If thinking was just information processing, computers could be said to "learn", "reason", and "understand" - words that were, at least in the beginning, put in quotation marks to denote them as metaphors. But as cybernetics evolved and computational analogy was applied across a more expansive variety of biological and artificial systems, the limits of the metaphor began to dissolve, such that it became increasingly difficult to tell the difference between matter and form, medium and message, metaphor and reality."
This is exactly how the belief that AI could become conscious started taking hold.
In other words, from the scientific revolution to modern computational theories, humans were gradually described in mechanistic terms (remember the death of god and soul), reducing us to mere machines so that machines could, in turn, be elevated to the level of humans.
The search for a conscious AI is a fad
I have two reasons to think it's a fad, with the second being the more important one. However, it's useful to understand the first reason as well.
a. It's just syntactic
First, I'd like to admit upfront that it's impossible to make objective proclamations about AI consciousness. Forget about AI: I cannot prove that the person reading my article is a conscious being and not a bot because, as Thomas Nagel argued in his essay "What Is It Like to Be a Bat?", consciousness is experienced from a first-person perspective. That's why the Turing test became the benchmark for ascertaining machine intelligence. So, I fully recognize that the question will always be an open-ended one.
Yet I have reasons to believe that AI isn't conscious and cannot become conscious. Forget about consciousness; I argue AI doesn't 'understand' or 'reason' in the true sense of those words.
We can understand this with a thought experiment called the Chinese Room, proposed by the philosopher John Searle.
A quick summary of the argument:
Searle asks us to imagine a person who doesn't understand Chinese sitting in a room with an instruction manual, written in English, explaining how to manipulate Chinese symbols. Through a slot in the door, he receives questions in Chinese. He does not understand what the questions mean, yet he's able to respond with the help of the instruction manual. From the outside, it appears as though the person knows Chinese; in reality, he's just following rules.
Searle argues that this thought experiment highlights that a computer program that simulates human understanding of language, such as a chatbot, does not truly understand the meaning of the language it is processing, and it's only following a set of rules.
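Here is a toy sketch of the room (the RULE_BOOK entries and the person_in_room helper are invented for illustration): a lookup table produces fluent-looking replies by matching symbols, with zero understanding of what is being asked.

```python
# A toy Chinese Room: the 'rule book' is just a lookup table over symbols.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather today?" -> "The weather is nice today."
}

def person_in_room(question: str) -> str:
    """Return a reply by matching symbols against the rule book, with no understanding."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(person_in_room("你好吗？"))  # looks like comprehension from outside the room
```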
Note: I am sure most AI models don't work on simple rules alone and show much more complex "behavior." They "learn" from patterns, "apply" statistical probabilities, and "self-correct." Agreed. But at its core, it is all math - linear algebra, calculus, probability - and the manipulation of these concepts.
But is any of this accompanied by first-person experience? When an AI predicts customer churn or a revenue dip, is it going to fear being fired?
The Stockfish chess engine has defeated Magnus Carlsen. But did it care when the game got long? Did it experience the fear of losing, or the jubilation of defeating the greatest chess player of all time? We don't know. But it's reasonable to assume it didn't.
Seven could be your favorite number, perhaps because it's your daughter's birthday or your wedding anniversary. That context, the semantics, is what makes us human, and it's logical to assume AI does not have it, because it is designed to preclude semantics.
b. It's just the latest metaphor
Metaphors are popular in language. In science and technology, they serve to break down complex ideas into relatable concepts. Reading the history of science shows that we have always used metaphors to make sense of ourselves and the world around us, and those metaphors have depended on the technology of the day. Going by this line of thinking, the 'mind-computer' metaphor is just the latest one.
Here are a few examples from history.
Clocks: After the Scientific Revolution, the entire universe was compared to a giant clock to underscore that it worked in predictable ways based on the classical laws of physics. The "clockwork universe" became a popular metaphor at the time. In line with the idea, the metaphor was applied to biological functions as well. In "Treatise on Man", René Descartes wrote:
“These functions (including passion, memory, and imagination) follow from the mere arrangement of the machine’s organs every bit as naturally as the movements of a clock or other automaton follow from the arrangement of its counter-weights and wheels”
Telegraphs: With the invention of the telegraph, the brain was compared to a telegraph network.
"Thirty years later, Ramon y Cajal used the telegraph network to explain the structure and function of a single neuron: The nerve cell consists of an apparatus for the reception of currents, as seen in the dendritic expansions and the cell body, an apparatus for transmission, represented by the prolonged axis cylinder, and an apparatus for division or distribution, represented by the nerve terminal arborisation. (Cajal, 1894). Cajal even used wiring as a way of explaining what was happening in the as yet unnamed synapse: current must be transmitted from one cell to another by way of contiguity or contact, as in the splicing of two telegraph wires”
Freud compared the brain to a steam engine, while Descartes compared it to hydraulics. In his essay "Brain Metaphor and Brain Theory," the vision scientist John G. Daugman traces the history of the different metaphors used to describe mind and body.
Back in my college days, I was good at table tennis. While my friends played with professional paddles that cost thousands of rupees, I played with a cheap one from my local college store. I intuitively knew theirs were far superior, but never for a second did I confuse either with a tennis racket. Yet that is exactly what we are doing with AI: confusing it with a tennis racket. Of course, AI systems are far superior to clocks or telegraphs, but they are still machines. AI is merely the latest technology in the long litany of technologies that the mind has been compared to.
Conclusion
When I set out to write this article, I did not really know where to start or what direction to take. I was tempted to cut its length in half by skipping the history and the contemporary theories underlying Artificial Intelligence. But eventually, I decided to chart the history of this thought, because I deeply believe it's important to place every idea, theory, or scientific model in context.
The fall of medieval models, the rise of modern science, the Enlightenment, and concomitant intellectual movements were crucial for the development of this thought, because they changed our conception of knowledge and formalized the methods for acquiring it. Experimental science became the touchstone for acquiring knowledge. While it helped in unraveling the workings of the external world, it posed challenges in unraveling the inner world.
However, computers, the new technology, were revolutionary, and with their rise, we saw our image reflected in them. With the advent of computational models (CTM, information theory), the inner world, which had hitherto remained a black box, finally lent itself to scrutiny, as these models provided a mathematical framework for studying the mind, thus facilitating the birth of Artificial Intelligence and Cognitive Science.
The spectacular success of these models in creating complex technologies has now led many of us to mistake (in my view) the metaphor for reality. Is the human mind just an information processing system? Is thought nothing but computation? Does consciousness not require a biological substrate?
The proponents of AI consciousness will shout out a big YES. But we have the benefit of learning from history, and a reading of the history of science, the mind sciences in particular, illustrates how the human mind has been compared to the various technologies of the day.
Given our history, I wouldn’t be surprised if, 300 years from now, the mind is compared to some new technology of the day that contains an entirely different conceptual framework, with its proponents insisting they have finally found the true explanation for how the mind works or what consciousness is.
And for this very reason, I think that the search for a conscious AI is a fad.
