How the Philosophy of Mind and Consciousness Has Affected AI Research

“Brain in a Jar” is a thought experiment about a disembodied human brain kept alive in a jar of nutrients. The thought experiment explores human conceptions of reality, mind, and consciousness. This article explores a metaphysical argument against artificial intelligence on the grounds that a disembodied artificial intelligence, or “brain” without a body, is incompatible with the nature of intelligence.[1]

The brain in a jar is a different investigation from traditional questions about artificial intelligence. The brain in a jar asks whether thinking requires a thinker. Traditional questions about the possibility of artificial intelligence revolve mainly around what is needed to make a computer (or a computer program) intelligent. From that point of view, artificial intelligence is possible if we can understand intelligence and understand how to program it into a computer.

The 17th-century French philosopher René Descartes deserves a lot of blame for the brain in a jar. Descartes fought materialism, which explains the world, and everything in it, as composed entirely of matter.[2] Descartes separated mind and body to create a neutral space for discussing non-material substances like consciousness, the soul, and even God. This philosophy of mind came to be called Cartesian dualism.[3]

Dualism holds that body and mind are not one thing but separate, opposite things made of different substances that interact inexplicably.[4] Descartes’ methodology of doubting everything, even his own body, in favor of his thoughts, in order to find something “indubitable” that he could least doubt and thereby learn something about knowledge, is itself dubious. The result is an exhausting epistemological quest to understand what we can know by manipulating metaphysics and what exists. This sort of solipsistic thinking is unwarranted, but it was not a personality disorder in the 17th century.[5]

The French philosopher René Descartes proposed the theory of dualism between mind and body.

There is reason to sympathize with Descartes. Thinking about thought has perplexed thinkers since the Enlightenment and spawned strange philosophies, theories, paradoxes and superstitions. In many ways, dualism is no exception.

It was only at the beginning of the 20th century that dualism was legitimately contested.[6][7] So-called behaviorism held that mental states could be reduced to physical states, which were nothing more than behavior.[8] Besides the reductionism that results from treating humans as collections of behaviors, the problem with behaviorism is that it ignores mental phenomena and explains brain activity as producing behaviors that can only be observed. Concepts like thought, intelligence, feelings, beliefs, desires, and even heredity and genetics are eliminated in favor of environmental stimuli and behavioral responses.

Therefore, one can never use behaviorism to explain mental phenomena, since the focus is on external, observable behavior. Philosophers like to joke about two behaviorists evaluating their performance after sex: “That was great for you; how was it for me?” they say to each other.[9][10] By focusing on the observable behavior of the body and not on the origin of that behavior in the brain, behaviorism became less and less a source of knowledge about intelligence.

This is why behaviorists fail to define intelligence.[11] They believe there is nothing to define.[12] Consider Alan Turing’s eponymous Turing test. Turing avoids defining intelligence by saying, in effect, that intelligence is as intelligence does. A jar passes the Turing test if it fools another jar into thinking it behaves intelligently by answering questions with answers that seem intelligent. Turing was a behaviorist.
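
To make the behaviorist logic of the test concrete, here is a minimal Python sketch, with all names invented for illustration: the judge compares only the observable answers and never inspects the mechanism that produced them.

```python
# A minimal, hypothetical sketch (names invented for illustration) of the
# behaviorist logic behind the imitation game: the judge compares only
# observable answers, never the mechanism that produced them.

def passes_imitation_game(machine_answer, human_answer, questions):
    """The machine 'passes' if its answers are indistinguishable from the
    human's for every question the judge asks."""
    for question in questions:
        if machine_answer(question) != human_answer(question):
            return False   # a detectable behavioral difference
    return True            # behavior alone settles the verdict

# Usage: anything that maps questions to human-like answers can play the machine.
human = lambda q: "I'd rather not say."
machine = lambda q: "I'd rather not say."
print(passes_imitation_game(machine, human, ["Can you write me a sonnet?"]))  # True
```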

Computer scientist Alan Turing suggested the imitation game, later called the “Turing test”, which aims to measure a machine’s ability to exhibit intelligent behavior.

Behaviorism’s decline in influence is a direct result of its inability to explain intelligence. By the 1950s, behaviorism was widely discredited. The most significant attack was delivered in 1959 by the American linguist Noam Chomsky, who excoriated B.F. Skinner’s book Verbal Behavior.[13][14] Chomsky’s review, “A Review of B.F. Skinner’s Verbal Behavior,” is his most cited work, and despite its prosaic title, it has become better known than Skinner’s original book.[15]

Chomsky sparked a reorientation of psychology toward the brain, dubbed the cognitive revolution. The revolution produced modern cognitive science, and functionalism became the new dominant theory of mind. Functionalism views intelligence (i.e., a mental phenomenon) as resting on the brain’s functional organization, where individuated functions like language and vision are understood by their causal roles.

Unlike behaviorism, functionalism focuses on what the brain does and where brain function occurs.[16] However, functionalism is not concerned with how something works or whether it is made of the same material. It does not matter if the thinking thing is a brain, or if that brain has a body. If it works like intelligence, it is intelligent, just as anything that tells time is a clock. It doesn’t matter what the clock is made of as long as it keeps time.
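
The clock analogy maps neatly onto a short Python sketch of multiple realizability. Everything below is hypothetical and invented for illustration; the point is only that the caller cares about the role tell_time plays, not about what plays it.

```python
from datetime import datetime

# A hypothetical sketch of multiple realizability (class and function names are
# invented for illustration): what matters to functionalism is the causal role
# a thing plays, not the material that realizes it.

class Sundial:
    def tell_time(self):
        return datetime.now().strftime("%H:%M")  # stand-in for reading a shadow

class QuartzWatch:
    def tell_time(self):
        return datetime.now().strftime("%H:%M")  # stand-in for counting crystal oscillations

def what_time_is_it(clock):
    # The caller depends only on the function tell_time, not on its realization.
    return clock.tell_time()

# Usage: both realizations count as clocks because both play the clock's causal role.
for clock in (Sundial(), QuartzWatch()):
    print(type(clock).__name__, what_time_is_it(clock))
```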

American psychologist Burrhus Frederic Skinner, known for his work on behaviorism.

The American philosopher and computer scientist Hilary Putnam developed functionalism in “Psychological Predicates” with computational concepts to form computational functionalism.[17][18] Computationalism, in short, views the mental world as grounded in a physical system (i.e., a computer) using concepts such as information, computation (i.e., thought), memory (i.e., storage), and feedback.[19][20][21] Today, AI research relies heavily on computational functionalism, where intelligence is organized into functions such as computer vision and natural language processing and explained in terms of computation.

Unfortunately, functions don’t think. They are aspects of thought. The problem with functionalism – aside from the reductionism that results from treating thought as a set of functions (and humans as brains) – is that it ignores thought. While the brain has localized functions with input-output pairs (e.g., perception) that can be represented as a physical system inside a computer, thought is not a loose set of localized functions.

John Searle’s famous Chinese Room thought experiment is one of the strongest attacks on computational functionalism. The philosopher and former professor at the University of California, Berkeley argued that it is impossible to build an intelligent computer because intelligence is a biological phenomenon that requires a conscious thinker. The argument runs contrary to functionalism, which treats intelligence as realizable by anything that can mimic the causal roles of specific mental states with computational processes.
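
A toy version of the room is easy to write down. The following Python sketch is hypothetical, with an invented rulebook; it produces well-formed Chinese answers by symbol matching alone, which is exactly the kind of behavior Searle argues can never amount to understanding.

```python
# A hypothetical sketch of the Chinese Room (the rulebook entries are invented
# for illustration): symbols go in, symbols come out, and nothing in the loop
# understands Chinese.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "当然会。",   # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols_in: str) -> str:
    # The person inside simply matches incoming squiggles to outgoing squiggles.
    return RULEBOOK.get(symbols_in, "请再说一遍。")  # "Please say that again."

# Usage: from outside, the answers look competent, which is all that behaviorism
# or functionalism can measure.
print(chinese_room("你好吗？"))
```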

The philosopher John Searle proposed the “Chinese Room” thought experiment. Credit: Matthew Breindel

The irony of the brain in a jar is that Descartes would not have considered “AI” thinking at all. Descartes knew the automatons and mechanical toys of the 17th century. However, the “I” in Descartes’ saying “I think, therefore I am” treats the human mind as non-mechanical and non-computational. The “cogito” argument implies that for there to be thought, there must also be a subject of that thought. While dualism seems to grant the brain in a jar permission by eliminating the body, it also implies that AI can never think, because its thought would lack a subject of that thought and its intelligence would lack an intelligent being.

Hubert Dreyfus explains how artificial intelligence inherited a “lemon” philosophy.[22] The late professor of philosophy at the University of California, Berkeley was influenced by phenomenology, the philosophy of conscious experience.[23][24][25][26] The irony, Dreyfus explains, is that philosophers had already spoken out against many of the philosophical frameworks artificial intelligence adopted in its early days, including behaviorism, functionalism, and representationalism, all of which ignore embodiment.[27][28][29] These frameworks are contradictory and incompatible with the biological brain and natural intelligence.

Admittedly, the field of AI was born at a strange philosophical hour. This has largely inhibited progress in understanding intelligence and what it means to be intelligent.[30][31] Of course, the achievements in the field over the past seventy years also show that the discipline is not doomed. The reason for this is that the philosophy adopted most frequently by friends of artificial intelligence is pragmatism.

The philosopher Hubert Dreyfus is renowned for his critical view of artificial intelligence.

Pragmatism is not a philosophy of mind. It is a philosophy that focuses on practical solutions to problems like computer vision and natural language processing. The field has found shortcuts for solving such problems, and we misinterpret those shortcuts as intelligence, driven mainly by our human tendency to project human qualities onto inanimate objects. AI’s inability to understand, and ultimately solve, intelligence suggests that metaphysics may be necessary for AI’s supposed destiny. However, pragmatism shows that metaphysics is not necessary to solve real-world problems.

This strange line of inquiry suggests that real artificial intelligence can’t be real unless the brain in a jar has legs, which is catastrophic for any arbitrary GitHub repository claiming artificial intelligence.[32] It also spells the death of all companies that “do AI,” because apart from the metaphysical problem, there is an ethical one that would be difficult, if not impossible, to resolve without declaring the power cord and your computer mouse part of an intelligent being, or without the animal experimentation needed to attach legs and arms to your computers.

This article was originally written by Rich Heimann and published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new technologies, and what we need to watch out for. You can read the original article here.