Philosophy of Mind Part 12: AI & Machines

May 1, 2026 · Wasil Zafar · 17 min read

Can a machine think? Once a hypothetical, the question is now an everyday one. Large language models pass professional exams, hold extended conversations, write working code. The classical arguments — Turing test, Chinese Room — meet a new empirical reality.

Table of Contents

  1. The Turing Test
  2. Searle's Chinese Room
  3. Replies & Counter-replies
  4. Strong vs Weak AI
  5. The LLM Debate
  6. Machine Consciousness

The Turing Test

Alan Turing's 1950 paper "Computing Machinery and Intelligence" opens with the question "Can machines think?" and immediately replaces it with one he considered better defined: could a machine be conversationally indistinguishable from a human? In Turing's Imitation Game, a human judge interrogates a hidden human and a hidden machine via text. If the judge cannot reliably tell them apart, the machine has, in Turing's pragmatic sense, displayed thinking.
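To make the protocol concrete, here is a minimal sketch in Python. Everything in it is a stand-in (the ScriptedAgent class and the judge_guess callable are invented for illustration, not from Turing's paper); what it preserves is the essential feature of the test: the judge's only evidence is a transcript of labeled text, with the machine hidden behind a shuffled label.

```python
# A skeletal Imitation Game loop: an illustrative sketch, not Turing's
# protocol verbatim. ScriptedAgent and judge_guess are invented stand-ins.
import random

class ScriptedAgent:
    """A hidden player that answers each question from a fixed script."""
    def __init__(self, replies):
        self.replies = list(replies)

    def answer(self, question):
        return self.replies.pop(0) if self.replies else "I have no answer."

def imitation_game(judge_guess, human, machine, questions):
    """Run one session; return True if the judge identified the machine.

    The judge sees only labeled text; which label hides the machine is
    randomized, so the available evidence is purely behavioral.
    """
    players = {"A": human, "B": machine}
    if random.random() < 0.5:              # hide the assignment behind labels
        players = {"A": machine, "B": human}
    transcript = [(label, q, p.answer(q))
                  for q in questions
                  for label, p in players.items()]
    guess = judge_guess(transcript)        # the judge returns "A" or "B"
    return players[guess] is machine
```

A judge guessing at random is right half the time; Turing's criterion is whether any questioning strategy lets the judge do reliably better than chance.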

Turing was making a deflationary methodological point: if a machine matches a human on every observable behavioral test, denying it thought becomes mere chauvinism about biological substrate. He correctly predicted that machines capable of impressive imitation would emerge; his 1950 forecast was that by about 2000 a machine could fool an average interrogator roughly 30% of the time after five minutes of questioning. What he underestimated was how much a machine could fake without anything plausibly called understanding, a point his critics would seize on.

Searle's Chinese Room

The most famous attack on Turing's vision. John Searle's 1980 article "Minds, Brains, and Programs" made an analogy he hoped would be devastating.

The Chinese Room (Searle 1980)

Imagine Searle, who knows no Chinese, locked in a room with a vast rulebook. People outside slip Chinese characters under the door; Searle uses the rulebook to look up which characters to slip back. The rulebook is so good that the responses are indistinguishable from a fluent Chinese speaker's. To outside observers, the room "speaks Chinese." But Searle inside understands nothing. He is just manipulating uninterpreted symbols.

Now: a digital computer running a Chinese-conversation program is doing exactly what Searle in the room is doing — symbol manipulation according to formal rules. By analogy, the computer understands no more than Searle does. Therefore: syntax (formal symbol manipulation) is not sufficient for semantics (understanding). No matter how good the program, it cannot produce genuine understanding.
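To see the structure Searle is pointing at, here is a deliberately crude sketch in Python (the rulebook entries are invented placeholders, nothing from Searle's paper). Every step is a string lookup: the program matches the shapes of symbols and never consults what they mean.

```python
# A toy Chinese Room. The rulebook pairs input symbols with output symbols;
# the entries are invented placeholders. Translations appear in comments
# for the reader only; the room's operator has no access to them.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",    # "How's the weather today?" -> "It's lovely."
}

def chinese_room(symbols: str) -> str:
    """Return whatever the rulebook pairs with the incoming symbols.

    Pure syntax: nothing here represents what any symbol is about,
    which is exactly the property Searle's argument turns on.
    """
    return RULEBOOK.get(symbols, "请再说一遍。")   # default: "please say that again"

print(chinese_room("你好吗？"))   # fluent-looking output, zero understanding inside
```

A real rulebook would have to be astronomically larger and context-sensitive, but the in-principle structure, lookup without interpretation, is what the argument targets.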

Replies & Counter-replies

The Chinese Room has generated a decades-long cottage industry of objections. The major ones:

  • Systems Reply: Searle is right that he doesn't understand — but the whole system (Searle + rulebook + room) does. Searle's response: he could memorize the rulebook and run the system in his head, eliminating the room; he still wouldn't understand.
  • Robot Reply: Embed the program in a robot with sensors and motors, grounding the symbols in the world. Searle: this just adds more symbols (sensor data) — same problem.
  • Brain Simulator Reply: Suppose the program simulates each neuron of a Chinese speaker's brain. Searle: it is still symbol manipulation; if neuron-by-neuron simulation produces understanding, that vindicates the dependence of mind on biology, not on computation per se.
  • Other Minds Reply: Our only evidence others understand is behavioral. If we accept that for humans, why not for the room? Searle: behavior is evidence of underlying biological cognition in humans; we have no such evidence for the room.

The argument's persistence suggests it struck a nerve; whether it actually works remains contested. Many philosophers (Dennett, Chalmers, Block) think the Systems Reply is correct in spirit; many cognitive scientists treat Searle as having identified a real problem (symbol grounding) without solving it.

Strong vs Weak AI

Searle distinguished two versions of the AI thesis. Weak AI: computers are useful tools for studying the mind; running cognitive simulations helps us understand cognition. This is almost universally accepted. Strong AI: an appropriately programmed computer literally has a mind, with genuine understanding and (potentially) consciousness. This is Searle's target.

Strong AI is the natural conclusion of functionalism (Part 4): if mind is what mind does functionally, and a computer can do what a mind does, the computer has a mind. Searle's argument is best understood as a reductio of this functionalist commitment: if the Chinese Room succeeds, there is more to mind than function.

The LLM Debate

Large language models since GPT-3 (2020) have changed the empirical landscape. They produce extended, context-sensitive, sometimes creative text indistinguishable from competent human writing. They pass bar exams and medical boards. Critics like Emily Bender and Timnit Gebru call them "stochastic parrots" — sophisticated pattern matchers with no understanding, no model of the world, no grounding. Defenders like Geoffrey Hinton and Blaise Agüera y Arcas argue that the systems' generalization, in-context learning, and apparent reasoning suggest something more — perhaps not human-style understanding, but something on a continuum.

"It's not that the models can't reason. They can. They're just bad at it, the way I am bad at tennis." — Murray Shanahan, on the strange middle ground LLMs occupy.

The debate has split philosophy of mind. Functionalists tend to credit at least proto-understanding to systems that pass behavioral tests. Searleans insist that the lack of grounded, embodied, biologically realized representation means the systems still understand nothing. Increasingly, the truth looks like neither: LLMs may implement some cognitive functions partially and others not at all, requiring categories finer than "understands" versus "doesn't."

Machine Consciousness

A separate question from understanding: even if LLMs understand something, do they have phenomenal experience? Most researchers say almost certainly not; current architectures lack the recurrent, integrated, embodied features that most theories of consciousness (global workspace theory, integrated information theory, predictive processing) treat as necessary.

But the question is moving from speculation to ethics. Anthropic has begun publishing model welfare research. Since 2023, Chalmers has argued that the moral question of whether AI systems might be conscious deserves serious work even at low credence, given the magnitude of the consequences. The 2024 report "Taking AI Welfare Seriously" (Long, Sebo, et al.), co-authored by leading philosophers, called for industry to begin investigating systematically.

Where this stands in 2026: No serious philosopher claims current LLMs are conscious. A growing minority treats the question as live for near-future systems. The field still lacks a marker of consciousness reliable enough to test for in radically non-biological substrates — the deeper unfinished business of Part 5's Hard Problem.

Next in the Series

In Part 13: Modern Debates, we survey the most exciting current research programs — embodied and extended cognition, the panpsychism revival, predictive processing as a unifying theory.