Philosophy of Mind Part 4: Functionalism

May 1, 2026 · Wasil Zafar · 16 min read

Mental states are defined not by what they are made of but by what they do. Functionalism became the most influential view in late-20th-century philosophy of mind — the philosophical foundation of cognitive science, computational psychology, and serious AI research.

Table of Contents

  1. The Functional Idea
  2. Multiple Realizability
  3. Flavors of Functionalism
  4. Fodor's Language of Thought
  5. The Chinese Room
  6. Open Problems

The Functional Idea

A carburetor is not defined by what it is made of (it could be brass, aluminum, or 3D-printed plastic). It is defined by what it does: mix air with fuel in the right proportions. Functionalism proposes that mental states are analogously defined: a mental state is whatever plays a certain causal-functional role.

"Pain" is whatever state is typically caused by tissue damage, causes wincing and avoidance behavior, causes the desire that it stop, and contributes to learning to avoid the cause. Anything that occupies that causal-functional slot — neurons, silicon chips, hydraulic pipes — is, by definition, pain.

Multiple Realizability

Hilary Putnam's 1967 paper "The Nature of Mental States" was the founding moment. He noted that pain occurs in mammals (with C-fibers), in octopuses (very different neural architecture), and presumably in any sufficiently sophisticated alien. If pain is identical to one specific physical state, all these creatures cannot share it. But intuitively they do. Therefore mental states are multiply realizable — the same mental kind realized by many different physical kinds.

The software analogy: Microsoft Word runs on Windows PCs, Macs, and phones. The software is not identical to any specific piece of hardware. Mind is to brain as software is to hardware — same program, many possible substrates.
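The point can be put in code. Here is a minimal, hypothetical sketch (all names invented for illustration): the "pain" role is specified purely by its causal profile — what inputs produce it and what outputs it produces — and any substrate whose behavior fits that profile counts as realizing it.

```python
class MammalBrain:
    """One realizer of the role: a C-fiber-style system (hypothetical)."""
    def respond(self, tissue_damage: bool) -> dict:
        if tissue_damage:
            return {"wince": True, "avoid": True, "desire_relief": True}
        return {"wince": False, "avoid": False, "desire_relief": False}


class SiliconController:
    """A physically very different realizer with the same causal profile."""
    def respond(self, tissue_damage: bool) -> dict:
        state = tissue_damage
        return {"wince": state, "avoid": state, "desire_relief": state}


def realizes_pain_role(system) -> bool:
    """Functional test: does this system occupy the pain role?"""
    hurt = system.respond(True)
    fine = system.respond(False)
    return all(hurt.values()) and not any(fine.values())

# Both pass the same functional test despite sharing no "hardware":
# that is multiple realizability in miniature.
```

Note that `realizes_pain_role` never inspects what either system is made of — only its input-output profile, which is exactly the functionalist's point.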

Flavors of Functionalism

  • Machine functionalism (early Putnam): Mental states are computational states of a Turing-machine-like system. Each mental state corresponds to a state in a probabilistic automaton.
  • Psychofunctionalism (Block, Fodor): The functional roles are those discovered by empirical psychology, not just folk intuition. Cognitive science fills in the role specifications.
  • Analytic functionalism (Lewis, Armstrong): Functional roles are extracted from folk psychology — from our ordinary causal generalizations about beliefs, desires, and behavior.
  • Teleological functionalism (Millikan): Functional roles must be understood evolutionarily — by what the state was selected to do.
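Machine functionalism, in particular, can be made vivid with a toy machine table (a deterministic simplification of Putnam's probabilistic automata; all states and stimuli here are invented for illustration). A mental state is individuated entirely by its row in the table — which inputs move the system into it, and which outputs and successor states it produces.

```python
# Toy machine table: (current state, input) -> (behavioral output, next state).
TRANSITIONS = {
    ("calm", "tissue_damage"): ("wince", "pain"),
    ("calm", "no_damage"):     ("rest",  "calm"),
    ("pain", "tissue_damage"): ("avoid", "pain"),
    ("pain", "no_damage"):     ("relax", "calm"),
}

def step(state: str, stimulus: str) -> tuple:
    """Apply one machine-table transition."""
    return TRANSITIONS[(state, stimulus)]

# On this picture "being in pain" is nothing over and above occupying
# the state with this transition profile, whatever physically realizes it.
output, next_state = step("calm", "tissue_damage")
```

The machine-functionalist claim is that your actual psychology is a vastly larger (and probabilistic) table of this kind.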

Fodor's Language of Thought

Jerry Fodor developed functionalism into a sweeping research program. Cognition, he argued, is computation over mental representations in an internal symbolic language ("Mentalese"). The brain is a syntactic engine that respects semantic relations — exactly like a digital computer. This Computational Theory of Mind dominated cognitive science from the 1970s onward.

Searle's Chinese Room

John Searle's 1980 paper "Minds, Brains, and Programs" is the most famous attempted refutation of strong AI — the thesis that a suitably programmed computer would literally have a mind.

The Thought Experiment

Imagine Searle, who speaks no Chinese, locked in a room. Slips of paper with Chinese characters come in through a slot. He has a giant rulebook in English that tells him, given any incoming string of symbols, exactly which output symbols to write and pass back out. The rulebook is so good that to a Chinese speaker outside, the answers are indistinguishable from those of a fluent speaker.

By any behavioral test — and by any functional specification — the Room "understands Chinese." But Searle inside understands nothing. He's just shuffling symbols. So syntactic manipulation, however sophisticated, is not sufficient for genuine understanding (semantics). Strong AI is false.
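Searle's scenario is, in effect, a lookup procedure. A deliberately crude sketch (with an invented two-entry rulebook) shows what "pure syntax" means: the operator matches input strings against patterns and emits the paired outputs, with no access to what any symbol means.

```python
# A hypothetical fragment of the rulebook: input pattern -> output string.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",        # "Do you speak Chinese?" -> "Of course."
}

def room(incoming: str) -> str:
    """Syntactic matching only: nothing here represents meanings."""
    return RULEBOOK.get(incoming, "请再说一遍。")  # fallback: "Please say it again."

# The replies may be conversationally apt, yet the procedure consults
# only the shapes of the symbols — Searle's point in two lines.
```

The real rulebook would need to be astronomically larger and stateful to pass behavioral tests, but the functionalist dispute is over whether scaling it up would ever add understanding, not whether this fragment has any.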

Functionalist Replies

  • Systems reply: Searle alone doesn't understand, but the whole system (Searle + rulebook + room + paper) does. Searle: "Then let me memorize the rulebook and walk around. Now I am the whole system, and I still don't understand Chinese."
  • Robot reply: Add sensors and effectors so the system interacts with the world. Searle: still just symbol shuffling at heart.
  • Brain simulator reply: Simulate every neuron of a Chinese speaker's brain. Searle: simulating the brain's formal structure — say, with water pipes and valves — still reproduces only syntax, not the brain's causal powers. Compare Ned Block's earlier "China brain" scenario ("Troubles with Functionalism", 1978): have a billion Chinese citizens each simulate one neuron by phone. Does the nation as a whole then have a mind that understands Chinese?

The debate is unresolved. Most cognitive scientists work as if functionalism were true; many philosophers think Searle has hit on something real.

Open Problems

Functionalism's central challenge — even more than the Chinese Room — is the absent qualia / inverted qualia problem. Could two systems be functionally identical while one has rich phenomenal experience and the other has none? Could two have inverted experience (your "red" looks like my "green")? If yes, then functional role does not pin down phenomenal character — and consciousness slips through functionalism's net.

This sets the agenda for Part 5, where we confront the Hard Problem of consciousness directly.

Next in the Series

In Part 5: Consciousness, we tackle what Chalmers called the Hard Problem head-on — qualia, philosophical zombies, the explanatory gap, and the leading scientific theories (Global Workspace, Integrated Information).