Philosophy of Mind

Series outline:
Foundations: What is mind? The mind-body problem
Dualism: Descartes, substance & property dualism
Physicalism: Identity theory, eliminativism
Functionalism: Mind as software, multiple realizability
Consciousness: Hard problem, qualia
Intentionality: Aboutness, mental content
Personal Identity: Self over time
Free Will: Determinism, compatibilism
Emotions: Cognitive vs feeling theories
Perception: Realism, sense data
Self-Knowledge: Privileged access, self-deception
AI & Machines: Turing test, Chinese Room
Modern Debates: Embodied cognition, panpsychism
Applications: Neuroethics, AI rights, mental health

Easy vs Hard Problems
David Chalmers, in his 1995 paper "Facing Up to the Problem of Consciousness," drew what is arguably the most consequential distinction in modern philosophy of mind. The "easy problems" of consciousness are easy only in comparison: they include explaining how the brain integrates information, focuses attention, reports mental states, controls behavior, and distinguishes wakefulness from sleep. These are tractable for cognitive science, even if hard in practice.
The Hard Problem is different in kind: why is any of this accompanied by subjective experience? Why does information processing feel like anything? You could imagine all the cognitive machinery operating in the dark, with no inner light. The fact that it doesn't — the fact that there is something it is like to be you reading this — is what cries out for explanation.
Qualia & the Explanatory Gap
Qualia (singular: quale) are the qualitative, subjective characters of experiences: the redness of red, the painfulness of pain, the saltiness of salt, the way coffee tastes to you. Joseph Levine coined "the explanatory gap" in 1983: even if pain is identical to a certain pattern of neural firing, that identity leaves it utterly mysterious why this firing pattern feels this particular way rather than another.
For "water = H₂O", once you understand H₂O's behavior, water's behavior follows. For "pain = C-fiber firing", knowing about C-fibers tells you nothing about why pain feels painful. That asymmetry is the gap.
Philosophical Zombies
The Zombie Argument
Imagine a being physically and behaviorally identical to you, atom for atom and action for action — but with no inner experience whatsoever. The "lights are off." It says "I see red," it withdraws from heat, it claims to have feelings — but there's nothing it is like to be it.
If such a zombie is even coherently conceivable, then phenomenal consciousness is not entailed by the physical facts. Adding all the physical truths together does not logically force the existence of experience. So physicalism, as usually understood, must be incomplete.
Critics (Dennett especially) reply that zombies only seem conceivable because we don't conceive them carefully enough. If we really filled in all the physical detail, we'd see that consciousness comes with it.
Higher-Order Theories
Higher-Order Thought (HOT) theories, championed by David Rosenthal, propose that a mental state is conscious when there is a higher-order thought about it. Pain becomes conscious when, in addition to the first-order pain state, you have a thought "I am in pain." Without the higher-order representation, the first-order state remains unconscious.
This explains many puzzles (subliminal perception, blindsight) but faces objections: animals and infants seem conscious without sophisticated higher-order thought; a misrepresenting higher-order state might create a "phantom" consciousness of a non-existent first-order state.
Global Workspace Theory
Bernard Baars introduced Global Workspace Theory (GWT) in the 1980s; Stanislas Dehaene developed it into a leading neuroscientific model. The idea: the brain contains many specialized unconscious processors — vision, language, motor control, memory. A piece of information becomes conscious when it is "broadcast" widely across the brain via a global workspace, making it available to all subsystems for report, planning, and recall.
Dehaene's experiments using masking, attentional blink, and brain imaging show a sharp neural signature when information crosses the threshold into conscious access — typically a late, widespread P3b wave around 300 ms post-stimulus. GWT elegantly explains the function of consciousness, but it is silent on why broadcasting feels like anything.
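The architecture GWT describes can be caricatured in a few lines of code: specialized processors submit candidate contents with activation strengths, and only the winner of the competition is broadcast to every module. This is a toy sketch for intuition only; the class and variable names are my own inventions, not part of any published GWT implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Processor:
    """A specialized unconscious module (vision, language, motor, ...)."""
    name: str
    inbox: list = field(default_factory=list)  # broadcasts received

    def receive(self, content):
        self.inbox.append(content)

class Workspace:
    """Toy global workspace: winner-take-all access, then broadcast."""
    def __init__(self, processors):
        self.processors = processors
        self.candidates = []  # (activation, source, content)

    def submit(self, activation, source, content):
        self.candidates.append((activation, source, content))

    def broadcast_cycle(self):
        # Only the most active content crosses the "conscious access"
        # threshold; it is then made available to ALL subsystems.
        if not self.candidates:
            return None
        activation, source, content = max(self.candidates)
        for p in self.processors:
            p.receive((source, content))
        self.candidates.clear()
        return content

mods = [Processor(n) for n in ("vision", "language", "motor", "memory")]
ws = Workspace(mods)
ws.submit(0.9, "vision", "red circle")   # strong, attended stimulus
ws.submit(0.2, "audition", "faint hum")  # weak: processed, never broadcast
winner = ws.broadcast_cycle()
print(winner)  # -> red circle
```

The point of the sketch is the asymmetry GWT predicts: the weak stimulus is still processed locally but never reaches the workspace, mirroring subliminal perception, while the broadcast content becomes reportable by every module at once.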
Integrated Information Theory
Giulio Tononi's Integrated Information Theory (IIT) takes the boldest stance: consciousness is integrated information, measured by a quantity called Φ (phi). A system is conscious to the degree that it is more than the sum of its parts — that its current state generates information beyond what the parts alone could.
IIT yields surprising predictions: the cerebellum, despite having more neurons than the cortex, has very low Φ (mostly parallel modules), so it should be unconscious — and indeed cerebellar damage doesn't impair consciousness. Conversely, IIT predicts even simple systems with the right integrative architecture would be minimally conscious. This commits IIT to a form of panpsychism, which we examine in Part 13.
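The contrast between an integrated system and "mostly parallel modules" can be made concrete with a crude information-theoretic proxy. The sketch below measures, for a two-node Boolean network, how many bits of each node's next state depend on the other node's current state. This is emphatically not Tononi's Φ, whose full definition searches over minimum-information partitions of every subsystem; it only illustrates the flavor of the measure, and the function names are my own.

```python
from itertools import product
from math import log2
from collections import Counter

def mutual_info(pairs):
    """I(X;Y) in bits from a list of (x, y) samples, uniformly weighted."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def integration(update):
    """Crude proxy for integration: cross-dependence over the cut {A}|{B}.

    update: (a, b) -> (a', b'); inputs assumed uniformly distributed.
    """
    states = list(product((0, 1), repeat=2))
    nexts = [update(a, b) for a, b in states]
    i_a = mutual_info([(b, na) for (a, b), (na, nb) in zip(states, nexts)])
    i_b = mutual_info([(a, nb) for (a, b), (na, nb) in zip(states, nexts)])
    return i_a + i_b

swap = lambda a, b: (b, a)      # each node copies the other: integrated
parallel = lambda a, b: (a, b)  # independent modules, cerebellum-style
print(integration(swap))        # -> 2.0 bits
print(integration(parallel))    # -> 0.0 bits
```

The parallel network does exactly as much computation as the coupled one, yet its parts tell each other nothing — which is the structural point IIT makes about the cerebellum versus the cortex.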
Next in the Series
In Part 6: Intentionality, we turn from the felt quality of mind to its strange property of being about things — Brentano's "mark of the mental," and the puzzles of mental content from Twin Earth to teleosemantics.