Political Philosophy

Foundations: Core questions, four domains
Justice: Plato, Aristotle, Rawls
Liberty & Freedom: Negative vs positive, Berlin
Power, Authority & State: Coercion, legitimacy, Foucault
Social Contract: Hobbes, Locke, Rousseau
Equality, Rights & Justice: Distributive, equality types, rights
Political Ideologies: Liberalism, conservatism, socialism
Modern Political Philosophy: Global, feminist, postcolonial
Freedom in Modern Systems: Surveillance, platforms, AI
Applied Political Philosophy: Policy, justice systems, economics
Research & Mastery: Methods, writing, the canon

Surveillance Capitalism
Shoshana Zuboff's The Age of Surveillance Capitalism (2019) gave the most influential critical-theoretical framing of the contemporary digital economy. Zuboff's thesis: a new economic logic has emerged, distinct from industrial capitalism and from anything previously theorized.
The classical commodity is a thing produced from raw materials. The surveillance-capitalist commodity is a behavioral prediction derived from data extracted from human experience — what Zuboff calls "behavioral surplus." The raw material is human behavior; the product is predictions about future behavior; the customer is whoever wishes to influence that future behavior (advertisers, political campaigns, insurers).
This produces a structural incentive to extract as much behavioral data as possible from as many domains of life as possible, and (Zuboff's especially troubling claim) to engineer behavior directly so that predictions become more accurate — making us not only objects of observation but instruments of feedback loops we don't perceive.
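The feedback-loop claim can be made concrete with a toy simulation. Everything here is an illustrative assumption, not Zuboff's own model: a stylized user with fixed organic preferences, and a platform whose predictor is just "most frequent past behavior." The point it demonstrates is that a platform which can steer even a fraction of choices toward its own prediction makes that prediction partly self-fulfilling, so measured accuracy rises without the predictor getting any smarter.

```python
import random

random.seed(0)

CONTENT = ["news", "sports", "gossip"]

def simulate(steering: float, rounds: int = 10_000) -> float:
    """Return the fraction of rounds where the platform's prediction matched
    the user's actual choice. `steering` is the probability that the feed's
    recommendation overrides the user's organic preference."""
    prefs = {"news": 0.5, "sports": 0.3, "gossip": 0.2}  # hypothetical user
    counts = {c: 1 for c in CONTENT}  # platform's observed behavioral history
    hits = 0
    for _ in range(rounds):
        # Naive prediction: the most frequently observed past behavior.
        prediction = max(counts, key=counts.get)
        if random.random() < steering:
            choice = prediction  # behavior nudged toward the prediction
        else:
            choice = random.choices(CONTENT,
                                    weights=[prefs[c] for c in CONTENT])[0]
        counts[choice] += 1
        hits += choice == prediction
    return hits / rounds

# Pure observation vs. observation plus behavioral steering:
print(f"accuracy, no steering:  {simulate(0.0):.2f}")
print(f"accuracy, 30% steering: {simulate(0.3):.2f}")
```

With no steering, accuracy is capped by the user's strongest organic habit; with steering, the feedback loop lifts it above that cap, which is the structural incentive the paragraph above describes.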
The political-philosophical stakes are large. Zuboff argues that the autonomy presupposed by liberal political theory — the capacity for self-direction in a domain protected from external manipulation — is being structurally eroded, not by an authoritarian state but by ostensibly voluntary commercial relations. Whether her diagnosis is exaggerated or understated is much debated; that something significant is happening, something the existing political-philosophical vocabulary does not adequately address, is widely accepted.
Platform Power
The major digital platforms — by 2026 a small number of corporations whose user bases exceed the population of any nation — have accrued forms of power that the existing categories of political philosophy struggle to capture.
They are not states: they cannot legitimately use physical force, do not claim sovereignty over territory, and operate within (and across) state legal systems. But they exercise functions traditionally associated with sovereign authority: rule-making (community standards), enforcement (account suspension), adjudication (content moderation appeals), and quasi-currency creation (tokens, ad credits). They condition the channels of public communication and economic exchange.
Two major frames have been proposed.
Republican non-domination (Pettit's framework, applied here by Frank Pasquale, Tim Wu, and others) — Even when a platform does not interfere with a user, the user is dominated to the extent that her access to the platform's services depends on the platform's arbitrary will. The combination of network effects, switching costs, and platform monopolization creates structural domination that political philosophy must address.
Functional sovereignty — Some scholars argue that platforms have become "private governments" or "functional sovereigns" requiring constitutional-style constraints (due process for moderation, transparency requirements, antitrust action). Treating them as ordinary commercial actors radically misunderstands their power.
Algorithmic Governance
An increasing share of consequential decisions affecting lives — credit, insurance, employment, parole, child welfare, immigration, public benefits — is being made or shaped by algorithmic systems. The political-philosophical questions are sharp.
Procedural justice: Traditional theories of legitimate decision-making emphasize that those affected by decisions should be able to understand the basis for them, contest them, and be heard. Many algorithmic systems are opaque even to their operators (deep neural networks especially), making meaningful contestation difficult or impossible.
Substantive justice: Algorithmic systems trained on historical data tend to reproduce the discriminatory patterns of that data. If past credit decisions discriminated against minorities, an algorithm that learns from those decisions will recommend more of the same. The literature on algorithmic fairness (Barocas, Selbst, Mitchell, etc.) is now substantial; no fully satisfactory technical solution has emerged.
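The reproduction pattern is easy to demonstrate on deliberately stylized synthetic data. The scores, groups, and thresholds below are invented for illustration, and the "model" is just a learned score cut-off, not any real credit-scoring system: a rule fit to discriminatory historical decisions simply learns the discrimination back.

```python
# Historical decisions: (credit_score, group, approved). The hypothetical
# lender held group "B" to a stricter bar (700) than group "A" (600).
history = (
    [(s, "A", s >= 600) for s in range(500, 800, 10)]
    + [(s, "B", s >= 700) for s in range(500, 800, 10)]
)

def learn_threshold(records):
    """Pick the score cut-off that best reproduces the historical labels."""
    candidates = sorted({s for s, _, _ in records})

    def accuracy(t):
        return sum((s >= t) == label for s, _, label in records)

    return max(candidates, key=accuracy)

# Fit one threshold per group: the model has access to group membership.
model = {g: learn_threshold([r for r in history if r[1] == g])
         for g in ("A", "B")}
print(model)  # the fitted rule recovers the discriminatory cut-offs exactly

# Two applicants identical in every respect except group membership:
score = 650
for g in ("A", "B"):
    print(g, "approved" if score >= model[g] else "denied")
```

Nothing in the fitting procedure is malicious; maximizing accuracy on biased labels is exactly what reproduces the bias, which is why post-hoc technical fixes have proved so difficult.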
Democratic legitimacy: Decisions about which algorithms to deploy, on which populations, with what consequences, are increasingly made by combinations of agency administrators and private vendors with little democratic oversight. Whether the institutions of democratic accountability can be retrofitted onto an algorithmically-mediated administrative state is one of the central institutional questions of our time.
The Attention Economy
If the previous topics concern threats to political freedom, this one concerns the underlying epistemic and psychological conditions of political life. Tristan Harris, James Williams, Yuval Noah Harari, and others have argued that platforms designed to maximize engagement systematically corrode the cognitive and emotional capacities a self-governing citizenry requires.
The mechanism is well-documented. Algorithmic feeds optimized for engagement preferentially amplify content that triggers strong emotional response — outrage, fear, tribalism. Sustained attention to these feeds shapes default cognitive patterns: shorter attention spans, greater emotional reactivity, weaker capacity for nuance or sustained reasoning. The political consequences — collapse of shared facts, polarization, conspiracy susceptibility — are now visible in essentially every democratic society.
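The amplification mechanism can be sketched in a few lines. The posts, scores, and weights below are invented, and real ranking systems are vastly more complex; the sketch only shows the structural point that when predicted engagement loads heavily on emotional arousal, ranking by it pushes outrage to the top of the feed regardless of substance.

```python
# Hypothetical posts: (title, outrage 0-1, substance 0-1).
posts = [
    ("measured policy analysis",    0.1, 0.9),
    ("local council meeting recap", 0.0, 0.7),
    ("OUTRAGEOUS scandal!!!",       0.9, 0.2),
    ("they are coming for YOU",     0.8, 0.1),
    ("long-form investigation",     0.2, 0.8),
]

def predicted_engagement(outrage, substance):
    # Stylized assumption: arousal drives clicks far more than substance.
    return 0.8 * outrage + 0.2 * substance

# An engagement-maximizing feed ranks by the predicted score alone.
feed = sorted(posts, key=lambda p: predicted_engagement(p[1], p[2]),
              reverse=True)
for title, *_ in feed:
    print(title)
```

The two high-outrage posts occupy the top of the feed even though they carry the least substance; no editor chose this outcome, which is what makes it a structural rather than an editorial problem.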
The political-philosophical question is whether political theory can adequately address conditions that the founders of the discipline did not contemplate. Mill's On Liberty argued that free discussion would, in the long run, drive truth to the surface and error to the margins. The argument depended on assumptions about the discursive ecosystem that no longer hold. Whether the structural reform of attention environments — through regulation, alternative platform designs, or cultural norms — should be a central political-philosophical project is being actively worked out.
AI as a Political Question
This is the most rapidly evolving frontier. AI systems in 2026 are sufficiently capable that political-philosophical questions once purely hypothetical are now urgent.
Who decides? The development trajectory of frontier AI is being set largely by a small number of corporate AI labs, with governments scrambling to catch up. The decisions being made — about safety, about deployment, about the values embedded in systems that will mediate billions of human interactions — are decisions of enormous political consequence being made through processes with little democratic input. The case for serious public-interest governance is strong; the practical mechanisms for it are unsettled.
Whose values? When an AI assistant gives advice, refuses a request, or rank-orders content, it embeds normative judgments. Whose judgments should those be? The corporation training the system? The aggregated preferences of users? Some democratically chosen baseline? Different cultural conceptions in different jurisdictions? The "alignment problem" is partly technical, but the deeper question is political — who has the right to specify what AI systems are aligned to.
Labor and distribution. The economic effects of AI on labor markets — already visible in 2026 across many white-collar occupations — raise distributive questions of a scale not faced since industrialization. The political institutions for managing the transition are mostly absent. Universal basic income, robot taxes, mass retraining, work guarantees, shorter working weeks: each has its proponents; none has been seriously implemented at scale.
Concentration of power. The economic returns to the most capable AI systems appear to be highly concentrated. Frontier AI development requires capital investments only a handful of organizations can sustain. If the resulting capabilities translate into market dominance, political influence, and (in the more concerning scenarios) decisive economic-or-military advantages, the implications for the distribution of power within and among societies are large. Political philosophy is overdue to take these questions as central rather than peripheral.
Next in the Series
In Part 10: Applied Political Philosophy, we look at how the frameworks of the previous parts apply to the day-to-day work of policy design, criminal justice, economic regulation, and public administration.