
Ethics & Moral Philosophy Series Part 6: Applied Ethics & Case Studies

January 25, 2026 Wasil Zafar 28 min read

Apply ethical principles to real-world moral dilemmas. Explore bioethics, environmental ethics, business ethics, AI ethics, and work through contemporary case studies that challenge our moral reasoning.

Table of Contents

  1. What is Applied Ethics?
  2. Bioethics & Medical Ethics
  3. Environmental Ethics
  4. Business Ethics
  5. Technology & AI Ethics
  6. Case Studies
  7. Series Conclusion

Introduction: What is Applied Ethics?

Series Finale: This is Part 6 (Final Part) of our 6-part Ethics & Moral Philosophy Series. Having explored the foundations and major theories, we now apply ethics to real-world problems.

Applied ethics takes the normative theories we've studied and brings them to bear on specific moral problems. It's where philosophy meets the real world—in hospitals, boardrooms, laboratories, courtrooms, and legislatures.

Key Insight: Applied ethics isn't just about "applying" theory mechanically. Real-world cases often reveal tensions between theories, expose hidden assumptions, and force us to refine our ethical thinking. The relationship between theory and practice goes both ways.

Methods in Applied Ethics

Applied ethicists use several methodological approaches:

Three Methodological Approaches

  • Top-Down: Start with ethical theory and deduce conclusions for specific cases. Systematic, but may ignore case-specific features.
  • Bottom-Up (Casuistry): Start with particular cases and reason by analogy and precedent. Sensitive to particulars, but may lack a principled basis.
  • Reflective Equilibrium: Move back and forth between principles and intuitions, adjusting both until they cohere. A balanced approach favored by many philosophers.

Theory vs. Practice

How the Major Theories Approach Applied Cases

Theory Application
  • Utilitarianism: Calculate consequences. What produces the greatest overall good?
  • Deontology: Identify duties. What does the moral law require? What would respect persons?
  • Virtue Ethics: Consider character. What would a virtuous person do? What does practical wisdom advise?

In Practice: Most ethicists draw on multiple approaches. Different theories illuminate different aspects of complex cases.

Bioethics & Medical Ethics

Bioethics is one of the most developed areas of applied ethics, emerging in the 1960s-70s in response to advances in medical technology and notorious research scandals (e.g., the Tuskegee syphilis study).

The Four Principles (Beauchamp & Childress): The dominant framework in bioethics uses four mid-level principles:
  • Autonomy: Respect the patient's right to make their own decisions
  • Beneficence: Act in the patient's best interest
  • Non-maleficence: Do no harm
  • Justice: Distribute benefits and burdens fairly

Patient Autonomy & Informed Consent

The Principle of Autonomy

Core Concept

Informed Consent requires that patients:

  • Understand the nature of their condition and proposed treatment
  • Know the risks, benefits, and alternatives
  • Voluntarily agree without coercion
  • Have capacity to make the decision

Paternalism Debate: When, if ever, may doctors override patient wishes "for their own good"? Strong autonomy views say almost never; moderate views allow exceptions (e.g., psychiatric emergencies).

Euthanasia & End-of-Life Care

The Euthanasia Debate

Key Distinctions
  • Active & Voluntary: Patient requests death; doctor administers a lethal agent
  • Active & Non-Voluntary: Patient cannot consent; doctor causes death
  • Active & Involuntary: Patient refuses; doctor kills anyway (murder)
  • Passive & Voluntary: Patient refuses treatment; death follows naturally
  • Passive & Non-Voluntary: Life support is withdrawn from an unconscious patient
  • Passive & Involuntary: Treatment is withdrawn against the patient's wishes

The Doctrine of Double Effect: An action with a bad effect (death) may be permissible if the bad effect is foreseen but not intended, and the good effect (relief of suffering) is the aim.

Genetic Engineering & Enhancement

Gene Editing Ethics (CRISPR)

Emerging Issues

Therapy vs. Enhancement: A key distinction:

  • Therapy: Correcting genetic diseases (e.g., sickle cell, cystic fibrosis)
  • Enhancement: Improving "normal" traits (intelligence, athletic ability)

Ethical Concerns:

  • Safety and unknown risks of germline editing
  • Creating "designer babies" and genetic inequality
  • Playing God / "hubris" objections
  • Informed consent for future generations (who can't consent)

Environmental Ethics

Environmental ethics examines our moral relationship with the natural world. Should we care about nature only because it benefits humans, or does nature have value in itself?

Climate Change Ethics

The Ethics of Climate Change

Global Challenge

Key Ethical Issues:

  • Intergenerational Justice: What do we owe future generations? How much should we sacrifice now for people who don't yet exist?
  • Global Justice: Rich countries caused most emissions but poor countries suffer most. Who should bear the costs?
  • Collective Action: Individual actions seem trivial, but collective impact is huge. What are individual vs. collective responsibilities?
  • Discounting the Future: Should future harms count less than present harms? If so, how much less?
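The discounting question above is easy to make concrete. Here is a minimal sketch (the discount rates and the 100-year horizon are illustrative assumptions, not figures from any particular climate model) showing how much a future harm "counts" relative to a present one under standard exponential discounting:

```python
# Illustrative sketch: how a discount rate shrinks the moral weight of
# future harms. Rates and horizon are hypothetical examples.

def discounted_weight(years_ahead: float, annual_rate: float) -> float:
    """Present-value weight of a harm occurring `years_ahead` years from now."""
    return 1.0 / ((1.0 + annual_rate) ** years_ahead)

for rate in (0.0, 0.01, 0.03, 0.05):
    w = discounted_weight(100, rate)
    print(f"rate {rate:.0%}: a harm in 100 years counts as {w:.3f} of a present harm")
```

Even a seemingly modest rate like 3% makes a death a century from now count for only about a twentieth of a death today, which is why the choice of discount rate carries enormous ethical weight.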

Animal Rights & Welfare

Two Major Approaches

Animal Ethics

Peter Singer's Utilitarian View:

  • Sentience (capacity to suffer) is what matters morally
  • Equal consideration of interests—animal pain counts like human pain
  • "Speciesism" (favoring humans just because they're human) is like racism or sexism

Tom Regan's Rights View:

  • Animals are "subjects of a life" with inherent value
  • This gives them moral rights—not to be used merely as means
  • Stronger protections than utilitarianism provides

Business Ethics

Business ethics examines moral issues in commercial activity—corporate responsibility, fair dealing, and the ethics of profit-seeking.

Corporate Social Responsibility

What Do Corporations Owe Society?

CSR Debate

Milton Friedman's View (Shareholder Theory):

  • "The social responsibility of business is to increase its profits" (within legal bounds)
  • Managers are agents of shareholders—spending on CSR is "stealing" from owners
  • Let the market work; government handles social problems

Stakeholder Theory:

  • Corporations have obligations to all stakeholders: employees, customers, communities, environment—not just shareholders
  • Long-term sustainability requires considering broader impacts

Whistleblowing

When Is Whistleblowing Justified?

Moral Dilemma

The Tension: Loyalty to employer vs. duty to public interest

Conditions for Justified Whistleblowing (DeGeorge):

  1. The harm is serious and considerable
  2. You've reported concerns through internal channels first
  3. There's documented evidence, not just suspicion
  4. There's reasonable chance that blowing the whistle will prevent the harm

When does it become obligatory? When you can prevent serious harm at relatively low cost to yourself.
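Because DeGeorge's account is essentially a checklist, it can be sketched in code. The field names below are my own labels for the conditions listed above, not DeGeorge's wording:

```python
# A sketch encoding DeGeorge's conditions as a checklist.
# Field names are illustrative labels, not DeGeorge's own terms.
from dataclasses import dataclass

@dataclass
class WhistleblowingCase:
    harm_is_serious: bool
    internal_channels_exhausted: bool
    has_documented_evidence: bool
    likely_to_prevent_harm: bool
    personal_cost_is_low: bool  # used only for the "obligatory" question

def is_justified(case: WhistleblowingCase) -> bool:
    """All four conditions listed above must hold."""
    return (case.harm_is_serious
            and case.internal_channels_exhausted
            and case.has_documented_evidence
            and case.likely_to_prevent_harm)

def is_obligatory(case: WhistleblowingCase) -> bool:
    """Justified, plus the cost to the whistleblower is relatively low."""
    return is_justified(case) and case.personal_cost_is_low
```

The conjunction matters: on this account, documented evidence without exhausted internal channels (or vice versa) is not enough to justify going public.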

Technology & AI Ethics

Technology ethics addresses moral issues raised by new technologies—increasingly dominated by artificial intelligence.

Artificial Intelligence Ethics

Key AI Ethics Issues

Emerging Field
  • Algorithmic Bias: AI systems can encode and amplify human biases (racial, gender, etc.). Who's responsible?
  • Autonomous Weapons: Should we allow lethal decisions without human oversight?
  • The Alignment Problem: How do we ensure AI systems pursue goals we actually want?
  • AI Rights: If AI becomes conscious, would it have moral status?
  • Job Displacement: What do we owe workers displaced by automation?

Privacy & Surveillance

Digital Privacy Ethics

Contemporary Issues
  • Surveillance Capitalism: Is the business model of harvesting personal data for profit ethical?
  • Government Surveillance: How do we balance security and liberty?
  • The Right to Privacy: What privacy expectations are reasonable in a digital age?
  • Data Ownership: Who owns your personal data? What rights should you have over it?

Case Studies

Let's apply our ethical frameworks to classic cases that illuminate how different theories approach real dilemmas.

Case Study 1: The Trolley Problem in Autonomous Vehicles

Applied Ethics · Real-World Application

The Scenario: A self-driving car must choose between swerving (killing its passenger) or staying on course (killing five pedestrians). How should it be programmed?

Analysis:

  • Utilitarian: Minimize deaths—swerve. But then who would buy such a car?
  • Deontological: There's a difference between killing and letting die. Actively swerving to kill is worse.
  • Virtue Ethics: What would a wise person do? Perhaps: build cars that rarely face such choices.

Real-World Complication: Surveys show that people want other people's cars programmed to be utilitarian, but their own cars programmed to protect them.
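The disagreement between the first two analyses can be made explicit with a toy sketch. The outcome numbers come straight from the scenario above; the two policy functions are illustrative stylizations, not real autonomous-vehicle control logic:

```python
# A minimal sketch (stylized outcomes, not a real AV policy): compare a
# utilitarian rule against a passenger-protective rule on the case above.

SCENARIO = {
    "swerve": {"passenger_deaths": 1, "pedestrian_deaths": 0},
    "stay":   {"passenger_deaths": 0, "pedestrian_deaths": 5},
}

def utilitarian_choice(scenario):
    """Pick the action minimizing total deaths, regardless of who dies."""
    return min(scenario, key=lambda a: sum(scenario[a].values()))

def passenger_protective_choice(scenario):
    """Pick the action minimizing harm to the car's own passenger."""
    return min(scenario, key=lambda a: scenario[a]["passenger_deaths"])

print(utilitarian_choice(SCENARIO))           # "swerve" (1 death vs. 5)
print(passenger_protective_choice(SCENARIO))  # "stay" (shields the passenger)
```

The two rules give opposite verdicts on the same inputs, which is exactly the tension the survey finding exposes: buyers endorse the first rule for everyone else and the second for themselves.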

Case Study 2: Heinz's Dilemma

Kohlberg · Moral Development

The Scenario: Heinz's wife is dying. A druggist has a cure but charges $2,000 (10x his cost). Heinz can only raise $1,000. The druggist refuses to sell cheaper or let Heinz pay later. Should Heinz steal the drug?

Analysis:

  • Utilitarian: Probably yes—a life outweighs property rights and minor harm to the druggist
  • Deontological: Tricky. Stealing violates duty, but so does letting someone die when you can save them
  • Virtue Ethics: A compassionate, just person might steal. But also: a just society wouldn't create such dilemmas

Kohlberg's Point: What matters for moral development isn't the answer but the reasoning—are you thinking about punishment, social rules, or universal principles?

Case Study 3: The Ford Pinto Case

Business Ethics · Cost-Benefit Analysis Gone Wrong

The Facts: In the 1970s, Ford knew the Pinto's fuel tank was vulnerable to rupture in rear-end collisions. An $11 fix per car would prevent an estimated 180 burn deaths, 180 serious injuries, and 2,100 burned vehicles.

Ford's Analysis:

  • Cost of fix: $11 × 12.5 million cars = $137.5 million
  • Benefits: 180 deaths × $200,000 + 180 injuries × $67,000 + 2,100 vehicles × $700 = $49.5 million
  • Conclusion: Don't fix it—costs exceed benefits
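The arithmetic itself is easy to reproduce from the figures quoted above; the ethical failure lies in the inputs and the framing, not the multiplication:

```python
# Reproducing the cost-benefit arithmetic from the figures quoted above.

cost_of_fix = 11 * 12_500_000          # $11 per car, 12.5 million vehicles

benefits = (180 * 200_000              # valued burn deaths
            + 180 * 67_000             # valued serious injuries
            + 2_100 * 700)             # valued burned vehicles

print(f"Cost of fix:      ${cost_of_fix:,}")   # $137,500,000
print(f"Claimed benefits: ${benefits:,}")      # $49,530,000
print("don't fix" if cost_of_fix > benefits else "fix")
```

Note how sensitive the conclusion is to the $200,000 figure assigned to a human life: value lives an order of magnitude higher and the recommendation flips, which is one reason critics charge that the framework, not just the numbers, was defective.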

What Went Wrong:

  • Utilitarian Critique: The numbers were wrong (undervalued lives, underestimated injuries)
  • Deontological Critique: Using people merely as means—sacrificing lives for profit
  • Virtue Critique: What kind of company (what kind of character) makes such calculations?

Series Conclusion

Congratulations! You've completed the 6-part Ethics & Moral Philosophy Series. You now have a foundation in ethical reasoning that can guide you through life's toughest moral dilemmas.

What You've Learned Across the Series

Series Summary
  1. Foundations: What ethics is, why it matters, and the fundamental questions
  2. Utilitarianism: Judge actions by consequences—maximize overall well-being
  3. Deontology: Judge by duties—some acts are wrong regardless of consequences
  4. Virtue Ethics: Focus on character—become the kind of person who does right naturally
  5. Metaethics: Debates about the nature and existence of moral truth
  6. Applied Ethics: How to bring these tools to real-world problems

How to Continue Developing Moral Reasoning

Practical Advice
  • Practice: Apply these frameworks to dilemmas you encounter. What would each theory say?
  • Read: Explore primary sources—Aristotle's Nicomachean Ethics, Kant's Groundwork, Mill's Utilitarianism
  • Discuss: Engage others in ethical conversation. Test your views against objections.
  • Reflect: Notice your moral intuitions. When do they conflict with theory? What does that tell you?
  • Act: Ethics isn't just theoretical—it's about how we live. Practice the virtues.

Socrates: "The unexamined life is not worth living." You've begun the examination. The goal isn't to arrive at final answers but to think more carefully, act more wisely, and live more deliberately. Ethics is a lifelong practice.