
Do the Right Thing

NBC's Emmy-nominated sitcom The Good Place does a great job explaining important ideas in moral philosophy through one of the show's protagonists, Chidi Anagonye, a professor of moral philosophy. One of Chidi's comedic tropes and defining character traits, unfortunately, is crippling indecision. He can't even decide between muffin flavors. The show's writers treat this as one of Chidi's major character flaws, but the character often excuses it as moral indecision. As a scholar of ethics, and one who does not clearly fall into one or another theoretical school (e.g., he's not a utilitarian or a virtue ethicist), he is never quite certain which moral view is correct, and, as a man who has made ethics his life, he cannot allow himself to make a bad moral choice.



Don't be Chidi.

Ethics is supposed to help us make the right decision, not cripple us with uncertainty. Philosophers from Aristotle to Kant to Bentham to Nussbaum offer theories about moral choices because they want people to be able to make sound moral decisions. But, as Chidi demonstrates, if a little knowledge is a dangerous thing, a lot of knowledge can be catastrophic!

One easy way to avoid being Chidi-esque is to just make up your mind once and for all what type of moral thinker you are. You can decide to focus on the rules we need to live by and be a deontologist. You can decide to always focus on the consequences for the most people and be a utilitarian. You can focus on the cultivation of good moral habits and be a virtue ethicist. But most of us, especially most of us Americans, don't think in those terms. There are times and places where we need to follow rules: at the DMV, at a restaurant, at the airport and many other places. When we encounter strangers, we typically expect that we are all going to follow the unwritten rules that govern our society. But there are other times when we focus on maximizing people's happiness: when we're splitting a cake, inviting guests to a wedding or choosing a place for lunch. When we are working with people to attain a common goal, we typically think it's best for burdens and benefits to be fairly shared. And while we probably won't often think about how our individual choices do or do not change our moral habits, if we never think about virtue, we're probably not going to become good people.

Properly speaking, we call this way of doing ethics pragmatism. Pragmatism means that which rules and considerations are relevant to a given moral choice depends on the situation itself. But pragmatism can easily slip into moral relativism (no standards) or ethical egoism (what's right is what benefits me) if we're not careful. So it's good to have some guiding principles.



The good news is that this is a problem a lot of people have thought about. If you pick up the average business ethics textbook, you'll find a guide to making moral decisions. Here are two examples of such guides, from the Markkula Center for Applied Ethics at Santa Clara University and from the business ethicist Christopher MacDonald. After teaching various versions of this, including my personal favorite, the "RESOLVEDD method" from Raymond Pfeiffer and Ralph Forsberg's Ethics on the Job, I've found that most of these include similar elements and steps. I've truncated and, I think, better organized the main ideas that are typical in these frameworks.


If you find yourself making a difficult moral decision and are unsure which approach is the right one, or if you're worried about your motivations in the decision you want to make, it's good to do a step-by-step analysis. To do a thorough job, you should go through the following eight steps. Some are easier; others take more thought and reflection. But all together they help inform your decision-making so that you neither act like Chidi nor make decisions on a whim!


1. Review all relevant information.

What exactly is going on? Who are the people involved in the moral problem? What are the stakes? Is it big or small? What is the context?

In this step, you're basically looking for a brief explanation of the problem you're facing. Try to take into account all the important details, but you don't need to think about every minuscule fact.

Let's try an example: a few years ago, the Federal Bureau of Investigation asked Apple to break the encryption on an iPhone belonging to one of the terrorists in the San Bernardino attack. Relevant information here probably doesn't include the names of the terrorists, the number of victims, the time of the events, or Apple's net worth. Relevant information does include the potential threat of further terrorist acts and the FBI's need for information, the right to privacy, Apple's obligations both to the government and to its individual customers, the fact that Apple is one of the largest tech companies in the world (so it wields a lot of influence), etc. These facts help us think about what the stakes are and why this issue requires some careful moral deliberation. Note that questions about profit, brand image, or law are not necessarily important in this scenario because they aren't ethical issues.


2. Identify the moral problem.

Here you have to think a bit critically. What exactly is the problem we're trying to solve? If the issue is a true moral dilemma, it will most likely be a conflict between competing values or disvalues: the problem of benefiting consumers or protecting the environment, for example, or, as MIT's "Moral Machine" suggests for self-driving cars, the problem of killing car passengers or killing passersby.

In the case of Apple and the FBI, the moral problem is a choice between protecting consumer privacy (and setting the right precedent for future cases) and protecting national security (and potentially preventing future terrorist acts).


3. List at least two (2) viable choices.

A dilemma means a choice between two options. Technically a trilemma involves three choices, and the terminology only becomes more cumbersome after that. At this step, you simply need to lay out at least two choices you could make, both of which should have good moral reasons behind them. If you don't have at least two moral choices, you don't have a true moral dilemma. Consider automation: if the only reason for automating certain processes is to save a company money, while not automating them will help employees keep their jobs, you don't have a true moral dilemma (note: of course, the issue of automation is much more complicated than this, but there are industries that are flourishing yet creating technological unemployment through automation).

In the Apple case, the two choices were to cooperate with the FBI or to refuse. One option is directed toward public security, while the other is directed toward individual privacy--both moral reasons.

These first three steps are the easiest and you probably shouldn't take too much time going through them.


4. Assess the impact of your choices on relevant stakeholders.

Stakeholder theory is a fairly recent consequentialist method for analyzing the moral impact of business decisions. Over and against the traditional "shareholder theory" espoused by famed American economist Milton Friedman, stakeholder theory emphasizes that many decisions impact people beyond those who can trade company shares on Wall Street. Employees, customers, communities, international entities, and the environment itself are also important stakeholders in major corporate decisions.

This step requires us to reflect on two important questions: who will benefit from each choice, and who will be burdened by it? We want to emphasize who will bear the brunt of the harms and who will enjoy the benefits of each choice. If one decision unfairly burdens one party while benefiting another, this is important moral information. Likewise, if one choice burdens a few people while benefiting many, this is also important!

Take self-driving cars as an example. Ideally, they should benefit many people: commuters who won't be stuck in rush hour traffic, pedestrians who may be safer with a computer reacting to their movements than a human driver, the environment with fewer emissions due to shorter commuting times, etc. Some will be burdened, though: people who currently make their income from driving, like cab drivers or truckers, and perhaps people who enjoy driving for themselves.

In the Apple case, relevant stakeholders include Apple's customers, FBI counter-terrorism agents, and the American people generally. The benefits and burdens are not equally distributed. The FBI benefits tremendously from Apple's cooperation. Apple's consumers lose their trust in the company and lose security of information if Apple cooperates with the FBI. The American people may or may not benefit from the FBI gaining access to the iPhone (to date, I can't find information on whether the FBI found any useful information after unlocking the phone), though the benefit would likely be overall smaller than one expects--FBI investigations into potential terrorists typically don't rely on single pieces of evidence. There is also a potential danger to the American public, insofar as a decrypting tool could be used by nefarious agents to harm people.
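
If it helps to see this bookkeeping laid out, here is a minimal sketch in Python of the kind of benefit/burden table this step produces. The stakeholder labels and impact notes are my own illustrative summaries of the case, not anything official:

    # A sketch of a stakeholder impact table for step 4.
    # The stakeholders and impact notes are illustrative, not authoritative.
    stakeholder_impacts = {
        "cooperate with the FBI": {
            "FBI counter-terrorism agents": "benefit: access to potential evidence",
            "Apple's customers": "burden: weakened trust and information security",
            "American public": "mixed: possible security gains, possible misuse of a decryption tool",
        },
        "refuse to cooperate": {
            "FBI counter-terrorism agents": "burden: a potential source of evidence stays locked",
            "Apple's customers": "benefit: encryption and trust preserved",
            "American public": "mixed: privacy protected, but evidence may go unexamined",
        },
    }

    def print_impacts(table):
        """Print who is benefited or burdened under each choice."""
        for choice, impacts in table.items():
            print(f"\nChoice: {choice}")
            for stakeholder, impact in impacts.items():
                print(f"  {stakeholder}: {impact}")

    print_impacts(stakeholder_impacts)

Writing the table out this way won't make the decision for you, but it makes lopsided distributions of benefits and burdens easy to spot.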


5. List any relevant values for your two choices.

Here you need to think about the important moral values that are at stake in the problem itself. It's hard to give an exhaustive list of moral values, but we might include things like: loyalty, friendship, health, life, honesty, peace, liberty, equality, beneficence, autonomy, responsibility, integrity, happiness, security, nonmaleficence and justice. You should think about how each choice either supports and enacts these values or violates them, and explain how. Not every value will be relevant, of course, so don't try to make them all fit your problem. Justice will almost always be relevant, especially if the choice involves other people. Which others are relevant will likely depend on what the actual moral problem is (so check again with step 2).

Because this step can be difficult, it's also a good idea here to consult with any and all relevant ethical codes. These can include ethical codes for your profession (like IEEE) or for your company (like Los Alamos). They often include particular character traits you are expected to embody, general rules you should follow, and ideals to strive for. Think about how your options fit into your obligations as laid out in your code of ethics.

To apply this to the Apple case, we might first look for a code of ethics for Apple. Apple has a clear code for its suppliers, but it is harder to find a general code for employees. However, Apple does emphasize privacy as one of its key values. We can see in the case at hand that privacy was a major value supported by not cooperating with the FBI and violated by helping them. Security is another value, supported by helping the FBI but also supported in its own way by not cooperating: informational security can best be preserved by maintaining encryption on Apple's devices. Justice is also important here: justice for Apple's users means knowing that Apple fulfills its obligations to maintain informational security, while justice in the criminal justice system means the FBI needs access to all relevant information. Apple also has responsibilities to its customers, laid out already in its privacy policies.

Finally, please note that some values are more important than others. The act of whistleblowing, for example, requires a person to prioritize justice over loyalty; you choose to protect other people against unfair harms over demonstrating allegiance to your employer. Justice is perhaps the single most important value. Many ethicists have even reduced ethics to questions about justice; if an act is unjust, it's unethical. Nonmaleficence is also typically of high value--it's better to not harm others than to do good for them, and much of the moral tradition focuses on rules designed to prevent harm. If a choice violates the values of either justice or nonmaleficence, it will need the support of a lot of other values to justify it.


6. Evaluate your choices using moral methods.

Now you finally get to the meat of the choice. Everything up to this point has been prelude; the earlier steps set you up to see the relevant issues at hand, but ultimately they don't make the decision for you. At this point, you have to embody Chidi, though only for a brief moment. You need to ask how various moral theories would analyze the choices before you.

A first challenge here is figuring out which moral theories are relevant. You probably don't need to go through every major theory from deontology to narrative ethics. My suggestion is picking three or four to give you a good mix. My methods of choice tend to be Kantian deontology, act utilitarianism, human rights, and virtue ethics. Here's a brief note on how each of these theories is applied and what it focuses on.

Kantian deontology focuses on the intention of our acts. The two primary questions we have to ask about a given choice are: "Would I want to live in a world where everyone does this?" and "Am I treating people as ends in themselves?"

Act utilitarianism focuses on the outcomes of our choices and uses pure arithmetic to decide what's right. The big question here is "Which choice will benefit the greatest number of people?" (or, inversely, "Which choice will harm the fewest?"). Additionally, the amount of benefit or harm should be considered here, if possible.
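
Since act utilitarianism really is arithmetic at heart, the calculation can be sketched in a few lines of Python. The utility numbers below are invented purely for illustration; assigning them honestly is the genuinely hard part:

    # A toy act-utilitarian calculation: sum the estimated benefit (+) and
    # harm (-) for each affected group, then compare totals across options.
    # All numbers are made up for the sake of the example.
    options = {
        "option A": {"group 1": +5, "group 2": -4, "group 3": +1},
        "option B": {"group 1": -3, "group 2": +4, "group 3": +1},
    }

    for name, utilities in options.items():
        total = sum(utilities.values())
        print(f"{name}: net utility {total:+d}")

    best = max(options, key=lambda name: sum(options[name].values()))
    print(f"Act utilitarianism favors: {best}")

The point of the sketch is just that the theory reduces the question to a comparison of sums; everything interesting happens in how you estimate the numbers.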

Human rights tends to be pretty straightforward. The simplest question is "Does either of these choices violate human rights?" If one does, that's usually a bad sign for that option. Period. A more complex question to ask is "Does either choice allow me to fulfill the obligations that correspond to people's human rights?" Questions in this vein might concern how to ensure people's access to just work and equal pay, how to give people access to their government, or how to preserve their intellectual property.

Virtue ethics tends to be the most difficult to apply because its focus is primarily on moral development over time, not on specific choices at any given moment. There are two ways to use virtues in making moral choices, however. The first is to ask how a given choice will help you develop one or more virtues. Virtues are developed through numerous choices, so if we want to be honest, we have to practice honesty by telling the truth. If we want to be courageous, we have to make courageous choices. The second is to ask how our choices can help others become virtuous. Aristotle emphasizes that virtues are cultivated socially, so when we provide good opportunities for people to be temperate, courageous, honest or just, we are helping them become virtuous.

It's likely that in using theories, you'll find one option has more support across the different theories than the other. If so, that's a good sign that this choice is the better one. You should also think about the extent to which a theory supports a given choice--mild support or mild disapproval may not be as significant as one might think.
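
One crude but honest way to keep score across theories is a simple tally: rate how strongly each theory supports each option, then sum. Here's a sketch, with entirely made-up weights on a scale from -1 (strong disapproval) to +1 (strong support):

    # A rough tally of how strongly each moral theory supports each option.
    # The weights are placeholders, not a real analysis.
    theory_support = {
        "Kantian deontology": {"option A": +0.6, "option B": -0.2},
        "act utilitarianism": {"option A": +0.3, "option B": +0.1},
        "human rights":       {"option A": +0.5, "option B": -0.5},
        "virtue ethics":      {"option A": +0.1, "option B": 0.0},
    }

    for option in ("option A", "option B"):
        score = sum(scores[option] for scores in theory_support.values())
        print(f"{option}: total support {score:+.1f}")

Treat the totals as a summary of your reasoning, not a substitute for it; a single decisive violation of justice or nonmaleficence, say, should outweigh a pile of weak support.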

Let's look at the Apple case here. From a Kantian perspective, employees at Apple have to ask if they'd like to live in a world where government agents can easily access suspects' smartphones. Most likely they would not. The question of "mere means" isn't especially relevant to the phone itself, since an iPhone is not a person, but the FBI is arguably treating the people at Apple as mere means to get the phone unlocked. From a utilitarian perspective, it seems more people will be benefited by maintaining security than by allowing governments to access phones, but that's a point that can easily be disputed, especially in a post-9/11 world. Human rights are significant insofar as we do have a right to privacy (Article 12 of the UDHR), but we acknowledge that in cases of emergency, or when sufficient evidence is present, privacy can be invaded for the common good (e.g., we investigate murder suspects' homes when we have a warrant). Virtue is a bit tricky here. Is courage relevant? Is friendliness? Justice is, but, as noted above, that's a bit difficult to determine in this scenario. Does either choice clearly facilitate or hamper virtues? Not really. Wisdom will be the key virtue here, and, arguably, it's not wise to let the FBI have access to a "master key" because it would set a dangerous precedent for abuse. In this case, it looks like most of our moral theories support the idea of not helping the FBI.

Please note that steps 5 and 6 are the heavy lifting of this process; you'll need to employ critical thinking and do the homework. Don't be surprised if this takes much longer than any of the other steps.


7. Make your decision.

Now that you've carefully evaluated your choices by applying stakeholder theory, assessing the various values at stake, and working through multiple moral theories, you have enough information before you to take a stand. What are the strongest voices present? What information is relevant? What information is not? Synthesize this and then, knowing you've done the proper work, commit to your choice.

In the Apple case, there seems to be a lot more morally relevant material supporting the decision not to cooperate than to cooperate. So Apple's actual decision to refuse was a good one.


8. Defend your position.

Play devil's advocate for a moment. What are the objections people will raise? How can you respond to them? Give a brief defense of your choice using moral reasons and explaining why the objections aren't as important as your reasons.

In the Apple case, the strongest objection is that national security may be at stake if federal agents cannot access the information on terrorists' phones. This information could save dozens or hundreds of lives. A defense may be that it is better for people to know their information is secure than to relinquish that security for the possibility of greater safety. Additionally, if this technology were to fall into the wrong hands, including foreign governments or even terrorist cells, it could prove more dangerous than the alternative.

These two steps will, once again, be quite short. If you've carefully done the work ahead of this, you should be able to easily conclude what decision has to be made and why it's the right one.


This method is not foolproof. But going through these steps will force you to think carefully about the moral reasons behind the choices you're making and should give you some guidance about why one choice may be better than another. Ultimately, the ability to defend your position is the most important part. If the defense is merely "it will make us more money" or "you shouldn't stand in the way of progress," you may need to reevaluate the choice you're making. After going through this process, you should feel confident that you made the best choice you could. And, unlike Chidi, you won't be paralyzed when it comes to deciding between a bran muffin and a blueberry muffin.
