J Med Philos. 2016 August; 41(4): 369–383.
Published online 2016 June 2. doi:  10.1093/jmp/jhw010
PMCID: PMC4986002

Keeping it Ethically Real


Many clinical ethicists have argued that ethics expertise is impossible. Their skeptical argument usually rests on the assumption that to be an ethics expert is to know the correct moral conclusions, which can be arrived at only by having the correct ethical theories. In this paper, I argue that this skeptical argument is unsound. To wit, ordinary ethical deliberations do not require appeal to ethical or meta-ethical theories. Instead, by agreeing to resolve moral differences by appealing to reasons, the participants agree to the Default Principle—a substantive rule that tells us how to adjudicate an ethical disagreement. The Default Principle also entails a commitment to arguments by parity, and together these two methodological approaches allow us to make genuine moral progress without assuming any deep ethical principles. Ethical expertise, in one sense, is thus the ability and knowledge to deploy the Default Principle and arguments by parity.

Keywords: clinical ethics, ethics consultation, ethics expertise, medical ethics


Some of the debates regarding whether there can be ethics experts turn on discussions of the nature of normative ethics and meta-ethics. To wit, the possibility of ethical expertise, some argue, depends on the soundness of the following argument. 1

Skeptical Argument (I):

  1. There are no correct 2 ethical and meta-ethical theories.
  2. If there are no correct ethical and meta-ethical theories, then ethical expertise is impossible.

Therefore, ethical expertise is impossible.

In this paper, I want to establish two main claims. Firstly, I want to argue that the argument above is unsound. Specifically, premise (2) is false; that is, it is not true that if there are no correct ethical and meta-ethical theories, then ethical expertise is impossible. Secondly, I want to sketch and defend a method of making progress in ethical disagreements that does not depend on the acceptance of any ethical or meta-ethical theories. My overall strategy is to show that by committing to resolving ethical disagreements rationally, participants agree to a set of basic rules that can guide them in resolving some ethical disagreements. These rules can provide substantive ethical conclusions without presupposing any ethical theories.

Before we begin to evaluate the Skeptical Argument outlined above, it is important that we distinguish variations of it. Some of these variations contain premises that are more dubious than others. Notice that in Skeptical Argument (I), the first premise makes a metaphysical claim about whether a correct ethical theory exists. Whether premise 1 (understood as a metaphysical claim) is true is an open question. There are, however, reasons to think that it is not. For example, if it were true and we were aware of it, it would be a fool’s errand to attempt to come up with the correct normative theory (after all, there would be no such thing). The fact that normative ethics is a healthy research area in which both professionals and graduate students continue to produce and sharpen normative theories is strong evidence that, at the very least, the jury is still out on whether there exists a correct normative theory. Given this healthy debate regarding the correct moral theory, it would be premature to conclude that there is no ethical expertise because there are no correct ethical theories.


A more charitable and plausible construal of the Skeptical Argument avoids making the strong metaphysical claim that there are no correct ethical and meta-ethical theories. Instead, it focuses on our inability to identify or to come to consensus regarding correct ethical and meta-ethical theories. This revised argument would look like this:

Skeptical Argument (II):

  • 1′. We do not know which ethical and meta-ethical theories are correct.
  • 2′. If we do not know which ethical and meta-ethical theories are correct, then ethical expertise is impossible.

Therefore, ethical expertise is impossible.

What reasons do we have to think that we do not know the correct ethical theory? One obvious justification is that there exist genuine disagreements in ethics about a number of foundational issues. For example: are there moral facts? Is consequentialism correct? Can we infer moral conclusions from facts alone? These disagreements are deep, and their resolutions are unlikely to come soon.

The epistemic version of the skeptical argument fares better than the metaphysical version because the first premise is more likely to be true. While the metaphysical version makes a broad claim about whether there is a correct ethical theory, the epistemic version merely relies on our inability to identify the correct theories. Nevertheless, the epistemic version of the skeptical argument faces a different problem.

Premise 2′ claims that if we do not know the correct ethical and meta-ethical theories, then ethical expertise is impossible. This conditional claim, I argue, is false. Consider a version of this claim in other contexts. Supreme Court justices, for instance, regularly disagree among themselves with regard to the merits of a case. Razor-thin 5-4 decisions are not uncommon (from Miranda v. Arizona, 1966, to Bush v. Gore, 2000, to Citizens United v. FEC, 2010). Disagreements in some of these cases reflect deep philosophical divisions regarding the nature of law and the fine balance between states’ rights and the rights of the federal government. The existence of these disagreements, however, does not lead us to believe that none of the Supreme Court justices is a judicial expert or that judicial expertise is impossible. Indeed, foundational issues, such as whether there are true laws in the sense of natural law or whether legal positivism is correct, are not settled by those working in the philosophy of law. Honest and genuine disagreements exist about the foundation of law as much as they exist about the foundation of ethics. Skeptical Argument (II) claims that the lack of consensus regarding the foundation of ethics entails that there is no ethical expertise. This seems implausible in light of the fact that an analogous argument fails in the case of jurisprudence.


Among the sciences, there exist fundamental disagreements that do not lead to skepticism towards scientific expertise. Consider modern physics, especially quantum mechanics and general relativity. Results from both of these domains have been repeatedly confirmed and reproduced. They represent two of the most well-accepted theories in modern physics. At the same time, we know that they are (as they stand) logically incompatible. To pick but one example, quantum mechanics requires the reversibility of time (i.e., that we can make retrodictions about the past based on current information). General relativity, however, prohibits that. Taken together, quantum mechanics and general relativity cannot both be true. Not only does this represent a fundamental disagreement in physics, it is theoretically based; that is, an agreement is logically impossible so long as the two theories differ on time-reversibility. If our inability to identify the correct scientific theory entails that there is no scientific expertise, then the current state of physics entails not only that there are no experts in physics, but that expertise in physics is impossible.

One might argue that the disagreements in meta-ethics and ethics are deeper than those between quantum mechanics and general relativity. Take the issue of ethical realism. Suppose it turns out that there are no moral facts; surely this would affect how we do ethics (so the proponents of the Skeptical Argument would claim). Knowledge entails truth (i.e., one cannot know something unless it is true). If there are no moral facts, then there is no moral knowledge. If there is no moral knowledge, there cannot be experts who possess it.

The realism versus anti-realism debate is active and unsettled in the sciences as well. Yet, the existence of disagreements regarding whether there are scientific facts rarely leads us to adopt a skeptical stance against scientific expertise. One possible explanation is that the work that scientists usually engage in simply does not require them to antecedently resolve these foundational issues. In his groundbreaking work The Scientific Image, Bas van Fraassen argues that the aim of science is to devise empirically adequate theories; namely, theories that provide a literally true account of the observable world. Whether there are things in the universe that are unobservable (e.g., electrons) is irrelevant to the practice of science. A scientist, thus, can remain agnostic regarding the existence of unobservables. The traditional debate on the existence of unobservables simply does not affect the day-to-day practice of a scientist. His job is to devise a literally true account of the world regarding what he can observe, and if his theories contain unobservables, he can withhold his commitment to the existence of these entities and continue to use the theories to make predictions and explanations. Van Fraassen’s view, known as constructive empiricism, explains why scientists need not first address the realism/anti-realism debate before proceeding to do science. 3

Hartry Field’s research on mathematical nominalism echoes a similar theme. Field argues that although much of contemporary science depends on mathematics, whether mathematical entities (e.g., numbers) exist does not affect the workings of modern science. 4 Physics can be done, Field argues, without assuming the existence of numbers. The realism/anti-realism debates in science and mathematics are independent of the possibility of scientific expertise. However science is done, those who know how to do it well are scientific experts.

Although my references to van Fraassen’s and Field’s works should not be seen as an endorsement of their metaphysical views, they are correct to note that we can often proceed to do science without solving foundational metaphysical questions about, say, the existence of scientific truths and numbers. The important lesson here is that the aim of a discipline determines the antecedent need to address deep metaphysical questions. In the next section, we will look at some of the aims of ethics, and I wish to argue that a careful examination of them shows that ethics can proceed without presupposing foundational ethical and meta-ethical views.


One of the main purposes of ethics is to regulate individuals’ behaviors. Indeed, one wonders why there should be any ethical deliberations if we are merely inert beings whose behaviors have no impact on anyone or anything else. The need for ethics arises when two individuals have interests that cannot be jointly satisfied and they choose to resolve their conflict by appealing to reasons. To be sure, they can resort to force until one party is willing to concede. Alternatively, one might appeal to emotions, guilt-tripping, or sophistry to get the other person to refrain from pursuing his or her interests. When we appeal to reasons to resolve conflicts, however, we agree to forgo all these non-rational means and let reasons be the sole arbiters of our disagreements.

Of course, ethics is more than just resolving conflicts. Normative ethics, for example, is also about explaining and systematizing our first-order ethical judgments. Judith Thomson outlines the explanatory role of normative ethics nicely when she writes,

At the heart of every moral theory there lie what we might call explanatory moral judgments, which explicitly say that such and such is good or bad, right or wrong, other things being equal good or bad, other things being equal right or wrong, and so on, because it has feature F—for example, “Capital punishment is wrong because it is intentional killing of those who constitute no threat to others” (Thomson, 1990, 30).

For Thomson, the aims of normative ethics include the formation of normative theories that take pre-theoretical moral judgments as prima facie data and provide them with systematized explanations (e.g., explaining why we think certain practices are wrong). The relationship between data and theories is akin to theoretical reasoning in the sciences: we attempt to capture as much of the data as possible, and when a conflict between theories and data arises, we appeal to reflective equilibrium to proceed. Although the theoretical project is important, ethics is ultimately about the regulation of moral agents’ behaviors. It would be a pointless endeavor if by the end of our normative enterprise we have a wonderfully sophisticated moral theory that does not actually tell us what we ought to do in a situation. What distinguishes normative claims from non-normative claims is that the former have a certain prescriptive force with regard to behaviors. If lying is wrong, then we ought not to lie. One cannot do as one pleases because there are ethical constraints on one’s behaviors.


The critics of moral expertise argue that since there is no universally or even widely accepted ethical theory, moral experts cannot make any recommendations about what one ought to do without imposing a question-begging normative framework. Whatever expertise they might have, ethics experts can only tell us what we ought to do conditionally (e.g., if one is an act-utilitarian, then one ought to do X). There is no “trans-theory” moral expertise.

This view should strike most non-philosophers as somewhat odd. Surely, in our day-to-day lives, we often resolve moral conflicts. Suppose we attempt to convince a friend that it is morally wrong to take books out from the library with the intention of holding onto them because the fines would cost less than purchasing the books. Suppose further the friend retorts, “Yes, I see why it would be selfish but what is your theoretical justification for why it would be wrong to be selfish?” This response appears not only needlessly academic but entirely unhelpful. 5 To demand that we must first have a clear and sound ethical foundation before we tackle any moral problems simply does not square well with how we in fact do ethics. If, as Thomson suggests, ethics is in the business of describing and explaining our moral practices, then surely it should account for the way we actually reason morally. When we try to determine what we ought to do, we do not take some broad ethical theory, plug in the particulars of the situation, and see what recommendation falls out. Moral problems, unlike calculus, are usually not solved by filling in the values for the variables.

Psychological research by Jonathan Haidt suggests that most people make ethical judgments without appealing to some foundational ethical theories. They often guide their decisions on the basis of moral intuitions shaped by social conventions. In one ongoing study, Haidt et al. present subjects with five stories, including two that are designed to elicit intuitive moral judgments. In the incest story, two adult siblings engage in consensual sexual relations. In the cannibalism story, a woman researcher cooks and eats human flesh donated for research at the medical school where she works. Subjects were then asked if the participants in these stories acted wrongly. Haidt et al. find that although the vast majority of the subjects felt that incest and cannibalism were wrong, “they reported relying on their gut feelings more than on their reasoning, they dropped most of the arguments they put forward, they frequently made unsupported declarations, and they frequently admitted that they could not find reasons for their judgments” (Haidt, Björklund, and Murphy, 2000, 10). In Haidt et al.’s terms, the subjects were “dumbfounded” when pushed for theoretical justifications for their moral judgments. Dumbfoundedness, they argue, arises when “seeing-that” (a sort of pretheoretical moral judgment) conflicts with “reasoning-why” (searching for theoretical justifications for one’s judgments).

Thomson identifies the distinction between moral theorizing and engaging with practical moral problems this way:

Participants in moral disputes in ordinary life aim only at convincing each other and are therefore content to take as data what is in fact agreed between them, even if they are aware that what is agreed between them might well be rejected by third parties. Theorists aim at convincing the universe and therefore try to be sure that what they take as data would be accepted by all. (Thomson, 1990, 32)

Thomson takes object-level moral judgments like “lying is prima facie wrong” as data on which participants might agree as background assumptions in an ordinary moral dispute. Whether these object-level judgments are true or not (and more importantly, why they are true) rarely comes up. On the other hand, normative ethicists wish to “convince the universe” when doing theoretical philosophy. In that respect, all object-level moral judgments are up for evaluation and analysis.

When we disagree on a moral issue, our discourse sits within a narrow context in which we assume some shared moral judgments, and we do not challenge the broad foundation of morality. The idea that contexts determine the scope of what is up for debate is not unique to ordinary morality. In criminal trials, for instance, juries render a guilty verdict that is “beyond a reasonable doubt.” A defense lawyer who questions whether we know the external world really exists, for example, raises an unreasonable doubt that misses the point of a trial. Likewise, when an epidemiologist concludes that smoking causes cancer, it would be a poor objection to insist that unless he has a coherent account of causation, his conclusion is unwarranted.

To be sure, whenever we engage in philosophical analysis of X, we have to balance the descriptive (the way we think of X) with the prescriptive (the way we ought to think of X). Philosophers are not just reporting the common usage of a concept; they are also interested in explicating it. Nonetheless, whenever someone wants to argue that, contrary to widespread ordinary practices, we are radically wrong about the nature of these practices, said person needs to offer an exceedingly compelling account to justify his or her “error theory.” It is akin to a scientist who wishes to investigate sharks’ mating behaviors only to conclude that animals do not actually exist: it is not an impossible conclusion, but there had better be a remarkable justification.

Ethicists are often philosophers. This duality makes it easy for an ethicist to slide into the role of a philosopher and question foundational assumptions. When deliberating about everyday ethical problems, doing philosophy in this universally skeptical sense is inappropriate. Nonetheless, the fact that we do not rely on ethical theories to help us solve moral problems does not mean that everyday moral practice is a matter of anything goes. The commitment to resolving ethical disagreements by appealing to reasons, I will show in the next section, generates substantive procedural guidelines that help us resolve some conflicts.


Consider an asymmetric disagreement in which A wishes to X and B wishes for A to refrain from doing X. 6 Suppose that A and B agree to settle their disagreement by appealing to reasons. What does that entail? There are logically two ways they can use reasons to solve their problem: either A offers a reason for why he is permitted to X or B offers a reason for why his prohibition of A doing X is justified. To say that someone needs to offer a reason for why he is permitted to do something just means that unless a reason is offered, he is not to do it. Similarly, to say that B needs to offer a reason to prohibit A from doing X is to say that unless B offers a reason otherwise, A is permitted to X.

Consider the first claim; that is, unless a reason is offered, one is not permitted to do what he wants. This claim is surely false. It would be absurd, for instance, to say that a person stranded on a desert island is not permitted to do what he wishes unless he offers a reason (to himself?). The need to justify one’s action arises only because B is in the way of A’s doing X. Without B’s presence, A would have been able to do X. To put the point more generally, one gets to do what one wishes unless there are reasons to think otherwise. We call this the Default Principle (DP).

The argument for DP can be stated formally this way:

  1. When there is an asymmetric moral conflict, either A has to justify why he is allowed to X, or B has to justify why A is not permitted to X.
  2. “A has to justify why he is allowed to X” entails that, unless there are reasons to justify why he is allowed to X, he is not allowed to X.
  3. “B has to justify why A is not permitted to X” entails that, unless there are reasons to justify why B is allowed to prohibit A from doing X, B is not allowed to prohibit A from doing X.
  4. B’s not being allowed to prohibit A from doing X entails that A is permitted to X.
  5. “Unless there are reasons to justify why A is allowed to X, he is not allowed to X” entails that, if A is all by himself, he is not allowed to X unless he can offer a reason to X.
  6. It is not true that, if A is all by himself, he is not allowed to X unless he can offer a reason to X.
  7. Thus, it is not true that, unless there are reasons to justify why he is allowed to X, he is not permitted to X (5 and 6).
  8. Therefore, when there is an asymmetric moral conflict, (3) is true (1 and 7).
  9. Therefore, when there is an asymmetric moral conflict, A is permitted to do X unless B justifies why A is not permitted to X (8 and 4).

The conclusion (9) is just the DP.
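The nine-step derivation above can also be compressed into a propositional sketch. The rendering below (in Lean 4) is my own gloss, not the author’s formalization: the labels P (“A is permitted to X”), JA (“a reason justifies A’s doing X”), JB (“a reason justifies prohibiting A from doing X”), and Proh (“B may prohibit A from doing X”) are hypothetical names, and premises 5 and 6 (the desert-island case) are folded into the single hypothesis h6, which rejects the first disjunct of premise 1.

```lean
-- Propositional sketch of the Default Principle argument.
-- All names are illustrative glosses, not the author's notation.
variable (P JA JB Proh : Prop)

theorem default_principle
    -- Premise 1, with the burden readings from premises 2 and 3:
    -- either A must justify (¬JA → ¬P) or B must justify (¬JB → ¬Proh).
    (h1 : (¬JA → ¬P) ∨ (¬JB → ¬Proh))
    -- Premise 4: if B may not prohibit, then A is permitted.
    (h4 : ¬Proh → P)
    -- Premises 5–7: the desert-island case refutes the first disjunct.
    (h6 : ¬(¬JA → ¬P)) :
    -- Conclusion (9), i.e., DP: absent a sufficient reason to
    -- prohibit, A is permitted to X.
    ¬JB → P := by
  intro hNoReason
  cases h1 with
  | inl h => exact absurd h h6   -- step 7: first disjunct ruled out
  | inr h => exact h4 (h hNoReason)  -- steps 8–9: B may not prohibit, so A may act
```

The sketch makes visible that the argument is a disjunctive syllogism followed by a chained conditional; nothing in it depends on any substantive normative premise beyond the burden-of-proof readings.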

DP generates a number of substantive results. Firstly, the burden of proof rests on the side that wishes to restrict another’s autonomy. If no sufficient reason can be identified as to why one’s interlocutor may not do what he pleases, the interlocutor is free to act. Of course, I have not articulated exactly what constitutes a “sufficient reason.” I will argue shortly that DP not only provides a framework for how to solve ethical problems, but it also gives us mechanisms (such as arguments by parity) to help evaluate the sufficiency of reasons. Since DP derives from our commitment to resolve conflicts by appealing to reasons (and not from the dependency on specific normative theories), the availability of these mechanisms is thus theory-neutral.

By placing the burden of proof on the side that aims to restrict autonomy, DP alters the way we solve asymmetric ethical disagreements. To wit, the side that wishes to act does not need to put forth any arguments to justify why he should be permitted to do as he wishes. It is the autonomy-restricting side that needs to offer arguments to justify why he should not proceed. Take the abortion debate. The anti-abortion side needs to provide arguments to limit one’s (negative) right to abortions. If these arguments are unsound, those who wish to obtain abortions can do so, even if they are unable to provide arguments in support of their right.

Secondly, by shifting the burden of proof to the side that wishes to restrict one’s autonomy, we create a permissive bias. If, for instance, two people cannot agree on whether there exists sufficient reason to prohibit one of them from doing what he wants (i.e., a “tie”), it follows that there is no reason to prohibit the action. 7 DP tells us that he is allowed to do what he wishes. A tie can occur when, say, the difference between two people rests on a disagreement in values. Since value disagreements typically cannot be resolved by appealing to reasons, a moral disagreement that stems from a value disagreement, by definition, cannot be resolved by reasons. Thus, when there is a tie, the side that wishes to engage in an activity prevails. DP favors the permissive side not because it accepts the value of autonomy and liberalism in general. Indeed, conservative critics like Leon Kass have rightly argued that contemporary healthcare ethics is laden with liberal values. Unless one can defend these values objectively, they argue, adopting them as the basis of morality is at best question-begging and at worst an imposition of subjective values in public debates. Our argument here avoids the criticism. The bias we place on autonomy and other liberal values is justifiable because it comes from our antecedent agreement to solve conflicts by appealing to reasons: they cut across different normative and political beliefs. In other words, DP represents the first ground rule that we must accept if we wish to use reasons as the sole arbiters of our disagreements. The permissive bias follows not because of our love of liberalism; rather, it constitutes the very condition of possibility for rational conflict resolution.


Although DP gives us a broad guideline that tells us how we should proceed when resolving asymmetric ethical conflicts (e.g., evaluate the arguments against the permissibility of a practice, ties go to the permissive side, etc.), it does not tell us what constitutes a sufficient reason to override one’s autonomy. In this section, I will show how some familiar evaluative principles that can help us determine the sufficiency of a reason fall out of our acceptance of DP.

Consider the use of arguments by parity in ethics. Suppose two individuals disagree about whether eating meat is morally permissible. One might point out that eating meat is not morally permissible because factory-farmed animals like cows and pigs do not differ from dogs from a moral point of view. If we think slaughtering dogs and eating them is morally wrong, then we should conclude the same for cows and pigs. Parity of reasoning essentially says that whatever reasons we have for assigning a certain moral attitude (e.g., permissible, morally wrong, and so on) to one practice, we ought to apply the same moral attitude to a relevantly similar practice. 8 Arguments by parity are ubiquitous in everyday reasoning: we use them not only in ethics but also in the sciences. The use of animal models in medical research, for instance, essentially relies on the soundness of parity arguments. If a drug successfully targets certain cancer cells in rodents and if these cells behave similarly in humans, its success in rodent models gives us evidence that the drug works in humans as well.

A philosophical defense for arguments by parity often depends on accepting the value of logical consistency. The persuasive power of arguments by parity lies precisely with the normative force of this antecedent commitment: one should accept the conclusion because one ought to be logically consistent. For moral deliberations, however, appealing to the value of logical consistency exposes one (again) to the criticism that one is injecting and favoring certain subjective values in how we resolve moral conflicts. To be sure, if one rejects logical consistency, it is not entirely clear how one can go about resolving rational conflicts. Nevertheless, we can meet the criticism by deriving a justification for arguments by parity from DP. Since DP does not require the acceptance of any values other than a commitment to solve moral conflicts with reasons, the use of arguments by parity would therefore not require the acceptance of the value of logical consistency.

The reason why arguments by parity follow from DP is that a violation of parity is in fact a violation of DP. Consider two practices, X and Y, that are relevantly similar. Parity tells us that if we consider X morally permissible, we should also consider Y morally permissible. Suppose one rejects this conclusion. In a nutshell, one insists that although X and Y are relevantly similar, we permit the former but not the latter. The problem with this conclusion is that whatever justifications we had for permitting X, they apply equally to permitting Y, by stipulation. To prohibit Y, thus, requires us to prohibit it without sufficient reasons, which is a violation of DP. A violation of parity is, therefore, a violation of DP, which means DP entails arguments by parity.
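The entailment in this paragraph can likewise be put into a small propositional sketch, continuing the same illustrative (non-authorial) notation as before. Here SuffX and SuffY stand for “there is a sufficient reason to prohibit X (respectively Y),” and relevant similarity is rendered, by the paragraph’s stipulation, as the transfer of sufficiency from Y to X.

```lean
-- Sketch of the parity-from-DP entailment; names are illustrative.
variable (PermY SuffX SuffY : Prop)

theorem parity_from_dp
    -- Relevant similarity (by stipulation): any reason sufficient
    -- to prohibit Y would be equally sufficient to prohibit X.
    (hSim : SuffY → SuffX)
    -- DP applied to Y: no sufficient reason against Y → Y permitted.
    (hDP : ¬SuffY → PermY)
    -- X is in fact permitted: no sufficient reason prohibits it.
    (hX : ¬SuffX) :
    PermY :=
  -- Any sufficient reason against Y would transfer to X,
  -- contradicting hX; so there is none, and DP permits Y.
  hDP (fun h => hX (hSim h))
```

As the sketch shows, prohibiting Y while permitting a relevantly similar X would require a sufficient reason that, by stipulation, cannot exist; the contrapositive step is all the “logical consistency” the argument consumes, and it is already built into DP.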

While DP gives us a broad framework for how we should approach an asymmetric moral disagreement (e.g., begin by evaluating the arguments against permitting a practice), arguments by parity provide us with a specific way of evaluating the sufficiency of a reason. Suppose A wants to have access to birth control pills without a prior prescription from a physician. Suppose B wants to prevent A from doing so by pointing out that there are health risks involved in taking birth control pills and that physicians should approve and monitor the usage. A can appeal to arguments by parity and point out that adults are permitted to engage in numerous activities of comparable risks without approval of or monitoring by a physician. For instance, adults are permitted to smoke and to drink, and the health risks involved in these activities far exceed the risks of taking birth control pills. If we permit adults to partake in potentially harmful activities like smoking and drinking, then we cannot point to the potential (and lesser) risks of birth control pills as reasons for why one should require a physician’s permission to take them. Using parity, we can evaluate the sufficiency of reasons offered to limit one’s autonomy. And, if after examining all of them, we find none of them convincing, then DP tells us that we ought to permit the practice.

It is worth pointing out that arguments by parity follow from our acceptance of DP and not on the basis of some allegiance to logical consistency per se. Moreover, since DP stems from our commitment to resolving ethical conflicts by appealing to reasons, the justification for arguments by parity thus follows from the same commitment to rational conflict resolution. In other words, if we are committed to resolving moral conflicts by appealing to reasons, then we can use arguments by parity without any further justifications.


Like any model that attempts to explicate rational deliberation, DP provides only a normative framework of how we ought to proceed. It is ultimately an idealization. In reality, participants are rarely pure rational deliberators. The inability to follow the logic of an argument, personal biases, power dynamics, dogmatic refusal to change one’s mind, anger, insecurity, and so on all represent real obstacles to rational deliberation. DP does not tell us how to overcome these obstacles. What it does do is provide a road map for how to use reasons to resolve ethical conflicts. We can think of it as a rough structure of rational deliberation. It is certainly an important area of research to uncover the psychological and social forces that are involved when we engage each other. These issues, however, are best left to behavioral economists and social psychologists. 9

Another shortcoming of DP is that it is only of use when we engage in asymmetric moral problems. DP cannot help us with symmetric moral problems in which neither party aims to limit what another party wishes to do. In these cases, we cannot assign the burden of proof to the side that wishes to limit someone’s autonomy, for none exists. For instance, in a debate about how we should prioritize dialysis machines, there is usually no participant who seeks to prevent someone from having access to them. Rather, the issue is how we can distribute resources in a way that maximizes certain desiderata (e.g., QALYs, fairness, etc.). Nevertheless, asymmetric moral debates are fairly common. From the abortion debate to the right to die, we regularly confront conflicts in which one party aims to prevent another from doing what he or she sees fit. DP and arguments by parity give us the tools necessary to tackle these problems without resorting to particular normative theories and personal values. They cannot solve all the problems, but they help us draw objective moral solutions on the basis of our shared commitment to reasons.

A more disconcerting worry is that arguments by parity have persuasive force only when there are enough shared beliefs among the participants of a debate to serve as an anchor for an analogous argument. In the earlier example concerning over-the-counter access to birth control pills, one argument is that birth control pills carry certain health risks that warrant a physician’s involvement in their dispensing. The retort that we tolerate many risky choices (such as smoking) has force only insofar as both parties agree that competent adults should be able to make these choices without undue paternalism. If one rejects this claim, then this particular argument by parity fails.

As a matter of fact, we share many moral beliefs. Pain is bad, all else being equal. Torturing someone for the fun of it is wrong. Lying is wrong, all else being equal. Nevertheless, it is at least theoretically possible for someone to hold values so deviant (or to hold none at all) that there is no common ground in which to stake an anchor for an analogy. Exactly how many beliefs we need to have in common in order to make meaningful moral advances is an important philosophical question. What is clear, however, is that against someone who holds an internally consistent set of beliefs radically different from ours, we cannot deploy any arguments by parity to evaluate reasons offered to limit one’s autonomy.

To be sure, even in a case of perfect moral isolation, we can still make some advances. DP tells us that individuals should be permitted to do what they want unless there are reasons to think otherwise. In this case, because there are no shared beliefs with which to deploy arguments by parity, we cannot tell whether the reasons offered are sufficient. Thus, the permissive bias allows us to say that individuals should be allowed to pursue their practices as they see fit. When two perfectly morally isolated individuals engage in an asymmetric moral disagreement, they essentially find themselves in a stalemate. As we discussed before, DP entails that stalemates go to the permissive side; that is, we should let participants do what they want.

Finally, when confronted by an argument by parity that shows inconsistency in one’s moral beliefs, one always has at least two courses of action. If holding belief X and belief Y is logically inconsistent, one can reject X, reject Y, or reject both. The hope is that if we anchor an argument by analogy in a deeply held moral belief (e.g., that torturing someone for the fun of it is wrong) and draw a parallel conclusion that our interlocutor rejects, we can show that the price of rejecting the anchoring belief is so steep that our interlocutor is better off changing his or her mind about the parallel conclusion. Whether one is willing to pay a steep doxastic price is not a matter guided by ethics or, for that matter, philosophy. It is ultimately a personal choice. We can hope that, faced with the prospect of rejecting highly plausible beliefs, our interlocutor would come to his or her senses and move nearer to us morally. But it is equally logically permissible to reject plausible beliefs and retreat further into one’s moral isolation. DP and arguments by parity cannot ensure that our interlocutor moves closer to us. They can only ensure that our interlocutor cannot stay where he or she is, from a moral point of view.


If what we have outlined is correct, then it is possible to deliberate ethically without assuming any specific moral framework. Our commitment to resolving our differences by using reasons generates substantive principles and methods, such as DP and arguments by parity, that help us find concrete solutions to asymmetric moral problems. The initial argument against ethical expertise depends on the assumption that ethicists cannot claim to be experts in solving ethical problems if they cannot identify a correct normative framework. This assumption, in turn, presupposes that without a normative framework, one cannot solve any moral problems. It should be apparent that this presupposition is false. In ordinary contexts, we often manage to make significant progress when trying to solve ethical problems. DP is one way we can make sense of how we are able to make ethical progress without assuming specific ethical and meta-ethical theories. We still have a great deal of work to do in articulating precisely the philosophy and psychology of moral deliberation, but at the very least, the skeptical conclusion against the possibility of ethics without theories is unwarranted.

In light of our discussion, one can see that ethical expertise is possible. DP requires competency in formal and informal reasoning. And deploying arguments by parity successfully requires having at one’s disposal accepted practices that can be used to anchor analogies. In clinical consultations, knowledge of common practices would be tremendously useful in evaluating whether an argument to limit autonomy is justified. Both types of knowledge (formal and informal reasoning skills and knowledge of accepted medical practices) are specialized enough that they require training. Moreover, one can be better or worse at exercising these skills. We can thus conclude that an ethics expert is one who is well-versed in logical reasoning, who can identify structures of arguments and help draw supported conclusions, and who possesses knowledge of relevant accepted practices.


Skepticism against the possibility of ethics expertise has led many to conclude that clinical ethical consultants can only act as facilitators. Consultants, they fear, cannot draw moral conclusions without imposing their favored normative theories and personal values. Since DP and the use of arguments by parity stem from a minimal commitment to reasons, they can help us draw moral conclusions without the fear of imposing personal views.

The arguments against ethics expertise wrongly assume that ethics cannot be done without deploying ethical and meta-ethical theories. Not only does this assumption fly in the face of ordinary practice; it also incorrectly construes everyday moral disagreements as theoretical philosophical exercises in which participants try to “convince the universe.” Once we appreciate the structure of real moral debates, we see that ethical progress and ethical expertise are possible.

Our view entails one final consequence. Clinical ethics consultation committees have refrained from making specific clinical moral recommendations as a part of their services. The worry, I suspect, stems from a belief similar to the one that underpins the skeptical argument against ethical expertise: if making moral recommendations requires normative theories, and the use of a given theory is subjective, it follows that making moral recommendations is a subjective affair. 10 However, since we claim that it is possible to solve moral problems without assuming any normative framework, the worry is misplaced.



1.See, for instance, Frey (1978); Crosthwaite (1995); Gesang (2010); Archard (2011); Schicktanz, Schweda, and Wynne (2012).

2.The exact meaning of “correct” is left ambiguous intentionally. It could, of course, mean a host of different things, for example, true, justified, warranted, proper, etc. I will examine how different meanings of correct affect the argument. For now, let us construe “correct” as the positive quality that the proponents of the Skeptical Argument would require in order for a particular ethical conclusion to be acceptable on the basis of said ethical and meta-ethical theories.

3.See §1.3 of Chapter 2 of van Fraassen (1980).

4.See Field (1980).

5.Such a response makes us think that our interlocutor is trying to change the topic by deflecting the criticism. A demand that we dig deeper theoretically does not advance the conversation, contrary to what we would expect if we needed theories to solve everyday moral problems.

6.An asymmetric disagreement differs from a symmetric one in that one side wishes to prevent the other side from doing what he or she wants. In a symmetric disagreement, both sides wish to pursue actions that cannot be jointly satisfied, but preventing the other party from doing what he or she wants is merely incidental. In other words, neither party has the intention to stop the other party from doing what he or she wants. Distributions of scarce medical resources (e.g., allocation of organs) would be examples of symmetric disagreements. Two parties want the same organ, but they cannot both have it. At the same time, neither party intends that the other party not have the organ.

7.When two parties cannot identify a sufficient reason to prohibit a practice, we assume that they have exhausted their deliberative resources, from carefully examining the logic of the arguments to confirming the truth and falsity of their premises. Their disagreement is thus not a matter of dogmatic adherence to their own views but a genuine failure to conclude that a particular reason is logically supported. This is clearly an idealization, and I will say more on this issue shortly.

8.To be sure, identifying exactly what constitutes “relevant” is a significant philosophical challenge, and much ink has been spilled to explicate the ceteris paribus clause. See Lange (1993) for a discussion of the issue.

9.See, for instance, Ariely (2011) and Ariely (2012) for recent research on the psychology of decision-making.

10.The American Society for Bioethics and Humanities, for instance, states that ethics consultants should refrain from making recommendations and function as facilitators for the participants of an ethics consultation. See ASBH (2011).


  • American Society for Bioethics and Humanities (ASBH). 2011. Core Competencies for Health Care Ethics Consultation. Glenview, IL: American Society for Bioethics and Humanities.
  • Archard D. 2011. Why moral philosophers are not and should not be moral experts. Bioethics 25:119–27. [PubMed]
  • Ariely D. 2011. The Upside of Irrationality: The Unexpected Benefits of Denying Logic. New York: HarperCollins.
  • ——. 2012. The (Honest) Truth About Dishonesty. New York: HarperCollins.
  • Crosthwaite J. 1995. Moral expertise: A problem in the professional ethics of professional ethicists. Bioethics 9:361–79. [PubMed]
  • Field H. 1980. Science Without Numbers: The Defence of Nominalism. Princeton, NJ: Princeton University Press.
  • Frey R. 1978. Moral experts. Personalist 59:47–52.
  • Gesang B. 2010. Are moral philosophers moral experts? Bioethics 24:153–9. [PubMed]
  • Haidt J., Björklund F., Murphy S. 2000. Moral dumbfounding: When intuition finds no reason. Lund Psychological Reports 1(2).
  • Lange M. 1993. Natural laws and the problem of provisos. Erkenntnis 38:233–48.
  • Schicktanz S., Schweda M., Wynne B. 2012. The ethics of ‘public understanding of ethics’—Why and how bioethics expertise should include public and patients’ voices. Medicine, Health Care and Philosophy 15:129–39. [PMC free article] [PubMed]
  • Thomson J. 1990. The Realm of Rights. Cambridge, MA: Harvard University Press.
  • van Fraassen B. 1980. The Scientific Image. New York: Oxford University Press.

Articles from The Journal of Medicine and Philosophy are provided here courtesy of Oxford University Press