In an article published shortly before his death, the political scientist James Q. Wilson took on the large question of free will and moral responsibility:

Does the fact that biology determines more of our thinking and conduct than we had previously imagined undermine the notion of free will? And does this possibility in turn undermine, if not entirely destroy, our ability to hold people accountable for their actions?

Wilson’s answer was an unequivocal no.

He has lots of company, which should come as a surprise given what scientific research into the determinants of human behavior has told us over the past four decades. Most of that research, as Wilson says, points to the same conclusion: our worldviews, aspirations, temperaments, conduct, and achievements—everything we conventionally think of as “us”—are in significant part determined by accidents of biology and circumstance. The study of the brain is in its infancy; as it advances, the evidence for determinism will surely grow.

One might have expected those developments to temper enthusiasm for blame mongering. Instead, the same four decades have been boom years for blame.

Retributive penal policy, which has produced incarceration rates of unprecedented proportions in the United States, has been at the forefront of the boom. But enthusiasm for blame is not confined to punishment. Changes in public policy more broadly—the slow dismantling of the social safety net, the push to privatize Social Security, the deregulation of banking, the health care wars, the refusal to bail out homeowners in the wake of the 2008 housing meltdown—have all been fueled by our collective sense that if things go badly for you, you’ve got no one to blame but yourself. Mortgage underwater? You should have thought harder about whether you could really afford that house before you bought it. Trouble paying back your college loans? You should have looked more carefully at job prospects for sociology majors before you took out the loans. Unless of course “you” are “me,” in which case the situation tends to look a bit more complicated.

This has also been a boom time for blame in moral and political philosophy, partially in reaction to John Rawls’s A Theory of Justice (1971), which is widely credited with reviving these fields. Rawls focused not on personal responsibility but on ensuring fair conditions that would create opportunities for everyone to pursue their aims. Within a decade, however, Rawls’s theory was under attack from the left and right for giving insufficient attention to personal responsibility and associated attitudes toward blame. On the right, Robert Nozick’s 1974 Anarchy, State, and Utopia heralded a major libertarian revival, centered on individual rights and individual responsibility. On the left, Ronald Dworkin proposed an alternative to Rawls’s vision of liberal egalitarianism, one that brought personal responsibility into the egalitarian fold. On the one hand, Dworkin argued, our fate should not be shaped by “brute luck”—circumstances, whether social or biological, not subject to our control. But as to anything that results from our choices, blame away. As the philosopher G. A. Cohen said of Dworkin’s argument, it has “performed for egalitarianism the considerable service of incorporating within it the most powerful idea in the arsenal of the anti-egalitarian right: the idea of choice and responsibility.”

Why exactly are we trying so hard to make the world safe for blame? What have we gained and what have we lost in the effort? And is there an alternative?


The treatment of blame in moral and political philosophy closely tracks cultural and political sensibilities on the subject, and examining it will therefore go some way toward answering these questions.

In the philosophical literature, arguments in praise of blame divide into two categories, distinguished according to whether free will is regarded as compatible with determinism. Compatibilists—as the name suggests—think the answer is yes: provided certain minimal conditions of voluntariness are met (you must not have been physically coerced into acting as you did, you must have the mental capacity to comprehend your actions, etc.), your actions are freely chosen, notwithstanding that they are predetermined. Incompatibilists think the answer is no: if a person’s actions are determined by antecedent conditions, such actions are not freely chosen.

Some incompatibilists, concluding that our actions are in fact predetermined, are reluctant to assign personal responsibility and blame. I will return to these “skeptical incompatibilists” later on. The category I want to focus on now are libertarian incompatibilists. Like skeptical incompatibilists, they believe that free will is incompatible with determinism. But they are libertarian incompatibilists because they reject determinism in favor of the view that we freely choose our actions. And, having stipulated that we are blameworthy if and only if we freely choose our actions, they conclude that we are blameworthy.

But what is the requisite sense of free will—of our actions not being determined by antecedent conditions—that makes someone blameworthy? And do we in fact have free will in that sense?


For the metaphysician, the theoretical possibility that one could have acted otherwise in some alternative world may suffice to establish free will. But if the question is whether we should hold a real-life Smith blameworthy in this world, one would think that the requisite sort of free will is not metaphysical but practical: When all is said and done, how plausible is it to think that Smith could have acted differently?

To take an all too frequent scenario, suppose that Smith grew up in a neighborhood where drug dealing was the most common form of gainful employment. He was raised by a single mother who was a cocaine addict, and by the time he was twelve was supporting his family by selling drugs. When he was seventeen, he got caught up in a drug deal gone bad, and in the altercation that ensued, he shot and killed the buyer.

How should we think about Smith’s level of moral responsibility? Is there some magical moment at which Smith was transformed from the victim of his circumstances to the author of his own story? If so, when was it? What can we realistically expect of someone who finds himself in Smith’s circumstances with Smith’s history and biological endowments? And what is to be gained—and what lost—by adopting social policies that expect more? Given the high stakes of public blame these days, one might hope that libertarian incompatibilists would take these questions seriously. But most have simply assumed that whatever kind and degree of freedom is required for moral responsibility, all of us, except for a small class of “abnormal” people, have it once we reach seventeen years of age.

The reality is that we are all at best compromised agents, whether by biology, social circumstance, or brute luck. The differences among us are differences of degree that do not admit of categorical division into the normal and the abnormal. A morally serious inquiry into the requisite meaning of free will needs to face some basic facts about this society—for starters, that in the United States parental income and education are the most powerful predictors of whether a three-year-old will end up in the boardroom or in prison; that most abusive parents were themselves victims of abuse and neglect; that the norms of one’s peer group when growing up are powerful determinants of behavior; and that traits of emotional reactivity and impulsiveness, which have a large genetic component, are among the more robust predictors of criminal behavior. Such an inquiry would also need to address what evidence would suffice to conclude that Smith could have behaved differently. Is it enough that someone in a similar situation once pulled herself up by her own bootstraps? That the average person does? And how can we be sure that the situations are in fact similar in relevant ways?

Libertarian incompatibilism, in short, hangs profoundly consequential judgments on the insubstantial hook of an abstract possibility.

Compatibilism, in contrast, dispenses with these uncomfortable questions about the existence of free will by dispensing with any robust requirement of free will. Even if conduct is determined by antecedent conditions, the compatibilist argument goes, people nonetheless are free in other ways that suffice to make them blameworthy for their actions.

The compatibilist position has been around for a long time, with the role of determinism played variously by fate, luck, the gods, God, and social and biological forces. Jonathan Edwards, the great 18th-century New England preacher, arguably had the hardest compatibilist hand to play. His cards included the Calvinist doctrine of predestination (a take-no-prisoners version of determinism) and the Calvinist doctrine of sin (a take-no-prisoners version of personal responsibility). But he played the hand he was dealt. In his 1754 essay “Freedom of the Will,” he offered the following grand equivocation: even if we do not will as we will (that is, do not choose what we will to do), we do as we will, and the latter suffices to justify God’s dangling us like spiders over the pit of hell in the event that our actions do not entirely please Him. In short, what matters is not how we came to possess a sinful desire, but that we had it and acted on it.

To the modern reader, Edwards’s argument is likely to seem too clever by half and the entire compatibilist enterprise a little baffling. Why are we knocking ourselves out to make a deterministic world safe for blame? If we really believe someone could not have done other than she did, might we not want to take a different tack altogether?

But the majority of contemporary philosophers writing on the subject are compatibilists. And many have offered what is essentially Edwards’s grand equivocation, updated for modern sensibilities. Character or attitudes have replaced God as the forces that determine what we will, and the two halves of Edwards’s equivocation—willing as we will and doing as we will—go by different names. But the basic argument is the same. In T. M. Scanlon’s words:

The lack of freedom that would be entailed by a general causal determinism need not [rule out responsibility and blameworthiness]. Even if our attitudes and actions are fully explained by genetic and environmental factors, it is still true that we have these attitudes and that our actions express them.

These days compatibilism is mostly the project of the left-liberal philosophical establishment, and, not surprisingly, has been given a kinder and gentler face. No more dangling over the pit of hell; indeed, in some versions, the consequences are no worse than the deliberate withdrawal of trust and friendship from those we believe have wronged us. But its indigestible core is unchanged: we are blameworthy for doing what we could not help but do.


That indigestible core is plainest to see when fate enters the scene not to determine what action we choose, but to determine its consequences—that is, when simple bad luck affects the outcome of our choices. Consider the following scenario. A bus driver is following his accustomed route, with all due care. A young child darts out in front of the bus. The driver, who does not see her and could not have seen her in time to stop, hits and kills the child. We may blame him for what he did, but in what sense is he blameworthy?

Focusing on such an unlucky outcome allows us to strip out two common distractions in discussions of compatibilism. The first is lingering doubt about free will. When a person makes a poor choice—say, the choice to drive recklessly—it is hard for us not to think that he really could have acted other than he did, if only he had tried harder. That thought often insinuates itself into compatibilist arguments, making the indigestible core go down more easily than it deserves to. In contrast, we have no difficulty believing that, having committed to a course of action, a person may—like the bus driver—have no control over the consequences.

The second distraction is special concern for antisocial conduct—that is, conduct that, whatever its consequences, we wish no one would engage in. Driving recklessly is one example. But, far from acting wrongly, the bus driver in our hypothetical scenario acted just as we would have him act. Someone had to drive the bus; he did the job and did it prudently. What more do we want from the guy? Why on earth should we blame him for doing what we would have had him do, just because things turned out badly?

The answer most compatibilists have given is: because that’s the way people are. We just do that sort of thing. Here is Thomas Nagel’s famous version of the argument:

It is tempting in [cases of decision under uncertainty] to feel that some decision must be possible, in the light of what is known at the time, which will make reproach unsuitable no matter how things turn out. But this is not true; when someone acts in such ways, he takes his life, or his moral position, into his hands, because how things turn out determines what he has done. . . . That these are genuine moral judgments rather than expressions of temporary attitude is evident from the fact that one can say in advance how the moral verdict will depend upon the results.

It may be predictable that we will blame others for the bad consequences of their prudent actions, although I think that response is less widespread and more amenable to reason than Nagel’s observation suggests. But the predictability of the response does not establish that it is a “genuine moral judgment” about the blameworthiness of the person as opposed to a pre-reflective emotional or psychological expression of upset at the consequences of what they have done. To establish the former requires a different sort of argument, one that I doubt can be made. If it can’t, then the claim that “how things turn out” determines the morality of “what one has done” simply raises hindsight bias to high moral principle.

Instead of defending the proposition that we are blameworthy for actions or consequences we could not control, many compatibilists have simply done away with the requirement of blameworthiness. More precisely, they have said, in essence, that our ordinary practices of blaming people settle who is blameworthy. This is at least suggested by Peter Strawson’s “Freedom and Resentment” (1962), a classic discussion of compatibilism that helped to launch its modern revival.

This account of blameworthiness leaves us with no vantage point from which to distinguish, say, our annoyance when our friend forgets to pick us up at the airport from the rage of a lynch mob demanding vengeance for a rape they believe, with no evidence, their quarry committed—unless of course the intensity of blame establishes the relative blameworthiness of its object, in which case the innocent victim of the lynching is much the greater sinner.

I do not mean to suggest that Strawson or his followers embrace any such perversities, or that they lack the resources to distinguish among instances of blaming.  Their project is, at root, a humane one. The aim is to insist that we are not just thinking reeds, that part of what it means to be human—and to relate to others on human terms—is to react to terrible losses or antisocial acts with anger, blame, and other negative emotions. But it is a long way from that observation to the conclusion that such “reactive attitudes,” as Strawson called them, are at the core of our humanity and at the heart of our relationships of mutual recognition and respect. They can be part of our nature without being the better angels of it.

Earlier I mentioned a third position on the issues of determinism, free will, and moral responsibility: skeptical incompatibilism. The skeptical incompatibilist agrees with the libertarian that we are blameworthy for our actions only if we have free will in the requisite sense (the incompatibilist part). But, contra the libertarian, the skeptic concludes that we don’t have the requisite free will, or at least there is no persuasive evidence that we do. Although a minority view, skeptical incompatibilism has many eloquent defenders in contemporary moral philosophy. I have trouble seeing the case against it. At least, I have trouble seeing how libertarian incompatibilism or compatibilism could be regarded as serious contenders given the empirical challenges to the former and the normative challenges to the latter.


Why, then, have so many thoughtful people invested so much intellectual energy in making the world safe for blame? Here are some possible explanations.

(i) We can’t not believe in free will, and hence in moral responsibility, because each person’s daily experience of life is as an agent. Our experience is—to use Jonathan Edwards’s terms—that we do not merely do as we will, but that we also will as we will.

Edwards proved himself an astute psychologist as well as a brilliant metaphysician when he urged this argument on his fellow Calvinists. Wouldn’t people recoil from the idea that God would dangle them over the pit of hell for what they did, even though He made them do it? Edwards saw no cause for worry, because most people are never really going to focus on the “God made me do it” part:

The common people do not ascend up in their reflections and abstractions to the metaphysical sources, relations and dependencies of things, in order to form their notion of faultiness or blameworthiness. They do not wait till they have decided by their refinings, what first determines the Will. . . . The idea which the common people, through all ages and nations, have of faultiness . . . . [is] a person’s having his heart wrong, and doing wrong from his heart. And this is the sum total of the matter.

The fact that we are all instinctive libertarians has given libertarian incompatibilism a free pass on the empirical front. If those instincts are impossible to dislodge—if they are the firm deliverances of ordinary experience—then some accommodation must be made. But if the predisposition to blame is no more than an instinct and habit, the argument for accommodation is not a moral one.

(ii) Even if conduct is not blameworthy, blame is an indispensable tool to control antisocial behavior. This justification does not rest on the moral desert of the party we blame. It rests on the social benefits that flow to the rest of us from locking up the morally blameless and throwing away the key. Those who wish to rely on it have a moral obligation to show that such benefits are great enough to justify the costs we are imposing on the morally blameless, their families, and their communities. In the current American criminal justice system, or the current American version of giving every child an equal opportunity to succeed in life, it is preposterous to think we have come close to meeting that test.


More importantly, there are tools of social control that are directed specifically at harm reduction. The point of such tools is not to coddle criminals, or to deny their accountability or volitional capacities. It is to reduce future harm at a tolerable cost to all of us, wrongdoers included, by influencing wrongdoers’ future choices through rehabilitation, more carefully calibrated deterrence, and, when necessary, isolation from society. There are serious disagreements about whether harm-reduction policies have worked in the past, though there are no serious disagreements about the failures of mass incarceration. But we have some evidence that interventions can work if they are evidence-based and carefully tailored to the problems we are trying to fix. Since, unlike retribution, such tools are designed for the purpose of harm reduction, we should hardly be surprised if they do a better job of it.

(iii) Blaming others is a way to show respect for them. This very Kantian argument is at the core of much of the contemporary academic literature in praise of blame.

In the hands of hardcore retributivists, the argument has a decidedly Dickensian cast. To quote one proponent, when we punish someone, we respect his “fundamental human right to be treated as a person” by permitting him to “make the choices that will determine what happens to him” and then respecting his “right to be punished for what [he has] done.” Lord save us all from such respect.

In the hands of modern-day compatibilists, the stakes of the argument are much lower, and the delivery not so redolent of the Dickensian workhouse. Blaming others, Jay Wallace tells us, “is a way of taking to heart the values at the basis of morality” and of taking seriously “relations of mutual recognition.” Refusing to blame others, in contrast, “involves an attitude of superiority toward the person in question (something like the attitude of a parent toward a very young child) and thus represents a failure to take that person seriously as a participant in the relationship,” according to Scanlon.

A genuinely humane impulse is at work here. When we expect too little of others, we do in a certain sense fail to treat them as equals, and we also limit the kind of relationship we can have with them. But the argument presupposes that there are only two standpoints from which we can evaluate others: the subjective standpoint, in which we are enmeshed in a relationship and therefore in thrall to reactive attitudes such as blame; and the objective standpoint, from which we dispassionately evaluate others as fit objects for rehabilitation or instrumental social control, or as unfit candidates for friendship.

There are other possibilities that neither hold us hostage to reactive attitudes such as blame nor require us to view others from a position of moral superiority or indifference. We could begin by extending to others the interpretive generosity we would wish for ourselves were we standing in their shoes. Here is Erin Kelly’s eloquent account of what such a standpoint might entail:

While it seems to me that we are not morally required to enter into a wrongdoer’s perspective enough to appreciate the difficulty of the obstacles that led her to falter, the possibility of a compassionate recognition of the reasons for a person’s moral failures humanizes relationships and opens possibilities for understanding, forgiveness, and an honest reckoning with faults we might share.

Which kind of respect would you rather have?

In either the hardcore or softer versions, the “blaming you is how we show respect for you” argument runs into a serious PR problem when applied to bad actors whose moral agency is undeniably compromised: young children, the mentally ill, those in the throes of dementia, the severely retarded, and others who are commonly regarded as morally blameless. Retributivists and compatibilists have dealt with the problem by making an exception for these abnormal cases, acknowledging that the absence of meaningful moral agency renders such actors inappropriate objects of blame.

For the compatibilist, that concession is deeply problematic. Once compatibilism allows for the possibility that some forms of compromised moral agency excuse bad conduct, there is no logical stopping place short of incompatibilism. If a schizophrenic can introduce evidence that he is not a full moral agent, why not someone in the grip of a major depression, or impulsive anger, or drug addiction? A teenager growing up in gang territory, whose physical safety and social inclusion depend on choosing sides? Of course, the compatibilist may observe that we do commonly distinguish among different factors that compromise agency, allowing excuses in some cases (schizophrenia) but not in others (impulsive anger). But that observation, like Strawson’s view, merely describes current practice; it does not justify it.

For the libertarian incompatibilist, making an exception for the abnormal isn’t problematic in principle: we needn’t have free will always to have it some or even most of the time. It is, however, troublesome in practice. Nowhere is this clearer than in our current criminal justice system. Of the more than 2 million Americans currently incarcerated, 15 percent show symptoms of psychosis (delusions, hallucinations, etc.); another 25 to 40 percent have serious non-psychotic mental disorders. And this does not even get to the severe deprivation most prisoners faced growing up. But most libertarian incompatibilists see no reason to inquire into these or any other realities of our criminal justice system before concluding that we are finally giving criminals what they deserve.

(iv) Blame is here to stay, and if we can’t beat it, we might as well do what we can to civilize it.

Such fatalism is understandable. But there are lots of reasons to reject it.

First, while we experience many feelings toward others that contain some element of reproach, the feelings are more nuanced, more variable, and more mutable than the public face of our blame fest would suggest.

Public reactions to wrongdoing have been studied most extensively in the context of crime. Researchers have found that people’s evaluations of serious wrongdoing vary significantly across social conditions and individuals. Tellingly, the more information people have about the context of the crime, the person who committed it, and the circumstances he or she came from, the more nuanced are their views of moral responsibility. People’s intuitions about appropriate punishment are as likely to be responsive to future-directed utilitarian concerns as to past-oriented desert-based ones. There is little consensus about the absolute levels of punishment appropriate for different forms of wrongdoing, and, according to a number of recent studies of public opinion about punishment, the same people who describe current punishment policy as insufficiently punitive recommend replacing it with policies that are significantly less punitive.

The same is true for most of us in the personal realm. Even as we experience anger toward those who have harmed us, we are capable of fellow feeling as well. (He had a bad day; this is an issue he has a very hard time with; etc.) Many people placed in the position of the parents whose child was killed by the blameless bus driver would be capable of not blaming him—indeed, of sympathizing with him, knowing that for the rest of his life he will reproach himself, as others will reproach him, for an outcome for which he was in no sense blameworthy. It doesn’t take a saint or an emotional paralytic to feel that way. What it takes is empathy: the capacity to look at someone else’s life as we hope others will look at ours.

The fact that we alter our judgments of blameworthiness as we acquire greater knowledge of the person and the context in which she acted should put to rest any thought that our blaming practices are naturally immutable, or even recalcitrant. An hour listening to the average lifer in prison or the average at-risk teen talk about his or her circumstances, and most Americans would never view those groups in the same way again. Unfortunately, most of us will never spend that hour. Everything we know about people outside our social circles—assuming we know about them at all—is mediated by others (politicians, pundits, the media) who have every incentive to provide whatever information will elicit the emotional response they are looking for (anger, blame, sympathy, sorrow, etc.). It is hard to break out of that echo chamber, but it is possible.

Which brings me to the second reason to reject the fatalistic claim that blame, as we currently practice it, is not going away. Change always seems impossible—until it doesn’t. After 40 years of policies that have relentlessly ratcheted up punishment, the direction has shifted slightly in the last few years. New York and Massachusetts repealed their mandatory minimum sentences for drug offenses. California repealed the most egregious elements of its three-strikes law. The changes in New York and Massachusetts were spurred by budgetary crises and worked out by Republican and Democratic legislators in a manner that gave both groups political cover. In California, change came via a 70 percent majority in a popular referendum. The primary motivation voters cited for scaling back the three-strikes law was not money but rather a belief that the law was unfair. Both developments are encouraging, in different ways—the first, because it suggests the possibility of détente in the political arms race to prove which party is tougher on crime, the second because it suggests that political grandstanding on the subject may finally be losing its audience.

The final reason for cautious optimism is that we have gotten nothing from our 40-year blame fest except the guilty pleasure of reproaching others for acts that, but for the grace of God, or luck, or social or biological forces, we might well have committed ourselves. Our schools are broken, a new generation of kids has been lost, our prisons are crammed with petty offenders whose lives we have ruined in the name of a war on drugs that has been a total failure. And judging from the current mood of the country, the guilty pleasure of blaming others has not proved all that pleasurable.

I doubt there will be a groundswell of support any time soon for the view that others may not, after all, be to blame for the mess they (and we) are in. But the fact that we have gotten so little in return for our blame mongering at least opens up the possibility that people would be receptive to a new approach. The next time something goes terribly wrong, suppose that instead of immediately asking who is to blame, we were to ask: How can we fix this problem? Fixing problems is costly. But as we have learned from the past 40 years, so is not fixing them. In the long run, most of us stand to gain by changing the national attitude toward blame. Doing so won’t magically transform the world. But it will increase the odds of a better life for many, if not most, of us. That seems like a more-than-even trade for giving up a sense of self-righteousness that none of us has earned.

 

Editors’ Note: This forum appeared in the July/August 2013 print issue.