Who has the last word when it comes to the meaning of the Constitution? Who ultimately decides whether a state can regulate or outlaw abortion? Or whether Congress can legislate to protect the elderly or the disabled? Who decides the winner in a contested presidential election? On these and countless other matters of fundamental interest to society, the answer in recent years has been the Supreme Court. And most Americans seem willing, even happy, to leave it at that. Indeed, if recent surveys are to be believed, most think this is how our Founding Fathers meant it to be. What lawyers call “judicial supremacy”—the idea that judges decide finally and for everyone what the Constitution means—has found wide public acceptance. Other actors get to have their say, of course. The president, Congress, the states, and ordinary citizens can all express opinions about the meaning of the Constitution. But the Justices decide whether those opinions are right or wrong, and the Justices’ judgments are supposed to settle matters for everyone, subject only to the practically impossible process of formal amendment.

It was not always thus. On the contrary, broad acceptance of judicial supremacy is of surprisingly recent vintage, a development that really only began in the 1960s and did not come to fruition until the 1980s. Certainly the men and women of our founding generation would not have accepted—did not accept—being told that a lawyerly elite had charge of the Constitution, and they would have been incredulous if told (as we are often told today) that the main reason to worry about who becomes president is that the winner will control judicial appointments. Giving an unelected judiciary that kind of importance and deference “makes the Judiciary Department paramount in fact,” James Madison mused in 1788, “which was never intended and can never be proper.” The Constitution of the founding generation was a popular one: the people’s charter, made by the people. And it was, in Madison’s words, “the people themselves”—working through and responding to their agents in government—who “can alone declare [the Constitution’s] true meaning and enforce its observance.” The idea of turning this responsibility over to judges was simply unthinkable.

I

Americans of the founding era made this clear in what they did as well as what they said. The Revolution itself was provoked by disputes over the meaning of the British constitution. Natural law did not play an important part in the American cause until independence was formally declared, and even then its role was merely to explain why a consistent course of unconstitutional conduct by Britain justified Americans in renouncing their allegiance to the Crown. All the underlying complaints in the Declaration charged British officials with violating the customary constitution. Yet no one, at any time, on either side of the Atlantic, ever suggested that these disputes should be submitted to a court. Instead, Americans protested and petitioned and mobbed and relied on a panoply of popular devices to challenge unconstitutional government action. The most famous instance was the Boston Tea Party, held to prevent England from establishing its power to tax the colonies for revenue. Rather than submit their claim that the Tea Tax was unconstitutional to a court, Americans made this determination themselves and acted to frustrate the law by refusing to allow any tea to be landed.

Nor did Americans suddenly abandon this sort of popular constitutionalism upon achieving independence. Countless examples can be cited from the nation’s early years. In 1793 federal authorities prosecuted Gideon Henfield for serving aboard a French privateer. The court instructed jurors that Henfield’s constitutional defense—that he could not be prosecuted because his actions were not proscribed by an existing statute of the United States—was frivolous. But the jury ignored these instructions and acquitted Henfield, producing what Chief Justice John Marshall called “extravagant marks of joy and exultation” from a public that praised the jurors for upholding the Constitution despite the efforts of corrupt government officials. Asserting that the power to determine the constitutionality of legislative acts “lies solely with the judiciary,” wrote a correspondent for the Albany Register several years later, during the controversy over the Alien and Sedition Acts, “is removing the cornerstone on which our federal compact rests; it is taking from the people the ultimate sovereignty.”

This idea of popular constitutionalism is sufficiently foreign to modern sensibilities to warrant at least brief explanation. Constitutional law, as originally understood, was different from ordinary law. It was law created directly by the people to regulate and restrain the government, as opposed to ordinary law, which is enacted by the government to regulate and restrain the people. “A Constitution,” wrote Judge William Nelson of Virginia in the 1790s, “is to the governors, or rather to the departments of government, what a law is to individuals.” The object of constitutional law was to regulate public officials, who were thus in the position of ordinary citizens with respect to it: required to do their best to ascertain its meaning while going about the daily business of governing, but without ultimate authority. Instead, their actions and decisions were subject to direct supervision and correction by the superior authority of the people.

Just how “the people” exercised this authority changed over time. In the 18th century, when politics was mostly local and law enforcement depended on active community support and participation, popular resistance was informal and extralegal—consisting of everything from polite petitions for a repeal to outright obstruction of the law in the form of jury nullification and violent mob action. The creation of a national republic led to efforts to domesticate these sorts of activities. Whereas 18th-century constitutionalism had imagined a wholly independent people checking the government from without, republicanism made it possible to think of the people acting in and through the government, with the different branches responding differently to popular pressure depending on their structure and their relationship to the polity.

The resulting theory, which emerged clearly only in the 1790s, is known today as “departmentalism.” Best articulated by Madison and Jefferson, the idea was ultimately straightforward. Each branch of government—the legislature, the executive, and the judiciary—would be entitled to offer and act on its views of the Constitution when necessary in the course of ordinary business. In most instances, the branches were expected to agree, and when disagreements arose they could be resolved by negotiation and accommodation. If this proved impossible, Kentucky Senator John Breckinridge explained, “[a] pertinacious adherence of both departments to their opinions, would soon bring the question to issue . . . whose construction of the law-making power should prevail”—by which Breckinridge meant that stubborn adherence by different branches to conflicting views would force a decision from the only body with final authority in such matters: the people themselves.

Readers familiar with the Federalist papers, and especially the famous fifty-first essay, will recognize in this reasoning an extension of Madison’s general theory of separation of powers. Madison failed to emphasize courts in 1788 because judicial review was not yet a significant element in his thinking. The departmental theory folded courts into Madison’s broader scheme, but without changing its basic commitment to democratic deliberation and popular authority.

Madison’s answer to the problems of republican politics had never been to limit democratic decision making by undemocratic means. Nor had it been to remove the people from the process of governing. His solution was to complicate politics: to slow it down with internal checks so that what ultimately prevailed was not the immediate reactions of an unreflective populace but rather a reasoned popular opinion that had been refined through a process of extended public debate. Either house of Congress, or the executive with its veto, could prevent proposed legislation from taking effect. But this power to block was really a means to test the legislation’s merits and support by forcing advocates to respond to objections and appeal for greater public support. The checking and balancing of the different departments of government thus served as a device to prolong and inform the discussion of controversial proposals.

The departmental theory added judicial review to this process. A measure that passed Congress and was signed by the executive might still be held in abeyance on constitutional grounds by a court. But the judiciary’s decision would not, could not, finally resolve the measure’s constitutionality. It was, rather, a reference point for further deliberation, with the people themselves deciding the matter by how they responded to competing appeals from members of the different branches through petitions, protests, and popular reactions to legislation and executive action or inaction.

Departmentalism was not the only theory of judicial review to emerge in the 1790s. The modern idea of judicial supremacy also appeared in these years, put forward by conservative Federalists worried about the direction of politics in the young republic. The Federalists who spearheaded the drive for a new Constitution in the late 1780s thought that creating a strong national government would end the political turmoil that had plagued the new nation in its first decade. They believed the size and scale of the new national government would put men like themselves in control, and they expected amicably to govern a quiescent population content to follow their wise leadership. But terrible strains emerged as Americans divided over contentious issues of finance and foreign affairs. The French Revolution proved especially divisive, as Americans took to the streets to demonstrate their support for France or England and to urge the administration to deal harshly with one or the other of these European powers. People mounted petition campaigns and called conventions; they paraded, planted liberty poles, and burned effigies; they held feasts and delivered public toasts. Alexander Hamilton was stoned at one protest meeting for suggesting that the constitutionality of the Jay Treaty with England was a matter for the president and Senate, rather than the people themselves, to decide.

Yet clashes over Hamilton’s bank or the French Revolution were themselves merely expressions of a more fundamental disagreement about the proper role of ordinary citizens in day-to-day governance. Under the leadership of Jefferson and Madison, Republicans embraced an expansive ideal of popular authority, championing the people’s right to control their representatives at all times and on all issues. Hamiltonian Federalists, in contrast, became progressively more conservative and anti-populist, defending a philosophy that acknowledged the political power of ordinary citizens on election day but called upon them between elections to defer passively to “constituted authorities.” This was, in a sense, a logical extension of the Federalist ideology of the 1780s, but the anti-democratic strands in Federalist thinking became much more pronounced in the 1790s—a product of unexpectedly fierce political opposition at home and of fear that the violence wracking French society could be exported to America.

Confronted with the apparent failure of their constitutional strategy of 1787, the bewildered Federalists groped about for new ways to control the increasingly unruly and demanding public. Not surprisingly, some noticed connections to the judiciary that had not previously been emphasized. By mid-decade these so-called High Federalists were for the first time beginning to talk about judicial supremacy—resting their argument that judges should be assigned final say over constitutional law on what was then a novel claim that the federal courts had been specially designed to protect constitutional values from factional majorities. By decade’s end, as political strife reached a crisis point and Federalists tried to smother their opponents under the Sedition Act (which made it a crime merely to criticize the government), some judges began espousing the new legal constitutionalism from the bench, refusing to permit juries to exercise their traditional authority over questions such as the constitutionality of this controversial legislation.

The election of 1800 was, among other things, a referendum on constitutional authority, with the role of the Supreme Court and the question of judicial supremacy among its central issues. Republicans loudly proclaimed the right of the people and the states to decide whether the acts and actions of the national government were constitutional; Federalists responded that such decisions could be made only by judges. The Republicans’ landslide victory, followed soon thereafter by the repeal of the Judiciary Act of 1801 and another Republican triumph in the 1802 midterm elections, seemed conclusively to resolve this contest against the Federalists. Legal constitutionalism and judicial supremacy were overwhelmingly rejected in favor of popular constitutionalism in its reworked departmental guise.

The opinion in Marbury v. Madison, decided in 1803, evidences this rejection. The issue before the Court was loaded with political significance: could the (Federalist) Supreme Court order the (Republican) Jefferson administration to deliver commissions to justices of the peace appointed by John Adams in the waning hours of his presidency? Recognizing that an affirmative answer would almost certainly be ignored, the justices ducked the question by holding that the statute giving them the power to decide the matter was unconstitutional. In so doing, however, Chief Justice Marshall carefully and self-consciously steered away from using any of the Federalists’ arguments about judicial supremacy while cribbing departmental arguments from Republican judges like Spencer Roane and St. George Tucker. The difference, which is scarcely visible to us today, was glaring at the time. Jefferson and the Republicans did not acquiesce in Marshall’s assertion of judicial review because Marshall cleverly acted to scale back the Court’s authority. They were perfectly capable of anticipating and appreciating that other uses could be made of the power. Rather, Republicans agreed with the theory Marshall articulated, which in context reflected an abandonment of the idea of judicial supremacy. The way the modern Court cites Marbury as authority for its supremacy could hardly be more ironic—or more mistaken.

II

Though discredited among the general public, the idea of judicial supremacy never disappeared entirely. Federalists and former Federalists did not all change their minds simply because they lost the election of 1800 and suffered the repeal of the Judiciary Act of 1801. Some did, perhaps, but more than a few die-hards—such as John Marshall and Daniel Webster—clung to the view that the judiciary must be principally and finally responsible for interpreting the Constitution. The very diffuseness and decentralization of popular constitutionalism left room for these advocates of judicial supremacy to continue to nurse their claim. By the early 1840s, popular constitutionalism and judicial supremacy were sharing space in American political culture, co-existing in an uncertain and sometimes tense relationship.

The struggle was not constant. It consisted of periodic blowups occurring after years or sometimes decades during which active backers of the two principles jostled for position while ordinary citizens remained largely unconcerned. Yet whenever an issue or a leader managed to capture the general public’s attention—whenever, in other words, circumstances impelled Americans to crystallize their latent beliefs and choose sides—they consistently chose popular constitutionalism over the view that the Constitution was subject to authoritative control by the judiciary.

The major controversies are matters of common historical knowledge: the clash over slavery in the territories in the years before the Civil War; the controversy over congressional management of Reconstruction; the battle between the Progressives and the courts over social welfare legislation; and, of course, the New Deal crisis. When an overconfident Supreme Court concluded in 1857 that Congress had no power to exclude slavery from federal territories, it handed down perhaps the single most reviled decision in the canon of American constitutional law. Abraham Lincoln’s reassertion of the departmental theory in response to Dred Scott is famous, but Lincoln was hardly the only one to make this argument. Editorialists and politicians throughout the North and West savaged the Court for its “impertinence” in presuming to “act as the interpreter of the Constitution for the other branches of the government.”

It took nearly a generation for the wounds inflicted on the Court’s reputation by Dred Scott to heal. The Reconstruction Congress threatened the Court with “annihilation” and forced it to back down, both by stripping its jurisdiction to hear cases in which the Court seemed likely to limit congressional power and by manipulating its membership, increasing or decreasing the Court’s size depending on who was in the White House. When the Court reasserted itself again at the turn of the century, it faced vigorous opposition from Progressives demanding “such restrictions of the courts as shall leave to the people the ultimate authority to determine fundamental questions of social welfare and public policy.” The American people must be made “the masters and not the servants of even the highest court in the land,” demanded Theodore Roosevelt in 1912.

Progressives were less successful than their predecessors in curbing the Court, but the battle continued. Support for “the people” as the interpreter of last resort remained strong among liberal lawyers and intellectuals. For a variety of reasons, matters did not come to a head until 1936, when the Supreme Court struck down central elements of Franklin D. Roosevelt’s New Deal on the ground that Congress had exceeded what the justices thought should be the scope of federal power. The role of the Court became a contested political issue for the general public as New Dealers reasserted the people’s right to decide when and how the Constitution allows the federal government to address dire social and economic problems. Like his cousin and predecessor in the White House, FDR made his case by appealing directly to the legacy of popular constitutionalism. “The Constitution of the United States,” he insisted, is “a layman’s document, not a lawyer’s contract.” Although Roosevelt’s most overt attack on the Court—his Court-packing plan—failed to attract widespread support, its ultimate success was indicated when the justices suddenly reversed course and upheld the second New Deal in 1937, rendering further pressure unnecessary. Through a combination of changing votes and changing members, the Court repudiated key elements of its Progressive-era jurisprudence, and a new accommodation emerged, defining more-lasting boundaries for a chastened judicial supremacy and a resurgent popular constitutionalism.

III

The New Deal settlement—which drew a line between constitutional questions governing the scope of federal power (left to the political process) and certain categories of individual rights (policed by judges)—lasted for nearly six decades, from the late 1930s to the mid-1990s. The Warren and Burger Courts, which sat between 1953 and 1986, were definitely “activist,” in the sense that they used the power of judicial review to strike down a great deal of legislation, but their activism remained for the most part within the terms worked out after 1937. While making their presence felt on questions of individual rights, these Courts respected the space carved out for popular constitutionalism at the time of the New Deal and left questions respecting the scope of national power to the political process.

Yet the justices of the Warren and Burger Courts, perhaps unwittingly, set in motion a process of unraveling this constitutional settlement. For within the limited sphere marked out for courts in the New Deal, they effected tremendous change. When New Dealers advocated a two-tiered system of judicial review, they probably envisioned the courts’ role in protecting individual rights as a small thing—a reasonable expectation given prior experience. But beginning with the 1954 decision in Brown v. Board of Education, the Supreme Court showed what an ambitious judiciary was capable of accomplishing even within this previously limited domain. Constitutional settlement or not, bold decisions on such matters as race, sex, abortion, school prayer, the rights of criminal defendants, and the death penalty were not going to pass unnoticed.

Challenges to these decisions may have played a role in getting the Court to pull back in some areas, but they also induced it to forcefully reassert its supremacy. This happened most famously in 1958, when Arkansas and other Southern states sought to defy the Court’s school desegregation ruling. All nine justices signed an extraordinary opinion in Cooper v. Aaron insisting that states were bound to obey the Court’s decisions while arguing that Marbury had “declared the basic principle that the federal judiciary is supreme in the exposition of the law of the Constitution” and that this idea “has ever since been respected by this Court and the Country as a permanent and indispensable feature of our constitutional system.”

Marbury, of course, had said no such thing. Nor, despite the Court’s persistent efforts, had judicial supremacy ever been accepted as constitutional doctrine. The justices in Cooper were not reporting a fact so much as trying to manufacture one, and notwithstanding the Eisenhower administration’s reluctant decision to send troops to Little Rock to enforce the Court’s judgment, the declaration of judicial interpretive supremacy evoked considerable skepticism at the time.

But here is the striking thing: after Cooper v. Aaron, the idea of judicial supremacy seemed gradually to find public acceptance. The Court’s decisions were still often controversial. State legislatures sometimes enacted laws they knew the Court would strike down, and compliance with the justices’ most contentious rulings—such as those involving abortion or school prayer—was willfully slack in many places. But sometime in the 1960s, these incidents of noncompliance began evolving into forms of protest rather than claims of interpretive superiority. Outright denials of the Supreme Court’s authority to define constitutional law seemed largely to disappear.

By the 1980s most protests that touched on constitutional matters were being directed at rather than against the Court, and acceptance of judicial supremacy seemed to become the norm. Rather than deny the justices’ supremacy, opponents looked to change the law by changing the Court’s membership through new appointments. The stakes in the appointment process soared, leading to ugly battles such as those surrounding the nominations of Robert Bork and Clarence Thomas.

Explaining this rather extraordinary development is not easy. One factor, certainly, has been the general skepticism about popular government that came to characterize Western intellectual thought after World War II. The seeming eagerness with which mass publics in Europe had embraced fascism and communism eroded intellectual faith in what the political scientist Robert Dahl derisively referred to in the 1950s as “populist democracy.” The new thinking, associated most closely with Dahl and with Joseph Schumpeter, denigrated democratic politics as an engine for developing substantive values and portrayed it instead as a self-interested competition among interest groups. (Although Dahl himself was never enthusiastic about the Supreme Court, his early reduction of democratic politics to interest-group bargaining was used by proponents of a more assertive Court, who saw it as the place where bargaining might give way to reasoning and interests to principles.) Viewing electoral politics in this unflattering light made it easier to defend the judicial process as a comparatively better setting in which to preserve constitutional commitments and carry on the moral deliberation that everyone agreed was a crucial aspect of democratic government. Thus was born the curious notion of the judiciary as a “forum of principle.”

Closer to home in promoting acceptance of judicial supremacy was the still more curious fact of the Warren Court itself—a liberal activist Court that, for the first time in American history, gave progressives a reason to see the judiciary as a friend rather than a foe. This had never been a problem for conservatives. Going all the way back to the Federalist era, conservatives had always embraced an ideal of broad judicial authority, including judicial supremacy, and they continued to do so after Chief Justice Warren took over. For them, the problem with the Warren Court was simply that its decisions were wrong. Their protests were directed at the substantive interpretations of the liberal justices, whom they saw falsely using the Constitution as cover to deal with matters that constitutional law did not in fact address. Few conservatives rejected judicial review, and almost all supported the idea of judicial supremacy over the Constitution as they understood and interpreted it—continuing to insist, for example, that the New Deal Court had been wrong to abandon judicial enforcement of limits on federal power.

Liberals had a more difficult time responding to the Warren Court. For while they believed deeply in the substantive goodness of the Court’s decisions, their teachers and heroes had led the fight against the Progressive-era Court, and many of them had devoted their professional lives to the idea that courts acted inappropriately when they interfered with the popular will. Judicial innovations like Brown, Miranda, and Roe were a wrenching test of the traditional liberal commitment to judicial restraint.

As Warren Court activism crested in the mid-1960s, a new generation of liberal scholars discarded their opposition to the courts and turned the liberal tradition on its head by embracing a philosophy of broad judicial authority. The upshot was—again, for the first time in American history—that conservatives and liberals found themselves in agreement on the principle of judicial supremacy. They continued to disagree about its proper domain and even more about the appropriate techniques for judges to use in interpreting the text. But liberals and conservatives alike took for granted that it was judges who should do the interpreting and that the judges’ interpretations should be final and binding. The idea of popular constitutionalism faded from view, and judicial supremacy came to monopolize constitutional theory and discourse.

What is more, the principle was no longer confined to the limited domain of individual rights—at least not according to the Court. As articulated by the justices, the Court’s supremacy in constitutional interpretation was unqualified, equally applicable to every question of constitutional law. Yet the Court’s actual behavior did not match this ambitious claim, for (as noted earlier) the Warren and Burger Courts continued to respect the New Deal settlement by leaving the political branches generally free to define the scope of their own constitutional authority.

The result was a glaring disjunction between the theoretical scope of judicial supremacy and its practice. An immense body of scholarship soon emerged to explain the post–New Deal structure of judicial review, but tension remained at a deep intellectual level. Those who found its political consequences troubling latched on to the seeming disconnect between a Constitution that was supposedly subject to judicial oversight and the practice of leaving questions respecting the Constitution’s limits to be settled by political institutions. In recent years, this group has consisted chiefly of conservatives unhappy with what they viewed as an unwarranted expansion of federal authority. They increasingly sought a solution in the form of more aggressive judicial enforcement of limits on Congress. By the early 1990s, five of them—William Rehnquist, Sandra Day O’Connor, Antonin Scalia, Anthony Kennedy, and Clarence Thomas—were on the Supreme Court.

The consequence has been a substantial change in Supreme Court practice, as the Rehnquist Court has carried the theory of judicial power developed by its predecessors to its logical conclusion. Reaffirming judicial supremacy in the domain of individual rights, the present Court has gone beyond the Warren and Burger Courts by discarding or constricting the doctrines that served after 1937 to limit the Court’s authority in other areas—striking down federal legislation at a pace far greater than any other Court in American history. The new jurisprudence rests explicitly, moreover, on a claim that judges and judges alone are finally responsible for interpreting the Constitution. “No doubt the political branches have a role,” the Chief Justice said recently, “but ever since Marbury this Court has remained the ultimate expositor of the constitutional text.”

IV

When confronted by similarly aggressive Courts in the past, Americans have always reasserted their right, and their responsibility, as republican citizens to say finally what the Constitution means. Are we still prepared to insist on our prerogative to control the meaning of our Constitution?

To listen to contemporary political debate, one has to think the answer must be no. Why else has the appointment process come to matter so much? Liberals and conservatives fight so hard because both sides believe that, once in office, the justices will and, more importantly, should have the power to decide matters once and for all. The triumph of judicial authority is still more apparent in the all-but-complete disappearance of public challenges to the justices’ supremacy over constitutional law. Apart from a handful of grumpy academics, pretty much everyone nowadays is willing to accept the Court’s word as final—and to do so, it seems, regardless of the issue, regardless of what the justices say, and regardless of the Court’s political complexion. To spot today’s prevailing consciousness, one need look no farther than Senator Patrick Leahy, ostensible leader of the Court’s Democratic opposition in Congress. While often questioning the justices’ decisions, Senator Leahy takes great pains to purge his speeches of any hint that he means to challenge the Court’s authority as final arbiter of constitutional law. “As a member of the bar of the Court, as a U.S. Senator, as an American,” he says, “I, of course, respect the decisions of the Supreme Court as . . . the ultimate interpretation of our Constitution, whether I agree or disagree.”

“Of course”? Whatever else one might think about such sentiments, they reflect a profound change in attitudes from what has historically been the case. Sometime in the past generation or so, constitutional history was recast—turned on its head, really—as a story of judicial triumphalism. The Supreme Court’s monopoly on constitutional interpretation is now depicted as inevitable, as something that was meant to be and that saved us from ourselves. The historical voice of judicial authority is privileged, while opposition to the Court’s self-aggrandizing tendencies is ignored, muted, or discredited.

Bush v. Gore is a telling illustration of just how much things have changed. One need not take sides on the merits of the case to see that public acquiescence to the Court’s decision cannot be explained as a matter of widespread indifference, much less political consensus. Only acceptance of the Court’s claim of authority to decide the matter explains the silence that followed. Compare the similar deadlock that occurred in 1876, when Samuel Tilden won the popular vote but disputed Electoral College votes gave the election to Rutherford B. Hayes. With massive resistance a real possibility, the controversy was ultimately resolved by an ad-hoc political commission consisting of representatives from all three branches. Significantly, at the time of this earlier election no one—on or off the Court—ever dreamed of trying to resolve it in litigation, due in no small part to the fact that the half of the country that supported the loser would not have stood passively by and allowed the justices to dictate the outcome.

The reaction to Bush v. Gore is suggestive, moreover, of a larger point. Perhaps a majority of the country supports what the Rehnquist Court is doing. That still would not explain why all those who disagree, and disagree strongly, nevertheless feel constrained to passively accept the Court’s rulings while waiting for justices to die or retire in the hope that they will be replaced by others with more sympathetic views. Nor would it explain why someone like Patrick Leahy thinks it his duty “as an American” to affirm that decisions of the Supreme Court settle constitutional law no matter how wrong he or anyone else thinks they are.

What would explain facts like these? The expansion of judicial authority in the closing decades of the past century was not dictated by logic or evidence or history or law. It was, as Richard Parker noted in his book Here the People Rule, simply a change in sensibility. The dominant sensibility among lawyers, judges, scholars, and even politicians became (to use Parker’s term) “Anti-Populist,” which is to say, dominated by fears of what ordinary citizens might permit or encourage political actors to do. The modern anti-populist sensibility presumes that ordinary people are foolish and irresponsible when it comes to politics: self-interested rather than public-spirited, arbitrary rather than principled, impulsive and close-minded rather than deliberate or logical. Ordinary people are like children, really. And being like children, ordinary people are insecure and easily manipulated. The result is that ordinary politics, or perhaps we should say the politics that ordinary people make, is not just foolish but positively dangerous.

It comes as no surprise that people who hold these sorts of beliefs about ordinary people would gravitate toward something like judicial supremacy. Seeing democratic politics as scary and threatening, they find it obvious that someone must be found to restrain its mercurial impulses, someone less susceptible to the demagoguery and short-sightedness that afflict the hoi polloi. This is High Federalism redux. And like the High Federalists of the 1790s, modern commentators have come to see the Constitution in exclusively countermajoritarian terms, a protection against the tyranny of the majority—as if this were self-evident, as if a constitution could be nothing else.

Other commentators have similarly noted the profoundly anti-democratic attitudes that underlie modern support for judicial supremacy: attitudes grounded less in empirical fact or logical argument than in intuition and supposition. Mark Tushnet points to a “deep-rooted fear of voting” among modern intellectuals and suggests they “are more enthusiastic about judicial review than recent experience justifies, because they are afraid of what the people will do.” Jack Balkin describes a dominant “progressivist sensibility” constituted by “elitism, paternalism, authoritarianism, naivete, excessive and misplaced respect for the ‘best and brightest,’ isolation from the concerns of ordinary people, an inflated sense of superiority over ordinary people, disdain for popular values, fear of popular rule, confusion of factual and moral expertise, and meritocratic hubris.” Roberto Unger identifies “discomfort with democracy” as one of the “dirty little secrets of contemporary jurisprudence.”

Those who see themselves as targets of such critiques may bridle at the pejorative overtones, choosing to present what they think about ordinary politics using kinder, gentler terms. But they would not deny or repudiate the essential argument: that constitutional law is motivated by a conviction that popular politics is by nature dangerous and arbitrary; that “tyranny of the majority” is a pervasive threat; that a democratic constitutional order is therefore precarious and highly vulnerable; and that strong external checks on politics are necessary lest things fall apart.

We see this sort of skepticism about people and democracy in persistent misreadings of the founding that selectively focus on statements expressing fears of popular majorities, that do not even see the more important, more pervasive theme celebrating the rise of popular rule. We see it, too, in the rise of the “cult of the Court” and in the complacency accompanying even the most aggressive judicial interference in politics.

A profound mistrust of popular government and representative assemblies is, in fact, one of the few convictions (perhaps the only conviction) that the Right and the Left today share. The Right prefers the invisible hand of a market—decentralized, unselfconscious, uncoordinated—to a body in which deliberate choices about how to govern are made. From the Left, in the meantime, we get “deliberative democracy,” a theory that defines popular rule as legitimate only if stringent prerequisites are satisfied: prerequisites that it just so happens can be met only by small bodies far removed from direct popular control. And now we have the emerging discipline of behavioral economics, which at least some practitioners find attractive because it helps them to “prove” how ordinary people cannot be expected to act rationally and need to defer more to experts and specialists.

The point is not that modern scholars want to abolish democracy or are secretly hankering for some other form of government. Nor is it that they hate ordinary people. But Parker is right that most contemporary commentators share a sensibility that takes for granted various unflattering stereotypes of ordinary people and their susceptibility to committing acts of injustice.

These deep-seated misgivings about ordinary citizens explain why modern intellectuals worry so about the risks associated with popular government and why these risks loom so large in their eyes. Their qualms consistently lead them to resolve disputes about the proper structure of democratic institutions in ways that favor minimizing or complicating popular participation. This is just being “realistic,” they say, and it is this sensibility that explains why so many of them find the question of judicial supremacy to be easy and obvious.

For those with a different sensibility, the opposite conclusion seems just as easy and just as obvious. Absent some reason to believe that other members of society are not approaching questions with the same good faith we attribute to ourselves—and the fact that they reach conclusions we disapprove of is not itself such a reason—we have no basis to presuppose that “we” are right while “they” need discipline and control.

Once again, one must be careful not to overdraw the argument. Just as supporters of judicial supremacy are not secretly itching for monarchy, its opponents are not dreaming of some pie-in-the-sky model of Athenian direct democracy. They recognize the need for representation and do not object to institutional arrangements designed to slow politics down (e.g., a separation of powers). Still, there is a qualitative difference between political restraints like bicameralism or a veto and a system of judicial supremacy. It is the difference between checks that are directly responsive to political energy and those that are only indirectly responsive, between checks that explicitly operate from within ordinary politics and those that purport to operate outside and upon it.

This is, of course, a very old conflict. In an essay written in the form of a dialogue between “Republican” and “Anti-republican” and published in 1792, James Madison asked “Who Are the Best Keepers of the People’s Liberties?” Republican answered that “the people themselves” were the safest repository—to which Madison had Anti-republican reply: “The people are stupid, suspicious, licentious” and “cannot safely trust themselves.” “Wonderful as it may seem,” Anti-republican continued, “the more you make government independent and hostile towards the people, the better security you provide for their rights and interests.”

Look ahead six decades to Martin Van Buren’s 1857 Inquiry into the Origins and Course of Political Parties in the United States and one finds the same arguments being made. Following Madison, Van Buren said that American politics had always been defined by a struggle between two great principles, which Van Buren labeled “democracy” and “aristocracy” and which he described in terms of their appeal to those who have “a proper respect for the people” and those who have “an inexhaustible distrust . . . of the capacities and dispositions of the great body of their fellow-citizens.” Van Buren shared Madison’s hostility toward the aristocratic impulse, but he was neither wrong nor off-base in identifying the persistence of these two views and in emphasizing their centrality in shaping politics.

Simply put, the supporters of judicial supremacy are today’s aristocrats. One can say this without being disparaging, meaning only to connect modern apologists for judicial authority with that strand in American thought that has always been concerned first and foremost with “the excess of democracy.” Today’s aristocrats are presumably no more interested in establishing a hereditary order than were Alexander Hamilton, Gouverneur Morris, or Joseph Story. But like these intellectual forbears, they approach the problem of democratic governance from a position of deep ambivalence: committed to the idea of popular rule, yet pessimistic and fearful about what it might produce and anxious to hedge their bets by instituting extra safeguards.

Today’s democrats, in the meantime, are no less concerned about individual rights than were their intellectual forbears: Jefferson, Madison, and Van Buren. But like these predecessors, those with a democratic sensibility have greater faith in the capacity of their fellow citizens to govern responsibly. They see risks but are not convinced that the risks justify circumscribing popular control by overtly undemocratic means. In earlier periods, aristocrats and democrats found themselves on opposite sides of such issues as executive power or federalism. Today the point of conflict is judicial supremacy, as it was for much of the 20th century.

What is different is that, unlike in any period in our past, the forces of aristocracy seem to have prevailed. There were in the past always those for whom fear of democracy was paramount, but theirs was a minority viewpoint. Most Americans resisted handing control of the Constitution over to what Van Buren condemned as “the selfish and contracted view of a judicial oligarchy.” It seems, however, that two generations of relentless skepticism by intellectuals and opinion-makers on both the Left and the Right have taken their toll, and the public today seems to have accepted their pessimistic assessment of the capacity of the people when it comes to questions of constitutional meaning.

What Americans must ask themselves is whether they are truly comfortable with this state of affairs: whether they share this lack of faith in themselves and their fellow citizens, or whether they are prepared to assume once again the full responsibilities of self-government. And make no mistake: this is our choice. The Constitution does not make it for us. Neither does history or tradition or law. We may choose as a matter of what Sanford Levinson has called “constitutional faith” to surrender control to the Court, to make it our Platonic guardian of constitutional values. Or we may choose to keep this responsibility, even while leaving the Court as our agent to make decisions. Either way, we decide.

The point, finally, is this: To control the Supreme Court, we must first lay claim to the Constitution ourselves. That means publicly repudiating justices who say that they, not we, possess the ultimate authority to interpret the Constitution. It means publicly reprimanding politicians who insist that “as Americans” we should submissively yield to whatever the Supreme Court decides. It means refusing to be deflected by arguments that constitutional law is too complex or difficult for ordinary citizens. Constitutional law is indeed complex, because legitimating judicial authority has offered the legal system an excuse to emphasize technical requirements of precedent and formal argument that necessarily complicate matters. But this complexity was created by the Court for the Court and is itself a product of judicializing constitutional law. In reclaiming the Constitution we reclaim its legacy as, in Franklin D. Roosevelt’s words, “a layman’s instrument of government” and not “a lawyer’s contract.” Above all, it means insisting that the Supreme Court is our servant and not our master: a servant whose seriousness and knowledge deserve much deference but who is ultimately supposed to yield to our judgments about what the Constitution means and not the reverse.

We cannot do this unless we are willing to invoke the sorts of tools used by earlier generations to keep the justices in line. The Constitution leaves room for countless political responses to an overreaching Court: justices can be impeached, the Court’s budget can be slashed, the president can ignore its mandates, Congress can strip it of jurisdiction or shrink its size or pack it with new members or give it burdensome new responsibilities or revise its procedures. The means are available, and they have been used to great effect when necessary—used, we should note, not by disreputable or failed leaders, but by some of our most admired presidents, such as Jefferson, Jackson, Lincoln, and Franklin D. Roosevelt.

That merely mentioning such possibilities sends chills down the spines of lawyers and legal scholars today is just one more sign of how much things have changed. And, of course, political responses like these should not be invoked lightly. But as history demonstrates, a great irony of making clear that we can and should punish an overreaching Court is that it will then almost never be necessary to do so. For, as Madison and other proponents of the departmental theory saw as early as the 1790s, a risk-averse and potentially vulnerable Court can be expected to adjust its behavior to signs of popular unrest expressed through the other branches.

Making this shift would not entail major changes in the day-to-day business of deciding cases. There would still be briefs and oral arguments and precedents and opinions, and the job of being a Supreme Court justice would look pretty much the same as before. What presumably would change is the justices’ attitudes and self-conceptions as they went about their routines. In effect—though the analogy is more suggestive than literal—Supreme Court justices would come to see themselves in relation to the public somewhat as lower-court judges now see themselves in relation to the Supreme Court: responsible for interpreting the Constitution according to their best judgment, but with an awareness that there is a higher authority out there with power to overturn their decisions—an actual authority, too, not some abstract “people” who spoke once, two hundred years ago, and then disappeared. The practical likelihood of being overturned by this authority may be small, but the sense of responsibility thus engendered, together with a natural desire to avoid controversy and protect the institution of the Court, would inevitably change the dynamics of decision-making. It is this, in fact, that explains how the Supreme Court historically husbanded its authority even without judicial supremacy, as well as why crises occurred only when an overconfident Court claiming to be supreme paid too little mind to the public’s view of things.