Image via Matt Hrkac, Wikimedia Commons.

Late last June, in its decision in Dobbs v. Jackson Women’s Health Organization, the Supreme Court of the United States announced that the constitutional right to abortion that had been part of American law for 50 years was no more.¹ In many red states, laws criminalizing abortion suddenly came into force.

Dobbs, like all Supreme Court decisions, is scholarly in form. It has many footnotes to cases, statutes and the evidentiary record. But – also like most Supreme Court decisions, at least where there is a strong political valence to the result – you wouldn’t need to be a legal scholar to predict how each justice would vote. If you knew which justices were appointed by Republican presidents and which ones were appointed by Democrats, you would have been able to predict the result perfectly. All six Republican appointees voted to uphold the Mississippi law restricting abortion that was before them (although Chief Justice John Roberts did so on narrow grounds that would have left some room for a constitutional right to abortion). The three Democratic appointees voted against.

The question Dobbs raises is one that has bedevilled American constitutional law – and the law of countries that have copied the American model of judicial review of legislation based on a written constitution – since the beginning. Is it really even law? Or is it just politics? And if it is just politics – if, as Justice William Brennan once remarked, the only thing that matters is how to count to five – what justifies its being such unaccountable politics?

The thesis that politically salient and controversial Supreme Court cases are better understood politically than legally is known as “legal realism” in law schools, or as the “attitudinal model” in political science. Its classic statement came not from a legal theorist or judge, but from Finley Peter Dunne’s fictional Irish-American bar owner, Mr. Dooley. As quoted in one of the most popular syndicated newspaper columns of the early 20th century, Mr. Dooley opined that “no matter whether th’constitution follows th’flag or not, th’supreme court follows th’illiction returns.”²

A century after Mr. Dooley’s quip, his cynicism seems vindicated. The death of Justice Ruth Bader Ginsburg during the 2020 election campaign gave the Republicans the chance to cement a 6-3 majority on the Supreme Court. In the 2022 term, the result was a major shift of American law to the right. Dobbs wasn’t the only significant decision: the Supreme Court ruled that states and localities could not broadly restrict the carrying of concealed firearms and made it impracticable for federal environmental and health agencies to regulate “major questions” such as the use of fossil fuels in electricity generation or whether occupational health requires employees to take measures against the spread of COVID-19.³

But it could get worse. The judges may not just follow the “illiction returns” – they may determine them.

This is a particularly stark problem in the United States, which has no nonpartisan election bureaucracy and in which the right to vote has long been central to racial and party politics. Elections in the United States are conducted by partisan elected officials, with oversight by the courts. Since the 1960s, the Supreme Court has been the final actor in deciding how these elections will take place: it favoured wider and equal election participation early on, but more recently the Court has made it easier for local and state governments to set partisan rules and harder for any government to restrict the impact of money on elections.

The upcoming term may have as stark an effect on how federal elections are conducted as the last term did on abortion rights. In Moore v. Harper, the Court will consider whether state constitutions can impose any limits at all on how a state legislature regulates federal elections, including the choice of presidential electors.

State legislatures lean more Republican than the electorate as a whole (as a result of other judicial decisions). State Republican parties are increasingly dominated by people who regard the 2020 election as stolen from Donald Trump. And they have always been hostile to broader voting. If state legislatures are not constrained by state constitutions or federal legislation, then both state and federal elections can be made less democratic to benefit Republican candidates. Moore shows the potential for a feedback loop of undemocratic elections leading to unpopular judicial decisions leading to more undemocratic elections. This in turn gives each election a feeling reminiscent of Weimar Germany, as each side treats it as potentially the last.

Judicial review takes shape

The U.S. Supreme Court is now less popular than at any time since polling began, and a number of justices have pushed back against questions about its legitimacy. But in a highly divided country, is there any way to avoid partisan conflicts over the legitimacy of supreme courts making fundamental policy decisions based on interpretations of vague texts?

Such debates cannot be avoided. Judicial constitutional review is in deep tension with democratic values, but it is not and cannot be apolitical. Once we reject the pre-Enlightenment belief that some people have deeper moral insight than others because of their status, we have to come up with a democratic justification for nine unelected judges dictating to a nation of 330 million. The modern conservative legal movement has tried to address this question by calling for respect for the original meaning of constitutional texts. But while this answer is not wrong, it is also not necessarily useful. Looking at text and history changes how courts talk about the fundamental issues, but does not really cure the divisions themselves.

Mr. Dooley’s century-old cynicism reminds us that there is nothing new about the worry that the U.S. Supreme Court does politics, not law. The controversy over judicial power – and whether it could be distinguished from political power – goes back to the beginning of the Republic.

The 1776 Revolution unleashed a wave of popular democracy that the framers of the 1787 Constitution were concerned to restrain. In Federalist No. 10, James Madison identified as the chief danger in a republican form of government a majority “faction” interested in forgiving debts and redistributing property – which, for a Virginia planter, critically included slaves. Madison saw representative government and the “greater sphere of country” via a federal government as the solution because they would weaken democracy. In Federalist No. 78, Madison’s coauthor Alexander Hamilton added the institution of judicial review of legislation by “the least dangerous branch” of government. A life-tenured judiciary would be an “excellent barrier to the encroachments and oppressions of the representative body.”

In the early 19th century, after the Federalist Party of George Washington, John Adams and Alexander Hamilton had been defeated politically, Chief Justice John Marshall made the U.S. Supreme Court the Federalists’ last bastion, proclaiming the Court’s power of judicial review over congressional statutes it found conflicted with the Constitution. Initially the Court used this power sparingly.⁴ But in the 1857 Dred Scott decision, the Supreme Court entered the centre of national debate, a position it has never vacated. Chief Justice Roger Taney, a Jacksonian Democrat, ruled that African Americans could not be citizens or access federal courts and that the federal government could not prohibit slavery in territories that had not been made states.⁵ The U.S. Civil War was the direct outcome of this decision.

The antislavery Republican Party was thus born hostile to judicial power, even as it made peace with the Constitution earlier abolitionists had labelled a “covenant with death and an agreement with hell.” Abraham Lincoln responded to Dred Scott and the Supreme Court’s hostility to his Republican Party by denying that the Court had any jurisdiction to do more than decide the specific case in front of it. It could not have been otherwise, since he was elected in 1860 on the promise to prevent what Taney had said was constitutionally protected – the expansion of slavery into the territories. Lincoln expected the U.S. federal government to follow his interpretation of the Constitution, not the Supreme Court’s.

Republican hostility to the Supreme Court continued after the Civil War during Reconstruction. The Fourteenth Amendment (see box) – which is the foundation of modern American constitutional law and looks to Congress to enforce it – was written in direct response to doubts, rooted in Dred Scott, about the constitutionality of the 1866 Civil Rights Act, which was intended to establish racial equality.

Constitution of the United States: Amendment XIV

1: All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside. No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws …

5: The Congress shall have power to enforce, by appropriate legislation, the provisions of this article.

The New Deal and civil rights

After the defeat of Reconstruction in 1876 – which led to a one-party apartheid system called Jim Crow in the former Confederacy – the Supreme Court used the Fourteenth Amendment to stop legislatures from enacting pro-labour legislation. This period is known as the “Lochner era” after a decision striking down state legislation limiting working hours as an interference with freedom of contract.⁶ As a result, the Supreme Court was a favourite target of ire for progressives, populists and socialists.

It was Franklin Delano Roosevelt who, in 1937, ultimately tamed the Court, thereby securing the New Deal against constitutional attack. Roosevelt’s threat to increase the number of Supreme Court justices above nine (“court packing”) was widely believed to have motivated Justice Owen Roberts to vote to overrule Lochner and uphold minimum wage laws.⁷

For the New Dealers, the Supreme Court was an obstacle to be neutralized: a progressive judge was one who would get out of the way. It was only after the Second World War that progressive movements – notably the National Association for the Advancement of Colored People (NAACP) – turned to the Supreme Court to make positive progress. Some of the justices appointed by Roosevelt (including Felix Frankfurter) continued to think that the main job of judges in constitutional cases was to let elected politicians get on with governing. But other Roosevelt appointees listened to the NAACP, which sought a powerful ally for the unfinished business of Reconstruction.

Under the leadership of Earl Warren – a liberal Republican appointed by Dwight Eisenhower – the Court ordered the integration of segregated schools and established the principle of “one person one vote” in state elections, the abolition of school prayer, numerous procedural protections for the criminally accused (most famously the “Miranda warning”) and a conception of free speech broader than anywhere else in the Western world.⁸ The legitimacy of the Warren Court was put in doubt by conservative Americans, with “Impeach Earl Warren” bumper stickers popping up throughout the south. Prodded by social movements that looked back to the Civil War and Reconstruction, the Warren Court read the Fourteenth Amendment in light of this revolutionary moment, without concerning itself too much with whether its authors would have agreed with its specific decisions.

While the Warren Court was certainly politically controversial, that controversy was not yet partisan. Like Warren, the leading judicial liberal William Brennan was appointed by the Republican President Eisenhower. While Eisenhower came to regret this, the Warren Court was possible because American political parties were not ideologically well sorted. So while Democrats appointed Democratic lawyers and Republicans appointed Republican lawyers, that did not mean much. Liberal Republicans were common, while the most reactionary elected officials were usually from the all-Democratic south.

The 1964 Civil Rights Act and the 1965 Voting Rights Act – which finally brought the apartheid system of formal, legal segregation to an end – were among the proudest accomplishments of Democratic President Lyndon Johnson, but they passed with Republican votes from the north over the opposition of Democratic senators from the south. As Johnson predicted, though, they also led to a realignment of the American party system, as southern White Democrats joined the Republican Party and the parties increasingly became sorted on ideological lines.

This ideological sorting of the late sixties led to a new politics of judicial appointment. In the watershed 1968 election, Richard Nixon made the Court a political issue. He promised to appoint “strict constructionists” to the Supreme Court who would slow or reverse the changes of the Warren Court. When Nixon won, the Court was never to be the same.

The culture war, political realignment and constitutional law

The dilemma for American progressives in the 1970s was that with the decline of the cross-racial working-class coalition that Roosevelt had created, they found themselves on the unpopular side of what we now call the “culture war.” This became clearer with Nixon’s landslide victory in 1972 against George McGovern, whom the Republicans branded as the candidate of “acid, amnesty and abortion.” The AFL-CIO tacitly supported Nixon in that election, and White working-class voters north and south overwhelmingly voted for him. Key elements in Nixon’s support were fear of crime – which was surging – and fatigue with measures for racial integration that went beyond formal legal equality, most notably “busing” children from different racially defined neighbourhoods to attend one another’s schools.

Since Hamilton and Madison, America’s tradition of legalist antimajoritarianism had been about protecting property and primarily elite liberty from a popular “mob” bent on redistribution. In the wake of Nixon’s victory, and the turn of American politics to the right, it was tempting to look to the courts as a way of protecting the gains of Johnson’s Great Society and the Warren Court from a right-populist backlash. But with the forced resignation of the Johnson appointee Abe Fortas in 1969, Republicans obtained control of the Court, which they have retained ever since. What strategy could possibly lead to victories for the legal left in these circumstances?

The answer arose because of a split within the Republican coalition between its more libertarian, elite wing and the populist right that turned to Nixon out of opposition to the cultural revolution of the 1960s. Republican elite lawyers were, not surprisingly, disproportionately aligned with the elite wing. As a result, while the Supreme Court in this era had a solid bloc supportive of deregulation in general, breaking down state barriers to America’s internal market and restricting regulation of money in politics, on social issues the left could still obtain victories, albeit usually with narrow margins and always with reverses.

So it was under Nixon appointee Warren Burger that Harry Blackmun, also a Nixon appointee, wrote the 1973 Roe decision making access to abortion a nationally guaranteed constitutional right. Both Burger and Blackmun had backed Nixon’s “tough on crime” agenda, most notably on the death penalty. It is unlikely that either expected the response that Roe provoked on the religious right: up until the Roe decision, abortion had been an issue that primarily divided Protestants from Catholics. But the New Right that developed into the Reagan coalition changed that, as mainline Protestants gave way to evangelicals who made common cause with Catholic conservatives.

From the early 1970s until the retirement of Anthony Kennedy in 2018, the median vote on the Supreme Court was a pro-business conservative with liberal impulses on some important social issues. Moving lightly over the decades, Roe was upheld by an overwhelmingly Republican-appointed court in 1992 in Casey, while the gay and lesbian movement eventually won important victories: despite a decision upholding sodomy laws in 1986, the quintessential moderate Republican justice Anthony Kennedy wrote decisions invalidating state restrictions on local governments enacting antidiscrimination measures, decriminalizing sodomy in 2003, and finally recognizing same-sex marriage.⁹ A succession of liberal pragmatists – Brennan, John Paul Stevens and most recently Elena Kagan – tried their best to find a way to put together a winning coalition, following Brennan’s dictum that what mattered most in the Supreme Court was how to count to five.

This was frustrating for the social-conservative wing of the conservative coalition, but not necessarily bad for the Republican Party. On the core issues that had brought the Nixon coalition to power, the Supreme Court reversed the Warren Court: the death penalty, invalidated nationwide in 1972, was restored four years later, and capital sentences became increasingly hard to challenge.¹⁰ The Court raised no obstacle as America became the country with the highest per capita incarcerated population in world history.

Most importantly from the perspective of the Republican Party, the Court in this era not only rejected challenges to the undemocratic features of the American electoral system, such as partisan – and de facto racial – gerrymandering, but also made it difficult for Congress to change these features, especially in the case of election spending and pre-election clearance of rules with racial impacts in the south.¹¹

Most famously, the Court split 5-4 to award the 2000 presidential election to George W. Bush in Bush v. Gore, first enjoining a further count of votes on the grounds that it could “delegitimize” Bush’s election, and then holding that a statewide recount would violate the “equal protection” guarantee of the Fourteenth Amendment if different standards were used in different counties. This justification has been abandoned even on the legal right, but it gave the Republicans the presidency, the ability to enact significant tax cuts and ownership of the “Global War on Terror” after September 11, 2001.

The Burger (1969–86), Rehnquist (1986–2005) and early Roberts (2005–20) courts were characterized by narrow conservative majorities, with the decisive vote being in the hands successively of Sandra Day O’Connor, Anthony Kennedy and ultimately Roberts himself. All three were pragmatic pro-business conservatives – and O’Connor and Kennedy both had socially liberal streaks. Overall, the decisions resulted in the Court retaining wide popular respect, especially in comparison with the other branches of government.

The ascendancy of the ideological legal right

But the social conservative movement grew more frustrated. With the possible exception of Byron White – who was appointed by John F. Kennedy in 1962 and served until 1993 – no Democratic appointee proved to be anything but a liberal jurisprudentially. By contrast, while Nixon, Reagan and George H.W. Bush did each manage to appoint a fearsome conservative (William Rehnquist, Antonin Scalia and Clarence Thomas respectively), a number of Republican appointees moved to the left, including John Paul Stevens (appointed by Gerald Ford) and David Souter (appointed by the elder Bush). The breaking point was the 5-4 Casey decision in 1992, in which every member of the majority that upheld Roe had been appointed by a Republican president.

The “Impeach Earl Warren” slogan was updated to “No More Souters.” But to avoid unreliable Republican appointees, there needed to be a way of identifying reliable ideologues. A network of right-wing lawyers was developed through the Federalist Society to channel young lawyers toward clerkships with right-wing judges. On the academic side, this movement produced “law and economics” – the application of neoclassical economics to legal issues – and the principle that constitutional and legislative texts should be interpreted in light of their linguistic meaning at the time they were enacted (“originalism” or “textualism”).

The decisive moment was George W. Bush’s failed 2005 nomination of Harriet Miers, his White House counsel and personal friend. Even though she was a Republican partisan and an evangelical Christian, Miers was not part of the ideological network of right-wing lawyers centred on the Federalist Society. That was enough for the conservative legal elite to have her replaced by Samuel Alito, who has been a reliable conservative vote in the subsequent 17 years and was, ultimately, the author of the Dobbs opinion.

As I noted in my article about the appointment of Neil Gorsuch, Donald Trump was permitted ideological unorthodoxies but an absolute line was drawn on the question of judges.¹² He was only able to rally the Republican coalition in 2016 on the promise that he would select those judges the right-wing legal establishment dictated, and he followed through. After Gorsuch – who replaced the fearsome Scalia and thus left the balance in the Court unchanged – he was able to replace the moderate Kennedy with Brett Kavanaugh and the liberal icon Ginsburg with Amy Coney Barrett. This final act of his one-term presidency turned a 5-4 conservative majority into an impregnable 6-3.

In the 2022–23 term, the Supreme Court will consider the reach of federal environmental law into wetlands, whether California can require pork sold within its borders to be raised humanely and whether the Biden administration can reverse Trump-era immigration policies.¹³ It seems clear that the conservative majority will rule that considering racial diversity in higher education is unconstitutional and contrary to civil rights–era laws.¹⁴ It is not quite as certain that it will overturn the Indian Child Welfare Act, since Gorsuch has been a longstanding supporter of Indigenous sovereignty despite his otherwise conservative and “colour blind” views.¹⁵ But this may not matter: in one of the 2022 decisions, the other five conservatives voted, over a passionate Gorsuch dissent, to give states criminal law authority on Native American reservations.¹⁶

The danger of the “independent state legislature” theory

The biggest threat to American democracy is the Moore case. Moore arises out of a unique feature of the American Constitution: that elections to federal office are governed by rules determined by state legislatures. Article I, Section 4 says the times, places and manner of holding elections for senators and representatives “shall be prescribed in each State by the Legislature thereof,” while Article II, Section 1 says that each state shall appoint electors to the Electoral College that decides presidential elections “in such Manner as the Legislature thereof may direct.” In both cases, the conventional constitutional understanding is that the manner in which a legislature “directs” is determined through the ordinary legislative process of that state, including any constraints that may be imposed by the state constitution as interpreted by the state supreme court.

In the notorious Bush v. Gore decision, some conservative justices argued that the reference to “Legislature” in Article II meant that the Florida Supreme Court could not have the last word in interpreting Florida’s election law, since this would permit that court to decide differently from the legislature. A flaw in this reasoning is that all legislation must be interpreted by some court, so the justices were not really replacing the Florida Supreme Court’s interpretation with an unmediated legislative will, but rather with their own, competing, interpretation. Ultimately the basis on which the majority gave the 2000 election to Bush was that of the more moderate Republican justices, Anthony Kennedy and Sandra Day O’Connor, who preferred to rely on the equal protection clause of the Fourteenth Amendment.

The interpretation of “legislature” came up again in 2015 in a lawsuit brought by the Republican Arizona state legislature against the state’s independent redistricting commission, set up as a result of a popular initiative.¹⁷ Once a firmly Republican bastion, Arizona is now a swing state. As is typical of such states, its legislature is to the right of the electorate as a whole because of the concentration of Democratic voters in the major cities of Phoenix and Tucson. The state legislature claimed that the voters could not amend the state constitution to provide for a nonpartisan commission to govern redistricting. Since a conservative majority was established on the U.S. Supreme Court in the early 1970s, numerous cases had given the green light to partisan gerrymandering. The question therefore was whether a state’s voters could impose a less partisan way of redistricting. With the increasing sophistication of artificial intelligence models of likely geographical voting patterns, by 2015 this issue was critical to how state and congressional elections would be decided.

The decision in Arizona Independent Redistricting Commission upheld the state constitutional provisions allowing for nonpartisan decisions over House districts. The case was decided 5-4, with moderate Republican Anthony Kennedy voting with the Democratic appointees. However, the Republicans who remain on the Court today (and the now-deceased Scalia) all supported the “independent state legislature theory,” which would prevent any nonpartisan approach to rules for federal elections, leaving these rules entirely in the hands of state legislatures.

Moore raises the same question. While according to the principle of stare decisis courts are generally supposed to follow precedent, Dobbs shows that this principle is in fact no obstacle if a majority thinks the precedent was wrong. There is every reason to expect that under its new composition, the Court will uphold the independent state legislature theory.

It would be too much to say that this would be the end of American democracy, which has always had the peculiarity of being run with partisan election rules. But these efforts take place in the context of the rise of conspiracy thinking within the mainstream Republican Party. It is not hard to imagine Republican state legislatures in swing states effectively deciding a presidential election based on patently undemocratic reasoning. And the idea that a president chosen under such circumstances would go on to further change the composition of the highest court requires no imagination at all.

What is to be done?

Whatever the results in specific cases, America’s constitutional structure will frustrate majority rule. It will be extremely hard for the more progressive politics favoured by the millennial generation (those born after 1980) to be translated into actual public policy. Even if progressive Democrats win increasing shares of the popular vote, a combination of urban concentration and state gerrymandering means that supermajorities of votes will be needed to win a majority in the House of Representatives. Moreover, the Senate and the Electoral College have inherent biases in favour of rural voters. And even if Democrats manage again to win the presidency, the House and the Senate (as was narrowly accomplished in 2020), a Supreme Court chosen through what amounts to an actuarial lottery provides a final veto point on substantive change.

The result will be national public policy dominated by symbolic culture war issues that promote angry “engagement” on social media but have no effect in the hard currency of legislation, taxes and spending. America could find itself lurching toward unconstitutional change.

Where does that leave the Supreme Court? The Warren Court felt that it could only give effect to the post–Civil War constitutional amendments by making change against the will of popularly elected legislatures, particularly at the state level. For critics of the Warren Court – notably Robert Bork, whom Reagan tried and failed to appoint to the Supreme Court – the legitimately republican attitude was that unelected judges should do as they are told by democratically accountable legislatures. The only exception would be if the people themselves adopted overarching rules through a supermajoritarian procedure, as they do when they ratify and amend written constitutions. But either way, the will of the judge should be subordinate to the will of the people, whether in legislative or constituent form. For Bork, this meant that judges could only legitimately decide a case in accord with the “original intent” of either the legislator or the constitutional ratifier.

For his part, Scalia – the most theoretically minded conservative justice of his generation – recognized that the undisclosed “intent” in the brain of Madison or some more recent legislator cannot be binding because it is only the “public meaning” of the text that could be known to the ratifiers or legislative majority. But Scalia, an opponent of racial preferences for African Americans, had to recognize that the Reconstruction authors of the Fourteenth Amendment were strong proponents of federal action that was highly conscious of the obvious racial divide of the post–Civil War south.

Scalia argued that, in late-20th-century circumstances, the meaning of equal protection led to an application that was opposite to what it would be in the 1860s. But of course he could only argue this on the basis of his own, controversial, understanding of late-20th-century racial politics, rather than purely on the basis of “original public meaning.” And indeed, the newly appointed Justice Ketanji Brown Jackson, the first Black woman on the Court, was able to make deeply originalist arguments against what her conservative colleagues are going to do to affirmative action in universities, based on her different understanding of the extent to which racial politics have changed since the post–Civil War era.

The problem with “originalism” is thus not that it is wrong, but that it does not provide a solution to the problem of disagreement based on political priors. That is not to say that the meaning of legal terms changes without use of the amending process. But when legal terms refer to contestable normative principles like “equal protection” or “liberty,” those who apply them must take a position in the contest.

Since that contest is also what politics is about, originalism does not solve the problem it is meant to solve. As Justice Kagan has said, “we are all originalists now” – constitutional briefs from all sides are now focused on history and text. But as originalists, we seem to have the same differences that we always had in saying what the implications of text and history are for our current disputes.

It is therefore not enough to accept that American judges should give effect to the U.S. Constitution as Canadian judges should give effect to the Canadian. Unless we can say more than this, we have no prospect of getting out of the death spiral, since political actors will continue to appoint judges who – completely sincerely and in good faith – see the constitutional commands as serving the policy goals of those political actors. And U.S. experience suggests that they just get better at doing this as time goes on. Yet this practice undermines the very systemic legitimacy that it presupposes.

Law schools have generated many proposed solutions to this “countermajoritarian dilemma” – and presumably will keep doing so as long as they grant master’s degrees in constitutional law. The problem is that any solution will either appeal only to one side of the national debate or be too generic to resolve the issues at stake. A solution that is adopted academically will no doubt shape the rhetoric of Supreme Court justices, especially since much of the actual writing is done by clerks who recently graduated from those schools. But it won’t affect their votes.

The problem that faces proposals of conceptual reform also faces proposals of institutional reform. Some reforms would be good. The United Kingdom Supreme Court is larger, and actual decisions are made in panels that are randomly assigned. This promotes a more cautious, institutional style. The Canadian Supreme Court has mandatory retirement, which usefully promotes turnover and at least avoids the ghastly importance of timing of death.

It seems hard, though, to see how any reform can escape the trap that it will inevitably be against the interests of one side in the American national ideological struggle, and will therefore be simultaneously difficult to enact and legitimacy-destroying if it is enacted. Perhaps the only realistic conclusion is that legitimacy crises have no solutions and must either be mitigated through good luck or result in a spiral of decomposition.

America has been lucky in its past crises. Perhaps we are left with no more than the hope that it will be again.


Joseph Heath, The Machinery of Government: Public Administration and the Liberal State. New York: Oxford University Press, 2020. 444 pages.

There is a basic dilemma at the heart of public administration in complex, diverse societies like Canada. Because these societies are complex, many of the decisions with the widest impact on the public can only be made by people who are – or at least are advised by – specialized technical experts. But because they are also diverse, there is no broadly accepted normative framework in which technocrats can make these decisions, leading to a basic problem of legitimacy. In The Machinery of Government, Toronto philosopher Joseph Heath explores this paradox and claims to have a solution to it.

The book was written before the COVID-19 pandemic, but two years of public health mandates have really brought both sides of the problem home. Statistics, immunology, virology, computer modelling and professional instinct are all relevant to whether and to what extent changes in law and social behaviour will slow transmission of deadly, contagious diseases. All these disciplines had to be employed to decide whether to allow people to go to restaurants or churches or whether to require airline passengers or public servants to get vaccinated, and the people affected had to largely take the science on faith.

COVID-19 is just a dramatic instance of a much broader, but more abstract, story. Another example is monetary policy, a technical domain in which Prime Minister Justin Trudeau has expressed his lack of interest. In the face of inflation at levels not seen in decades, the Bank of Canada – mostly consisting of macroeconomists – will decide how quickly to increase interest rates. This will immediately affect the bottom line of every homeowner and business. It will be intended to have a direct impact on the labour market and therefore everyone’s wages and many people’s chance to have a job at all. The hope is that affecting aggregate demand will reduce the now-alarming rate at which the price of consumer goods is increasing. These highly consequential and immediate decisions are nonetheless spoken about in terms that only a tiny group of specialists purport to understand.

While central banks and public health officials are especially visible examples of technocratic power, there are countless other issues that are, in practice or in theory, resolved by specialized decision-makers, typically with some professional or technical accreditation. That is how we decide who gets into the country, who can build what where, what level of industrial pollution to allow, how quickly natural resources may be harvested, when children may be removed from their parents, when businesses need to be broken up and countless other questions that silently or loudly affect us all.

Some but not all such decision-makers are integrated into a formal departmental or ministerial structure, at the top of which stands a cabinet minister accountable to the legislature and thereby the electorate. Central bankers are different in that they have legal guarantees of independence; the same is true of prosecutors, utility regulators, labour boards, telecommunications regulators and many others. In all cases, these decisions, in practice, necessarily require a large degree of autonomy.

By comparison, Parliament and legislatures pass a tiny number of new rules in the form of statutes and budgets: when these are important, they almost always require, and at most structure, further decision-making by unelected officials. As Heath demonstrates convincingly, while specialized public servants are supposed to be – and usually are – accountable in some sense, this accountability is complicated and can rarely be understood as direct supervision by elected politicians.

Even the most populist of us will in practice have to concede that – at least in some cases – people with training in scientific, medical and engineering disciplines know something we don’t. We make that concession every time we undergo anesthesia or step into an elevator. The epistemic claims of some other disciplines modelled on natural science – macroeconomics comes to mind – are more disputable.

But whether the claims to knowledge are flimsy or secure, they are necessarily too limited to be the basis for decisions. What characterizes the break between post-Galileo empirical science and more traditional wisdom claims is that science, as we now understand it, is confined to descriptive claims about how the world is, not how it should be. At most, science can say that if you want A, you must do B. It cannot say whether A is worth striving for or whether it outweighs C, some unwanted effect of B. The maxim that we should “listen to the science” is therefore naive, if what we are listening for is a decision on what to do.

Public policy decisions thus cannot be derived from solely descriptive truths. It may be a fact that vaccines have a certain efficacy rate against infection by a certain variant of SARS-CoV-2, but it is only with necessarily controversial prescriptive assumptions that this can be translated into a decision that people in certain jobs must be vaccinated. Whether the bodily autonomy of the vaccine sceptic should give way to the biosecurity of the immunodeficient airplane passenger is not a scientific question.

Even if factual claims about the causal impact of increases in interest rates on inflation and unemployment were much more secure scientifically than they are in the real world, only additional normative assumptions could tell us whether a risk of higher unemployment is better or worse than a risk of higher inflation. In the modern scientific world picture, the technocrat cannot claim any special expertise in how things should be. Nor are technocrats representative of the people affected by their decisions, either in the sense of being directly accountable to them or in the sense of being demographically similar.

There is thus a basic dilemma. The people and their representatives cannot know enough to sensibly make many important decisions, but the people with the best claim to such knowledge are only loosely accountable to the people for the fundamentally normative choices that are necessarily involved.

The fraying of consensus over what to do about the COVID-19 pandemic was partly about pseudoscience and “fake news.” But the fraying also reflects the truth that, while science has a common language in which to discuss what is true or false, it has undermined the idea that there is a common metric for what is good or bad. Along with other developments in the modern West that seem irreversible, science has certainly dealt a blow to the idea that we could ever recognize a moral expert, or respect a self-proclaimed one.

The great merit of Heath’s book is that he puts this problem front and centre. He painstakingly reveals that fully transparent political accountability doesn’t solve the problem because of the information gaps necessarily involved, and a nonnormative, purely technical approach to public administration doesn’t solve it either.

Heath is nonetheless an optimist who thinks there is a way to resolve this dilemma in a way that is compatible with a deeper accountability of bureaucracies to citizens. He makes two big claims:

  • Bureaucracies in societies like Canada have already evolved an internal system of ethics that successfully negotiates this dilemma.
  • The key value in this system is “efficiency,” understood in the sense of neoclassical welfare economics.

For Heath, efficiency is the core value not just of economic bureaucracies but even of health, environmental and welfare bureaucracies, like public health agencies managing the COVID-19 pandemic. It may be mitigated somewhat by liberty or equality, but it is the default value of public administration and policy.

Heath takes what is known in philosophy as an “expressivist” approach: while he does not claim that bureaucrats would explicitly understand what they are doing in his terms, he does claim that he is making explicit what is already implicit in their practices, at least when those practices go well. He therefore denies that he is imposing some extraneous theory from the outside.

At first blush, Heath seems to have tried to resolve one problem by creating a deeper one. While “efficiency” is at least arguably accepted as a central value for economic policy, the idea that efficiency should be at the core of social, health or environmental policy is itself paradoxical. Referring to efficiency to decide how to deal with a deadly disease, family abuse or refugee determinations seems cold and bloodless, if not sociopathic and evil. Efficiency is controversial even in the economic sphere, especially on the left. It is definitely not how most social workers or physicians are trained to think. Even less is “efficiency” the kind of thing that has a wide social consensus behind it: first-year economics courses are famously counterintuitive.

Heath sees the problem and patiently explains what he means by “efficiency” in the hope of making it both plausible as an organizing principle for how bureaucracies actually make decisions and desirable as legitimizing what they are up to. In the language of welfare economics, a change is a “Pareto improvement” if it makes at least one person better off while making no one worse off; an arrangement is “Pareto efficient” when no such improvement remains available. The classic example for economists is a voluntary trade between fully informed parties in circumstances where their transaction affects no one else. Assuming a fairly low level of rationality, neither party will enter into the trade unless it makes them better off by their own lights.
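To put this in the standard notation of welfare economics (a formalization of mine, not Heath’s): write $u_i(x)$ for how well off person $i$ judges themselves to be under arrangement $x$. A move from $x$ to $y$ is then a Pareto improvement when
\[
u_i(y) \ge u_i(x) \ \text{for every } i, \qquad u_j(y) > u_j(x) \ \text{for at least one } j,
\]
and $x$ is Pareto efficient when no such $y$ exists. The voluntary trade is simply the special case in which the only people whose well-being changes are the two traders, and both gain.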

The critical philosophical move by liberals (broadly understood) is to say that, under these assumptions, nobody else has a legitimate objection to the trade going through. Efficiency is a reason that the contract should go through, but it is a reason that both the parties ought to grant on the basis of their own values and therefore is not imposed on anyone. The same logic that applies in pecuniary transactions applies as well to forming a family or other relationship, joining a church or temple, or identifying with a gender or ethnic group. As long as everyone directly involved is informed and agrees, and no one else is affected, efficiency and liberty are the same, private ordering should be respected, and doing so is following a norm but not imposing a controversial version of the good.

Because real-world markets frequently fail to meet the idealized conditions in which everyone affected is better off by their own lights, efficiency is not confined to permitting voluntary arrangements and can justify overriding them. A situation in which efficiency and private ordering diverge is, in the jargon of neoclassical economics, a “market failure.” For Heath, understanding the neoclassical logic of market failures allows efficiency, as a norm, to escape being the domain of the pro-capitalist right and allows it to be the unknown common ground of people of varying political beliefs in societies like Canada (which, in an earlier book, he described as The Efficient Society).

As a theorist of business ethics, Heath has argued that commercial morality can be explained as a way of addressing market failures in private-sector contexts. In The Machinery of Government, he expands this to the public sector. Market failure explains why efficiency diverges from liberty as understood by classical liberalism. It is commonplace to see environmental regulation, for example, as a way of addressing the straightforward market failure that arises when the effects of pollution fall on people who have no market relationship with the polluter. Other standard functions of the modern regulatory, welfare state – including social programs usually thought to be motivated by egalitarianism and solidarity – can be explained through information asymmetries, adverse selection in insurance markets and other suitably technical concepts of neoclassical economics.

To be sure, these situations usually provide no single Pareto-optimal solution, so it becomes necessary to substitute the concept of “Kaldor-Hicks” efficiency, after the economists Nicholas Kaldor and John Hicks. A decision that necessarily creates losers cannot be a Pareto improvement, but the choice can be made so that the winners could (in principle) compensate the losers. The decision leaving the greatest benefit to the winners (by their own lights) after they compensated all the losers is considered Kaldor-Hicks efficient: since governmental decisions are never without some losses, this is the relevant conception of efficiency.
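In the same spirit (again a textbook formalization rather than Heath’s own notation), a change passes the Kaldor-Hicks compensation test when the winners’ gains, valued by their own willingness to pay, exceed what would fully compensate the losers. Writing $g_i$ for the most winner $i$ would pay to get the change and $\ell_j$ for the least loser $j$ would accept to put up with it, the test is
\[
\sum_{i \in \text{winners}} g_i \;>\; \sum_{j \in \text{losers}} \ell_j ,
\]
so that a hypothetical transfer from winners to losers could convert the change into a Pareto improvement, even though no transfer actually takes place.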

This is, and should be, more controversial than Pareto efficiency since the “compensation” is hypothetical rather than real: in some – perhaps most – cases, it is to the one who hath that more shall be given. One idea is that over the long run the losers in one situation will be the winners in another, so that a large number of Kaldor-Hicks efficient decisions will be good for almost everybody. Another is that the compensation function can be placed in another area of policy, such as a progressive tax system.

Kaldor-Hicks efficiency can be relevant to decisions that go beyond money. The regulatory state is constantly faced with tradeoffs between risks of premature death or disease and economic activity. While the COVID-19 pandemic perhaps made this more obvious, the same choice exists whenever there is a decision about a speed limit or an occupational safety standard. If in one area of policy the state imposes a higher economic cost to save a life than in another, there is an opportunity for an efficiency gain. By equalizing the implicit value in the two areas, more lives would be saved for the same economic cost, less economic cost would be imposed for the same number of lives saved, or some combination of the two.
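A stylized example, with invented numbers, shows the logic. Suppose road safety rules save lives at an implicit cost of \$2 million each while an air quality rule saves them at \$10 million each. Redirecting \$10 million of compliance cost from the air rule to road safety forgoes one statistical life and gains five:
\[
\frac{\$10\text{M}}{\$10\text{M per life}} = 1 \text{ life forgone}, \qquad \frac{\$10\text{M}}{\$2\text{M per life}} = 5 \text{ lives saved},
\]
a net saving of four lives at no additional cost. Only when the implicit value per life is equalized across policy areas are all such gains exhausted.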

Cost-benefit analysis of lives offends industries whose activities cost more lives than average (most egregiously by imposing air pollution). It also offends widespread moral intuitions – and therefore, as Heath notes, is mentioned in legislation only when it is forbidden. Nevertheless, it has expanded in Western countries. Heath plausibly generalizes from this to note that cost-benefit analysis fits well with an ethic that is internal to the practice of science-based regulation and modern bureaucratic liberalism.

One objection would be that actual existing administrative systems are very far from any technocratic ideal of evidence-based cost-benefit perfection. Policy initiatives are often poorly evaluated for effectiveness. Anyone who has been in an area of public policy for any length of time has seen fads come and go without much rigorous analysis. Moreover, there are basic issues with making cost-benefit analysis determinative, including the inherent uncertainty of social science, the lack of a methodologically defensible “social discount rate” and methodological questions about the robustness of many technocratic ways of evaluating nonpecuniary goods like pristine nature or social connection. Heath’s response to these objections is that efficiency is a regulative ideal, and that of course there is much work to be done to improve actual policy and administration to bring it closer to this ideal.

In addition, there are deeper normative objections that question whether efficiency is what we should be aiming for at all. Some of the critics base themselves in traditional natural law and virtue ethics; others in radical critiques of capitalism. Heath recognizes, but does not really address, these objections. He feels he can ignore any foundational debate because he is trying to express the normative standards already implicit in successful bureaucratic practice. For him, the payoff is that it can both explain and justify specifically administrative decision-making. The political system can and will occasionally depart from the most efficient solution in favour either of equality (the value of the left) or freedom (the value of the right), but this is not a problem since what the technocrat needs is a default system of normativity that applies when the political system says nothing.

On the basis of my experience as a public lawyer working for government, I think there is much to recommend Heath’s explication of what is implicit in administrative practice. In one respect, I think he even undersells things, because he argues that administrative and constitutional law place an external constraint on the internal bureaucratic value of efficiency in favour of a more classically liberal value of freedom. In this chapter, his examples tend to be American, even though he largely refers to Canada for his examples of administrative practice.

In fact, outside the United States, the dominant principle in public law – whether constitutional or administrative – tends to be “proportionality,” which can be understood very much in efficiency terms. For example, under the Canadian Charter of Rights and Freedoms, limits on rights will usually be upheld if they pursue a legitimate public goal and “minimally impair” the interests of the rights holder: a test with an obvious affinity to Pareto efficiency. Similar standards are adopted by European courts and in other liberal democratic countries with active judicial review. While the American situation is more complicated, there too “least drastic means” tests are important in constitutional law, and American federal courts have gone further than most in recognizing cost-benefit analysis as the gold standard for administrative decision-making.

What is more disputable is whether this is a good or even a sustainable thing. Administrators need to find some way of making choices, which requires a normative standard. But they also need to be politically and socially neutral. Efficiency can, Heath thinks, achieve both goals, at least in North Atlantic societies that have modern liberalism embedded in their institutions and culture. The problem is that Heath paints a picture of a very tame kind of politics, in which a pro-redistribution left pushes to ameliorate efficiency with equality and a pro-market right emphasizes economic growth, but everyone basically accepts the framework of efficiency and philosophical liberalism. If that was a reasonable idealization of politics in the 1990s, it bears no resemblance to the politics of the West today.

The right increasingly emphasizes the importance of the state defending an existing national culture from what it sees as the demographic and ideological threats of uncontrolled immigration and wokeness. In so doing, it denies the premise underlying efficiency – that one person’s first-order values count the same as another’s – a premise that populist nationalists see as undermining the sense of national community necessary to have a functioning state at all. Moreover, populist nationalists are aware that technocrats have cultural values different from their own. They are not going to accept efficiency as neutrality, and their leaders will increasingly characterize technocrats as the “Deep State” in the hope of obtaining less constrained power.

Equally, the centre-left emphasis on addressing market failures and redistributing income is losing out to a more radical view that this kind of thinking is at the root of a half century’s increase in material inequality and loss of working-class power. The most sophisticated and influential version of this critique can be seen in the work of economist Mariana Mazzucato. In a number of books, she has argued that the state must do more than promote efficiency: it must have a vision of change and drive the private sector toward it.

In her most recent book, Mission Economy: A Moonshot Guide to Changing Capitalism, Mazzucato draws on the experience of the Manhattan Project and the Apollo program to argue that successful public administration is primarily about vision and experimentation, not correcting market failures. The disagreement between Heath and Mazzucato can be seen most directly in the difference between a climate change policy centred on carbon pricing and subsidies, on one hand, and one that would put transforming the energy and agricultural systems of the world at the heart of a new, ambitious industrial policy (the “Green New Deal”), on the other.

Heath knows that populist politicians will promote what he sees as inefficient policies, and he sees what those populists would regard as the “Deep State” as an important safeguard for liberal values. The problem is that any plausible vision of political neutrality has to be defined in terms of the real political divisions in the society: if they are between a nationalist populism and a progressive one, the very fact that bureaucracies in the West are oriented to efficiency may mean that we are all in for a bumpy ride, in which decision-makers and angry citizens find it harder and harder even to understand each other.

The problem is that Heath does not address how technocratic decision-making is to be reconciled with political struggles over deeper issues of identity and vision; he never really integrates technocratic efficiency thinking with popular input. Heath sees politics and democracy primarily as a threat to liberal values. But we have seen not only that technocrats become viewed as out of touch without popular input, but also that they make profound mistakes: the hollowing out of manufacturing capacity in an unqualified embrace of globalization, the Iraq war, the 2008 financial crisis and its aftermath of self-defeating austerity, and the opioid crisis, to name a few.

One possible proposal is the increased use of sortition, the random choice of citizens to act as deliberative decision-makers. There has been some interest in sortition in philosophical circles, but Heath seems unwilling to give populism its due. While Heath’s argument helps us understand public administration and could even make it work better, he is ultimately too sanguine about efficiency and the internal logic of technocracy to really help us get out of what he has rightly identified as a crucial dilemma.

Gareth Morley’s review of Jared Diamond, Collapse: How Societies Choose to Fail or Succeed (New York: Viking, 2005) appeared in Inroads 17 (Summer/Fall 2005).

Since at least the Club of Rome’s 1972 report The Limits to Growth, the most consequential intellectual debate at the intersection of social and natural science has been between Malthusians and Cornucopians.

Malthusians point out that nothing can go on forever. In mathematical language, exponential growth of anything must run into constraints, and if that growth is not controlled the constraints will come in the form of a catastrophe. They see the growth in human use of energy, land and other scarce things and become deeply pessimistic about the future of modern civilization.
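The arithmetic behind this claim is straightforward. A quantity growing at a constant rate $r$ follows
\[
N(t) = N_0 e^{rt}, \qquad t_{\text{double}} = \frac{\ln 2}{r},
\]
so growth of 3 per cent a year doubles in about 23 years and multiplies a thousandfold in under 240 years; no finite stock of land or energy can accommodate that indefinitely.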

Cornucopians acknowledge that other animals and preindustrial peoples were subject to this logic. But they are impressed by the institutions that the modern world has used to get the exponential growth that worries the Malthusians – science, engineering, markets, democracy, free speech. They point out that wealthy, educated societies have all reduced their birthrates voluntarily and that this is now happening throughout the world. Energy use continues to rise and humans continue to displace other species from their habitats, but Cornucopians are confident that these problems too will be overcome by the complex structures through which modern societies find truths, develop useful technologies and innovations, and regulate themselves.

As our generation reels from increasingly unstable weather and faces the challenge of transitioning from a modernity based on fossil fuels to a “net zero” economy in which we stop adding carbon dioxide to the atmosphere, this debate is of more than academic interest. Will some combination of renewable generation of electricity, better storage and transmission, and electrification of transport, heating and industry allow a better, cleaner industrial society? Or will we be buried in our own wastes?

Jared Diamond’s 2005 Collapse: How Societies Choose to Fail or Succeed was an intervention in the debate, mostly on the side of the Malthusians. Diamond – an ornithologist and biogeographer with personal familiarity with a number of very different societies – is a polymath who had already demonstrated his willingness to take on hard questions about human history without deference to disciplinary boundaries and somehow turn these thoughts into bestsellers. Collapse looked at how a number of different cultures, past and present, traditional and industrialized, had dealt with or been overwhelmed by problems arising out of their relationship with their environment.

In my review in Inroads 17, I argued that the book was ambitious and challenging, but ultimately flawed by a failure to consider how prices regulate the relationship of market societies to natural scarcities. I thought he missed the strength of the Cornucopian argument: that market economies will not inexorably deplete finite resources so long as those resources are priced, because facing those prices encourages even enterprises built on ever-greater use to economize. At the same time, he missed its biggest weakness: that actually existing capitalism left some of the most important scarcities unpriced and therefore unvalued. The principal example of an unpriced and unvalued resource is the finite capacity of the atmosphere and the oceans to assimilate carbon dioxide wastes from the combustion of fossil fuels without catastrophic warming and acidification. I was not impressed by his invocation of consumer activist pressure on corporate “brands” to do more about greenhouse gas mitigation, which I felt would get nowhere without political action to create a price signal for use of the atmosphere and oceans to store carbon dioxide.

Looking back 17 years later, I have to admit that pressure on corporations – and especially on investors – has had more results than I thought it would. More importantly, the political obstacles to effective carbon pricing have proven sufficiently imposing that many observers sympathetic to the technocratic elegance of pricing have argued that we are better off with less efficient, but more politically practical, ways of dealing with the problem.¹ This reality has made me reconsider one of the principal messages of Collapse: that the cultural patterns societies develop to aid their survival and reproduction at one time turn out to be what destroy them in the end. If this is right, it is not just that the economic system has to be corrected by a political intervention – an idea I certainly agreed with in 2005. Rather, the crisis also calls for a broader change of political institutions. But the solutions to past problems create identity-defining inhibitions against any new solution.

In 2005 it was clear that both the Bush administration and China’s massive demand for resources were problems for the kind of market-oriented approach I was advocating. But I saw some hope in the European Emissions Trading System built up after the Kyoto Accord. It clearly was not sufficient, but I thought it could be an “institutional base for future progress.”

That hope grew dim in the ensuing decade. The European Emissions Trading System suffered low prices as a result of a glut of licences until reforms were introduced in 2017. Even once the Bush regime was gone, the Waxman-Markey bill to bring a cap-and-trade system to the United States could not pass a Democratic Congress. Closer to home, Stéphane Dion’s proposal for a “green shift” involving a carbon tax lost to Stephen Harper’s policy of opening up tar sands development. British Columbia’s carbon tax, although it helped reduce carbon emissions per capita, has never been set at a level that did heavy policy lifting. No government anywhere introduced pricing at a level where it could have more than a modest incremental effect on emissions. But the inexorable logic of finitude meant that every tonne added to the atmosphere was a tonne less that could be emitted in the future without catastrophic consequences.

By the beginning of the 2010s, it was easier to maintain pessimism of the intellect than optimism of the will. In most of the rich West, it had become common political wisdom that a carbon price was a political disaster – any progress at all had to be kept quiet from the voters. Despite the failure of Waxman-Markey, the Obama administration funded some farsighted developments in technologies that are now paying off, but the only politically relevant aspect of this program was a single ill-fated loan to Solyndra.

The 2011 Fukushima Daiichi nuclear disaster inspired what was, from a climate perspective, an unhelpful backlash against the only carbon-free source of thermal electricity. The global South was clearly in no mood to allow concerns about the small greenhouse gas budget left by the industrialization of the North to get in the way of its own industrialization: China, India, Indonesia, South Africa and many other countries doubled down and expanded coal-fired electricity. Technology, democracy, markets, capitalism, state socialism and industrial policy all seemed to be pointing in the same direction: many degrees of warming.

A decade later, the situation is paradoxical. Some of the developments in politics and technology have gone beyond the most optimistic forecasts of a decade ago. A virtuous cycle in at least some countries of subsidized development followed by deployment has driven down the costs of solar photovoltaics, wind turbines, lithium-ion batteries and (soon) hydrogen electrolysis. This means that solar and wind are now cheaper than fossil fuels as a source of electricity on the “levelized cost” basis that energy wonks use. Since the sun does not always shine and the wind does not always blow, they must be “integrated” on the grid, but much progress has been made in doing this.
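For readers wondering what the “levelized cost” comparison involves, the standard formulation (a general energy-economics definition, not anything from the essay itself) divides discounted lifetime costs by discounted lifetime output:

```latex
\mathrm{LCOE} \;=\; \frac{\displaystyle\sum_{t=1}^{T} \frac{I_t + M_t + F_t}{(1+r)^t}}
                         {\displaystyle\sum_{t=1}^{T} \frac{E_t}{(1+r)^t}}
```

Here I_t, M_t and F_t are investment, maintenance and fuel costs in year t, E_t is the electricity generated that year, and r is the discount rate. The caveat flagged above matters: the formula says nothing about the cost of integrating intermittent sources, which is why “cheaper on a levelized basis” does not end the argument.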

In the rich world, natural gas and renewables have already put coal into sharp decline. The situation in newly industrializing countries is more complicated, but in the long run it seems that cheap solar will enable the dream of cheap, clean, domestic energy, and no government is going to be against that. If coal seems doomed, the future of oil and gas is more contested. As of the fall of 2021, both have recovered their prices and much of the world is facing a serious energy shortage. Demand for oil appears likely to recover to prepandemic levels, at least for a while. But if the market (and the Chinese government) are to be believed – and they are – the victory of electric vehicles over internal combustion now seems like just a matter of time. It is difficult to believe that Saudi Arabia will allow higher-cost producers to retain much share of a declining petroleum market.

Politics has also shown some ability to process information about what the future holds and turn it into concrete steps. At the elite level, the Paris Accord of 2015 created a framework in which every country is expected to put forward, on a purely voluntary basis, a “nationally determined contribution” (NDC), with the collective goal of holding warming to somewhere between 1.5 and 2.0 degrees Celsius above preindustrial times – which requires the whole world to reach net zero emissions. The NDCs, taken collectively, in fact imply considerably higher levels of warming, but there is now a recognition that total decarbonization is necessary and that “ambition” will have to increase. The 2018 Intergovernmental Panel on Climate Change Special Report on what 1.5-degree warming implied had more impact than any other such report in getting governments to at least make commitments to reach net zero by midcentury. And at the popular level, the 2018 Friday school strikes for climate seem to have led to a movement with more staying power – and certainly greater breadth of support – than any of its predecessors.

The paradox is that all of this is better than any concerned climate watcher of 2012 would have had any right to expect, and yet the “future” reality of unfolding disasters is now truly upon us and there is no prospect that it will get better in the lifetime of anyone alive today. There is no prospect of avoiding climate disasters – they have already happened at about 1.1 degrees above preindustrial averages. There will be more Lyttons. And a 1.5 degree endpoint is unlikely to be achieved: while it may just be technically feasible, it is not realistically consistent with the vagaries of politics, international and domestic, as we are almost certain to experience them. Moreover, as former B.C. Green leader Andrew Weaver has recently warned, it is possible that the fetishization of the 1.5 degree endpoint will lead to despair and demobilization as it becomes clear that this endpoint will not be reached – even though the difference between 2.0 and 1.9 is as great as between 1.6 and 1.5, if not greater.

The truth is we can no more leap out of our existing institutional and cognitive frameworks than could the Norse in Greenland or the Polynesians on Easter Island. Although there are examples of the capitalist institution Diamond favoured in 2005 – the consumer brand – leading to useful investments, there has also been a lot of greenwashing, of nonsense “offsets,” of empty symbolism and insincere blather. But the capitalist institution I was favouring – prices determined by caps on quantity or vice versa – has not yet proven able to be both politically viable and meaningful. Less efficient policies – feed-in tariffs, renewable portfolio standards, tax credits and, above all, good old-fashioned command-and-control regulation – have done what heavy lifting has been done.

What was most valuable in Diamond’s original book was the eye of the natural scientist and geographer on the products of cultural evolution – the eye that saw how varied societies created governance structures, systems of symbolic knowledge and tacit behaviour as adaptive responses to the challenges of their environment, as well as to their own social cohesion and the threats of neighbours. The reason that this should not lead us to Cornucopian delusion is that these structures and systems are accumulated through a process of trial and error, not foresight, and there is no guarantee that the next error will not be our last.

Prices are just one way modern societies learn, but I continue to think they are a critical one. Europe and Canada are now beginning to see prices at a level that can realistically bring about significant change. The electrification of transportation and heating and the cleaning of electricity supply now seem unstoppable, if unfortunately not unslowable. In the end the lesson from Collapse and from the paradoxical energy transition of the intervening 17 years is that there are no guarantees, either of success or of failure.


Photo: Jesse Wagstaff, via Flickr.

Public health orders restricting in-person gatherings have faced legal challenges across Canada. The argument is that these orders are contrary to the Canadian Charter of Rights and Freedoms, especially its guarantee of “freedom of religion.” Many of these challenges have been organized or funded by the Justice Centre for Constitutional Freedoms, a Calgary-based NGO headed by John Carpay, which considers COVID-19 a “political pandemic,”¹ but more mainstream civil libertarian organizations such as the Canadian Civil Liberties Association have also been involved. So far these challenges have been unsuccessful, although much remains undecided.²

Similar battles have played out in the United States, but with more uneven effect. The U.S. Supreme Court has visited the question of whether public health restrictions on religious gatherings violate the free exercise guarantee of the First Amendment on three occasions. Each of its decisions has been divided, and none of them has been final.³ Its most recent decision, issued shortly after Ruth Bader Ginsburg’s death, was deeply divided on partisan lines, with Justices Sonia Sotomayor and Elena Kagan accusing their Republican-appointed colleagues of playing a “deadly game in second guessing the expert judgement of health officials.”

It is hard to imagine a creature less interested in the subtleties of rights, law or ideology than the SARS-CoV-2 virus. It relentlessly focuses on its Darwinian mission of using the resources of human cells to make copies of its genetic material. But like everything else that fulfils that mission by spreading through networks of interacting human bodies, it both shapes and is shaped by all aspects of society, very much including the constitutional law of liberal democracies.

We usually think of rights as being about individuals. “Collective” rights seem exotic to Western, Educated, Industrialized, Rich and Democratic (WEIRD) people. At best, they make sense as a concession to vulnerable minorities. Normal rights – the kinds of rights everyone in Western societies gets to claim – are thought to be claims against society by an individual acting alone. But the COVID-19 pandemic makes clear – in the most literal possible sense – that these old-fashioned liberal Enlightenment rights like freedom of religion or expression, mobility and even privacy are about social interactions. Because social interactions are also how the virus spreads, interpretations of these rights shape the course of the pandemic.

In addition to demonstrating how social “individual” rights are, the pandemic has also put stress on Isaiah Berlin’s distinction between “negative” liberty (against state action and private coercion) and “positive” liberty (enabling people to fulfil their capacities). This distinction plays a controversial role in political philosophy. It is also embedded in Canada’s Charter of Rights. Section 32 makes it clear that the Charter only applies to governments and legislatures, reflecting the classical liberal idea that it is the state that threatens rights and freedoms.⁴ Section 7 protects a right to “life” and “security of the person” – rights very much at stake when a deadly novel virus is spreading exponentially through an immunologically naive population. But on closer inspection, it turns out that the Charter only protects these rights from “deprivation” by the state. Those who need protection by the state against the free action of others have to make creative arguments in Canadian courts under the Charter. (To be fair, those countries that explicitly recognize “positive” rights have also struggled with meaningfully vindicating them.)

Human beings in general, and lawyers in particular, are bad at thinking through exponential growth. If the average infected person goes on to infect more than one other person, and if this is sustained, then the number of infections in a naive population will explode. As Italy and New York proved to the world in the spring of 2020, no health care system – no matter how advanced – can cope with the consequences. The only responsible strategy on the part of public health authorities is therefore (a) to try to drive the number of infections to zero and then try to keep it there by restricting entry from more infected places (the “zero COVID strategy”) or (b) to keep the number of cases from increasing above what the health system can handle by keeping the average number of persons infected by each infected person around 1 (the “flatten the curve strategy”).
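A minimal numerical sketch (my illustration, with invented round numbers, not anything from the essay) shows how sharply case counts diverge around an average of one transmission per infected person:

```python
# Illustrative only: infections per "generation" of spread scale as R^n,
# where R is the average number of people each infected person infects.
for R in (0.9, 1.0, 1.5):
    cases = 100  # hypothetical starting case count
    trajectory = [round(cases * R ** generation) for generation in range(11)]
    print(f"R = {R}: {trajectory}")
# With R = 0.9 the outbreak dwindles; with R = 1.5, a hundred cases become
# nearly six thousand within ten generations of spread.
```

This is the arithmetic behind the two strategies: “zero COVID” tries to force R well below 1, while “flatten the curve” tries to hold it near 1.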

Until vaccines could be deployed in sufficient number, the only realistic way of keeping case numbers down involved keeping people apart and/or keeping them under surveillance. There was no way to avoid a confrontation with liberal rights, especially when those rights are understood in an absolute or “deontological” way (i.e. regardless of consequences). The SARS-CoV-2 virus may not care about the West’s longstanding ideological fixation on the conflict between the state and the rights of the citizen, but any attempt to stop the spread of the virus was bound to get caught up in that ideological fixation.

It was not surprising that this conflict would take its sharpest form in the American court system. Americans are litigious and their judiciary has become increasingly ideologically polarized, while the Trump Administration’s response to the pandemic created a deep partisan divide around “lockdowns.” To be sure, there are some ideological ironies in the way the free exercise of religion cases have played out. Justice Antonin Scalia – the longtime lion of conservative jurists who died in 2016 – was the author of the 1990 Employment Division v. Smith decision,⁵ which held that the First Amendment did not protect religious practices from generally applicable laws unless their “object” is the prohibition of those practices. While it is not hard to find people on the internet who think that public health measures are motivated by hostility to religion in general or Christianity in particular, real officials – no matter how secular their personal belief system – are anxious to get support and “buy-in” from religious communities. Unsurprisingly, there has been little evidence of antireligious animus presented to any courts.

In addition, control of communicable diseases has long been understood to have a special status as a justification of governmental measures that would otherwise be unacceptably coercive. Quarantines in the pre-antibiotic era were drastic everywhere, including in the United States. George Washington’s administration was faced in 1793 with a yellow fever epidemic in Philadelphia, then the American capital. Other states quickly moved to detain ships and travellers from Philadelphia, although the tiny federal government of the time did no more than hasten its own relocation to the District of Columbia. After the Civil War, the Supreme Court decided that the Fourteenth Amendment prevented states from exercising their powers in a way that infringed the liberty of employees to agree to long working hours, but was unsentimental about liberty when it conflicted with curtailing the spread of infectious diseases.⁶

Given this history of interpreting liberties in light of public health needs – well supported by liberal political theory, which has always permitted state intervention when individual actions directly harm others – why did COVID-related measures become so controversial within the American court system? There is no question that some of the explanation lies in the ideological divisions laid bare by the fierce battles over replacing Justices Scalia, Kennedy and Ginsburg with Neil Gorsuch, Brett Kavanaugh and Amy Coney Barrett. Journalistic accounts were not wrong to highlight this dimension, as well as Chief Justice John Roberts’s role as a swing justice.

But reasoning matters too, and much of the legal debate turned on how to understand what counts as a “law of general application.” In 1990, Justice Scalia said the state would be on solid ground if it implemented measures that were not intended to disadvantage religious practices. By 2020, the conservative justices insisted on a much more muscular notion of evenhandedness between religious and secular activity. When Chief Justice Roberts upheld California Governor Gavin Newsom’s executive order at the end of May 2020, he pointed out that various secular activities – including lectures, concerts, movie showings, spectator sports and theatrical performances – were subject to similar or more severe restrictions. Californians could gather in no greater numbers to hear a reading of Richard Dawkins’s God Delusion than to hear a passage from the book of Genesis. By contrast, Justice Kavanaugh, speaking for the conservative wing of the court, pointed to grocery stores, restaurants and factories, where more people were allowed.

In such comparisons, everything turns on what is considered comparable. While the virus does not care about the reasons people gather, it will transmit more easily depending on how they act once they are together. No indoor gathering of any size is entirely safe in a pandemic, but a number of factors make a difference: the prevalence of the virus in the community, the demographic profile of those present, the behaviour of people once gathered (singing, chanting and loud talking are particularly likely to spread the virus) and how likely people are to comply with mandates that they stay away from one another (presumably more likely in more transactional situations).

An additional difficulty is that “flattening the curve” is about managing the average number of transmissions in the community. While every death is a tragedy, it is not in the nature of public health to be able to prevent every death. A flattening-the-curve strategy involves avoiding increases – especially sustained increases – in the number of cases. No jurisdiction has attempted a total lockdown of a year or more – those that have successfully implemented a “zero COVID” strategy have not had to and those that have sought to flatten the curve have tried to maintain schooling, other health services and economic activity to the extent consistent with that strategy. This necessarily involves choices.

Chief Justice Roberts and the liberal members of the U.S. Supreme Court – while willing to scrutinize some measures they thought went too far – were also willing to allow some flexibility for choices by those accountable to the electorate. But by November, a conservative majority had established an approach of subjecting any restriction on religious activity that went beyond a secular comparator to “strict scrutiny.” This follows a trend in American rights jurisprudence, found on both right and left and criticized by Columbia Law School professor Jamal Greene in his recent book How Rights Went Wrong, of viewing rights as “trumps.”⁷

Canadian constitutional law has always been more willing to protect religious practices from unintended restriction than the U.S. Smith case. But it has also been more willing to give governments latitude than the deontological tradition in American law, especially in the protection of public health. To be fair, not all American judges take an absolutist approach to rights – and sometimes Canadian fuzziness can make it difficult to predict how cases will be decided. But while it is possible that Canadian courts will take an approach like that of the conservative justices in the United States, so far that has not happened.

The first set of cases that have been decided in Canada involved applications to “stay” public health orders until the constitutional issues could be fully argued. This is generally a hard, but not impossible, thing for a person challenging governments to obtain. In the case Canadian lawyers still refer to, the tobacco companies facing the Chrétien government’s proposed unattributed health warning on cigarette packages failed to get an exemption while the case went to the courts, even though they (rightly or wrongly) ultimately persuaded the Supreme Court that their “freedom of expression” was unjustifiably infringed.

In light of the more urgent public situation of a pandemic compared with the chronic health issue of smoking, it is perhaps not surprising that preliminary “stay” applications have been rejected by Ontario, Quebec, Alberta and Manitoba courts. A Newfoundland and Labrador trial court has rejected a challenge to the “Atlantic Bubble” policy of keeping residents of other provinces out – while it accepted that this policy violated mobility rights guaranteed by section 6 of the Charter, it saw this as a “reasonable limit” of the kind the Charter’s section 1 allows. The case is being appealed by the Canadian Civil Liberties Association.

In British Columbia, I was part of a small team of lawyers for the defence in a constitutional challenge to Dr. Bonnie Henry’s Gatherings and Events Order (Beaudoin v. British Columbia). This order restricted in-person religious gatherings, with exceptions for limited-attendance baptisms, funerals and weddings – and later small outside gatherings, first for Orthodox Jewish congregations and now for anyone whose practices are consistent with outdoor gatherings.

By the time the case got to court, Dr. Henry had already exempted outdoor political demonstrations: B.C. Supreme Court Chief Justice Christopher Hinkson held that an earlier blanket ban on those was an unjustified infringement of freedom of peaceful assembly in light of the evidence and situation at the time. But he upheld the restrictions on in-person religious gatherings, specifically approving of Dr. Henry’s consultative style. As he noted, genuinely comparable religious and secular activities have been treated equally. Religious and secular education, for example, have both been allowed, as have religious and secular weddings and funerals. Like the Atlantic Bubble case, this case has been appealed; meanwhile, other challenges are before the courts across the country.

I cannot pretend to be a neutral observer, but I think that, so far, the Canadian approach better reflects what a legal system can appropriately do. Judicial review can make public health decisions more transparent and evidence-driven, so long as it is sensitive to legitimate needs to make decisions quickly – and therefore necessarily before all the evidence can be in. Epidemics, like life, can only be fully understood backward, but must be lived forward. Lawyers and judges need not accept unreasoned claims of “expertise,” but they should have appropriate humility in relation to their ability to evaluate the management of inherently complex problems. This is especially true when one of those complex problems is the judicial system itself, and especially its accessibility for ordinary people – and neither lawyers nor judges have figured out how to fix it. But most of all, lawyers should always recognize that “rights” are inherently social claims, and that it is society that must negotiate their resolution.

To read more on religion and public health in the age of COVID, see Holy or Irresponsible by Martin Lockshin.


Photo: Ajay Suresh, via Flickr, under CC 2.0.

A spectre is haunting academia, journalism and politics – the spectre of Cancel Culture. The powers aligned together against this spectre today are as diverse as the Holy Alliance of “Pope and Czar, Metternich and Guizot, French radicals and German police spies” that Marx and Engels identified in 1848. But is “Cancel Culture” – as Marx and Engels proudly claimed of Communism – truly a world power recognized as such by the existing world powers? To use a more contemporary lexicon, is it even really a thing? Is this “New Intolerance” either new or intolerant?

The Republican National Convention and the Harper’s Letter

Some people think so. The danger of “Cancel Culture” was the theme of the August 2020 Republican National Convention. The 2020 RNC was the first in history not to include a platform of policy proposals, all the better to emphasize the cultural grievances of the Staatsvolk, those whose ethnic identity is simply to be American.

This is hardly a new theme for Republicans, of course, who have been railing against the predecessors to “Cancel Culture” – “political correctness” or the “nattering nabobs of negativism” – since the 1960s. But the relative emphasis has clearly shifted. Freedom is no longer U.S. self-determination under threat from foreign actors – what remains of that theme has been left to MSNBC personalities like Rachel Maddow worried about Russian interference in American elections. Nor is Freedom really conceptualized any longer as market capitalism under assault from an Economic Left promoting redistributive taxation and spending or intrusive regulation of business. Republicans still talk about “socialism,” and the policy legacy of the first Trump Administration is undeniably lower business taxes and fewer environmental regulations, along with a lot of federal judges who will make these changes difficult to reverse. But that is not where the passion is.

To anyone listening to the speeches, it became clear that Freedom is really under threat from the Cultural Left: reformers or radicals who oppose “systemic racism,” particularly police violence disproportionately affecting black Americans, or contest traditional ideas of gender and sexuality, particularly to promote the rights of transgender people. The RNC occurred shortly after mass Black Lives Matter protests expanded across the world – sometimes accompanied by looting and violence. The BLM movement had been a target of Trump’s 2016 campaign and he quite clearly welcomed the ability to fight against it again, rather than defend his record on the COVID pandemic.

Conservatives have always promoted the defence of order and property against mobs, and American conservatives have long resisted demands for any sort of racial reckoning as disrespect to America’s Founding Fathers and tradition. It is also perfectly normal for conservative parties to support traditional gender norms and resist the demands of sexual minorities. What was perhaps surprising – at least to a listener who had not been paying attention to recent American political rhetoric – was that the Cultural Left was represented not just as a threat to law and property, but also, and especially, as a threat to the American public’s ability to speak. The party of racial and sexual order presented itself as the party of transgression, while supporters of change were presented as puritan scolds and censors. “Cancel Culture” was attacked not so much for undermining traditional propriety as for stopping Red Americans from saying whatever they want whenever they want. No one represents this position more than Trump, whose rhetorical style is borrowed from standup comedians and “shock jock” radio show hosts.

This rhetorical frame of healthy transgression stifled by the schoolmarms of the Cultural Left is also found among opponents of the Trump Administration and the current Republican Party. If there is a manifesto of this camp, it is the July 7 open letter to Harper’s magazine. The letter was signed by a popular front of the National Security Right, the Democratic-establishment Centre and the Economic Left, ranging from David Frum and Michael Ignatieff through baby boomer feminist icons Margaret Atwood and Gloria Steinem to anti-imperialist leftist academics Noam Chomsky and Cornel West. The Harper’s Letter (as it soon became known) described what this coalition is against as “a new set of moral attitudes and political commitments that tend to weaken our norms of open debate and toleration of differences.” The letter was careful to pair this new threat to tolerance with the “forces of illiberalism” associated with Donald Trump.

Perhaps as a price of such a broad coalition, the Harper’s Letter was unclear as to what events in the world it was responding to. It observes that “powerful protests for racial and social justice are leading to overdue demands for police reform,” a clear reference to the upsurge in mass protests that punctuated the COVID shutdowns as a result of the killing of George Floyd on May 25. The Letter implied that this movement, and movements associated with transgender rights, had “intensified” a challenge to norms of open debate in favour of that enemy of American individualism and sixties counterculture alike: “conformity.”

It is of course not at all uncommon for moderate members of reformist movements to criticize those more radical than themselves for extreme tactics or unrealistic goals. But, as with the Republican National Convention, the Harper’s Letter took up the more surprising perspective of the id against the superego. The problem with the excesses of the Cultural Left, as the cooler and wiser heads who signed the letter suggested, was not the traditional problem of radicalism. Rather, their rhetoric, like the Republicans’, was about excessive conformism and limitations on debate. The warning was not of the risks of violence or utopianism, but of the danger posed to letting one’s freak flag fly.

While no single case seems to have provoked the Harper’s Letter, it was widely seen as a response to the resignation of James Bennet as editor of the New York Times editorial page. On June 3, the Times published an opinion piece by Tom Cotton, a right-wing senator from Arkansas, calling for using the U.S. military to suppress the protests. Cotton had earlier tweeted that there should be “no quarter” for “insurrectionists, anarchists, rioters, and looters” (an order of “no quarter” in war means to kill surrendering soldiers and is universally regarded as a war crime).

The op-ed was published two days after the Trump Administration apparently directed the use of tear gas by federal police to disperse protesters in Lafayette Park near the White House so that Trump and other senior administration officials, including the Defense Secretary and the Chairman of the Joint Chiefs of Staff, General Mark Milley, could cross to a nearby church. Milley later apologized for his involvement, amid reports that the military had been asked – and refused – to become involved in policing the Black Lives Matter protests. The printing of the Cotton op-ed sparked a backlash both within the New York Times staff and among its predominantly upper-middle-class liberal readership. Bennet resigned on June 7.

The Harper’s Letter was widely interpreted as having been triggered at least in part by these events. Bennet hire and Harper’s Letter signatory Bari Weiss publicly resigned, accusing the Times of discrimination, hostile work environment and constructive discharge. Weiss came to prominence organizing a campaign to deny a Palestinian American anthropologist tenure at Barnard because of controversial criticisms she had made of Israeli archeology. Weiss had long been a leading figure among ostensibly liberal critics of the censorious nature of “identity politics,” linked, coherently or otherwise, with a frequent tendency to see anti-Semitism lurking under criticisms of Israel or America’s pro-Israel foreign policy. Weiss received support from many centrist pundits and politicians, including Andrew Yang, the pro–universal basic income candidate for the Democratic Party nomination.

There are paradoxes here. It might seem that the obvious, even proud, authoritarian in the story was Senator Cotton. Even on the most sympathetic view, he represented the party of order and tradition, not of transgression or sceptical thought. The inclusion of “anarchists” – an ideological category that includes Chomsky – among those who should be put down by violence suggests a lack of concern with the First Amendment, however understood. Cotton followed up by calling for restricting federal funding to any state or local education system that used a Pulitzer Prize–winning New York Times series, the “1619 Project,” which argued that race and slavery had not been made sufficiently central to American history. By September, President Trump had instructed the Department of Education to follow up on this and promised he would consider Cotton for the Supreme Court if reelected. Critics like Bari Weiss might have wondered whether they had correctly identified the main threats to freedom of expression and civil liberties in America today.

The anti-Trump, anti-Cancel-Culture coalition was more concerned with what happened to Bennet, as editor, than with really defending the Cotton op-ed. They saw here a threat to the kind of cross-ideological-but-curated discourse represented by the New York Times. The paradox here is that the New York Times is a for-profit enterprise that, like the Washington Post, increased its subscriber base by more or less explicitly presenting itself as part of the opposition (“resistance”) to the Trump Administration. Its op-ed page is intended to generate money. It does not of course purport to represent all opinions tolerated under the First Amendment, or even a reasonable cross-section of actual American political views: it has three regular “Never Trump” conservative columnists (ranging from the interesting Ross Douthat through the past-his-prime David Brooks to the execrable hack Bret Stephens), despite the tiny share of the electorate that this strand of opinion represents.

The New York Times’s subscriber base certainly thinks of itself as open-minded and likes to be “challenged,” but like everyone else, they have their limits. In July 2020, in the midst of a worsening pandemic, the Trump Administration was preparing to put troops in the streets of the big cities where those subscribers live. From the perspective of the kind of people who keep the Times afloat, this was a personal threat, not a debating point. The Times is not a charity. And the customer is always right. In the best traditions of American capitalism, the Times acted to protect shareholder value – and while we can all sympathize with Bennet’s fate, he knew the business he was getting into.

Of course, lingering on a particular example may miss the point. A culture is not a law, and it is not a single instance either. I could have given the more Canadian example of Don Cherry’s loss of his perch at “Coach’s Corner” last year for accusing immigrants of not wearing poppies for Remembrance Day. Or any number of other controversies that seem to punctuate the news cycle. But a list of examples never gets us closer to a concept.

Opposing “Cancel Culture” gives meaning to both a certain kind of Cultural Right and a Cultural Centre in just the same way that opposing Communism did to homologous parts of the political spectrum after World War II. But Communism had a clear referent in the form of the Soviet government. It had secret police and loyal party members. It was clearly devoted to a form of coercion that liberals, of all stripes, could coherently oppose. By contrast, Cancel Culture consists of a loose array of human resources professionals, youthful activists and cultural anthropologists exercising the preeminently liberal rights of employers, protesters and academics to contract, assemble and theorize.

But as we move away from a single example, we have trouble getting hold of the thing itself. The paradox lies in placing a specific conception of open debate beyond legitimate debate, and in labelling the questioning of polite, social tolerance of certain “differences” an intolerable difference. It is possible to forbid some considerations from entering into state action against certain ideas. And it is also possible to say that other considerations should not be part of deciding what is published in particular forums, or grounds for promotion or firing. But it is not possible to forbid cultural sanctions for expressing opinions – at least not without formal censorship. It is not even possible to criticize such sanctioning without engaging in sanctioning of one’s own. We are in need of some analysis, whether linguistic, historical or psycho.

Intolerance: An intolerably confused concept

Anyone who wants to supply analysis had better show their credentials – particularly in a case like this one. They need to “situate” themselves.

I am a middle-aged White man with a well-paid professional job. I am probably more sympathetic to the movements of the Cultural Left than the median person fitting that demographic profile. But it would not be hard to find someone woker than me. For example, I am not in favour of abolishing or defunding the police, although I think some of the reforms unfortunately gathered under one or both of those slogans are worth taking a look at. While there is no doubt that social problems in North America – from COVID deaths to police violence – are disproportionately racially distributed, or that the reasons this is the case are the product of racial and colonial histories, I agree with those who say these problems have primarily race-neutral class-based solutions.

While I am not going to bite the bullet of defending every statement or action by every antiracist or transgender activist, I do not think that the “Cancel Culture” frame is defensible. In some cases – most transparently Trump’s or Cotton’s, but also among their anti-Cancel critics – it is a rhetorical device to silence or marginalize the people the user of the phrase disagrees with. Trump regularly calls for people who criticize him to be fired or even prosecuted and has repeatedly bemoaned the restrictions the First Amendment places on his ability to sue people for defamation. But even people more self-reflective than he is sometimes confuse criticism with censorship.

Alternatively, “Cancel Culture” might be identified with discourteous or self-righteous expressions by activists, especially online. To be sure, “piling on” on social media can be destructive and it would be unresponsive to contemporary reality to regard much of what happens when a large group focuses on one person’s alleged misdeeds as just “criticism” that should be addressed with resilience. The paradox, though, is that this is precisely what free speech centrism counsels. Moreover, whatever the sins of the Cultural Left in this regard, they are hardly the worst offenders. Anyone who wants to be controversial online faces trolls, most of them right-wing. Women who express controversial opinions can count on threats of sexual violence and ethnic minorities can be sure of racial epithets. While there is indeed a problem of “troll armies,” it is a cross-ideological problem, and one of lack of effective regulation of speech – to which American First Amendment fundamentalism has undoubtedly contributed.

One issue that quickly comes up in these conversations is how big a deal it is to label someone’s actions or statements “racist.” Outside the Cultural Left, at least in the North American middle class, racism is seen as an individual moral flaw that is both rare and terrible. From this perspective, accusing someone of being racist is essentially accusing them of being a member of the Ku Klux Klan. This is not how the Cultural Left understands things: racism for them is primarily structural. It is difficult to get White liberals to understand that this implies that claims of racism are injunctions to reform, not statements of irremediable evil. By contrast, middle-class men in heterosexual relationships have no particular difficulty understanding that a claim that something they did or said was “sexist” does not imply that they are morally indistinguishable from Marc Lépine. It is a call to rethink behaviour. Whether you agree with a particular claim or not, it would obviously be unacceptable to make it a precondition of polite conversation to preclude the possibility that anyone other than the most violent misogynist is in any way sexist. Yet right and centrist critics of the Cultural Left consider it a perfectly reasonable demand that talk of racism be limited to references to neo-Nazis.

The final type of “cancellation” that raises difficult issues is the exercise of economic power over hiring and firing – either directly by employers themselves or through the market power of major customers – to discipline those who have engaged in what is considered intolerable expression.

From the libertarian or classical liberal perspective adopted by both Canadian and American free speech law, the exercise of economic power does not violate constitutional guarantees of freedom of expression. Some people on the Cultural Left take this as meaning that targeting a person’s job for what they say cannot raise freedom of expression issues in a broader sense. As a social democrat, however moderate, I disagree because I regard employer power as power potentially as despotic as that of the state. This is a particularly stark reality in America, where almost all employment is “at will” and few jurisdictions have protections against employer retaliation for political expression.

Of course, a right to say things an employer or its customers do not like to hear cannot possibly be absolute – a vice president of marketing cannot be expected to be allowed to praise a competitor’s products as better than those of her own company. But the American system of total protection from censorship by the state – to what seem to me like ludicrous extremes (the U.S. Supreme Court struck down laws against pretending to have a military medal or limiting the use of racially offensive trademarks¹) – combined with total vulnerability to censorship by employers seems to me a real problem. And, it must be conceded, sometimes this power is exercised by the Cultural Left.

But most of the time? Is it really true that economic power to restrict expression is mostly exercised by the Left? No, it is not. The most comprehensive data set of political firings at American postsecondary institutions since 2000 is maintained by Acadia University professor Jeff Sachs.² There are interpretation issues, but it is clear that the majority of terminations occur because of criticism from the right (usually for being unpatriotic or too critical of Israel). As Sachs points out, since there are far more left-of-centre academics than right-of-centre ones, the probability of being fired from an academic position for political speech is lower on the left. But academics in fact have unusually high levels of job security. If we broaden our gaze to American society more generally, there can be no question that job insecurity chills speech, but also no reason to think it particularly chills right-wing speech.

By any reasonable metric, there is a broader array of political opinions available than ever. While social and economic pressures as well as the unwanted attention of troll armies make most people unwilling to attach their own names to controversial views, pretty much any opinion can be expressed on the internet pseudonymously. Canada, like every other country outside the United States, takes a less absolute view of free expression as a matter of constitutional law.³ But Canadian law is more protective of freedom of speech than it has ever been.⁴ More practically, it has proven very difficult for any country that wants to participate in the global internet to enforce more restrictive standards than those permitted in America. While all this speech has not led to the flourishing of the reasoned discussion hoped for by John Stuart Mill, that perhaps speaks more to the lack of realism of Mill’s ideal than to any culture of intolerance.

Why cultural change is experienced as silencing

Nevertheless, we overwhelmingly think “Cancel Culture” or “political correctness” is a thing. In a comprehensive study in 2016, Angus Reid found that two thirds of Canadians thought political correctness had “gone too far,” with a similar number agreeing that “it seems like you can’t say anything without offending someone these days.”⁵ Americans are polled on these issues more regularly: they agree with similar statements in similar numbers. While people say things are worse than they used to be, they have always said they are worse than they used to be – there is no upward trend over time in people thinking this is a problem. The sentiment that political correctness has gone too far is held in similar numbers across racial groups in both countries, although it is slightly higher among men than among women.

The ubiquity of this sentiment makes sense once we accept that any speech act will take place in a context of social approval and disapproval. Unless we are absolute monarchs, when we say something we are simultaneously asserting some kind of authority and making ourselves responsible to the judgement of those who are listening. These norms are invisible when they are traditional and universal. But cultural reform consists precisely in seeking to change those norms, based on some higher norm of equality or autonomy. It can only be expressed as disapproval of the existing structure of value, and therefore only experienced by those within that existing structure as an unexpected loss of status.

Think of Mill’s complaint in On Liberty of the “tyranny of custom” restricting the principle of individuality in Victorian England, particularly for women or eccentric men. The only way this culture could change was through the work of a self-conscious group of reformers – the first wave of feminists, along with the Victorian and Edwardian freethinkers so influenced by Mill. But the disapproval of these feminists and freethinkers for what they saw as the bigotry of more conventional Victorians was experienced as elite condescension at best and as suppression of the freedom of Englishmen at worst.

Moreover, any movement of reform must rely on solidarity. If those within that movement are seen as conceding to the social structures it is struggling against, they can only be disciplined by social disapproval within the movement. In some cases, this results in sectarian division, in others in conformity around the cause. But for anyone whose identity is caught up in the broader movement, disapproval by those “to one’s left” is likely to sting more than it would for the self-consciously reactionary.

Of course, once the cause is won, the norms that the movement sought to create become part of the tyranny of custom. I grew up in the 1980s in a relatively liberal city, Victoria. But I can assure you that no one at my high school was as free to say they were gay or to express a nonconforming gender identity as their children are. This newfound freedom is only possible because homophobic and transphobic abuse became subject to social sanction (and sometimes school discipline), which they were not in the 1980s. Then and now there were things that could be said and things that could not be said. The total amount of “tyranny of custom” has been conserved – but it has been redistributed in a way that allows for greater freedom and equality.

Not every effort at social reform in the past succeeded, and many of those efforts may not have been good ideas. And I would not suggest that all such efforts in the future will or should succeed either. But if they are meaningful at all, they will all involve changing what is socially disapproved. Custom may change from a tyrant to a constitutional monarch, but will never cease to rule. In that sense, someone will always feel cancelled.


China’s report to the World Health Organization on December 31, 2019, of a “pneumonia of unknown cause” in Wuhan – what we now know to be one of the pivotal events of the 21st century – at the time drew hardly any attention. Indeed, right up into March, politically engaged Canadians were deeply divided over another issue: the construction of the Coastal GasLink Pipeline through the traditional territory of the Wet’suwet’en people – with the approval of the Wet’suwet’en’s elected chiefs and councils, but against the will of those claiming to represent their traditional governance structures. While the pandemic blew this (along with every other issue) out of the news cycle, it remains unresolved, is likely to flare up again and points to broader issues Canadian society will have to live with for the foreseeable future.

Coincidentally, this controversy was sparked by another relatively little-remarked event that occurred while Canadians were preparing to celebrate the New Year. On December 31, 2019, Justice Marguerite Church of the British Columbia Supreme Court granted the company building the Coastal GasLink Pipeline an injunction against protesters blockading a bridge on the Morice West Forest Service Road, near Smithers, B.C.

The protesters said they were there to prevent people from accessing the territory of the “Unist’ot’en” without the consent of their traditional chiefs. The judge described the Unist’ot’en as a matrilineal group of houses within the Gil_seyhu (Frog) Clan of the Wet’suwet’en. However, the most direct connection appears to be with Dark House, which has kept organizationally independent from the Office of the Wet’suwet’en representing the hereditary chiefs, but shares their opposition to the Coastal GasLink Pipeline traversing traditional Wet’suwet’en territory.

When the RCMP moved in to enforce the injunction in February, solidarity protests occurred across the country – most notably, Mohawk protesters blocked Canada’s rail arteries in Ontario and Quebec, making what had been a provincial story a truly national one.

The controversy raised deep issues dividing both Indigenous and non-Indigenous Canada: about what postcolonial reconciliation would look like, or whether it is even possible; about the relationship between democratic elections and representation; and about the future of the fossil fuel economy. Underlying all of these is the meta-issue of whether it is possible to think about these issues in a nuanced way in an era of polarization and social media.

The media largely moved on in March: first, hereditary chiefs agreed to a protocol with the federal and provincial governments about continuing rights and title discussions, and then North America finally started taking COVID-19 seriously. But the issues on the ground – and, of course, the more fundamental ones – have not been resolved. On May 1, elected chiefs objected to the process on the basis that it would occur entirely within the hereditary system and that the actual Coastal GasLink route was not part of it.


The Dream of LNG

The Coastal GasLink Pipeline project involves building a link between the vast natural gas reserves of northeastern British Columbia and a liquefaction facility near Kitimat on the Pacific coast. B.C.’s provincial government has long supported the goal of one or more major liquefied natural gas (“LNG”) facilities in the north, especially as the long-run prospects for North American natural gas prices plummeted in the wake of the massive increase in supply as a result of the shale revolution. Though world prices were also very low, even before the COVID-19 shock, proponents hope this is temporary.

Support is bipartisan: the B.C. Liberals pulled off an unexpected victory in the 2013 provincial election after a campaign focused on the benefits of LNG, and the current NDP government of John Horgan has also supported its development. Horgan’s government depends on its alliance with the anti-LNG B.C. Green Party for “confidence and supply,” but the Greens ultimately decided not to make LNG an issue on which they would bring the government down. While they have opposed all legislation to enable LNG, it can easily pass with the votes of the NDP and the Liberals.

The relationship between LNG and climate politics is contentious. Proponents argue that for the foreseeable future, exports of LNG will have the effect of displacing coal as a source of dispatchable electricity generation: while burning methane (the main component of “natural gas”) creates carbon dioxide, it is more efficient than coal (or oil, for that matter) and is vastly less toxic in its effect on ambient air quality. LNG proponents therefore argue that this is fossil fuel infrastructure that will benefit the environment, especially since British Columbia can use its abundant hydroelectricity to provide zero-carbon electricity for the liquefaction process.

On the other hand, if methane escapes without being burned, it has a far greater warming effect than carbon dioxide, the greenhouse gas produced by combustion. The question of whether natural gas development is good or bad for the climate therefore depends on the degree of escape and the extent to which it can be reduced. Recent empirical work suggests that the release of methane into the atmosphere has been severely undercounted.¹
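To see why the leakage rate is decisive, here is a back-of-the-envelope sketch – entirely my own, with assumed round numbers; the warming-potential figure, emission factors and break-even result are illustrative, not drawn from the essay or the cited study:

```python
# Back-of-envelope: at what methane leakage rate does gas-fired electricity
# lose its warming advantage over coal? All constants are assumed round
# figures for illustration only.

GWP_CH4 = 30.0              # assumed 100-year global warming potential of methane
COAL_KG_CO2E_PER_KWH = 1.0  # assumed footprint of coal-fired electricity
GAS_KG_CO2_PER_KWH = 0.4    # assumed combustion CO2 of gas-fired electricity
CH4_BURNED_PER_KWH = 0.15   # assumed kg of methane burned per kWh generated

def gas_footprint(leak_rate: float) -> float:
    """kg CO2-equivalent per kWh, counting leaked (unburned) methane."""
    leaked_ch4 = CH4_BURNED_PER_KWH * leak_rate / (1.0 - leak_rate)
    return GAS_KG_CO2_PER_KWH + leaked_ch4 * GWP_CH4

# Scan leakage rates to find the break-even point with coal.
for pct in range(16):
    if gas_footprint(pct / 100) >= COAL_KG_CO2E_PER_KWH:
        print(f"Under these assumptions, gas loses its advantage at ~{pct}% leakage.")
        break
```

Under a 20-year warming potential, which weights methane much more heavily, the break-even leakage rate falls considerably – which is why the undercounting that the empirical work suggests matters so much.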

Pragmatic cost-benefit arguments along these lines may seem irrelevant to those who see a deeper energy transition as a moral imperative, one that can only be fulfilled by ceasing to build any more infrastructure for extracting and transporting fossil fuels. Getting the planet to “net zero” carbon emissions by the middle of this century is not compatible with using natural gas, or any other fossil fuel, to generate electricity or heat homes – at least in the absence of significant developments in carbon capture technology.

For different reasons, arguments that natural gas is superior also irritate residents of Alberta and Saskatchewan, whose hydrocarbon economy is more reliant on heavy oils – the transportation of which has been a point of conflict between those provinces and British Columbia. But despite occasional rhetoric about West Coast hypocrisy, the oil and gas industry has been completely supportive of Coastal GasLink, recognizing that if it cannot get built, the prospects for heavy oil projects are even more remote. Certainly, most British Columbians – particularly in the north – support the development of LNG as a source of employment and revenues for public services.

Critically, the British Columbians hoping for these benefits include a large proportion of the Indigenous people living in the north. The Kitimat facility is to a very large degree the product of efforts by leaders of the Haisla Nation, where it will be located. Among these leaders is Ellis Ross, the B.C. Liberal MLA for Skeena and a particularly lacerating critic of opponents of the pipeline. As with other major pipelines, Indigenous opinion is divided. Canadians quickly became aware that the elected chiefs representing First Nations along the route of the Coastal GasLink pipeline had agreed to “community benefit agreements,” but that hereditary chiefs of the Wet’suwet’en, in particular, had not.

The Dream of Reconciliation

The dream of a resource boom is the oldest one of settler British Columbia. The first resource boom – the marine fur trade – was a joint enterprise of Indigenous and European peoples, but it brought species loss and epidemics. Later booms – the gold rush, the coal rush, the timber rush, the hydro rush, the real estate rush – were pure manifestations of colonial state capitalism, with some succeeding on their own terms while others worked only for those who got out early.

In British Columbia, settlers and their government essentially appropriated land and resources without any attempt at reaching agreement with the Indigenous people living there. The only exceptions were a few mid-19th century Vancouver Island treaties with the Hudson’s Bay Company as agent of the Crown, and the extension of Treaty 8 from Alberta into the part of northern British Columbia east of the Rockies – where the natural gas deposits are.

Throughout the 20th century, the British Columbia government took the view that the land and its resources simply belonged to the province: if any aboriginal rights ever existed, they were “extinguished” long ago. It persisted in this view after section 35, affirming “existing aboriginal and treaty rights,” was added to the Canadian constitution in 1982. The B.C. government of Premier Bill Bennett signed on to that amendment claiming that it would have no effect west of the Rockies, since there were no treaties and, outside of reserves and food fishing, no aboriginal rights continued to “exist.” This continued to be the provincial government’s position for another decade.

It was the hereditary chiefs of the Wet’suwet’en, along with their Gitksan relatives, who brought the landmark litigation that challenged the B.C. government’s historical approach, bringing a vast amount of information about their traditional governance structures onto the public record. In 1997, the Supreme Court of Canada rendered the Delgamuukw decision, making it clear that aboriginal people in British Columbia continued to have legal rights to the land that governments and resource companies could not simply ignore. It was often claimed during the social media free-for-all surrounding these events that the decision vindicated the position of the hereditary chiefs. The truth is more complicated – and the unfinished business of that case is the necessary backdrop for what happened in the winter of 2020.

The Wet’suwet’en have a language completely distinct from those of their neighbours, but developed an interlocking matrilineal kinship/governance structure with them. Larger clans are subdivided into smaller houses.² Although the system is called “hereditary,” it is not based on principles similar to those of European feudalism, such as primogeniture: an individual becomes a chief of a house by being selected to carry on the name of the chief of that house. The selection has sometimes been contested, but is ideally based on consensus attained at a feast – potlatch in the Chinook trade pidgin.

In addition to these traditional kinship/governance structures, there is a system of elected chiefs and councils, first created by the federal Indian Act. Enrolled members of Indian bands – now usually called First Nations – can periodically vote for a chief and band council. While no one denies that the origin of this system was colonial, these structures clearly now gain their legitimacy the same way that other elected governments do – on the basis of their mandate and the fact that they can be replaced if those they represent collectively decide to do so.

Opinions among politically active Indigenous people differ on the weight each of these structures should have in a postcolonial world. The moderate view that both traditional and elected structures should have a role is found in a number of modern treaties. But there are “traditionalists” who reject the elected system altogether or say its authority should be limited to the reserves, while others feel that the traditional system should be limited to ritual and persuasive roles.

The Delgamuukw case was originally brought as a claim for “jurisdiction” and “ownership” by the hereditary chiefs of the Gitksan and Wet’suwet’en houses. The lands claimed by each house had been delineated, in the case of the Wet’suwet’en, at a 1986 feast where the entire claim area was divided into 133 territories assigned to 71 houses. Essentially, each house chief claimed to have a form of both sovereignty and property ownership over the specified territory, to be held in accordance with Wet’suwet’en law. For better or for worse, the Supreme Court of Canada did not accept that proposition. Instead, it set out, at length, its own concept of “aboriginal title” and how this could be proved. On the grounds that the evidence in the case had not been aimed at this (newly formulated) concept, the Supreme Court said there would have to be a new trial. That new trial never took place.

The difference between what the claimants in Delgamuukw were originally seeking and the “aboriginal title” that the Supreme Court ultimately described has been referred to as a “technicality” – but it is at the root of issues that remain with us two decades later. The Supreme Court made it clear that, in its view, aboriginal title had to be asserted at the nation level (i.e., by the Gitksan and Wet’suwet’en as a whole) and not at the level of a house. Decisions about how lands subject to aboriginal title will be used must be made “by the community.” Aboriginal title was also not held to be absolute; rather, it is subject to justifiable “infringements” by the federal or provincial governments, although those infringements have to meet a strict test in court.

Putting Postcolonialism into Practice

A number of practical issues arose out of the Delgamuukw decision. The most pressing was how resource decisions would be made while the extremely complex process of proving aboriginal title took place. The general answer came in the 2004 Haida decision – and it is really that decision that created the operative framework applied ever since. In Haida, the Supreme Court established a flexible doctrine requiring governments to consult – and, in some cases, accommodate – Indigenous entities with potential claims to aboriginal title (or to rights that do not go quite as far as title). On the one hand, this duty is “not a veto”; on the other hand, it means that any resource project opposed by groups plausibly holding aboriginal rights or title claims is potentially subject to legal challenge.

Different stakeholders would undoubtedly have different accounts of how well and how justly the system of resource and land development that resulted from Haida has worked. It is a complex area of law, resistant to simplistic summary – and certainly subject to reasonable criticism from all sides. But what may not have been appreciated in the national and global media conversation is that this system is definitely not the same as the unilateral authority of provincial resource ministries that prevailed in the last century.

This works both ways. Development requires the involvement of Indigenous peoples, but decisions not to develop can be challenged as well. Since Indigenous people face the same cross-cutting considerations of economic and environmental priorities and values as everyone else, the implications are complicated. In practice, the duty to consult has led to a resource industry that can only operate through often-complex agreements providing employment opportunities and funding of public services for Indigenous groups.

The classic legal question of how to address the “holdout” – one property owner who says no to a linear project whose value depends on going through everyone’s land – arises in this new, hopefully postcolonial, context. If a community holding a claim to aboriginal rights or title says “no” to a project located solely on its own land, it says no for itself alone. But if it says “no” to a project that traverses multiple territories, then the “no” is for everybody. While this may sound appealing to opponents of fossil fuel infrastructure, the same would be true of transmission lines connecting zero-carbon run-of-the-river hydro projects or wind farms to the electricity grid.

The difficult question is how to balance the right to consent to development with the right not to consent. Because the “duty to consult” is “not a veto,” this problem can be resolved – albeit not speedily and not without potentially alienating dissenters – by the courts deciding that the objectors were consulted sufficiently. But to that extent, “free, prior and informed consent” becomes an aspiration rather than a legal prerequisite.

The other problem Delgamuukw left unresolved was how to determine the will of each individual community. The Supreme Court stated that land use decisions were collective, not individual, but avoided the classic problem of political theory this raises: how does a community decide when its members disagree? The postcolonial dilemma is that outsiders do not have the legitimacy to resolve disputes over the right system of governance, but cannot avoid having to deal with some governance structure.

Justice David Vickers struggled with this issue in his decision in the only other major title case to come to trial in British Columbia, brought by the elected chief and council of the Xeni Gwet’in First Nation (formerly the Nemiah Valley Indian Band) on behalf of a Tsilhqot’in Nation (which, in his decision, Justice Vickers identified as a cultural nation like “French Canadians”) that had no definitive organizational or political existence. The elected government, as a political entity, could, depending on the social facts, exercise some of the rights of this prepolitical people. (The difficulty of a judiciary whose own authority necessarily comes from the “colonial settler state” determining, as a “question of fact,” the political representatives of an ethnos is perhaps an unavoidable paradox of postcolonialism in this context.) The Supreme Court adopted Justice Vickers’s approach without the same visible struggle and without necessarily giving guidance as to how it might be approached in other contexts with different “social facts.”

One of the implications of the “duty to consult” regime is that such governance questions can, to some degree, be avoided. It is not absolutely necessary for a non-Indigenous government to determine these questions, so long as title is unsettled – it may be legally obliged to consult with both elected and traditional governance structures, and if there are differences of view, it can try to persuade a court that it did its best. But the precondition for consulting with everyone is that there be no specific entity that has a clear right to give or withhold “free, prior and informed consent”: the “settler” government must listen, but ultimately, and subject to review by the colonial courts for how reasonably it has done so, it decides whom to heed.

In 2020, these contradictions manifested themselves on the streets and in social media comment threads. The Wet’suwet’en are divided as to whether a pipeline through their traditional territories is in their collective interest. Both traditional and elected structures have ways of resolving, but also asserting, these differences. Inevitably, opposing forces within the “settler” population were drawn to the side of different “authentic” representatives of the Wet’suwet’en, depending on their own attitudes to natural gas development. The same, of course, can be said about Indigenous communities across the country: they too saw the quarrel in terms of their own disputes about governance and development.

A right to develop and a right to control development cannot both be truly absolute without coming into conflict. Depending on who is entitled to exercise the rights of the Wet’suwet’en, the Haisla may be able to exercise their right to develop only if not every group along the route has to fully and freely consent. If development does not happen, that also has implications for the interests of those upstream and downstream. These are the longstanding problems of pluralism and federalism – postcolonialism may mean that Indigenous people are brought into such problems as full partners, but it cannot mean that these problems will not exist.

Construction of the Coastal GasLink pipeline continues, as (over Zoom) do discussions between Wet’suwet’en hereditary chiefs and both levels of government about aboriginal title – as I write, the elected chiefs have objected to being frozen out of that process. The bottom has fallen out of energy markets – no one knows what will happen to them once the COVID pandemic ends.

Disputes about governance will, of course, always be with us as long as human beings disagree and have conflicting identities. These disputes become more complex once new voices are in the mix – but the simplicity of the colonial diktat is the peace of the grave and we should not be nostalgic for it.

A long, but neglected, strand in the Western tradition emphasized that the best regime is a mixed regime: neither democracy nor tradition should rule without the other. Finding the right mix for a particular culture is a problem that outsiders cannot solve. Nor has any culture definitively solved the problem of how different polities can compromise over matters that affect them all – and outsiders do need to be part of that one. Virtues of patience and practical wisdom are needed – something that traditional cultures (Western or Indigenous), for all their faults and all their differences, would see immediately. As a consequence, they would also see why the flattening democratic populism of social media will not make things better.


Whatever the reason for Canada being one of the world’s oldest and most stable democracies, it is not because Canadians understand exactly how it works. Since Confederation we have had 17 changes in the party controlling government at the federal level, and hundreds provincially – all of them peaceful, some of them consequential. But as the end of the 2019 federal campaign made clear, Canadians can be quite confused about how governments are chosen.

While citizens of France and the United States vote for who their president will be (in the U.S. case, if we ignore the Electoral College), Canadians vote for their prime minister only if he or she happens to be running in their riding. Instead, we elect a federal House of Commons or provincial legislative assembly. The partisan makeup of the house determines who gets to wield executive power. When one party wins a majority of seats, how this occurs is pretty straightforward: the leader of that party becomes the first minister – that is, prime minister or premier – and selects a cabinet.

But things are not so clear if there are more than two parties and none of them gets a majority of seats. If one party gets the most seats, does that party automatically get to form the government? Or is it legitimate for the other parties to agree among themselves and depose the government without an election? What happens when the incumbent party does not get the most seats, but the former opposition does not have a majority? Should the incumbent first minister give way to the leader of the opposition, or can he or she try to stick around and put together a working majority? When a government is defeated on a matter of confidence, when does that mean it must give the reins over to someone else and when can it “go to the people” in an election?

All of these issues have been matters of partisan debate in Canada in the last decade. In some cases, there is an expert consensus, but sometimes controversy arises among the coterie of constitutionalists – a group with no formal membership qualifications, no principle of accountability to anyone and no clear way of resolving disputes.

A Murky Area

Constitutional issues other than government formation are legal matters – which, while sometimes uncertain, can at least be authoritatively resolved by the courts. But the courts have refused to step into government formation – although even that principle took a beating in the United Kingdom when its Supreme Court held Boris Johnson’s request for prorogation to be unlawful and of no force and effect. Canada’s system is based on the U.K.’s, but if courts were to step in here, it would be revolutionary and inevitably controversial. And of course, since what is at stake is power, these disputes are not going to be conducted disinterestedly.

To be sure, most of the time, the system works whether it is universally understood or not. In practice, the “conventions of responsible government” – however mysterious they may be to the laity and even sometimes the clerisy itself – give a clear result about who is supposed to occupy 24 Sussex Drive (assuming it is ever made habitable). Contrary to semi-informed opinion, the representatives of the Crown – the governor general at the federal level and the lieutenant governors in the provinces (collectively, the governors) – rarely have any real choice in what to do.

But there are real question marks. In Canada, there is a particular question about whether an incumbent government that gets fewer seats than one of its rivals but can see a way to put together a working majority must give the party with the most seats a shot at governing. Since this eventuality almost happened in both of the last two federal elections – and in fact occurred in New Brunswick in September 2018 – we really should have some clarity about it.

Unfortunately, the answer requires some nuance, which partisan politics and media regard the way cats regard baths. A governor would – and should! – let an incumbent first minister, no matter how many seats his or her party got, put a throne speech to the house. If the incumbent government lost a confidence vote at that time or shortly afterwards, then, and only then, would the governor call on the leader of the party with the most seats.

However, this does not mean the first minister who decides to do this is off the constitutional hook. First ministers are supposed to give governors the right advice. So even if the governor lets the first minister meet the house, that leaves open the question of whether the first minister ought to put the question to the governor in the first place.

In my view, an incumbent first minister whose party (or pre-election coalition) does not get a plurality should advise the governor to call on his or her more successful rival to take the first crack at governing. In this respect, Andrew Scheer was right in October 2019 to argue that there is a “modern convention” that the party with the most seats has a right to try to govern.

However, Conservative partisans were wrong to suggest – either in 2019 or in 2008 – that their opponents are obliged to leave them in office. On the contrary, if a parliamentary majority supports the old government, then the right thing to do would be to let the plurality party give a throne speech, but move an amendment that the house has no confidence in the new government. If that passes, the old government has every right to come back and govern as long as the new parliament lasts.

Put that in your 30-second ad buy.

Election 2019 and the Constitution

Election 2019 was not all about the Prime Minister’s more or less youthful ventures in racially insensitive costuming or the Leader of the Opposition’s inability to keep straight his qualifications to practise as an insurance broker or to get a U.S. passport. In addition to such substantive issues as climate change, tax policy and the prospects of a national pharmacare plan, the campaign briefly touched on the mysteries of the constitutional principles governing the formation of executive government.

In the end, we had a fairly boring result from the perspective of an enthusiast of Westminster system arcana. As a result of the much greater efficiency of their vote compared to that of the slightly larger portion of the electorate that voted Conservative, Prime Minister Justin Trudeau’s Liberals, while denied a majority, received a strong plurality of seats in the October 21 election. Since Jagmeet Singh’s New Democratic Party made it abundantly clear in the campaign that it would never support a Conservative government, and since many of the NDP’s policy objectives overlap with those of the Liberals, no one doubts that Trudeau can continue as Prime Minister.

Pundits can of course speculate on how long a Trudeau government will last before another election, but the Liberals clearly have the authority to remain in government with the cooperation or acquiescence of the smaller parties. The Liberals have ruled out formal cooperation, but they will be able to get any measure passed as long as they have the support of one of the Conservative Party, Bloc Québécois or NDP. A premature end to this Parliament forced by the opposition seems unlikely.

But if things had turned out slightly differently, we might be in the midst of a constitutional crisis. Shortly before voting day, Conservative leader Andrew Scheer created a ruckus by claiming that if his party received the most seats, “modern convention” meant he should get the first chance at being prime minister after the election. He could point to a similar statement by Justin Trudeau before the 2015 election, when it appeared quite likely that the Liberals would get a plurality at the expense of the Harper Conservatives, but before their last-minute momentum delivered a majority.

Scheer’s advisers appear to have thought that raising hypotheticals about what might happen after an election was a strategic misstep and he quickly turned to emphasizing the benefits of a Conservative majority. The only party leader who directly engaged Scheer’s constitutional claim was Elizabeth May, leader of the Green Party, who claimed that Westminster tradition gives the first right to form a government to the incumbent party, regardless of how many seats it gets.

The unceasing electronic bar fight / seminar that makes up our contemporary public sphere briefly filled the gap. Constitutional law professors, media talking heads and partisan trolls with five Twitter followers debated questions of “convention,” “precedent” and “principles of responsible government.” And just as quickly, the election was decided and the bar fight / seminar turned to new entertainments.

It might be worth thinking about the fact that this leaves an unexploded landmine in our political garden party. Sooner or later, the scenario Scheer raised will happen. The controversy was reminiscent of the debate that followed the non-Conservative parties’ brief attempt to replace the Harper government at the end of 2008. Lovers of Canadian party politics can point to numerous earlier examples of party conflicts over the rules of the Canadian political road, most memorably the King-Byng crisis of 1926.

In these situations, media speak loosely. Partisans argue partisanly. Once upon a time, perhaps, there were universally recognized constitutional experts such as the late Senator Eugene Forsey who could upbraid imprecise punditry and silence the hacks. But in today’s flattened opinion environment and general distrust of expertise, who will play that role when a future electorate steps on the landmine?

The Mysteries of Westminster Government

It would be nice to clarify beforehand how power would peacefully be transferred. So what can we say for sure? What are the fundamentals of how Canadian governments are chosen?

For most purposes, if we are interested in how the right to exercise executive power gets determined, we can fairly simply divide democracies up into those with a parliamentary system and those with a presidential system. In a Madisonian or presidential system like the United States, the legislature and executive are elected separately, and they frequently have different partisan alignments. In the United States, as I write, the Democratic House of Representatives is about to impeach Republican President Trump. Whether he is removed from office by the Republican-controlled Senate (which seems unlikely) or not, whether he is reelected or not, and whoever replaces him, we can expect American politics to be dominated by conflict and occasional compromise between the executive and legislative branches for the foreseeable future.

In parliamentary systems, this is not supposed to happen (although the Brexit imbroglio, touched on elsewhere in this issue of Inroads, shows it sometimes can). The executive is not separately elected. Instead, the right to exercise executive power depends on being able to get the support or acquiescence of the legislature. If the executive and legislative branch come into serious conflict, then one of them must go: either the executive by a change in government or the legislature by a new election.¹

Parliamentary systems differ in arcane and technical ways over how it is decided who has the right to be in government when the will of the legislature is not clear – differences that do not matter when there is a clear majority for a party or coalition, but can make a big difference when there is not. Many parliamentary systems provide for an explicit “vesting vote”: after each election and a transition period, the legislature votes on who the new executive will be.

For example, section 46 of the Scotland Act gives the Scottish Parliament the power to nominate one of its members as first minister during a transition period after an election or the fall of an old government. While that person is technically appointed by the Queen, in effect the Scottish people elect the legislature and the legislature elects the executive. Most democracies in the world that have avoided the United States’s separation of executive and legislative authority provide for some similar process.

But this is not how it works in the United Kingdom as a whole or in countries, like Canada, that have adopted its specific form of parliamentary government. While executive power depends on the “confidence” of the legislature, this is not determined by the relatively straightforward method of a vote at the outset of a parliament, but through conventions about when first ministers are supposed to resign and when governors are supposed to dismiss them.

Our Misleading Constitution

Canada’s written constitution, although it purports to declare “the Nature of Executive Government,” is in fact completely misleading on the subject. If you just read Canada’s constitution, you would think executive power belongs to the Queen (section 9), that she delegates it to a Governor General who serves at her pleasure (this went without saying), who in turn appoints members of the Privy Council (section 11), decides on judges (section 96) and has to agree to all legislation (section 91). The prime minister is not mentioned at all in the 1867 constitution – he or she is just an unnamed member of the Privy Council chosen and removed by the obviously more important governor general “from Time to Time.”

The prime minister only plays a cameo role in the rest of our written document: under section 35.1 of the Constitution Act, 1982, he or she gets the neocolonial power to decide what representatives of Aboriginal people will be invited to constitutional conferences about amendments that affect them and must attend such conferences. That’s it for textual attention to the most powerful official in the country. As a written document, the Canadian constitution is about as deceptive a guide to what is really going on as Stalin’s 1936 Constitution of the Soviet Union. Where the USSR was a personal dictatorship pretending to be a democracy, on textual evidence Canada is a democracy pretending to be a personal dictatorship.

One thing the constitution does make clear is that the executive power and legislative authority are legally distinct. This principle predated the loss of effective authority by the monarch personally and was inherited both in the United States (where the executive became directly elected) and in Westminster systems. The legislature is sovereign, except as limited by the constitution: the executive only has the powers the law gives it and in particular must always abide by statutes the legislature enacts. This legal superiority of the legislative branch can create a cynical contrast with the effective power of the executive in a system of party discipline – but it can also sometimes bite governments when they don’t expect it.

But despite the importance of this legal principle of legislative sovereignty, it would be unwise to rely very much on the text of the constitution to understand the link between “supreme executive power” and a “mandate from the masses” (in the words of Dennis the Peasant in Monty Python’s classic take on the British constitutional tradition). The unwritten “conventions of responsible government” provide the missing link between Canada’s monarchical written constitution and its representative reality. With very limited exceptions (the “reserve powers”), the governor must always do what he or she is told, whether by the first minister (for example, in appointing a cabinet), by cabinet (in passing an order in council) or by the legislature (in giving assent to legislation). Since the first minister decides who the cabinet is, the key question is who gets to be first minister.

The rule is not, as in Scotland, the positive one that the legislative body elects or nominates the first minister. Rather, the rule is a negative one. Canadian governments do not die of natural causes: they must be killed or commit suicide. A government continues until the first minister resigns or (much more rarely) is dismissed. The rules about when these things are supposed to happen are therefore all that keep Canada democratic. Governments are not elected: the provincial legislative assemblies and the federal House of Commons are the only elected bodies in our system.

Some of the conventional rules are clear. If a government loses the confidence of the elected house, then the first minister must either resign (in which case the governor will call on the leader of the opposition party with the most seats) – or ask for a new election. The governor will accede to a first minister’s request for a new election if the parliament has been around for a while – about six months – but not otherwise (unless it has been demonstrated that forming a stable government is impossible). In 1981, the Supreme Court of Canada added that there is a convention that if the opposition obtains a “majority” at the polls, the government must resign “forthwith” (in practice, whenever the new government is ready to be sworn in).

These bare-bones rules are enough to say what will happen with the current Parliament. Justin Trudeau has neither resigned nor been dismissed, so he remains Prime Minister. No opposition party obtained a majority at the polls, so there is no convention that requires him to resign “forthwith.” He will have to ask the Governor General to appoint a new cabinet. While it is conceivable that he might nominate cabinet members from other parties, he is under no obligation to do so.

The government will put forward a throne speech. The other parties will have the opportunity then or later to vote nonconfidence, but they will not do so unless they see the advantage of an alternative government or a new election. If the Liberals lost a confidence vote, they would not be entitled to an election for the first six months or so, but after that they could have one any time the Prime Minister decided it was in his political interests. (Canada has a fixed election date law, but it has an exemption in these circumstances and the courts have already made it clear that they will not get in the way.)

How Minority Governments Govern

That does not mean it would make sense for the Liberals to try to rule as if they had a majority, as Joe Clark rashly promised to do when he had a minority government in 1979. In the last Parliament, the Liberal Party controlled every legislative committee and the government could prevent any legislation passing against its will if it was willing to whip its own caucus. This is no longer the case. The Liberals have said they will not try to get a coalition or even a confidence and supply agreement with another party: they will have to rely on the desire of other parties to avoid elections to get budget measures and other matters of confidence through. But there is very little doubt about what is supposed to happen.

In 2008, when a coalition of the Liberals and NDP supported by a confidence and supply agreement with the Bloc Québécois declared its readiness to unseat Stephen Harper’s minority Conservative government, the Conservatives argued that this arrangement was illegitimate. Their argument was as constitutionally unfounded as it was politically effective.

In the current Parliament, the opposition parties could, in principle, vote nonconfidence in the government – so long as they do so early in the Parliament – and the Governor General would call on Andrew Scheer to be Prime Minister. This is what occurred in Ontario in 1985 (when Bob Rae’s third-place NDP supported David Peterson’s second-place Liberals) and in British Columbia in 2017 (when the Green Party supported the NDP). However, as a matter of political reality, it seems next to impossible to imagine the federal Liberals facing any similar effort by the opposition parties this time, even if the NDP had not explicitly ruled out cooperation with the Conservatives.

There is a Modern Convention …

But what if the 2019 election had been slightly different? What if the mysteries of voting efficiency and strategic voting had resulted in the Conservatives obtaining more seats than the Liberals? What would convention call for then? Was Scheer correct that “modern convention” implies that he would have had the first chance at governing?

If we go back far enough in Westminster history, the answer would have been a clear no. Trudeau would remain Prime Minister unless and until defeated on a confidence vote, and so the situation would not be materially different from the one that actually unfolded. This is because responsible government emerged out of a system of dual confidence: in the 18th century, for example, the government of the day required both the confidence of the monarch personally and the confidence of the House of Commons, so that the monarch could continue to tax and spend (“enjoy supply”). Just as a minister could continue on so long as the monarch had not announced that this was no longer his or her pleasure, so too he could continue so long as there was no affirmative denial of confidence or supply by the lower house. The Hanoverians eventually lost the practical ability to dismiss governments for policy reasons, so the principle of responsible government became in effect that the Crown hired governments and the House fired them.

But it would be wrong to end the development of the principles of responsible government at the accession of Queen Victoria. In Canada, the aftermath of the 1896 election created a new principle. The Conservatives, led by Charles Tupper, lost that election to Wilfrid Laurier’s Liberals, who obtained a majority of seats in the new Parliament. Tupper took the perfectly orthodox view that he remained Prime Minister until the House met. He hoped, no doubt, to do some kind of deal with some Liberal MPs – a greater possibility in the late 19th century than it has since become. But the Governor General, Lord Aberdeen, refused to take instructions from Tupper on appointments, forcing Tupper to resign. Aberdeen asked Laurier to take office as prime minister.

Tupper complained about this breach of the principles of responsible government for the rest of his political life, but Aberdeen’s actions seem obviously correct to us now. Two conventions come out of this event: first, if another party obtains a majority, an incumbent first minister must resign effective as soon as the other party’s leader desires, and second, in such circumstances, the government must act as a “caretaker” – a role that has now been expanded to the entire period from when the election is called until it is affirmatively established who has the right to govern (in a minority, by the acceptance of the throne speech).

In the early-19th-century model, while the House decided when a government came to an end, the Crown had a great deal of discretion about whom to call on and whether or not to give a defeated government the option of going to the electorate. Scholars like Forsey and, more importantly, actual practice have constrained that discretion to a number of rules.

In particular, it is now widely (if not universally) accepted that when a government falls, the Crown must call on the leader of the party with the next-most seats and that it is wrong for the governor to use his or her own sense of who could command a working majority. It is also now accepted, on the basis of both scholarship and practice, that an election request will be denied early in a Parliament (with exceptions where it is beyond reasonable dispute that no government can function) but will be granted after six months or so. These are all additions to the original rule that governments continue until they lose the confidence of the house. They make sense based on practice and on the principles that the Crown should avoid controversial partisan decisions and parties should be treated with symmetry.

Viewed in this light, Scheer’s claim (earlier made by Justin Trudeau) – that an opposition party that obtains the most seats in an election should get the first opportunity to meet the house – would appear to have merit as a “modern convention.” It is what has in fact happened federally in every minority parliament after 1925.

In 1925, Mackenzie King’s Liberals initially held on with the support of Progressive and Labour MPs despite getting fewer seats than the Conservatives – this constitutional fact has long been overshadowed by the more famous decision of Lord Byng to deny King an election when the Progressives pulled their support. After the 1926, 1957, 1963, 1979 and 2006 federal elections, incumbent governments stepped down when they received fewer seats, while no incumbent government at the federal level has ever stepped down when it received a plurality of seats in a minority parliament.

Precedents can be read in multiple ways. If a course of action has made political sense in the past, this does not mean it is a convention. A convention requires both that the course of action be viewed as binding and not merely strategic and that it make sense in principle. Here too, Scheer’s claim seems vindicated. The prime ministers who gave way all those times no doubt believed, without exception, that their own policies were better for the public interest than those of their opponents; they gave over power because they thought they were obliged to. In many cases, it is easy to imagine means they could have used to avoid a nonconfidence vote.

Moreover, Scheer’s approach is supported by principle. Having the first-mover advantage in a minority parliament carries the significant benefit that the other parties must affirmatively displace you – after six months or so, at the risk of an election. Therefore, if the system is to be symmetrical, this benefit should be allocated in a way that minimizes Crown discretion and reflects, as much as possible, how the people voted. That is why it is now more or less universally accepted that after a defeat, the governor should go to the party with the next-most seats to form a government. The number of seats, while not determinative of the ability to command the confidence of the house, is an objective fact based on how people voted, and not a subjective decision of the governor or existing first minister.

… But it Has a Caveat

While the Conservatives were right this time, there was some suggestion that they were going to take the position they took in 2008 as well: that if they got the most seats, they not only had first crack at government in the new Parliament, but also last crack – that any arrangement between the Liberals and one or more of the third parties would be illegitimate. I was unable to track down any example of Scheer himself making this claim, but there was some suggestion of it by some of his surrogates.

This would of course turn a reasonable position into the entirely unreasonable notion that governments, like parliaments, are chosen on a first-past-the-post basis. This is unreasonable because the whole point of responsible government is confidence of the majority of the legislature that can pass laws and approve taxes. If the Conservatives had received the most seats, Scheer would have the right to meet the House, but the House would have the right to vote him out and put Trudeau right back in office.

For this reason, there is undoubtedly a caveat to the modern convention. If an incumbent first minister could arrange for an agreement that guaranteed confidence and supply very quickly (say, in the first few days after the election), then calling on the leader of the party with the most seats would be pointless, since it would just lead to a nonconfidence vote and restoration. So, if the election had resulted in a situation where the Liberals came second, but there was a more or less instant promise of support from, say, the NDP, and the Liberals and the NDP combined could constitute a majority, then it would be legitimate for Trudeau to stay on. If it included this exception, Scheer’s “modern convention” could arguably even encompass the 1925 election, after which the Progressives and J.S. Woodsworth’s Labour group quickly supported the incumbent King government.

Many of those in the opinion sphere who argued that Trudeau could simply continue on if he failed to get the most seats cited Philippe Lagassé, a Carleton professor and expert on the Westminster system of government formation. In fact, Lagassé recognizes that, in Canada, incumbent governments that receive fewer seats in minority parliaments have not tried to stay in power since 1925 and that there is a norm that supports Scheer’s claim. He insists on calling this norm a “custom” rather than a “convention.” Since conventions are not laws and derive from the accepted norms of political actors, it is hard to see how this line can be successfully maintained.

To be fair, I would agree with Lagassé that a governor would probably not actually dismiss an incumbent first minister who tried to continue after getting fewer seats in a minority parliament. This is what Brian Gallant tried to do after the 2018 New Brunswick election, for example.

But where I disagree is that this is because there is no convention. A governor should only dismiss a first minister in the clearest of circumstances, where there is absolutely no doubt that the first minister ought to resign. For that reason, as long as everything is running correctly, a dismissal should never be necessary. A first minister should resign when convention dictates, and if first ministers regularly do resign – and feel themselves obligated to resign – in certain circumstances, then that is sufficient for there to be a convention. It is the first ministers themselves who are the first line of defence of conventions, although governors sometimes have to stand up for them independently.

If Trudeau had tried to continue with fewer seats and no agreement to get an effective majority – something he never said he would actually try to do – the Governor General might not have dismissed him on grounds of lack of clarity, but that would not itself mean he was acting appropriately. Of course conventions, like all norms, can ultimately cease to have force if they are violated enough, although they can sometimes get greater strength if they are violated but the violator is punished. This is true of Scheer’s modern convention – and also of all the other norms that are essential to the operation of the system.


Communism failed because it ignored human nature. The question the current era presents to us – the question that underlies the crises represented by the words Brexit, Trump and gilets jaunes, but will also outlast them – is whether liberalism has the same problem. Communism could not handle humans’ individual and familial self-interest. Can liberalism handle their inherent need to be part of a group that defines itself against other groups?

Groups define themselves against other groups not only in the sense that they distinguish themselves on the basis of what they are not but also, unfortunately, in the sense that they compete with those other groups for status and resources. This is inevitable. Liberalism is, at bottom, the conviction that state coercion – also inevitable – should be neutral between fundamental aspects of the identity of the citizens that are subject to it. Coercion can be justified, if at all, only if it serves the equal freedom of all who are subject to it.

As a result, liberalism in the 21st century has to advance an idea that will undoubtedly meet with resistance. Since modern states are as bound to be ethnically pluralistic as they are to be religiously pluralistic, liberalism must advocate separation between nation and state, just as it earlier fought for separation of church and state. The social fact that nations and states overlap but do not coincide leads inexorably, for liberals, to the normative conclusion that no state should belong to a single people. The ability of past liberals to avoid this implication of their basic principles depended on historical circumstances that are now passing away.

If this is right, then it is not surprising that liberalism is facing resistance, or that the triumphalist narratives of globalization and democratization from the 1990s look hollow. The concept of democracy and sovereignty that we inherit is bound up with the nation-state. If nations and states are to be separated, who is the “we” that makes democratic decisions? How does the state retain the loyalty needed to fulfill its functions? How do the ethnicities that have identified with that state for centuries understand themselves when their countries become postnational?

It has taken liberalism a long time to get to this principle, and as a pragmatic movement comfortable with power it will naturally try to soft-pedal the implications. But liberalism now faces a global countermovement and needs to get its foundational commitments straight. And pragmatism increasingly pulls in the same way as principle: the coalitions that could put liberal parties in power in the West do not belong to a single “people,” and they will want the policies they vote for to reflect that.

Many liberals have proposed “civic nationalism” as a halfway house. But this will not work. If civic nationalism involves no real ideological commitments, it is too weak to count as nationalism. But if it is strong enough to have any real ideological weight, civic nationalism is no more compatible with liberalism than the ethnic version.

While there is no doubt that problems and struggles lie ahead, liberalism does have resources to address this problem. Liberalism is the ideology best equipped to deal with “intersectionality,” the principle that one has multiple identities and that the way each identity is experienced depends on the presence or absence of the others. Intersectionality is usually associated with a radical moralism that does not fit well with liberalism, but this is a contingent fact that can be changed. With a less individualistic and a more intersectional understanding of why states need to be limited and pluralistic, liberalism could be an appealing philosophy for younger people in the West and could regain enough vigour to put up a fight against its populist enemies.

Liberalism and Nationalism

People need to belong to groups bigger than themselves or their immediate families, but smaller than humanity as a whole. And those groups necessarily define themselves by the fact that they are not part of another group. This is a phenomenon familiar to anyone who engages in political speech online and, indeed, to anyone who went to high school. According to paleoanthropologists, it was true of our ancestors on the East African savannah. Everyone has particularistic loyalties to “their own” – a phrase characteristic of George Grant, English Canada’s leading critic of liberalism – just because it is their own.

This is a problem for certain traditional liberal theories that focus only on the rights of the individual and the need for a state to define and protect those rights. The essential goal of that form of liberalism is to figure out how to constrain the state from becoming so powerful that it threatens the individual, while ensuring that it is powerful enough to protect individuals from one another. The classical liberal solution was a state governed by the rule of law and representative democracy, appropriately constrained by guarantees of individual rights.

In the 20th century, most liberals recognized that negative rights needed to be supplemented by progressive taxation and social insurance. In the English-speaking world this recognition was notably expressed in the 1942 report of Sir William Beveridge, a British aristocratic liberal whose work was enthusiastically embraced by the democratic socialist movement.¹ They also recognized that the state needed to play a role in regulating total demand to avoid periodic economic crises, as taught by John Maynard Keynes, another liberal toff who became a source of intellectual inspiration for labour politicians. But the key point about the whole picture is that it did not specifically refer to any groups other than the state as a whole. Individuals would react primarily to economic incentives. States would be insurance companies with navies.

To be sure, liberals always emphasized the importance of freedom of association and freedom of religion as ways of guaranteeing group loyalties defined in contrast to the state. The foundational struggle for liberalism was to detach the state from a particular church, but the coalition in favour of doing this rested fundamentally on the social community of minority churches. Liberals welcomed voluntary communities as a source of sense of meaning and loyalty to their “own.” The only price of membership in the liberal state was that these groups must not coerce members who seek to leave and must not threaten the state itself.

There may be doctrinaire cosmopolitan rationalists somewhere who are offended by any claim of a community less inclusive than humanity itself. These people are bound to be disappointed by humanity’s tribalism, just as a doctrinaire communist would be bound to be disappointed on realizing that real proletarians were never going to be the “new socialist man.” People differ in how groupish they are, a measure of personality that psychologists label “openness” and can quantify as one of the five basic dimensions of personality. There has indeed been an evolution in the “WEIRD world” – Western, Educated, Industrial, Rich Democracies – toward higher and higher levels of openness with each generation. But no one – not even an Esperanto-speaking world federalist – can exist without a tribe. Liberalism prides itself on being a pragmatic way of thinking that does not seek to coercively impose a utopian vision on people, but rather to give them institutional space to decide for themselves. It therefore has to learn to live with this fact about human beings.

The trouble begins with the question of what sources of group identity legitimately hold the state together. Groupishness is a universal human phenomenon; nationalism is not, and so groupishness on its own cannot explain it. For most of human existence, the groups that commanded loyalty and defined themselves against others were small enough for everyone to know one another. The territorial state as part of an international system is a product of European modernity, along with wage labour, the world market and the colonial empire.

Territorial states gradually undid the overlapping secular and ecclesiastical jurisdictions of medieval Christendom and replaced them with a single sovereign authority defined against other, similar sovereign authorities. These absolutist states needed to channel universal human groupishness into identities that secured their own cohesion. Modern institutions of public education were developed to try to reeducate the inhabitants of France and England (and later Italy and Germany) to be citizens of a country, in priority to all other smaller or larger loyalties. As the transformation of the Renaissance Kingdom of England into the 19th-century United Kingdom of Great Britain and Ireland suggests, this process could not occur without violence and the exclusion of stubborn attachments within.

In the 19th century, liberalism and nationalism were assumed to be allies. The “self-determination” of peoples seemed to be consistent with the self-determination of individuals. The idea of liberal nationalism was that each people would get its own state, once the “artificial” borders of traditional multinational empires had been broken up. The high point of this vision was Woodrow Wilson’s Fourteen Points. While it inspired many people at the end of World War I, it was fundamentally compromised by the reality of the Versailles Treaty and the use of the language of self-determination by the Nazis in their designs on multiethnic Czechoslovakia in 1938.

The trouble is that peoples do not conveniently locate themselves exclusively within contiguous borders. As a result, a state for one people is necessarily a state defined against some of those who live within it. Moreover, since history does not end and powerful forces drive peoples to move across borders, or cause them to have different rates of demographic increase, the ethnic relationships within the territory of the state will constantly change.

This problem could be ignored as long as those outside the ethnos but within the state could be ignored. This was never an option for countries like Lebanon, Belgium or Canada where no ethnic group could really triumph, but it could work as a matter of realpolitik in countries with numerically smaller ethnic minorities.

However, one of the features of liberalism is that it encourages internal critique, as the limit of the circle of equal, autonomous persons is expanded on the demand of those left outside it. Enlightenment liberalism was simultaneously a project of white bourgeois males and one making claims based on the situation of all human beings. This contradiction can be the basis for presentist condemnations of the racism and sexism of the Enlightenment project, but its more important consequence was that it provided rhetorical space for the excluded to demand change in the ruling elite’s own terms. The demands of equal liberty made by bourgeois white men have been rejected by leftist intellectuals, but actual progressive social movements embraced these demands while insisting that the scope of equality and liberty be expanded. Once this happened for peoples whose existence does not correspond to an existing (or even possible) border, the liberal answer to empire can no longer be nation, but rather some messy multicultural federation – a federation being a democratized empire.

Canada, for example, originated as the federal union of British North America. The Victorian conception of Britishness was complicated, involving racial mythology about Anglo-Saxons, the political economy of free trade, the science of the industrial revolution and the redescription of the common law as an instrument of individual freedom. “Britishness” meant different things to George Brown and to George-Étienne Cartier. But no matter what its exact connotations, the idea that any part of the world, no matter how distant geographically from the original islands, could be made British was an unmistakably imperial idea.

However, already in 1867, it was necessary to separate state and nation to accommodate the reality of a Catholic, French population that could neither be given full authority over a particular territory nor denied a share of political power altogether. This need to accommodate was not a given. It contrasted with how the British Empire treated the Acadians conquered in Queen Anne’s War in the early 18th century and with the hopes for assimilation expressed by Lord Durham – an English radical – in his 1839 report. After Durham, however, it was clear to Baldwin and LaFontaine that Canada could only be democratic on a binational basis.

This accommodation was originally offered primarily to French Canadians and to a lesser extent English-speaking Catholics, primarily of Irish descent. But compromises were also made with the Métis with the Manitoba Act and with the Indigenous groups of the west, at the nadir of their strength, with the numbered treaties. Although these promises were disregarded by the Canadian state with the full flowering of settler colonialism, they were not forgotten by those to whom they were made.

Since the 1960s, partly in response to Quebec nationalism and partly in response to its own increased diversity, English Canada has largely abandoned any British identity in favour of a “multicultural” one. At that time, English Canada expressed a nationalism directed primarily at the United States of America, and this nationalism remained politically salient up until the free trade election of 1988. But since then, urban English Canada has identified too closely with “Blue” America to be really nationalist, while the conservative belt of rural Canada has been more fertile ground for a populist nationalism that is ethnic in a broad sense. This has caused a counterreaction in urban Canada, which has basically rendered the idea of a Canadian identity based on peoplehood untenable.

Francophone Quebec’s initial response to secularization and modernization was a thinly disguised ethnic nationalism inspired by anticolonialism, and formulated now in terms of language rather than religion and descent. Left Quebec nationalism has obviously not disappeared, but it is no longer the beating heart of progressive Quebec. As in English Canada, it is issues of immigration and assimilation that have the most resonance with populist nationalism.

Much of Indigenous Canada has embraced an anticolonial nationalism of its own. Some have disclaimed any identification with the Canadian state at all. But for pragmatists, at least, the real objective is to be integrated into the Canadian federal structure with an alternative source of sovereignty to that of the federal and provincial governments, along with tacit or explicit acceptance that the sovereignty so claimed is one that is shared with the transethnic institutions of the Canadian state.

Canada’s situation in these respects is not simple and is the consequence of its own history. But it also echoes developments throughout the world, where the cause of liberalism and the cause of postnational states have become more closely identified. Moreover, the struggle over the nonidentification of nation and state has increasingly replaced the 20th-century struggle between labour and capital or over the amount of government redistribution as surely as that struggle displaced earlier ones about the place of the throne and the established church.

Brexit is the perfect example. Its proponents see themselves as protesting against a federal Europe displacing the sovereignty of the United Kingdom. But its main obstacle has been how it has disrupted quasi-federal arrangements within the United Kingdom itself, particularly in Northern Ireland, unfortunately the laboratory of the identity conflicts of modernity from the 17th century to the 21st.

To the extent that the separation of nation and state becomes a core liberal value, it will face a backlash, which will not disappear with better economic times. National identity fits well with basic human groupishness. It has been central to both personal identity and state formation in the West, and in the world influenced by the West, for centuries. It is therefore hard to imagine that declining ethnic majorities will abandon nationalism, and the pretense that it is nonethnic will become increasingly thin. But since the “people” as defined by the populists will never really be all the people in the state, and since those excluded will easily perceive this fact, the ethnic majoritarian coalition will inevitably give rise to a countercoalition.

Since liberalism is fundamentally defined by the idea that the state should not enforce one particularist conception of the good against dissenters, it cannot really be neutral in this conflict, which is going to define politics for the foreseeable future. Just as liberalism emerged as a pragmatic response to religious diversity, while often having to manage unwieldy coalitions of dissenters from the dominant religion, now it must do the same with ethnic identity. Contemporary liberalism’s demand must be that nation and state be separated. Its base consists of those who are threatened by uniting them. Principle and strategy leave no retreat: liberalism allows ethnic identity, but it must deny that identity state power.

The Mirage of Civic Nationalism

But is there a compromise between modern liberalism and nationalism that we can live with once ethnic nationalism is excluded? Is there a “civic nationalism” that is demanding enough to represent an alternative to the ethnic variety, while being consistent with liberal principles? A number of writers worried about the threat of nationalist populism to liberal institutions – including Yascha Mounk, Francis Fukuyama and John Judis – hope so.² Mounk, Fukuyama and Judis are all liberals and can all see that if nationalism defines itself by claiming that membership in the state should be coincident with membership in the nation, illiberal results follow. But they intervene to ask the “left” to embrace a “civic nationalism,” arguing that without a thick sense of national identity, there will not be the will to put together projects like the social welfare state.

If “civic nationalism” means nothing more than that it is good if the citizenry identify with the state they live in as a common enterprise, and reasonable that they expect it to look out for their interests, then it is consistent with liberal principles. In this sense, though, the “civic nation” plays no greater role than the “civic province” or “civic municipality.” Public-spirited Torontonians or Manitobans expect their local or provincial governments to look out for their interests. A patriotism about a country that is similar to that felt for one’s city can certainly be a benign sentiment that no liberal would quarrel with. But patriotism is an emotion, while nationalism is an ideology. An ideology must define itself against something. So “civic nationalism” – if it is worthy of the name – must define the nation in a way that excludes some “civic” perspectives.

As Americans, Fukuyama and Judis want something like a commitment to the Declaration of Independence, the Constitution and an optimistic, entrepreneurial attitude to life as an identity substitute for blood and soil. The dilemma is that any ideological identity that is thick enough to fulfill the emotional needs met by nationalism will be as exclusionary as an ethnic identity – and even more in conflict with the liberal commitment to free debate of ideas. The phrase “un-American” has a nasty connotation for a reason. For all the flaws of old Europe, and for all the problems with its essentially ethnic understanding of national identity, at least the concept of an “un-Dutch” idea makes no sense.

Let us take Canadian examples of the problem with an ideological conception of national identity. In Lament for a Nation, George Grant claimed that Canadian identity depended on a less individualistic and more deferential approach to social life than prevailed in the United States.³ As a result, he wrote the Liberal tradition off as hostile to Canada – even though it had been the dominant tradition since Laurier and, in the 19th century, had led to the development of responsible government and had been part of the grand coalition leading to Confederation. Grant had to distance himself from the obviously individualistic strains in the Diefenbaker Conservatism that he was defending. Not surprisingly, since Canada has always been a pretty individualistic place, Grant had to conclude that Canadian identity was doomed before it could start.

Grant’s Lament foreshadowed numerous attempts to tie public policies about which there should be debate in democracies to national identity, about which there cannot be debate. Grant did this with federal Crown corporations, a theme that their CEOs have taken up ever since. Liberals and social democrats did it with the Canadian model of medicare and, after 1982, with a Charter and model of judicial review borrowed from the United States. Conservatives responded relatively harmlessly by tying national identity to peewee hockey and coffee-and-donut chains and less harmlessly to a more militaristic foreign policy. The liberal objection to all this is that it makes support for or opposition to certain contingent public policies matters of loyalty to the state.

It is not clear that there is actually a positive relationship between a strong sense of national identity and social welfare. Countries that have long struggled with a common national identity – like Canada and Belgium – do not seem to differ in any important way on this dimension from countries that have not, like France and the United States. If the welfare state is an efficient means of delivering social insurance – and it is – then it is not clear why it would not be enough for its citizens to recognize this. It is an empirical question, and the empirical evidence is not very strong that a specific ideological commitment is necessary for people to be public-spirited.

More fundamentally, though, civic nationalism faces the strategic and political problem of having no constituency. The resistance to Trump and Brexit, for example, comes primarily from the people who feel most excluded from their definitions of “American” or “British.” An opposing coalition must consist of people who have a wide variety of incompatible identity commitments. Negotiating such a coalition requires bracketing various commitments and promising them some space in the public policy that will result if the coalition succeeds. Liberalism is good at creating that kind of space. One possibility would be to define the common denominator of the antiethnic coalition as the “true” civic national identity. But this would just further enrage the majoritarian populists: they would not be “true” citizens of their own country! It is better to recognize the inherent asymmetry in the two contending coalitions.

The Intersectional Liberal Federalism of George-Étienne Cartier

We cannot put the question off any longer. If neither a single ethnic identity nor a single political identity for a state is compatible with liberalism, then how does liberalism learn to live with groupishness? Are we stuck with the pessimistic conclusion that the principles of John Stuart Mill and Benjamin Constant are for a species with a different evolutionary history from our own, possibly descended from solitary gibbons? As Edward O. Wilson, one of the world’s experts on ants, said of communism, “Great idea, wrong species.”

One response is that no one ever said it would be easy. Liberalism, like socialism, has a tendency to think of its success as guaranteed by history, so that when inevitability is put in doubt, the alternative seems to be despair. A more realistic approach would be to keep normative commitments separate from short-term success or disappointment.

Still, liberals do need a strategy. An alternative might start with the observation, banal on the cultural left, that identities are “intersectional.” This much-mocked word contains two useful and undeniable insights. The first is that every person is subject to multiple particularist loyalties and experiences: we are not just women/men or Canadians/Americans, but Canadian women/American women/Canadian men/American men – in exponentially more specific intersections of these sets. The second is that the experience of being part of the same group differs depending on the other groups to which one belongs: African-American women differ from African-American men not only in their gender identity but also in how they experience their racial identity. This example can be generalized indefinitely. Intersectionality is, in this sense, an undeniable fact.

And it is a problem for nationalism of any kind. That is because nationalism needs to elevate one identity cleavage to supreme importance while diminishing all the others. For a consistent nationalist, one must be an American or a Pole, but not a Polish-American. At minimum, such fractures are threatening to the national identity and to the idea of one people. From liberalism’s perspective, however, this is good news. Its enemies have a problem with the species they belong to as well, since, in fact, people do not spontaneously keep to a single identity.

But liberals have generally been suspicious of intersectionality. One problem is that those employing intersectional vocabulary tend to confuse oppression with virtue. They will often treat every identity distinction as a vertical one of oppressor and oppressed, and never a horizontal one of groups that must share a common space. Moreover, they will explicitly say that the oppressor can never judge – or even understand – the claims of the oppressed. If taken to the extreme, this would mean that differences could never be justly resolved or even effectively negotiated. Liberals have always differed with radicals in that they doubt that a politics in which the perspective of the “oppressor” can be ignored entirely would be either just as a moral matter or likely to succeed as a prudential matter.

But many liberals who object to intersectionality fail to recognize that an acknowledgement that everyone’s identity is complicated constitutes the best argument against radicalism. Very few people would fit into all the “oppressor” boxes, and even fewer would be “oppressed” in every respect. Even at the individual level, everyone has to find more or less principled compromises. An intersectional radical cannot pretend that there will be a single revolutionary subject, like Marx’s proletariat.

Another problem for liberals is the worry that a focus on intersectionality will lead to despair about the possibility of communication and collective action. If we cannot talk across identity categories, or if statements must simply be accepted, then a virtually infinite proliferation of such categories would create a hyperindividualized nightmare of noncommunication. On this point, it is precisely the liberal tradition, which has long been focused on the problems of common governance across divides of commitment to comprehensive worldviews, that has the resources to be useful to those concerned with the intersectional nature of identity.

Canadians should be more familiar than they are with George-Étienne Cartier’s 1865 speech in favour of Confederation, in which he called for a new “political nationality” with which neither the “national origin” nor the “religion of any individual” would compete. In Cartier’s vision, this political nationality would be shared by people of all parties and was not intended to replace ethnic or religious loyalties. In Cartier’s exposition on the new federal scheme, the political nationality of being a Canadian would serve those interests where religious, linguistic or ethnic identity was irrelevant. Cartier accepted that English-speaking Protestant Upper Canadians, French Catholics, Irishmen and Maritimers would all need to be represented in the councils of the political nation; were he alive today, he would no doubt modify his list to include women, Indigenous people and visible minorities.

Cartier’s concept of a political nationality should be distinguished from a civic nationalism dependent on allegiance to a substantive political ideology. Just as ethnic, linguistic and religious nations would meet in the institutions of the new political nation to hammer out their differences, so too would ideological groups. No doubt Cartier’s approach presupposed that anyone engaging in sedition against the state order would be suppressed. But it did not require any greater commitment than a willingness to work with the institutions as they existed.

One advantage the Confederation generation had over us today is that the socially and economically dominant English-speaking Protestants of Canada West were able to conceive of themselves not only as the true British North Americans but also as a section within British North America. As such, they advanced their interests through their representatives in a framework that implicitly accepted that others would also advance their interests. This did not prevent various identity panics on the part of this group, from the Riel rebellion through the Manitoba Schools controversy to the Conscription Crisis in the First World War. But through all of this, a framework remained in which Protestant Ontarians participated as one (loud) voice among many.

By contrast, neither left nor right is comfortable viewing the declining demographic “majorities” of the West today as one identity group among many – with legitimate interests, but also with an obligation to compromise those interests with the interests of others. For right populists, these groups just are “the people” and their identity demands are the demands of the nation as such. “Race” or “ethnicity” is something that only the Other has, which implies that the majority is raceless and without ethnicity. If challenged, the declining majority identity points to its acceptance of the principle of colour-blindness in law and its openness to the support of members of minorities willing to assimilate unreservedly to the majority.

The left sees through this and is understandably reluctant to acknowledge the legitimacy of a majoritarian identity politics. However, the left then goes on to insist that the majority just accept the moral untenability of its own identity as the corollary of accepting that its identity is just one among many. Instead of just making a principled demand for the separation of nation from state, the left in effect asks one people to cease to exist altogether.

The challenge is how to turn declining majorities into participants in multicultural compromise. Only on the racist and fascist far right is the contradiction resolved in favour of explicit advocacy of “white” interests, and of course any use of multicultural language from this corner is just cover. This problem is insurmountable so long as the majority ethnicity defines itself as “white,” “French,” “Dutch” and so on. The political entrepreneurs who seek to redefine majoritarian concerns in terms that speak to identity will probably continue to claim to speak for “the” people, as Trump and the Brexiteers do. From a liberal perspective, this is no better than an explicitly racist appeal, because it dissolves the universal into the particularities of one group.

While denying the concept of human nature, communism did speak to some perennial human characteristics: a longing for collective action, a dislike of hierarchies of rank and status. Its failure was its inability to integrate these aspects of human nature with others. Nationalist populism speaks to the need to be part of a group, and the need for that group to be “one,” but it suffers from the reality that our identities are multidimensional. On the left, this has been understood as “intersectionality,” but this insight has suffered from the left’s lack of attention to institutional realism. Liberals should give up on trying to renew their tradition by going back to the well of a single national identity and instead embrace the multiplicity they are best placed to reconcile with social order.


Liberalism, broadly understood, is on the defensive. As political scientist Larry Diamond has pointed out, while the number of liberal democracies increased from the early 1970s to the turn of the millennium, since then we have been in a “democratic recession” with global measures of freedom – understood in a liberal sense – in decline.

Twenty years ago, economic determinism seemed to be on liberalism’s side. When the 20th century ended, it seemed that free markets, political democracy and a liberal version of the rule of law were the secret of economic success. It was widely thought that this had been demonstrated by the collapse of the Soviet bloc and by the success of the newly democratic east Asian tigers like South Korea and Taiwan. But today, the continued economic rise of the People’s Republic of China and the apparent stability of its one-party system of “socialism with Chinese characteristics” have made that claim pretty hard to sustain.

At the beginning of the new millennium, the conventional wisdom was that new information and communications technology would empower people in authoritarian countries to overthrow tyrants while deepening democracy at home. While there are some examples that have vindicated this hope, few people using Facebook or Twitter today feel these are unmitigated blessings. The reversal of democratic advance in the developing world, the success of Vladimir Putin’s Russia in pushing public opinion in Europe and the United States toward a nationalist right or antimarket left, and the deepening epistemic closure of the various political tribes in the rich countries make any technologically determinist optimism increasingly implausible.

In the 1990s, it seemed as if freer movement of goods, people and capital did not even have to be argued for. It was inevitable. The idea that “globalization” was an irresistible force was shared by those who favoured it and those who trashed downtown Seattle to protest it. But since September 11, 2001, borders have become harder and religious and civilizational identities sharper. And since October 2008, the faith that markets should be left alone to increase wealth has been shaken, leading both to a healthy rethinking of global finance and a revival of mercantilist ideas that in trade one nation can win only if another loses. Never mind that Adam Smith and David Ricardo showed more than two centuries ago that voluntary transactions usually leave both parties better off. In these populist times, who is going to listen to dead white males who were also globalist elites?

The ideological tendencies that have been the pillars of the Western liberal consensus since the Second World War – social democracy and Christian democracy – appeared perfectly healthy when the world woke up to find out that the Y2K panic was overblown. Today, both are in electoral decline, losing ground to populist nationalists on the right, hard-line Marxists on the left and idiosyncratic personality cults in the “centre.” Broadening our perspective to democracies in the global South complicates the picture, but also provides reasons for disquiet. I write shortly after the first round of the Brazilian presidential election, in which Jair Bolsonaro – a right populist long considered marginal in the political scene – obtained 46 per cent of the vote (Bolsonaro was elected President in the October 28 runoff).

Not long ago, the English-speaking world seemed different. Granted, it had been through some unwinnable wars and a financial crisis. But anglophone elites could smugly reassure themselves that a foundational liberal consensus, spanning the electable left and the electable right, would not be seriously threatened in the lands of John Locke, Thomas Jefferson and John Stuart Mill. But then came Brexit – and Trump.

To be sure, there is nothing intrinsically illiberal about leaving the European Union. Some Brexiteers argued that a fully sovereign Britain could recapitulate the liberal Little England dreams of Richard Cobden and John Bright by developing its own tradition of rights protection and enter into freer trade relations with the world. But polling evidence suggests that few Leave supporters are interested in a more open Britain, as opposed to preserving what they see as its historic identity. Moreover, leaving Europe has complicated the greatest liberal achievements of the Tony Blair years in finding a political accommodation for the contending Unionist and Nationalist identities in Northern Ireland. While electoral politics in the eras of John Major, Blair and David Cameron were dominated by a broadly liberal consensus around a civic definition of national identity and support of markets mitigated by social insurance, the major parties in Brexit-era Britain are dominated by a nationalist and nostalgic right and a left that is profoundly suspicious of business, markets and the institutions of the liberal international order.

As for Trump, as I write (in October), it seems unlikely that his populist nationalism will radically change America’s institutions. While he has made immigration enforcement nastier, for the most part he has left policy to conventional congressional Republicans who favour lower taxes and less regulation. But Trump clearly has transformed the rhetoric of the American right in a way that does not seem obviously reversible. Ronald Reagan and the Bushes rhetorically embraced the conservative conception of America as an idea – one of democratic politics, personal freedom and free markets. Trump instinctively rejects this bourgeois-liberal view of human nature, and his emotional connection to the Republican base (which includes most politically active white Christian Americans) shows, to my mind, that they instinctively reject it as well. Trump has consistently refused to claim that America is, or should aspire to be, morally superior. Trump values America solely because it is his, and he identifies its interests with his own. While all American presidents have failed to live up to liberal democratic ideals, he is the first in living memory to reject them.

The relationship between Trump’s Twitter stream and actual public policy is unclear. What is obvious is that he can, to the approval of approximately 40 per cent of the American electorate, deliberately dehumanize ethnic and religious groups and rage against norms constitutive of American liberal democracy such as the independence of criminal prosecution from partisan politics. It is hard not to worry about how far a more disciplined leader of the same authoritarian coalition might get in future.

Mounk: Sensible proposals from the centre-left

Yascha Mounk, The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It. Cambridge, MA: Harvard University Press, 2018. 400 pages.

The decline of support for the institutions of liberal democracy is not confined to a single country or a single age group. A number of depressing statistics are laid out in gory detail in Yascha Mounk’s The People vs. Democracy. Across western Europe and North America, trust in democratic institutions has been declining since the 1950s and is now at all-time lows. Each age cohort is less committed to democracy than the previous one: while 71 per cent of Americans born in the 1930s told pollsters it is “essential” to live in a democracy, only 29 per cent of those born in the 1980s gave the same answer. Similar results can be shown in every wealthy democracy, including Canada. More people support military rule (16 per cent of Americans in 2011) and a “strong leader who does not have to bother with elections” (32 per cent) than ever before. While older voters are more likely to support democracy in the abstract, they are also more likely to express racial resentment and, at least in the United Kingdom and the United States, to vote for right populists like Donald Trump.

The democratic recession is now undeniable and can no longer be dismissed as a blip. It requires rethinking the certainties of the 1990s. Rethinking is something liberals are good at. For the liberal intelligentsia – very much including those on the right primarily motivated by free market economics and keeping the liberal world order secure – the Trump election in particular has finally destroyed whatever complacency survived 9/11 and the 2008 financial crisis. Naturally, a demand for “big think” books to tell us what this all means has never been greater, and a supply has followed.

Mounk’s contribution to this literature approaches the problem from the antipopulist centre-left. His analysis begins by reminding us of the tension between democracy (a system of majority rule) and liberalism (a system of limitations on government). To be sure, some limits on what governments may do in repressing opposition and competitive sources of power are necessary for democracy to continue. But there is no guarantee that the majority will want these, or any other, liberal guarantees.

Mounk grants that liberalism can restrict democracy in questionable ways. Institutionally, judicial review, independent central banks, trade and investment treaties and other international institutions – most dramatically, the European Union – have reduced the choice set of elected politicians compared with the postwar era. These golden handcuffs were put in place out of a legitimate fear of illiberal demagogues. But such unaccountable institutions foster a sense of learned helplessness in the public. If all the important decisions are going to be made by the central bank or constitutional court or in Brussels, how much does a vote matter anyway?

While democracy in its most minimal sense merely requires that there be reasonably competitive elections in which the people can freely choose among contending elites to govern them, as an ideal it aims at equality of political influence. But in real democracies, it is the concerns of those with access to wealth, status and education that most sway public policy. While globalization has vastly increased the incomes of people in poor countries since 1980, it has also brought greater inequality of wealth and income to the rich world, especially its English-speaking sector. As a result, the rich countries have become less substantively democratic. Mounk notes all of this and, as a social democrat, he has proposals to improve the well-being of the population in the bottom half of the income distribution.

Mounk notes the risk of liberal institutions restricting democracy but, in light of his diagnosis of the dangers of populism, he is not willing to support reversing this. It is the danger of illiberal democracy that lies at the heart of his analysis. Although Mounk occasionally mentions the increasing strength of the pro-Russia left, he (correctly, in my view) focuses on the populist anti-immigrant right.

Like the far left, this tendency is generally supportive of (and reportedly assisted by) Vladimir Putin’s Russia. It defines “the people” in ethnic terms and is hostile both to cosmopolitan elites and to immigrants as outsiders. In Hungary and Poland, the populist right has taken power and has “deconsolidated democracy.” Mounk sees similarities in these European developments to those in Turkey, Russia and India. It remains to be seen how far things go in Italy. For Mounk, as for Steve Bannon and (in some moods) Trump himself, the 2016 U.S. presidential election was the first step in a similar deconsolidation in America itself.

Mounk’s comparative approach is welcome context for North Americans marinated in the latest newsflash about the Trump administration, but with no comparable connection to events in Europe, let alone Turkey and India. There is no doubt that the European populist right and the Trump wing of the Republican Party have inspired each other.

At the same time, like any comparative enterprise, Mounk’s runs the risk of throwing together very different national situations. It is possible to argue that “authoritarianism” is a single thing that either “will happen here” or will not. But liberalism and democracy are both things we can have more or less of. Digging into Mounk’s discussion of what is happening in individual countries, it becomes clear that except for places like North Korea, authoritarianism, populism, democracy, corruption and even liberalism are not all or nothing. This seems to be true in North America as well. Antiterrorism panics have made us more illiberal in some ways. But victories by minorities have made us more liberal in others. Is America under Trump really less liberal than under McCarthy and Jim Crow? Or even than it was in the immediate aftermath of 9/11?

Quite properly, Mounk will not be diverted by any easy optimism, even one based on reminding people how bad the good old days really were. He can show that support for democracy has been declining everywhere in the West as memories of fascism and even Communism fade. He attributes the problem to social media, economic stagnation and “identity” – by which he means a feeling of threat and resentment among ethnic-racial majorities against outsiders. I find the simplest explanation best: the rise of the populist right is the result of identity threat. While social media make the situation more visible, I doubt they have independent causal force. And Mounk’s own data show that there is no real link between economic prosperity and support for the populist right. The populist right is strongest in countries like Hungary and Poland that have had the most economic growth. In general, it has been steadily increasing in strength, and the 2008 financial crisis does not seem to have made a particular difference. By contrast, Angela Merkel’s decision in 2015 to suspend enforcement of the EU’s Dublin policy and stop returning asylum seekers to their first port of entry in the EU in the wake of the Syrian refugee crisis appears to have been a turning point.

We might as well face the reality that ethnic identity is a powerful political motivator, and that the populist right can most convincingly tell threatened or resentful ethnic majorities that it will fight for them. Greater inclusion for other citizens of the country is thus coded as threat. Liberals can and should try to frame greater inclusion as of net benefit to everyone, and viewed from the perspective of economic interest this is largely true. Social democrats should put forward proposals about how to minimize economic disparity. But the very fact that societies with all levels of redistributive institutions and economic growth are facing the same phenomenon shows that economic solutions are not enough.

Mounk proposes a very sensible set of principles for centre-left reform of the welfare state, including coordinating taxation of the internationally mobile, ensuring that people who own property in a country pay taxes there, increasing housing supply and decoupling benefits from work. Western politics would be improved to the extent that we focused on these issues. But the real problem is still how to defang the identity threat felt by ethnic majorities. Mounk supports reinvigorating civic nationalism, in recognition that nation-states remain the locus of democratic decision-making.

If this civic nationalism is of the kind that public-spirited people feel for their cities and towns or subnational units, then I am all for it. But this is too weak a brew for genuine nationalists. I am sceptical that a “creedal nationalism” of the kind found in the United States, Canada and Australia is really much better than an ethnic nationalism. The problem with identifying citizenship with a set of beliefs is made obvious by the phrase “un-American” beloved of Joe McCarthy or, less seriously, the tendency of the Liberal Party of Canada to identify its own shibboleths with being Canadian. For all the problems with national identity in Europe, at least you cannot imagine someone getting hounded out of a job for “un-Dutch” opinions.

Of course people will always have particularistic loyalties broader than their families and smaller than the human species as a whole. But I see no reason that liberals should concede that these loyalties have to be targeted on one single entity. Liberals can support various kinds of federal approaches to inevitable identity conflicts, while preaching the truth of the cosmopolitan insight that a person’s moral worth does not depend on where they are born. That truth may not be popular, but showing the courage of your convictions – as Mounk urges liberals to do – means taking lonely stands.

Despite Mounk’s half-hearted support for civic nationalism (and if it could be quarter-hearted, I might even go along!), I consider his book an excellent guide to our current perilous state. Well-written, factual and with sensible proposals for orienting the resistance to right populism, it should be on the secular wintertime gift list for anyone with a liberal cosmopolitan in their life open to big-picture rethinking.

Goldberg: The battle for the true meaning of conservatism

Jonah Goldberg, Suicide of the West: How the Rebirth of Tribalism, Populism, Nationalism and Identity Politics Is Destroying American Democracy. New York: Crown Forum, 2018. 464 pages.

Sadly, I cannot say the same for Jonah Goldberg’s Suicide of the West. My inability to endorse the book is regrettable because Goldberg, despite becoming wealthy and well known as a happy warrior for the American right (his Liberal Fascism: The Secret History of the American Left from Mussolini to the Politics of Change was a bestseller), has been politically orphaned by the Trump phenomenon. He acknowledges that Trump’s rise in the Republican Party demonstrates that he got it wrong. The American right, he now realizes, is not currently a coalition held together by a commitment to free markets, family values, a vision of foreign policy or a strict reading of the U.S. Constitution. Rather, its glue consists in the identity politics grievances of white Christians (in an ethnic, if not doctrinal, sense). This sense of grievance and the way it is expressed have obvious analogies to the left-wing “identity politics” activists that Goldberg targeted, but without the justification of any genuine history of oppression or subordination.

In the end, Goldberg finds that this right-wing identity politics is an understandable-if-regrettable response to the excesses of the left. I find him unconvincing, but I do not rule out the possibility that a well-presented argument for this thesis by a conservative writer could make for an interesting book. Unfortunately, Suicide of the West is too unfocused to do the job. Goldberg likes serious ideas and discusses various theories of the origins of the Industrial Revolution and the Enlightenment, the thought of Rousseau and the influence of Romanticism on Hollywood, with a long detour on Woodrow Wilson and the origins of the American administrative state.

Unfortunately, Goldberg is obviously out of his depth and should have focused on the postwar American conservative movement that he knows extremely well. He starts by saying God does not appear in the book, but he immediately attributes providential qualities to what he calls “the Miracle,” a combination of the Scientific Revolution, English common law, laissez-faire capitalism and the American constitution as it was before Woodrow Wilson and Franklin Delano Roosevelt ruined it. Rousseau (although clearly a leading figure in the Enlightenment) and Romanticism-influenced Hollywood (although clearly a major part of what made American capitalism great) are the bad guys. The British Empire and pre-Roosevelt America had their faults (slavery and the dispossession of Indigenous people), but according to Goldberg these were inessential. What was essential were the benefits of longer life expectancies and greater personal freedoms that we enjoy today but are on the verge of losing if we do not show enough gratitude for the Enlightenment and eschew Romanticism and all its works – which include the aforementioned Woodrow Wilson, gender studies and Donald Trump.

Goldberg is unable to say how the combination of economic and technological progress and liberal values got started in western Europe in the first place. Fair enough: experts argue about this and no one really can say. But by contrast, he is sure why they are threatened: their beneficiaries are not “grateful” enough for what they have brought to us.

The European Enlightenment and the United States of America are, like all human things, a mixed bag. They are not a “choice” and they are not going to commit suicide merely because people are not reverential enough towards them. Goldberg is aware of the paradox that the greatest critics of the institutional legacies of the Enlightenment and liberalism are the ones who have most thoroughly accepted the Enlightenment’s demand that authority be justified in light of the equal freedom of all. But he fails to see that this paradox cuts in both directions.

Colonialism, slavery and racism were just as essential to the Enlightenment and the U.S. Constitution as science and rights. As Orlando Patterson has argued, ideas of freedom and redemption have always been understood in terms of slavery and manumission. Or as Dr. Johnson put it, “How is it that we hear the loudest yelps for liberty among the drivers of negroes?” When setting out his thesis that the identity politics of the populist right is a response to the left, Goldberg never even considers the obvious progressive rejoinder that the identity politics of racial minorities was a response to the identity politics of white (originally, white Protestant) America.

To be sure, there is a sense in which the abolition of slavery was the “truth” of the Declaration of Independence. But that sense is a retrospective sense, made possible by the clash of the Civil War and the rhetoric of Lincoln. This was not a cheap truth, but Lincoln recognized that if the price were “all the wealth piled by the bondsman’s two hundred and fifty years of unrequited toil” and “every drop of blood drawn with the lash … paid by another drawn with the sword” the redemption would still be providential. This was identity politics with a vengeance, and out of it a genuine idea of freedom was born, and then betrayed with the defeat of Reconstruction. The civil rights movement of the 1960s was yet another attempt at redemption – and Goldberg’s movement, in its modern Goldwater-Reagan form, regained the majority status it had lost with the New Deal by opposing this attempt.

In his battle with the Trumpites over the true meaning of the Goldwater-Reagan movement, Goldberg should also realize that this truth is defined rhetorically and retrospectively. Goldwater and Reagan won over the base of Dixiecrats and George Wallace followers to a vision of creedal nationalism compatible with de jure racial equality and more universal values. Many on the left have failed to see the moral progress implicit in this transformation, a moral progress exemplified by the room that the coalition now makes for black conservatives. At the same time, many on the more ideological and cosmopolitan right have been in denial about the historic roots of their coalition or the attitudes of its Trumpite followers about social insurance and globalization.

The ideological right was blindsided by the attraction felt by the “base” for a protectionist former Democrat with zero interest in conventional virtues, the Atlantic Alliance or free market orthodoxy. Trump presented as a tough guy who would fight for “real” Americans against foreigners and “unreal” Americans. The conservative intelligentsia exemplified by Goldberg were surprised by the true feelings of their own movement in a way that the most knee-jerk American progressive was not. In effect, the National Review crowd made the same mistake about the “base” that they made in invading Iraq: because they think of themselves as people who put ideas above identity, they assumed others would as well.

Those who feel left behind by progressive changes need representation too. It was inevitable that this group would sooner or later rebel against being voting cattle for a project of ultramarketization that was never the reason they joined in the first place. If we agree that all politics is identity politics of some kind, the problem becomes how to represent their interests in a civilized way, make appropriate compromises and bring home the bacon. Some of Goldberg’s colleagues, such as Ross Douthat, Reihan Salam and Yuval Levin, have started down this road. But Goldberg has not. He is correct that liberalism, in its left and right forms, is a creed that cuts against the tribal aspects of human nature. But it should not ignore them.

Fukuyama: Equal or superior recognition?

Francis Fukuyama, Identity: The Demand for Dignity and the Politics of Resentment. New York: Farrar, Straus and Giroux, 2018. 240 pages.

Standing between Goldberg and Mounk is Francis Fukuyama, whose Identity: The Demand for Dignity and the Politics of Resentment ploughs much of the same ground. A former neoconservative who remains critical of the American left for failing to connect, Fukuyama gets pride of place when it comes to “rethinking,” if for no other reason than that his 1989 article “The End of History?” (and his 1992 book The End of History and the Last Man) crystallized the post–Cold War liberal optimism now being rethought.

To be sure, The End of History was not the triumphalistic book it has been caricatured as being since it first came out. When Fukuyama referred to the End of History, he was not claiming that, after the collapse of the Soviet bloc – which occurred after the publication of the article and before the book – there would be no more events. Fukuyama was using the word history in a distinctive sense that owed its meaning to the early-19th-century German philosopher Georg Hegel, as interpreted by the mid-20th-century Russo-French philosopher and framer of the European Union, Alexandre Kojève.

For Hegel, as understood by Kojève, “history” as a coherent narrative can be contrasted with a more or less random sequence of events to the extent it is a development of ideas of freedom. Hegel saw in the aftermath of the French Revolution the generalization of the idea that everyone is a rights-bearing free subject. State authority could no longer be justified as the natural right of the strong to rule; it had to be rationally justifiable in light of this equal freedom. For Hegel – at least as Kojève told it – once the powers of the world gave even lip service to this idea, history was over. It did not matter that the rise of America and Russia and the world wars – all of which Hegel predicted – lay ahead. The abolition of slavery, universal suffrage, the rise of the labour movement, women’s legal equality, the end of the traditional European imperial dynasties and the fall of colonialism were all – from this Olympian perspective – details about how to work out this revolutionary idea and therefore did not count as history at all.

As the Soviet bloc fell, Fukuyama argued that this showed Hegel had been right after all. Although Fukuyama has often been interpreted as saying 1989 represented the end of history, he was in fact suggesting that it showed Hegel was right when he said that 1806 had that paradoxical result. Marx’s claim that history would end after the revolution was no longer believable. There was no longer an appealing potentially universal idea that could compete with liberal democracy in the sense of a mixed economy with guaranteed rights for individuals and competitive elections based on universal suffrage. Fukuyama did not dispute that there were many particularistic ideas that would continue – loyalty to family, clan, sect or nation among them. But his assumption at that time was that particularistic ideas were ultimately no match for universalistic ones, so if Marxism was no longer a competitor with liberalism in that space, history had indeed ended along with the Holy Roman Empire.

While Fukuyama thought that a modern economy required price signals and therefore some play for market forces, he was not making the argument beloved by the Economist magazine of the era that liberal capitalism would sweep all before it as a result of economic forces. He noted that authoritarian development in Asia (including China under Deng Xiaoping) was perfectly consistent with rapid technological development and some degree of market mechanisms. Fukuyama followed Hegel in believing that history is not primarily about economic forces, but about struggles for recognition, dignity or status (what he called thymos). Fukuyama noted that the demand for recognition can be either the demand of the subordinate for equal recognition (isothymia) or of the dominant or would-be dominant for superior recognition (megalothymia). He thought that both are deeply rooted in human nature. The advances of democracy, from the overthrow of the remnants of European fascism in Greece, Spain and Portugal in the mid-1970s through the realization of democracy in South Korea and Taiwan in the 1980s to the dramatic revolutions of eastern Europe in 1989 and the collapse of apartheid in South Africa in the 1990s, were expressions of the demand for “isothymia,” equal recognition, a demand that had taken liberal ideological form since the time of the American and French revolutions.

Marxism shared the basic Hegelian belief that it was possible to make sense of history, and that its coherent narrative can be explained in terms of the expansion of freedom through the struggle of the “slave” to obtain equal recognition. By contrast, after the mindless carnage of the First World War, most non-Marxist intellectuals became persuaded that the idea of any meaning to history at all was merely a secularized version of Christianity. Fukuyama had a point in noting that the spread of revolutionary ideas across the Soviet bloc in a short time suggested this dismissal was too quick. Ideas of global scope could still give history a meaning, if only after the fact and from the perspective of the present.

In addition to the criticism that Hegel-style history found too much sense in what he himself knew was a slaughterhouse with no one in charge, another criticism was that it was crudely Eurocentric. There is no denying the truth of this criticism as applied to Hegel himself: he dismissed Africa and pre-Columbian America outright and saw the civilizations of China and India as simply preparations for Greece and Rome. Hegel was the product of his time, of course. Paradoxically (or dialectically), the demand that institutions reflect isothymia unleashed by the European Enlightenment has been turned against the exclusions of the Enlightenment itself. John Locke, the avatar of the equal natural rights of all and toleration of religion, was an apologist for slavery and dispossession of Indigenous people. Immanuel Kant, author of “Idea for a Universal History from a Cosmopolitan Point of View,” was also the author of several tracts of “scientific” racism. From a postcolonial perspective, this was not just a matter of the failings of particular individuals. Rather, the scientific and liberal intellectual achievements of the Enlightenment both enabled and were rooted in European domination of the rest of the world.

For the End of History–era Fukuyama, it was sufficient to respond to these criticisms by noting that they were phrased in an ideological vocabulary rooted in the demand for isothymia that was itself a product of the Eurocentric process of history Hegel had described. “Eurocentric” can only be felt as a criticism if equal recognition is accepted as an ideal. This in itself demonstrated the universal implications of the specific and interrelated historic developments of Western science, economics and politics. While ideas that originated in the West could be turned against Western domination, in doing so the critics were acknowledging that their ultimate aspiration remained “getting to Denmark.” As non-European societies absorbed an increasing proportion of this aspiration, the contradiction inherent in the European origins of the ideal of equal recognition and its cosmopolitan implications would fade. Cultural relativism, like the Marxist state after the revolution, would wither away.

In 1989, Fukuyama was writing against a tradition of deep intellectual pessimism about the prospects for bourgeois liberal democracy dating back to the First World War. His claim that a demand for equal recognition is rooted deeply in human nature and that this demand’s only sustainable ideological expression was liberal clearly had comparatively optimistic implications. But The End of History also contained considerable discussion of what he considered the weaknesses and instabilities of the “post-historical” order. Some of these discussions seem prescient now. For example, Fukuyama thought that, if economic growth in America and Europe faltered, and the West’s cultural cohesion, viewed from an East Asian perspective, continued to disintegrate, the relatively recent embrace of liberal democracy by economically successful East Asian countries might give way to a new deferential authoritarianism legitimizing itself on the basis of Confucian values. Fukuyama also saw the threat that refugee and migrant crises from the “still historical” worlds of the Middle East and Africa posed to “post-historical” Europe. He could see that the desire of the European public to keep migrants out would be in tension with the principle of equal recognition, and that Europe had no good answer to this dilemma.

Fukuyama was most troubled by the prospect that a liberal “post-historical” order could not tame the innate human desire for more status than others. In the late 19th century, Friedrich Nietzsche ridiculed the well-behaved product of egalitarian liberalism as the “last man” and instead celebrated the “overman” (superman) who would not shy away from explicitly trying to dominate other people. In Fukuyama’s view, it was this critique of 19th-century bourgeois society – and not the Marxist one – which led to the near-destruction of liberalism in the trenches of the First World War and in the rise of fascism in its aftermath. For Fukuyama, the danger to liberalism is not material deprivation, but boredom and the lack of an outlet for the domineering aspect of human nature.

The solution, if there is one, is to channel these drives away from politics and violence and toward making money or pastimes like extreme sports. It is in this context that Fukuyama discussed Donald Trump in The End of History. At that time, Trump was a metonym for esthetically vulgar capitalism. Fukuyama adopted John Maynard Keynes’s attitude that it was “better that a man tyrannise over his bank balance than his fellow citizens.” In other words, one of the advantages of capitalism is that it allowed instincts of domination the relatively harmless outlet of commercial success and consumerist one-upmanship. Fukuyama still worried that this would not be enough, and that the drive for megalothymia would lead to wars and an internal revolt against the constraints of bourgeois liberalism from those conceiving themselves as the strong.

In Identity, Fukuyama returns to the themes of The End of History now that the Donald is no longer content to tyrannize over his – possibly exaggerated – bank balance. Canvassing the intervening decades, Fukuyama makes a convincing argument that the demand for equal recognition continues to lead people to push up against existing structures of authority, as with the 2011 Arab Spring. While these revolts often do not lead in a liberal direction, there is still no coherent alternative universal idea to compete with liberalism, broadly understood. Fukuyama acknowledges that in emphasizing liberalism’s advantages over its universalistic rival, communism, he understated the appeal of particularistic alternatives. He continues to take the approach of viewing intellectual history as primary, with politics ultimately being about the working through of ideas whose expression is most developed by philosophers and other intellectuals.

One story Fukuyama adds to Hegel’s account of how the primordial conflict between master and slave ultimately leads to the ideal of universal recognition of equal freedom owes a lot to Canadian philosopher Charles Taylor. This story describes how our contemporary idea of personal identity came to be. Premodern societies simply assumed that it was the job of the individual to conform to social norms and that failure to do this was obviously evidence of bad character. This was first challenged by Martin Luther and the Protestant Reformation, which introduced the idea that God worked through the individual conscience and that if a properly inspired individual was in conflict with society, then society should change, not the inspired conscience. (Arguably, Fukuyama is oversimplifying here by identifying premodernity with Aristotle and Confucius, while ignoring counterexamples like the Hebrew prophetic tradition, the Cynics or Jain and Taoist sages.) Rousseau secularized Luther into the idea, made familiar by modern popular culture, that everyone has a natural self that is repressed by society. After Freud, this discovery of the true self came to be conceived as a therapeutic process, and this idea of therapeutic self-actualization has either replaced religion or restructured it (as with many versions of evangelical Christianity or modernized Buddhism, both of which tend to use therapeutic idioms).

On Fukuyama’s current analysis, “identity politics” in the modern sense combines the struggle for isothymia with the therapeutic discovery of the true self as suppressed by society. To the extent that this is a claim for equal recognition, it fits within the Hegelian story and a liberal society simply needs to widen the circle of recognition to include new ways of being. The political problem, according to Fukuyama, is that the connection to deeply personal issues of psychological well-being makes it difficult to engage in the kind of compromises that are the key to democratic politics. In this respect, he contrasts these issues with the economic issues of redistribution and class relations that dominate “materialist” politics.

I do not doubt that this analysis is useful in understanding feminism and the liberation movements of gender and sexual minorities, as well as phenomena on the right like the prosperity gospel and Jordan Peterson’s Jungian self-help retelling of biblical stories. But I do not think it is actually useful in understanding the forces that have given rise to the democratic recession. The “identity politics” that has mattered is the traditional one of ethnic differences drawn along racial, linguistic and religious lines. It is the last of these, religion, that is fuelling the rise of the populist right in Europe and America. Indeed, in Europe especially, but also in North America, the anti-Muslim right uses the rhetoric of progressive expressive individualism as an ethnic marker between enlightened native Europeans and foreign invaders. It is difficult to argue that there is anything postmodern, or even post-Reformation, about ethnic politics. Human tribalism is as old as humanity, and managing it is something democracies have always had to do and something they have often failed at.

While there are obviously conflicts about abortion, gay rights and transgender washrooms in Trump’s America, it seems to me that these sorts of questions are manifestly not threatening democracy and are less salient than they were when Reagan was president. What is new since then is the fear of whites with an ethnic Christian identity that they are becoming a minority in America. In 2000, George W. Bush tried to reach Hispanic and Muslim voters on a shared social conservatism. Trump represents the abandonment of that strategy, and his overwhelming popularity among white evangelicals demonstrates that ethnic identity “trumps” any allegiance to the sexual morality of traditional religion.

Fukuyama acknowledges the legitimacy of demands for equal recognition by historically marginalized ethnic groups and the need to address their grievances (most saliently in North America, the overcriminalization of young black, Hispanic and Indigenous men). But he says redressing these grievances should take place within the context of a shared civic national identity and agreement that immigrants should assimilate to the norms of liberal democracy. While he will no doubt get grief from some campus activists for this, I frankly do not see any politically significant group in the relevant communities that would disagree with him. Certainly, Barack Obama had no trouble articulating an aspirational postracial American identity to be united by civic morality.

The trouble is that it is precisely this settlement that is threatening to the traditional ethnic majority. The further trouble is that while an objective observer might consider “red” Americans’ identity politics an exercise of megalothymia, they themselves would view it as a demand for isothymia (not in those terms, of course). Just as Protestant America saw itself becoming a minority in the 1920s and reacted by reviving the Klan and shutting down immigration, the broader (but still exclusive) white Christian ethnic identity forged after the Second World War also sees itself as losing equal recognition, regardless of whether this is true. Unfortunately, there are no neutral adjudicators in the struggle for recognition. Even more unfortunately, in Trump, ethnic majoritarian identity politics found a man whose genius is in combining the threatening dominance of Hegel’s master with the sullen resentment of the slave.

In other words, the problem is not that identity politics is inherently any more resistant to compromise than economic issues. The problem is that the political system has not developed a civilized form of ethnic brokerage politics that both includes traditional white ethnic majorities and requires them to see themselves as merely one interest among many. This is the problem that the identity politics left has labelled “white privilege” or “white fragility,” and it is a real one that could use someone of Fukuyama’s dialectical abilities and equanimity to unravel. He could also have updated the hints in The End of History of the contradictions between a global post-historical order and national orders that remain historical in his sense, contradictions he saw as at the root of a potential migrant crisis in Europe, which came to pass 20 years later. Unfortunately, Identity fails to do this, and so is a bit disappointing.

Faced with an increasingly ascendant populist right, backed by Putin’s rabid petrostate, liberals cannot afford complacency or fatalism. There is still no alternative universalistic vision that competes with limited government based on equal individual rights, competitive elections and a mixed economy. Liberalism’s strong point is that it recognizes the limits of the political in answering the fundamental questions of eternity and identity, and it allows people to optimize their own life chances based on their own decisions. But these are its weak points too. While rethinking will not be the answer to a fierce enemy, it is good to have Mounk’s and Fukuyama’s analyses; hopefully, another movement conservative can do what Goldberg failed to do, and seriously rethink the history of the American classically liberal right.

Every area of study has its classic puzzles, the anomalies that theorists pay their dues by proposing explanations for. For the biology of sexual selection, it might be the peacock’s tail. For early-20th-century physicists, it was the black body radiation problem. For comparative political sociology, it is, in German historical economist Werner Sombart’s phrase, “Why Is There No Socialism in the United States?” For over a century, the absence of a mass socialist or labour party has been a defining aspect of “American exceptionalism.” But what if that were no longer true? What if socialism were to become a major force in American politics, even as it declined in Europe?

Since the First World War and the Bolshevik Revolution, almost every major democratic country has had a self-proclaimed labour, socialist or communist party as a major contender for power. Most of the undemocratic world either had a self-proclaimed socialist government or underground insurrectionary movement (and, not infrequently, both).

The United States was different. It exited the 20th century with the same Democratic and Republican parties it has had since the 1860s, and without mainstream politicians rhetorically proposing an alternative to capitalism. The fact of an exceptional American aversion to socialism was undisputed, with leftists and academics alike arguing about the reasons: the racial legacy of Jim Crow and slavery, the immigrant experience, the frontier, Protestant revivalism or the canny political instincts of Franklin Delano Roosevelt.

But looking around in 2018, we might wonder whether this classic contrast makes sense any more. In Europe, these are gloomy days for the successors of August Bebel, Keir Hardie and Jean Jaurès, with the traditional parties of the left wiped out in France and Italy, in apparently terminal decline in Germany, and riven by serious internal crisis in the United Kingdom. On the other side of the Atlantic, things are looking up for the estate of Eugene Debs and Norman Thomas. In 2016, an untelegenic self-proclaimed “democratic socialist” almost won the Democratic nomination for president. Arguably (although, of course, controversially), in an anti-establishment election decided in the Rust Belt, Bernie Sanders would have won.

The election of Donald Trump has, naturally enough, led to increased attention to right-wing populism and the racist “alt-right.” But it is at least possible that developments on the left will be of longer-term significance. Trump’s support is overwhelmingly among older Americans, while the even older Sanders won big among younger voters regardless of race and gender. A 2017 YouGov poll showed that 44 per cent of millennials (defined in this case as people born after 1987) would prefer to live in a “socialist” country, compared with 42 per cent opting for a “capitalist” one. Other polls with other questions consistently show more positive associations with “socialism” than with “capitalism” among younger Americans.

Polls of inchoate public attitudes are one thing; organizational power and intellectual influence are another. Here too something is happening among millennials outside the visible parts of mainstream American discourse. The once moribund Democratic Socialists of America (DSA) have received a remarkable “Trump bump”; membership has gone from under 7,000, when Sanders’s campaign began, to its present 30,000. This growth in membership has occurred along with a sharp turn to the left, as the DSA in 2016 cut its ties with the Socialist International of mainstream social democratic parties, ties that its founder, Michael Harrington, worked hard to build in the early 1980s.

A larger ecosystem of the millennial socialist left – not to be confused with mainstream Democratic progressives or liberals – has spread beyond its native university milieu: the DSA, a podcast scene led by the popular and profane Chapo Trap House, “red rose Twitter” and Jacobin magazine. Common to all of these is the combination of a millennial cultural vibe with a remarkably “old left” orientation around class (as opposed to an “identity politics” primarily oriented to race and gender), Marxist theory, traditional activism and the internecine debates of left history.

It is important not to get carried away. The organized off-campus socialist left might be growing rapidly, but it is still tiny. The DSA is small, compared not just with the German SPD or the British Labour Party but even with other fringe American organizations like the Libertarian Party. High abstract support for “socialism” among young Americans might turn out to be a lifecycle phenomenon they grow out of rather than a cohort phenomenon presaging future political realignment – the old cliché that a person who is not a socialist at 20 has no heart while one who is not a capitalist by 30 has no brain may be relevant here. The DSA is also small compared with Eugene Debs’s Socialist Party, which obtained almost a million votes in the presidential election of 1920, 3.4 per cent of the total, before it split into Communist and anti-Communist factions. Like other ideologues, American socialists are undoubtedly overrepresented online.

Still, given the vast attention lavished on the alt-right that everyone ignored until Trump came along, it may be worth asking whether the left might also be able to mount a challenge to American consensus values. No generation before the millennials has ever reported a preference for socialism over capitalism to pollsters. DSA is already bigger than any overtly socialist organization since Students for a Democratic Society (SDS) imploded in 1969. According to John Michael Colon, DSA, unlike SDS, consists primarily of former university and college students, who are often facing downward mobility and large student loans. This is interesting, given Peter Turchin’s evidence that internal social conflict is correlated with the “overproduction of social elites”: in the modern world, this typically occurs when many more people have postsecondary educations than can use them in the economy.

Trump proves that the longstanding certainties of American politics have become less reliable. At a minimum, it is quite possible that socialism, in some form or other, might be on the verge of a breakthrough in the United States. If this happens, since a substantial proportion of the country will continue to view socialism in apocalyptic colours, the already bitterly divided American political culture will become even more polarized.

Straddling the Left Traditions

Characterizing this new trend is a difficult task, if we want to avoid both inaccurate generalization and a level of detail about obscure disputes that would induce eye bleeding in even the most tolerant reader. As Monty Python’s Life of Brian illustrated, the overeducated/underemployed in general, and Marxists in particular, have a love for nuanced theoretical-programmatic differentiation. To get a handle on things, and at the risk of offending anti-individualist principles, we need a representative figure.

Bhaskar Sunkara, the editor of Jacobin, will do as well as anyone. One way in which the 28-year-old Sunkara is typical is that, politically, he tries to straddle the social democratic, Leninist and anarchist traditions that characterized the 20th-century left. Sunkara defines the mission of Jacobin in explicitly generational terms, as “the product of a younger generation not quite as tied to the Cold War paradigms that sustained the old leftist intellectual milieus like Dissent or New Politics.” Sunkara endorsed Bernie Sanders’s purely social democratic program and points to Scandinavian countries as models of the kind of change he hopes for in the United States. He says he is not opposed to markets in principle, although Jacobin never supports the free-market side in any controversy.

At the same time, as the name Jacobin suggests, Sunkara uses revolutionary imagery and has published sympathetic articles about the Russian Revolution and the Communist tradition. The DSA combines Democratic Party elected officials with a “tankie” fringe of Leninists who retroactively support Soviet military suppression of democratic working-class rebellions in Hungary, Czechoslovakia and Poland against Communist rule.

Sunkara’s generation of left activists is defined in reaction not only to post-9/11 U.S. military interventionism and the post-2008 financial crisis and Great Recession, but also to what they rightly perceive as fundamental failures in the movements against those things. The anti–Iraq War movement essentially disappeared when Barack Obama was elected president, despite the substantial continuity in policy with the Bush administration when it came to the war in Afghanistan, the surveillance of Muslim Americans, drone strikes around the world and disastrous regime change policies. Neoconservatism gave way to a functionally similar liberal internationalism insisting on “U.S. leadership,” but any mass movement dried up with a Democrat in the White House.

While Obama expressed some disagreement with the interventionist foreign policy establishment and may have blunted its most bellicose instincts, he never expressed any interest in spending political capital on transforming U.S. foreign policy. Despite the anti-interventionist instincts of the American public, especially younger Americans, he faced no political pressure on foreign policy from the left. The political left simply ignored these issues after Bush left office, while the intellectual left either recycled sixties anticolonial ideology or was sympathetic to liberal internationalism. The most interesting and hardheaded critiques of American hegemonism tended to come from conservative realists such as Stephen Walt and John Mearsheimer, writers at the American Conservative and antiwar libertarians.

Occupy and Identity

The financial crisis and its aftermath of low employment rates hit millennials harder than any other age cohort. The immediate leftist response, the Occupy movement, is a target of particular ire among millennial Marxists. Occupy combined complete inflexibility on its chosen tactic of creating ungoverned camps in urban public spaces with hostility to programmatic responses to the Great Recession. The leaders of Occupy, influenced both by anarchism and by traditional American hostility to telling anyone what to do, went out of their way to discourage the movement from posing any specific demands or analysis. No doubt this was a way of resisting Leninist entrism, but it also reflected a basically antipolitical refusal to debate alternatives. One common feature of the Jacobin circle is their disgust at this aspect of Occupy, which they analyze as an internalization of the post–Cold War narrative that “there is no alternative” to “neoliberalism.” Sunkara (and others like him) saw Marxism as a hardheaded and systematic alternative to this New Agey “anarchist” antipolitics.

From my own perspective, Marxism has very little of interest to say about periodic financial crises and the business cycle, beyond the (salutary) emphasis that they are endemic to capitalism. Marx himself was never happy with his crisis theory, which Engels published after his death. With its dependence on the labour theory of value and a self-contradictory account of the determination of profit rates, it has little to offer now. It would be more useful to read the 20th-century American economist Hyman Minsky or various post-Keynesians.

But for young activists looking for something less soupy than Occupy was able to supply, the old left tradition seemed refreshingly hard-edged. Occupy, Obama and the antiwar left all celebrated feelings and moralism; old-line socialism told underemployed-but-overeducated young activists to open up books and argue about economics, philosophy and history as well as show up for demonstrations. That might appeal to intelligent young people with high student debts and suddenly limited job prospects.

Even more important, old leftism might liberate young people weighed down by the fraught world of identity culture but unwilling to embrace a right-wing backlash narrative. By now, anyone of any age knows how dangerous the online “call-out” politics of race, gender and sexuality has become. The Jacobin–Trap House milieu gets to be the moderate middle here, a position that appeals to many millennials. They can parody or analyze both the moral posturing of the “social justice warrior” crowd and the anti–social justice warrior industry.

The 2016 Democratic primaries pitted Sanders’s class-based appeal against Hillary Clinton’s promise that the first female president would be transformative. Clinton explicitly made identity-based appeals to defeat Sanders. In her stump speech in the primaries, Clinton asked, “If we broke up the big banks tomorrow, would that end racism? Would that end sexism? Would that end discrimination against the LGBT community?” From a Marxist perspective, this looks like weaponizing identity politics in defence of “neoliberalism.” Sanders’s male supporters were labelled as (and sometimes acted like) “Bernie bros,” holding Clinton to sexist standards. One major intellectual influence (albeit himself a curmudgeonly Slovenian baby boomer), Slavoj Žižek, went so far as to endorse Donald Trump as the true proletarian candidate in the general election. While the millennial Marxist milieu certainly supports broadly feminist and antiracist positions, it also provides space for criticisms of identity politics that many on the centre-right would agree with.

The old left was often opposed to racism and sexual oppression, a tendency that can be dated back to Marx and Engels, but beyond that the Marxist tradition never worked out its relationship to gender and national-racial inequalities. At its worst, Marxism engaged in genocidal politics, a thread that runs from Engels’s call to eradicate Slavic national identities after the 1848 revolution through the racist as well as murderous policies of Stalin, Mao and Pol Pot. Even at its best, Marxism never worked out how class analysis and what it called “special oppression” worked together.

It would be insane, in the era of Trump, to discount the significance of racial and gender loyalties in how people vote. But gender or racial polarization seems like a political dead end for the left, and rarely of much use as a lens into policy solutions. Even crucial racial issues like police violence and mass incarceration turn out to affect a numerically larger, albeit proportionately smaller, group of whites. The resistance to social democratic or even liberal solutions to America’s problems around access to healthcare and reasonable-quality education has everything to do with race. However, sensible solutions would redistribute power and resources primarily along lines of class, not race.

Moreover, class gaps objectively deepened in the decades between the 1970s and 2016, even as America made cultural progress in its representations of nonwhites, women and sexual minorities, and the business and professional elite endorsed, at least symbolically, the principle of racial, gender and sexual inclusion. Trump’s election, which puts this cultural progress in doubt, can be seized on as evidence that a failure to address class will ultimately undermine even this progress.

For the millennial Marxists, the glory of the Sanders movement was that it challenged the “neoliberal” consensus they believe has prevailed since the end of the Cold War. In some ways, the rise of right-wing populist nationalism challenges neoliberalism as well. Now that the inevitabilist “end of history” illusions of the 1990s are finally shattered, it becomes possible to engage again with the Marxist tradition, hopefully without the dogmatism and hostility to civil liberties that disfigured it. The millennial Marxists explicitly see themselves as in continuity with those on the socialist left who tried to find a way between Leninism and social democracy, including the Eurocommunists, the British Labour left and strands of the New Left.

Defining the Enemy

Assuming, then, that millennial Marxism really is a “thing,” is it a good thing or a bad thing? My own perspective is that of a Generation Xer radicalized in the 1980s moment of solidarity with Nicaragua, anti-apartheid activism, zines and punk rock. It was a lesser moment for North American leftism than the 1960s or the present, but I have to be careful to avoid paternal condescension. I was in the minority of my generation in being attracted to orthodox Marxism precisely because it seemed to provide a hardheaded analysis as against anarchism, postmodernism and identity politics.

I certainly despised Clinton and Blair when they were elected. But I gave up on any emotional identification with the far left during the travelling antiglobalization protests of the late 1990s. It seemed to me then, and seems to me now, that the benefits of freer international trade for the global poor between 1989 and 2001 had to be prioritized if internationalism was to be meaningful. It also seemed to me that the dominant wing of capital was open to pushing for a postethnic West with a more egalitarian sexual morality. In other words, capitalism had not stopped playing the “most revolutionary” part Marx and Engels spoke of in the Communist Manifesto. I briefly thought there might be something to the “Third Way” of Blairite social democracy, although I was disappointed by Blair’s embrace of the Iraq War and then shaken by the financial crisis.

From this somewhat idiosyncratic perspective, there are some things to welcome in the development of millennial Marxism. While the mainstream left may long ago have been too exclusively focused on the concept of class, for many decades it has paid insufficient attention to it, in light of the growing disparities of wealth and income in the West. No decent person should regret the enormous reduction in global poverty as a result of globalization, or the relative opening of cultural space to women and racial and sexual minorities as the logic of commodification breaks down older patriarchal structures. But the Marxist tradition has special insight into the dialectical nature of these developments. Capitalism has both its brutal progressive side and its tired conservative side.

Any movement can be understood by how it understands its enemy. For the millennial Marxists, the enemy is “neoliberalism.” This core concept is a slippery one, both intellectually and politically. It basically includes everybody the millennial Marxists disagree with, other than right-wing nationalist populists and Stalinist tankies. “Neoliberalism,” in the hands of the millennial Marxists, becomes an oddly ahistorical and idealist concept, a spectre haunting not only Europe but the world, bewitching people into supporting policies clearly contrary to their interests, in very different political contexts and affecting movements with very different social bases, simply because it is the spirit of the (post–Berlin Wall) age.

Politically, it throws together every mainstream politician in the Atlantic democracies over the last two generations – from Mitterrand, Reagan and Thatcher to Obama, May and Macron. Intellectually, it includes the mainstream economists behind the Washington Consensus of the 1990s and Clinton- and Blair-style centre-left governments, as well as economists such as Friedrich Hayek and Milton Friedman who rebelled against the postwar mixed economy from a libertarian direction. Neoliberalism is not only the explanation for the Iraq War and the financial crisis, but also for why movements that the millennial Marxists like (Syriza in Greece, Chavismo in Venezuela) have ended in disaster. Sunkara sometimes puts forward Scandinavia as a model, without perhaps realizing the extent to which it has combined high taxes and social spending with a more rigorous commitment to free market liberalism in many areas than prevails in the United States.

Millennial Marxists need to develop a more historically grounded analysis of the limits of the liberalism/social democracy of the era of Clinton, Blair and Obama, one that starts from the historical problems the forces associated with those names had to solve. By the 1970s, as the public sector grew, it became increasingly difficult simultaneously to satisfy the producer interests of public sector workers, meet the demands on already-established public services, avoid frightening off middle-income taxpayers and keep the positive-sum spirit of “les trente glorieuses” (the 30 years of relative prosperity after the Second World War). High levels of aggregate demand led to widespread strikes and inflation. In the Anglo-Saxon world, Thatcher-Reagan-style conservatism appeared to provide a way out of these problems, with tight money, a harder attitude toward unions and a limit on the growth of the tax-and-transfer state (although no real attempt to actually reduce it in size).

During this time, Michael Harrington’s project of ideologically sorting the major parties actually succeeded: conservative southern Democrats left the party, and the Democrats became the unquestioned party of labour and minorities, organizing themselves programmatically around filling in the clearest hole in the U.S. welfare state, the lack of universal healthcare coverage. Republicans, by contrast, became firmly committed to opposing any tax increases, even as market incomes diverged and tax increases became necessary to pay for the federal government’s commitment to Social Security and Medicare. But with the defeats of Walter Mondale and Michael Dukakis in the 1980s, the Democrats tried to move to the centre culturally (to attract white working-class voters) and on taxes and spending (to attract upwardly mobile middle-class voters). This more or less corresponded to similar moves within the British Labour Party under Tony Blair and the German SPD under Gerhard Schroeder, so while disappointing from a left perspective it did not really falsify Harrington’s bet that the Democrats were becoming a social democratic force in all but name.

Outside the English-speaking world, so-called “neoliberalism” was not an ideological phenomenon at all, but a reaction to the reality that postwar social democracy faced the limits of the nation-state as a structure. The program of Mitterrand’s Socialists to “change life” had to be abandoned not for ideological reasons but because it necessarily implied a devaluation of the franc against the Deutschmark. The ultimate solution for these problems, the European monetary union, cannot be reconciled with an effective social democracy until and unless there is a European working class that thinks of itself as such. And that does not seem forthcoming. The structures of internationalism, and even of Europeanism, do not seem capable of being democratized the way the nation-state was in the 20th century.

If a breakthrough for the left does not seem to be coming from Europe, what about the United States? The difficulties are different. One is the nature of the U.S. Constitution, with its multiple veto points, which would render a coherent social democratic program hard to introduce. As the millennial Marxists rail against the failure of Clinton and Obama to accomplish more, they tend to ignore these structural problems. Another difficulty, which gets more analysis, is the way in which group status competition – around race, religion and education – fails to correspond to economic class, but is far more motivating. In the very long run, generational changes may make this less important, but as Keynes pointed out, we do not live in the long run.

Even more important to whether the end of this particular kind of American exceptionalism leads to good or bad consequences is the extent to which millennial Marxists avoid reproducing the illiberalism of the Leninist tradition in a desire to appear radical. The American left has often felt it can avoid the moral ambiguity of the often oppressive legacy of 20th-century socialism precisely because any extremism on the left will inevitably be so marginal to American politics as to be harmless. But that will no longer be true if socialism is no longer marginal in America. Even if they avoid a Leninist ancestor cult, American leftists will not get anywhere unless they embrace the pragmatic nature of their country and put down roots at the state and local level. But if they do these things, they might promote a better society at home and give some impetus to the left internationally. In any event, they are something to watch.