Photo: Jesse Wagstaff, via Flickr.

Public health orders restricting in-person gatherings have faced legal challenges across Canada. The challengers argue that these orders are contrary to the Canadian Charter of Rights and Freedoms, especially its guarantee of “freedom of religion.” Many of these challenges have been organized or funded by the Justice Centre for Constitutional Freedoms, a Calgary-based NGO headed by John Carpay that considers COVID-19 a “political pandemic,”1 but more mainstream civil libertarian organizations such as the Canadian Civil Liberties Association have also been involved. So far these challenges have been unsuccessful, although much remains undecided.2

Similar battles have played out in the United States, but with more uneven results. The U.S. Supreme Court has addressed the question of whether public health restrictions on religious gatherings violate the free exercise guarantee of the First Amendment on three occasions. Each of its decisions has been divided, and none of them has been final.3 Its most recent decision, issued shortly after Ruth Bader Ginsburg’s death, was deeply divided along partisan lines, with Justices Sonia Sotomayor and Elena Kagan accusing their Republican-appointed colleagues of playing a “deadly game in second guessing the expert judgement of health officials.”

It is hard to imagine a creature less interested in the subtleties of rights, law or ideology than the SARS-CoV-2 virus. It relentlessly focuses on its Darwinian mission of using the resources of human cells to make copies of its genetic material. But like everything else that fulfils that mission by spreading through networks of interacting human bodies, it both shapes and is shaped by all aspects of society, very much including the constitutional law of liberal democracies.

We usually think of rights as being about individuals. “Collective” rights seem exotic to Western, Educated, Industrialized, Rich and Democratic (WEIRD) people. At best, they make sense as a concession to vulnerable minorities. Normal rights – the kinds of rights everyone in Western societies gets to claim – are thought to be claims against society by an individual acting alone. But the COVID-19 pandemic makes clear – in the most literal possible sense – that these old-fashioned liberal Enlightenment rights like freedom of religion or expression, mobility and even privacy are about social interactions. Because social interactions are also how the virus spreads, interpretations of these rights shape the course of the pandemic.

In addition to demonstrating how social “individual” rights are, the pandemic has also put stress on Isaiah Berlin’s distinction between “negative” liberty (against state action and private coercion) and “positive” liberty (enabling people to fulfil their capacities). This distinction plays a controversial role in political philosophy. It is also embedded in Canada’s Charter of Rights. Section 32 makes it clear that the Charter only applies to governments and legislatures, reflecting the classical liberal idea that it is the state that threatens rights and freedoms.4 Section 7 protects a right to “life” and “security of the person” – rights very much at stake when a deadly novel virus is spreading exponentially through an immunologically naive population. But on closer inspection, it turns out that the Charter only protects these rights from “deprivation” by the state. Those who need protection by the state against the free action of others have to make creative arguments in Canadian courts under the Charter. (To be fair, those countries that explicitly recognize “positive” rights have also struggled with meaningfully vindicating them.)

Human beings in general, and lawyers in particular, are bad at thinking through exponential growth. If the average infected person goes on to infect more than one other person, and if this is sustained, then the number of infections in a naive population will explode. As Italy and New York proved to the world in the spring of 2020, no health care system – no matter how advanced – can cope with the consequences. The only responsible strategies on the part of public health authorities are therefore (a) to try to drive the number of infections to zero and then try to keep it there by restricting entry from more infected places (the “zero COVID strategy”) or (b) to keep the number of cases from increasing above what the health system can handle by keeping the average number of persons infected by each infected person around 1 (the “flatten the curve strategy”).
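The underlying arithmetic is easy to state but hard to intuit. A minimal sketch, using purely hypothetical numbers (100 initial cases, reproduction numbers of 0.8, 1.0 and 1.3, and 20 generations of infection – none of these are epidemiological estimates), shows how much turns on keeping the average number of onward infections near 1:

```python
# Toy illustration only: the figures are hypothetical, not estimates for SARS-CoV-2.
# "r" is the average number of people each infected person goes on to infect.

def cases_after(generations: int, r: float, initial_cases: float = 100.0) -> float:
    """Cases in the final generation if each case causes r new cases on average."""
    cases = initial_cases
    for _ in range(generations):
        cases *= r
    return cases

for r in (0.8, 1.0, 1.3):
    print(f"r = {r}: about {cases_after(20, r):,.0f} cases in generation 20")

# r = 0.8 dwindles toward zero, r = 1.0 holds steady (a "flattened" curve),
# and even a modest r = 1.3 multiplies the starting caseload roughly 190-fold.
```

Nothing here depends on the virus’s actual parameters; the point is only that sustained growth above 1 compounds relentlessly, which is why both strategies are framed around that threshold.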

Until vaccines could be deployed in sufficient number, the only realistic way of keeping case numbers down involved keeping people apart and/or keeping them under surveillance. There was no way to avoid a confrontation with liberal rights, especially when those rights are understood in an absolute or “deontological” way (i.e. regardless of consequences). The SARS-CoV-2 virus may not care about the West’s longstanding ideological fixation on the conflict between the state and the rights of the citizen, but any attempt to stop the spread of the virus was bound to get caught up in that ideological fixation.

It was not surprising that this conflict would take its sharpest form in the American court system. Americans are litigious and their judiciary has become increasingly ideologically polarized, while the Trump Administration’s response to the pandemic created a deep partisan divide around “lockdowns.” To be sure, there are some ideological ironies in the way the free exercise of religion cases have played out. Justice Antonin Scalia – the longtime lion of conservative jurists who died in 2016 – was the author of the 1990 Employment Division v. Smith decision,5 which held that the First Amendment did not protect religious practices from generally applicable laws unless their “object” was the prohibition of those practices. While it is not hard to find people on the internet who think that public health measures are motivated by hostility to religion in general or Christianity in particular, real officials – no matter how secular their personal belief system – are anxious to get support and “buy-in” from religious communities. Unsurprisingly, there has been little evidence of antireligious animus presented to any court.

In addition, control of communicable diseases has long been understood to have a special status as a justification of governmental measures that would otherwise be unacceptably coercive. Quarantines in the pre-antibiotic era were drastic everywhere, including in the United States. George Washington’s administration was faced in 1793 with a yellow fever epidemic in Philadelphia, then the American capital. Other states quickly moved to detain ships and travellers from Philadelphia, although the tiny federal government of the time did no more than hasten its own relocation to the District of Columbia. After the Civil War, the Supreme Court decided that the Fourteenth Amendment prevented states from exercising their powers in a way that infringed the liberty of employees to agree to long working hours, but was unsentimental about liberty when it conflicted with curtailing the spread of infectious diseases.6

Given this history of interpreting liberties in light of public health needs – well supported by liberal political theory, which has always permitted state intervention when individual actions directly harm others – why did COVID-related measures become so controversial within the American court system? There is no question that some of the explanation lies in the ideological divisions laid bare by the fierce battles over replacing Justices Scalia, Kennedy and Ginsburg with Neil Gorsuch, Brett Kavanaugh and Amy Coney Barrett. Journalistic accounts were not wrong to highlight this dimension, as well as Chief Justice John Roberts’s role as a swing justice.

But reasoning matters too, and much of the legal debate turned on how to understand what counts as a “law of general application.” In 1990, Justice Scalia said the state would be on solid ground if it implemented measures that were not intended to disadvantage religious practices. By 2020, the conservative justices insisted on a much more muscular notion of evenhandedness between religious and secular activity. When Chief Justice Roberts upheld California Governor Gavin Newsom’s executive order at the end of May 2020, he pointed out that various secular activities – including lectures, concerts, movie showings, spectator sports and theatrical performances – were subject to similar or more severe restrictions. Californians could gather in no greater numbers to hear a reading of Richard Dawkins’s The God Delusion than to hear a passage from the book of Genesis. By contrast, Justice Kavanaugh, speaking for the conservative wing of the court, pointed to grocery stores, restaurants and factories, where more people were allowed.

In such comparisons, everything turns on what is considered comparable. While the virus does not care about the reasons people gather, how easily it transmits depends on how they act once they are together. No indoor gathering of any size is entirely safe in a pandemic, but a number of factors make a difference: the prevalence of the virus in the community, the demographic profile of those present, the behaviour of people once gathered (singing, chanting and loud talking are particularly likely to spread the virus) and how likely people are to comply with mandates that they stay away from one another (presumably more likely in more transactional situations).

An additional difficulty is that “flattening the curve” is about managing the average number of transmissions in the community. While every death is a tragedy, it is not in the nature of public health to be able to prevent every death. A flattening-the-curve strategy involves avoiding increases – especially sustained increases – in the number of cases. No jurisdiction has attempted a total lockdown of a year or more – those that have successfully implemented a “zero COVID” strategy have not had to, and those that have sought to flatten the curve have tried to maintain schooling, other health services and economic activity to the extent consistent with that strategy. This necessarily involves choices.

Chief Justice Roberts and the liberal members of the U.S. Supreme Court – while willing to scrutinize some measures they thought went too far – were also willing to allow some flexibility for choices by those accountable to the electorate. But by November, a conservative majority had established an approach of subjecting any restriction on religious activity that went beyond the treatment of some secular comparator to “strict scrutiny.” This follows a trend in American rights jurisprudence, found on both right and left and criticized by Columbia Law School professor Jamal Greene in his recent book How Rights Went Wrong, of viewing rights as “trumps.”7

Canadian constitutional law has always been more willing than American law under Smith to protect religious practices from unintended restriction. But it has also been more willing to give governments latitude than the deontological tradition in American law, especially in the protection of public health. To be fair, not all American judges take an absolutist approach to rights – and sometimes Canadian fuzziness can make it difficult to predict how cases will be decided. But while it is possible that Canadian courts will take an approach like that of the conservative justices in the United States, so far that has not happened.

The first set of cases that have been decided in Canada involved applications to “stay” public health orders until the constitutional issues could be fully argued. This is generally a hard, but not impossible, thing for a person challenging a government to obtain. In the case Canadian lawyers still cite, the tobacco companies facing the Chrétien government’s proposed unattributed health warnings on cigarette packages failed to get an exemption while the case went through the courts, even though they (rightly or wrongly) ultimately persuaded the Supreme Court that their “freedom of expression” was unjustifiably infringed.

In light of the more urgent public situation of a pandemic compared with the chronic health issue of smoking, it is perhaps not surprising that preliminary “stay” applications have been rejected by Ontario, Quebec, Alberta and Manitoba courts. A Newfoundland and Labrador trial court has rejected a challenge to the “Atlantic Bubble” policy of keeping residents of other provinces out – while it accepted that this policy violated mobility rights guaranteed by section 6 of the Charter, it saw this as a “reasonable limit” of the kind the Charter’s section 1 allows. The case is being appealed by the Canadian Civil Liberties Association.

In British Columbia, I was part of a small team of lawyers for the defence in a constitutional challenge to Dr. Bonnie Henry’s Gatherings and Events Order (Beaudoin v. British Columbia). This order restricted in-person religious gatherings, with exceptions for limited-attendance baptisms, funerals and weddings – and later small outside gatherings, first for Orthodox Jewish congregations and now for anyone whose practices are consistent with outdoor gatherings.

By the time the case got to court, Dr. Henry had already exempted outdoor political demonstrations: B.C. Supreme Court Chief Justice Christopher Hinkson held that an earlier blanket ban on those was an unjustified infringement of freedom of peaceful assembly in light of the evidence and situation at the time. But he upheld the restrictions on in-person religious gatherings, specifically approving of Dr. Henry’s consultative style. As he noted, genuinely comparable religious and secular activities have been treated equally. Religious and secular education, for example, have both been allowed, as have religious and secular weddings and funerals. Like the Atlantic Bubble case, this case has been appealed; meanwhile, other challenges are before the courts across the country.

I cannot pretend to be a neutral observer, but I think that, so far, the Canadian approach better reflects what a legal system can appropriately do. Judicial review can make public health decisions more transparent and evidence-driven, so long as it is sensitive to legitimate needs to make decisions quickly – and therefore necessarily before all the evidence can be in. Epidemics, like life, can only be fully understood backward, but must be lived forward. Lawyers and judges need not accept unreasoned claims of “expertise,” but they should have appropriate humility in relation to their ability to evaluate the management of inherently complex problems. This is especially true when one of those complex problems is the judicial system itself, and especially its accessibility for ordinary people – and neither lawyers nor judges have figured out how to fix it. But most of all, lawyers should always recognize that “rights” are inherently social claims, and that it is society that must negotiate their resolution.

For more on religion and public health in the age of COVID, see “Holy or Irresponsible” by Martin Lockshin.


Photo: Ajay Suresh, via Flickr (CC 2.0).

A spectre is haunting academia, journalism and politics – the spectre of Cancel Culture. The powers aligned together against this spectre today are as diverse as the Holy Alliance of “Pope and Czar, Metternich and Guizot, French radicals and German police spies” that Marx and Engels identified in 1848. But is “Cancel Culture” – as Marx and Engels proudly claimed of Communism – truly a world power recognized as such by the existing world powers? To use a more contemporary lexicon, is it even really a thing? Is this “New Intolerance” either new or intolerant?

The Republican National Convention and the Harper’s Letter

Some people think so. The danger of “Cancel Culture” was the theme of the August 2020 Republican National Convention. The 2020 RNC was the first in history not to include a platform of policy proposals, all the better to emphasize the cultural grievances of the Staatsvolk, those whose ethnic identity is to be simply Americans.

This is hardly a new theme for Republicans, of course, who have been railing against the predecessors to “Cancel Culture” – “political correctness” or the “nattering nabobs of negativism” – since the 1960s. But the relative emphasis has clearly shifted. Freedom is no longer U.S. self-determination under threat from foreign actors – what remains of that theme has been left to MSNBC personalities like Rachel Maddow worried about Russian interference in American elections. Nor is Freedom really conceptualized any longer as market capitalism under assault from an Economic Left promoting redistributive taxation and spending or intrusive regulation of business. Republicans still talk about “socialism,” and the policy legacy of the first Trump Administration is undeniably lower business taxes and fewer environmental regulations, along with a lot of federal judges who will make these changes difficult to reverse. But that is not where the passion is.

To anyone listening to the speeches, it became clear that Freedom is really under threat from the Cultural Left: reformers or radicals who oppose “systemic racism,” particularly police violence disproportionately affecting black Americans, or contest traditional ideas of gender and sexuality, particularly to promote the rights of transgender people. The RNC occurred shortly after mass Black Lives Matter protests expanded across the world – sometimes accompanied by looting and violence. The BLM movement had been a target of Trump’s 2016 campaign and he quite clearly welcomed the opportunity to fight it again, rather than defend his record on the COVID pandemic.

Conservatives have always promoted the defence of order and property against mobs, and American conservatives have long resisted demands for any sort of racial reckoning as disrespect to America’s Founding Fathers and tradition. It is also perfectly normal for conservative parties to support traditional gender norms and resist the demands of sexual minorities. What was perhaps surprising – at least to a listener who had not been paying attention to recent American political rhetoric – was that the Cultural Left was represented not just as a threat to law and property, but also, and especially, as a threat to the American public’s ability to speak. The party of racial and sexual order presented itself as the party of transgression, while supporters of change were presented as puritan scolds and censors. “Cancel Culture” was attacked not so much for undermining traditional propriety as for stopping Red Americans from saying whatever they want whenever they want. No one represents this position more than Trump, whose rhetorical style is borrowed from standup comedians and “shock jock” radio show hosts.

This rhetorical frame of healthy transgression stifled by the schoolmarms of the Cultural Left is also found among opponents of the Trump Administration and the current Republican Party. If there is a manifesto of this camp, it is the July 7 open letter published in Harper’s magazine. The letter was signed by a popular front of the National Security Right, the Democratic-establishment Centre and the Economic Left, ranging from David Frum and Michael Ignatieff through baby boomer feminist icons Margaret Atwood and Gloria Steinem to anti-imperialist leftist academics Noam Chomsky and Cornel West. The Harper’s Letter (as it soon became known) described what this coalition is against as “a new set of moral attitudes and political commitments that tend to weaken our norms of open debate and toleration of differences.” The letter was careful to pair this new threat to tolerance with the “forces of illiberalism” associated with Donald Trump.

Perhaps as a price of such a broad coalition, the Harper’s Letter was unclear as to what events in the world it was responding to. It observes that “powerful protests for racial and social justice are leading to overdue demands for police reform,” a clear reference to the upsurge in mass protests that punctuated the COVID shutdowns as a result of the killing of George Floyd on May 25. The Letter implied that this movement, and movements associated with transgender rights, had “intensified” a challenge to norms of open debate in favour of that enemy of American individualism and sixties counterculture alike: “conformity.”

It is of course not at all uncommon for moderate members of reformist movements to criticize those more radical than themselves for extreme tactics or unrealistic goals. But, as with the Republican National Convention, the Harper’s Letter took up the more surprising perspective of the id against the superego. The problems with the excesses of the Cultural Left, as suggested by the cooler and wiser heads who signed the letter, were not the traditional problems of radicalism. Rather, their rhetoric, like the Republicans’, was about excessive conformism and limitations on debate. The warning was not of the risks of violence or utopianism, but of the danger posed to one’s freedom to let one’s freak flag fly.

While no single case seems to have provoked the Harper’s Letter, it was widely seen as a response to the resignation of James Bennet as editorial page editor of the New York Times. On June 3, the Times published an opinion piece by Tom Cotton, a right-wing senator from Arkansas, calling for using the U.S. military to suppress the protests. Cotton had earlier tweeted that there should be “no quarter” for “insurrectionists, anarchists, rioters, and looters” (the order of “no quarter” in war means to kill surrendering soldiers and is universally regarded as a war crime).

The op-ed was published two days after the Trump Administration apparently directed the use of tear gas by federal police to disperse protesters in Lafayette Park near the White House so that Trump and other senior administration officials, including the Defense Secretary and the Chairman of the Joint Chiefs of Staff, General Mark Milley, could cross to a nearby church. Milley later apologized for his involvement, amid reports that the military had been asked – and refused – to become involved in policing the Black Lives Matter protests. The printing of the Cotton op-ed sparked a backlash both within the New York Times staff and among its predominantly upper-middle-class liberal readership. Bennet resigned on June 7.

The Harper’s Letter was widely interpreted as having been triggered at least in part by these events. Bennet hire and Harper’s Letter signatory Bari Weiss publicly resigned, accusing the Times of discrimination, a hostile work environment and constructive discharge. Weiss came to prominence organizing a campaign to deny a Palestinian American anthropologist tenure at Barnard because of controversial criticisms she had made of Israeli archeology. Weiss had long been a leading figure among ostensibly liberal critics of the censorious nature of “identity politics” linked, coherently or otherwise, with a frequent tendency to see anti-Semitism lurking under criticisms of Israel or America’s pro-Israel foreign policy. Weiss received support from many centrist pundits and politicians, including Andrew Yang, the pro–universal basic income candidate for the Democratic Party nomination.

There are paradoxes here. It might seem that the obvious, even proud, authoritarian in the story was Senator Cotton. Even on the most sympathetic view, he represented the party of order and tradition, not of transgression or sceptical thought. The inclusion of “anarchists” – an ideological category that includes Chomsky – among those who should be put down by violence suggests a lack of concern with the First Amendment, however understood. Cotton followed up by calling for restricting federal funding to any state or local education system that used a Pulitzer-Prize-winning New York Times series, the “1619 Project,” which argued that race and slavery had not been made sufficiently central to American history. By September, President Trump had instructed the Department of Education to follow up on this and promised he would consider Cotton for the Supreme Court if reelected. Critics like Bari Weiss wondered whether they had correctly identified the main threats to freedom of expression and civil liberties in America today.

The anti-Trump, anti-Cancel Culture coalition was more concerned with what happened to Bennet, as editor, than with really defending the Cotton op-ed. They saw here a threat to the kind of cross-ideological-but-curated discourse represented by the New York Times. The paradox here is that the New York Times is a for-profit enterprise that, like the Washington Post, increased its subscriber base by more or less explicitly presenting itself as part of the opposition (“resistance”) to the Trump Administration. Its op-ed page is intended to generate money. It does not of course purport to represent all opinions tolerated under the First Amendment, or even a reasonable cross-section of actual American political views: it has three regular “Never Trump” conservative columnists (ranging from the interesting Ross Douthat through the past-his-prime David Brooks to the execrable hack Bret Stephens), despite the tiny share of the electorate that this strand of opinion represents.

The New York Times’s subscriber base certainly thinks of itself as open-minded and likes to be “challenged,” but like everyone else, they have their limits. In July 2020, in the midst of a worsening pandemic, the Trump Administration was preparing to put troops in the streets of the big cities where those subscribers live. From the perspective of the kind of people who keep the Times afloat, this was a personal threat, not a debating point. The Times is not a charity. And the customer is always right. In the best traditions of American capitalism, the Times acted to protect shareholder value – and while we can all sympathize with Bennet’s fate, he knew the business he was getting into.

Of course, lingering on a particular example may miss the point. A culture is not a law, and it is not a single instance either. I could have given the more Canadian example of Don Cherry’s loss of his perch at “Coach’s Corner” last year for accusing immigrants of not wearing poppies for Remembrance Day. Or any number of other controversies that seem to punctuate the news cycle. But a list of examples never gets us closer to a concept.

Opposing “Cancel Culture” gives meaning to both a certain kind of Cultural Right and a Cultural Centre in just the same way that opposing Communism did to homologous parts of the political spectrum after World War II. But Communism had a clear referent in the form of the Soviet government. It had secret police and loyal party members. It was clearly devoted to a form of coercion that liberals, of all stripes, could coherently oppose. By contrast, Cancel Culture consists of a loose array of human resources professionals, youthful activists and cultural anthropologists exercising the preeminently liberal rights of employers, protesters and academics to contract, assemble and theorize.

But as we move from a single example, we have trouble getting hold of the thing itself. The paradox lies in placing a specific conception of open debate beyond legitimate debate, and in labelling the questioning of polite, social tolerance of certain “differences” an intolerable difference. It is possible to forbid some considerations from entering into state action against certain ideas. And it is also possible to say that other considerations should not be part of deciding what is published in particular forums, or grounds for promotion or firing. But it is not possible to forbid cultural sanctions for expressing opinions – at least not without formal censorship. It is not even possible to criticize such sanctioning without engaging in sanctioning of one’s own. We are in need of some analysis, whether linguistic, historical or psychological.

Intolerance: An intolerably confused concept

Anyone who wants to supply analysis had better show their credentials – particularly in a case like this one. They need to “situate” themselves.

I am a middle-aged White man with a well-paid professional job. I am probably more sympathetic to the movements of the Cultural Left than the median person fitting that demographic profile. But it would not be hard to find someone woker than me. For example, I am not in favour of abolishing or defunding the police, although I think some of the reforms unfortunately gathered under one or both of those slogans are worth taking a look at. While there is no doubt that social problems in North America – from COVID deaths to police violence – are disproportionately racially distributed, or that the reasons this is the case are the product of racial and colonial histories, I agree with those who say these problems have primarily race-neutral class-based solutions.

While I am not going to bite the bullet of defending every statement or action by every antiracist or transgender activist, I do not think that the “Cancel Culture” frame is defensible. In some cases – most transparently Trump’s or Cotton’s, but also among their anti-Cancel critics – it is a rhetorical device to silence or marginalize the people the user of the phrase disagrees with. Trump regularly calls for people who criticize him to be fired or even prosecuted and has repeatedly bemoaned the restrictions the First Amendment places on his ability to sue people for defamation. But even people more self-reflective than he is sometimes confuse criticism with censorship.

Alternatively, “Cancel Culture” might be identified with discourteous or self-righteous expressions by activists, especially online. To be sure, “piling on” on social media can be destructive and it would be unresponsive to contemporary reality to regard much of what happens when a large group focuses on one person’s alleged misdeeds as just “criticism” that should be addressed with resilience. The paradox, though, is that this is precisely what free speech centrism counsels. Moreover, whatever the sins of the Cultural Left in this regard, they are hardly the worst offenders. Anyone who wants to be controversial online faces trolls, most of them right-wing. Women who express controversial opinions can count on threats of sexual violence and ethnic minorities can be sure of racial epithets. While there is indeed a problem of “troll armies,” it is a cross-ideological problem, and one of lack of effective regulation of speech – to which American First Amendment fundamentalism has undoubtedly contributed.

One issue that quickly comes up in these conversations is how big a deal it is to label someone’s actions or statements “racist.” Outside the Cultural Left, at least in the North American middle class, racism is seen as an individual moral flaw that is both rare and terrible. From this perspective, accusing someone of being racist is essentially accusing them of being a member of the Ku Klux Klan. This is not how the Cultural Left understands things: racism for them is primarily structural. It is difficult to get White liberals to understand that claims of racism are therefore injunctions to reform, not statements of irremediable evil. By contrast, middle-class men in heterosexual relationships have no particular difficulty understanding that a claim that something they did or said was “sexist” does not imply that they are morally indistinguishable from Marc Lepine. It is a call to rethink behaviour. Whether you agree with a particular claim or not, it would obviously be unacceptable to make it a precondition for any polite conversation to preclude the possibility that anyone other than the most violent misogynist is in any way sexist. But it is considered a perfectly reasonable demand by both right and centrist critics of the Cultural Left that talk of racism be limited to references to neo-Nazis.

The final type of “cancellation” that raises difficult issues is the exercise of economic power over hiring and firing – either directly by employers themselves or through the market power of major customers – to discipline those who have engaged in what is considered intolerable expression.

From the libertarian or classical liberal perspective adopted by both Canadian and American free speech law, the exercise of economic power does not violate constitutional guarantees of freedom of expression. Some people on the Cultural Left take this as meaning that targeting a person’s job for what they say cannot raise freedom of expression issues in a broader sense. As a social democrat, however moderate, I disagree because I regard employer power as potentially as despotic as that of the state. This is a particularly stark reality in America, where almost all employment is “at will” and few jurisdictions have protections against employer retaliation for political expression.

Of course, a right to say things an employer or its customers do not like to hear cannot possibly be absolute – a vice president of marketing cannot be expected to be allowed to praise a competitor’s products as better than those of her own company. But the American system of total protection from censorship by the state – to what seem to me like ludicrous extremes (the U.S. Supreme Court struck down laws against pretending to have a military medal or limiting the use of racially offensive trademarks1) – combined with total vulnerability to censorship by employers seems to me a real problem. And, it must be conceded, sometimes this power is exercised by the Cultural Left.

But most of the time? Is it really true that economic power to restrict expression is mostly from the Left? No, it is not. The most comprehensive data set of political firings at American postsecondary institutions since 2000 is maintained by Acadia University professor Jeff Sachs.2 There are interpretation issues, but it is clear that the majority of terminations occur because of criticism from the right (usually for being unpatriotic or too critical of Israel). As Sachs points out, since there are far more left-of-centre academics than right-of-centre ones, the probability of being fired from an academic position for political speech is lower on the left. But academics in fact have unusually high levels of job security. If we broaden our gaze to American society more generally, there can be no question that job insecurity chills speech, but also no reason to think it particularly chills right-wing speech.

By any reasonable metric, there is a broader array of political opinions available than ever. While social and economic pressures as well as the unwanted attention of troll armies make most people unwilling to attach their own names to controversial views, pretty much any opinion can be expressed on the internet pseudonymously. Canada, like every other country outside the United States, takes a less absolute view on free expression as a matter of constitutional law.3 But Canadian law is more protective of freedom of speech than it has ever been.4 More practically, it has proven very difficult for any country that wants to participate in the global internet to enforce more restrictive standards than those permitted in America. While all this speech has not led to the flourishing of the reasoned discussion hoped for by John Stuart Mill, that perhaps speaks more to the lack of realism of Mill’s ideal than to any culture of intolerance.

Why cultural change is experienced as silencing

Nevertheless, we overwhelmingly think “Cancel Culture” or “political correctness” is a thing. In a comprehensive study in 2016, Angus Reid found that two thirds of Canadians thought political correctness had “gone too far,” with a similar number agreeing that “it seems like you can’t say anything without offending someone these days.”5 Americans are polled on these issues more regularly: they agree with similar statements in similar numbers. While people say things are worse than they used to be, they have always said they are worse than they used to be – there is no upward trend over time in people thinking this is a problem. The sentiment that political correctness has gone too far is held in similar numbers across racial groups in both countries, although it is slightly higher among men than among women.

The ubiquity of this sentiment makes sense once we accept that any speech act will take place in a context of social approval and disapproval. Unless we are absolute monarchs, when we say something we are simultaneously asserting some kind of authority and making ourselves responsible to the judgement of those who are listening. These norms are invisible when they are traditional and universal. But cultural reform consists precisely in seeking to change those norms, based on some higher norm of equality or autonomy. It can only be expressed as disapproval of the existing structure of value, and therefore only experienced by those within that existing structure as an unexpected loss of status.

Think of Mill’s complaint in On Liberty of the “tyranny of custom” restricting the principle of individuality in Victorian England, particularly for women or eccentric men. The only way this culture could change was through the efforts of a self-conscious group of reformers – the first wave of feminists, along with the Victorian/Edwardian freethinkers so influenced by Mill. But the disapproval of these feminists and freethinkers for what they saw as the bigotry of more conventional Victorians was experienced as elite condescension at best and as suppression of the freedom of Englishmen at worst.

Moreover, any movement of reform must rely on solidarity. If those within that movement are seen as conceding to the social structures it is struggling against, they can only be disciplined by social disapproval within the movement. In some cases, this results in sectarian division, in others in conformity around the cause. But for anyone whose identity is caught up in the broader movement, disapproval by those “to one’s left” is likely to sting more than it would for the self-consciously reactionary.

Of course, once the cause is won, the norms that the movement sought to create become part of the tyranny of custom. I grew up in the 1980s in a relatively liberal city, Victoria. But I can assure you that no one at my high school was as free to say they were gay or to express a nonconforming gender identity as their children are. This newfound freedom is only possible because homophobic and transphobic abuse became subject to social sanction (and sometimes school discipline), which it was not in the 1980s. Then and now there were things that could be said and things that could not be said. The total amount of “tyranny of custom” has been conserved – but it has been redistributed in a way that allows for greater freedom and equality.

Not every effort at social reform in the past succeeded, and many of those efforts may not have been good ideas. And I would not suggest that all such efforts in the future will or should succeed either. But if they are meaningful at all, they will all involve changing what is socially disapproved. Custom may change from a tyrant to a constitutional monarch, but will never cease to rule. In that sense, someone will always feel cancelled.


China’s report to the World Health Organization on December 31, 2019, of a “pneumonia of unknown cause” in Wuhan – what we now know to be one of the pivotal events of the 21st century – at the time drew hardly any attention. Indeed, right up into March, politically engaged Canadians were deeply divided over another issue: the construction of the Coastal GasLink pipeline through the traditional territory of the Wet’suwet’en people – with the approval of the Wet’suwet’en’s elected chiefs and councils, but against the will of those claiming to represent their traditional governance structures. While the pandemic blew this (along with every other issue) out of the news cycle, it remains unresolved, is likely to flare up again and points to broader issues Canadian society will have to live with for the foreseeable future.

Coincidentally, this controversy was sparked by another relatively little-remarked event that occurred while Canadians were preparing to celebrate the New Year. On December 31, 2019, Justice Marguerite Church of the British Columbia Supreme Court granted the company building the Coastal GasLink Pipeline an injunction against protesters blockading a bridge on the Morice West Forest Service Road, near Smithers, B.C.

The protesters said they were there to prevent people from accessing the territory of the “Unist’ot’en” without the consent of their traditional chiefs. The judge described the Unist’ot’en as a matrilineal group of houses within the Gil_seyhu (Frog) Clan of the Wet’suwet’en. However, the most direct connection appears to be with Dark House, which has remained organizationally independent from the Office of the Wet’suwet’en representing the hereditary chiefs, but shares their opposition to the Coastal GasLink Pipeline traversing traditional Wet’suwet’en territory.

When the RCMP moved in to enforce the injunction in February, solidarity protests occurred across the country – most notably, Mohawk protesters blocked Canada’s rail arteries in Ontario and Quebec, making what had been a provincial story a truly national one.

The controversy raised deep issues dividing both Indigenous and non-Indigenous Canada: about what postcolonial reconciliation would look like, or whether it is even possible; about the relationship between democratic elections and representation; and about the future of the fossil fuel economy. Underlying all of these is the meta-issue of whether it is possible to think about these issues in a nuanced way in an era of polarization and social media.

The media largely moved on in March: first, hereditary chiefs agreed to a protocol with the federal and provincial governments about continuing rights and title discussions, and then North America finally started taking COVID-19 seriously. But the issues on the ground – and, of course, the more fundamental ones – have not been resolved. On May 1, the elected chiefs objected to the process on the basis that it would occur entirely within the hereditary system and that issues concerning the actual Coastal GasLink route would not be part of it.


The Dream of LNG

The Coastal GasLink pipeline project involves building a link between the vast natural gas reserves of northeastern British Columbia and a liquefaction facility near Kitimat on the Pacific coast. B.C.’s provincial government has long supported the goal of one or more major liquefied natural gas (“LNG”) facilities in the north, especially as the long-run prospects for North American natural gas prices plummeted in the wake of the massive increase in supply as a result of the shale revolution. Though world prices were also very low, even before the COVID-19 shock, proponents hope this is temporary.

Support is bipartisan: the B.C. Liberals pulled off an unexpected victory in the 2013 provincial election after a campaign focused on the benefits of LNG, and the current NDP government of John Horgan has also supported its development. Horgan’s government depends on its alliance with the anti-LNG B.C. Green Party for “confidence and supply,” but the Greens ultimately decided not to make LNG an issue on which they would bring the government down. While they have opposed all legislation to enable LNG, it can easily pass with the votes of the NDP and the Liberals.

The relationship between LNG and climate politics is contentious. Proponents argue that for the foreseeable future, exports of LNG will have the effect of displacing coal as a source of dispatchable electricity generation: while burning methane (the main component of “natural gas”) creates carbon dioxide, it is more efficient than coal (or oil, for that matter) and is vastly less toxic in its effect on ambient air quality. LNG proponents therefore argue that this is fossil fuel infrastructure that will benefit the environment, especially since British Columbia can use its abundant hydroelectricity to provide zero-carbon electricity for the liquefaction process.

On the other hand, if methane escapes without being burned, it has a far greater warming effect than carbon dioxide, the greenhouse gas produced by combustion. The question of whether natural gas development is good or bad for the climate therefore depends on the degree of escape and the extent to which this can be reduced. Recent empirical work suggests that the release of methane into the atmosphere has been severely undercounted.1

Pragmatic cost-benefit arguments along these lines may seem irrelevant to those who see a deeper energy transition as a moral imperative, one that can only be fulfilled by ceasing to build any more infrastructure for extracting and transporting fossil fuels. Getting the planet to “net zero” carbon emissions by the middle of this century is not compatible with using natural gas, or any other fossil fuel, to generate electricity or heat homes – at least in the absence of significant developments in carbon capture technology.

For different reasons, arguments that natural gas is superior also irritate residents of Alberta and Saskatchewan, whose hydrocarbon economy is more reliant on heavy oils – the transportation of which has been a point of conflict between those provinces and British Columbia. But despite occasional rhetoric about West Coast hypocrisy, the oil and gas industry has been completely supportive of the Coastal GasLink pipeline, recognizing that if it cannot get built, the prospects for heavy oil projects are even more remote. Certainly, most British Columbians – particularly in the north – support the development of LNG as a source of employment and revenues for public services.

Critically, the British Columbians hoping for these benefits include a large proportion of the Indigenous people living in the north. The Kitimat facility is to a very large degree the product of efforts by leaders of the Haisla Nation, where it will be located. Among these leaders is Ellis Ross, the B.C. Liberal MLA for Skeena and a particularly lacerating critic of opponents of the pipeline. As with other major pipelines, Indigenous opinion is divided. Canadians quickly became aware that the elected chiefs representing First Nations along the route of the Coastal GasLink pipeline had agreed to “community benefit agreements,” but that hereditary chiefs of the Wet’suwet’en, in particular, had not.

The Dream of Reconciliation

The dream of a resource boom is the oldest one of settler British Columbia. The first resource boom – the marine fur trade – was a joint enterprise of Indigenous and European peoples, but it brought species loss and epidemics. Later booms – the gold rush, the coal rush, the timber rush, the hydro rush, the real estate rush – were pure manifestations of colonial state capitalism, with some succeeding on their own terms while others worked only for those who got out early.

In British Columbia, settlers and their government essentially appropriated land and resources without any attempt at reaching agreement with the Indigenous people living there. The only exceptions were a few mid-19th century Vancouver Island treaties with the Hudson’s Bay Company as agent of the Crown, and the extension of Treaty 8 from Alberta into the part of northern British Columbia east of the Rockies – where the natural gas deposits are.

Throughout the 20th century, the British Columbia government took the view that the land and its resources simply belonged to the province: if any aboriginal rights ever existed, they were “extinguished” long ago. It persisted in this view after section 35, affirming “existing aboriginal and treaty rights,” was added to the Canadian constitution in 1982. The B.C. government of Premier Bill Bennett signed on to that amendment claiming that it would have no effect west of the Rockies, since there were no treaties and, outside of reserves and food fishing, no aboriginal rights continued to “exist.” This continued to be the provincial government’s position for another decade.

It was the hereditary chiefs of the Wet’suwet’en, along with their Gitksan relatives, who launched the landmark litigation that challenged the B.C. government’s historical approach, bringing a vast amount of information about their traditional governance structures onto the public record. In 1997, the Supreme Court of Canada rendered the Delgamuukw decision, making it clear that aboriginal people in British Columbia continued to have legal rights to the land that governments and resource companies could not simply ignore. It was often claimed during the social media free-for-all surrounding these events that the decision vindicated the position of the hereditary chiefs. The truth is more complicated – and the unfinished business of that case is the necessary backdrop for what happened in the winter of 2020.

The Wet’suwet’en have a completely distinct language from their neighbours, but developed an interlocking matrilineal kinship/governance structure with them. Larger clans are subdivided into smaller houses.2 Although the system is called “hereditary,” it is not based on principles similar to European feudalism such as primogeniture: an individual becomes a chief of a house by being selected to carry on the name of the chief of that house. The process has sometimes been contested, but is ideally based on consensus attained at a feast – potlatch in the trade pidgin Chinook.

In addition to these traditional kinship/governance structures, there is a system of elected chiefs and councils, first created by the federal Indian Act. Enrolled members of Indian bands – now usually called First Nations – can periodically vote for a chief and band council. While no one denies that the origin of this system was colonial, these structures clearly now gain their legitimacy the same way that other elected governments do – on the basis of their mandate and the fact that they can be replaced if those they represent collectively decide to do so.

Opinions among politically active Indigenous people differ on the weight each of these structures should have in a postcolonial world: the moderate view that both traditional and elected structures should have a role is found in a number of modern treaties, but there are “traditionalists” who reject the elected system altogether or say its authority should be limited to the reserves, while there are others who feel that the traditional system should be limited to ritual and persuasive roles.

The Delgamuukw case was originally brought as a claim for “jurisdiction” and “ownership” by the hereditary chiefs of the Gitksan and Wet’suwet’en houses. The lands claimed by each house had been delineated, in the case of the Wet’suwet’en, at a 1986 feast where the entire claim area was divided into 133 territories assigned to 71 houses. Essentially, each house chief claimed to have a form of both sovereignty and property ownership over the specified territory, to be held in accordance with Wet’suwet’en law. For better or for worse, the Supreme Court of Canada did not accept that proposition. Instead, it set out, at length, its own concept of “aboriginal title” and how this could be proved. On the grounds that the evidence in the case had not been aimed at this (newly formulated) concept, the Supreme Court said there would have to be a new trial. That new trial never took place.

The difference between what the claimants in Delgamuukw were originally seeking and the “aboriginal title” that the Supreme Court ultimately described has been referred to as a “technicality” – but it is at the root of issues that remain with us two decades later. The Supreme Court made it clear that, in its view, aboriginal title had to be asserted at the nation level (i.e., by the Gitksan and Wet’suwet’en as a whole) and not at the level of a house. Decisions about how lands subject to aboriginal title will be used must be made “by the community.” Aboriginal title was also not held to be absolute; rather, it is subject to justifiable “infringements” by the federal or provincial governments, although those infringements have to meet a strict test in court.

Putting Postcolonialism into Practice

A number of practical issues arose out of the Delgamuukw decision. The most pressing was how resource decisions would be made while the extremely complex process of proving aboriginal title took place. The general answer to this came in the 2004 Haida decision – and it is really that decision that created the operative framework applied ever since. In Haida, the Supreme Court established a flexible doctrine of a “duty to consult” and, in some cases, accommodate Indigenous entities with potential claims for aboriginal title (or for rights that do not go quite as far as title). On the one hand, this is “not a veto”; on the other hand, it means that any resource project to which there are objections by groups plausibly having aboriginal rights or title claims is potentially subject to legal challenge.

Different stakeholders would undoubtedly have different accounts of how well and how justly the system of resource and land development that resulted from Haida has worked. It is a complex area of law, resistant to simplistic summary – and certainly subject to reasonable criticism from all sides. But what may not have been appreciated in the national and global media conversation is that this system is definitely not the same as the unilateral authority of provincial resource ministries that prevailed in the last century.

This works both ways. Development requires the involvement of Indigenous peoples, but decisions not to develop can be challenged as well. Since Indigenous people face the same cross-cutting considerations of economic and environmental priorities and values as everyone else, the implications are complicated. In practice, the duty to consult has led to a resource industry that can only operate through often-complex agreements providing employment opportunities and funding of public services for Indigenous groups.

The classic legal question of how to address the “holdout” of one property owner who says no to a linear project whose value depends on going through everyone’s land arises in this new, hopefully postcolonial, context. If one community holding a claim to aboriginal rights or title says “no” to a project that is located solely in their land, they say no for themselves. But if they say “no” to a project that traverses multiple territories, then the “no” is for everybody. While this may sound appealing to opponents of fossil fuel infrastructure, the same would be true of transmission lines connecting zero-carbon run-of-river hydro projects or wind farms to the electricity grid.

The difficult question is how to balance the right to consent to development with the right not to consent. Because the “duty to consult” is “not a veto,” this problem can be resolved – albeit not speedily and not without potentially alienating dissenters – by the courts deciding that the objectors were consulted sufficiently. But to that extent, “free, prior and informed consent” becomes an aspiration rather than a legal prerequisite.

The other problem Delgamuukw left unresolved was how to determine the will of each individual community. The Supreme Court stated that land use decisions were collective, not individual, but avoided the classic political theory problem raised: how does a community decide when its members disagree? The postcolonial dilemma posed by this problem arises because outsiders do not have the legitimacy to resolve disputes over the right system of governance, but cannot avoid having to deal with some governance structure.

Justice David Vickers struggled with this issue in his decision in the only other major title case to come to trial in British Columbia, brought by the elected chief and council of the Xeni Gwet’in First Nation (formerly the Nemiah Valley Indian Band) on behalf of a Tsilhqot’in Nation (which, in his decision, Justice Vickers identified as a cultural nation like “French Canadians”) that had no definitive organizational or political existence. The elected government, as a political entity, could, depending on the social facts, exercise some of the rights of this prepolitical people. (The difficulty of a judiciary whose own authority necessarily comes from the “colonial settler state” determining as a “question of fact” the political representatives of an ethnos is perhaps an unavoidable paradox of postcolonialism in this context.) The Supreme Court adopted Justice Vickers’s approach without the same visible struggle and without necessarily giving guidance on how it might be applied in other contexts with different “social facts.”

One of the implications of the “duty to consult” regime is that such governance questions can, to some degree, be avoided. It is not absolutely necessary for a non-Indigenous government to determine these questions, so long as title is unsettled – it may be legally obliged to consult with both elected and traditional governance structures, and if there are differences of view, it can try to persuade a court that it did its best. But the precondition for consulting with everyone is that there be no specific entity that has a clear right to give or withhold “free, prior and informed consent”: the “settler” government must listen, but ultimately, and subject to review by the colonial courts for how reasonably it has done so, it decides whom to heed.

In 2020, these contradictions manifested themselves on the streets and in social media comment threads. The Wet’suwet’en are divided as to whether a pipeline through their traditional territories is in their collective interest. Both traditional and elected structures have ways of resolving, but also asserting, these differences. Inevitably, opposing forces within the “settler” population were drawn to the side of different “authentic” representatives of the Wet’suwet’en, depending on their own attitudes to natural gas development. The same, of course, can be said about Indigenous communities across the country: they too saw the quarrel in terms of their own disputes about governance and development.

A right to develop and a right to control development cannot both be truly absolute without coming into conflict. Depending on who is entitled to exercise the rights of the Wet’suwet’en, the Haisla may be able to exercise their right to develop only if not every group along the route has to fully and freely consent. If development does not happen, that also has implications for the interests of those upstream and downstream. These are the longstanding problems of pluralism and federalism – postcolonialism may mean that Indigenous people are brought into such problems as full partners, but it cannot mean that these problems will not exist.

Construction of the Coastal GasLink pipeline continues, as (over Zoom) do discussions between Wet’suwet’en hereditary chiefs and both levels of government about aboriginal title – as I write, the elected chiefs have objected to being frozen out of that process. The bottom has fallen out of energy markets – no one knows what will happen to them once the COVID pandemic ends.

Disputes about governance will, of course, always be with us as long as human beings disagree and have conflicting identities. These disputes become more complex once new voices are in the mix – but the simplicity of the colonial diktat is the peace of the grave and we should not be nostalgic for it.

A long, but neglected, strand in the Western tradition emphasized that the best regime is a mixed regime: neither democracy nor tradition should rule without the other. Finding the right mix for a particular culture is a problem that outsiders cannot solve. Nor has any culture definitively solved the problems of how different polities can compromise over matters that affect them all – and outsiders do need to be part of that one. Virtues of patience and practical wisdom are needed – something that traditional cultures (Western or Indigenous), for all their faults and all their differences, would see immediately. As a consequence, they would also see why the flattening democratic populism of social media will not make things better.


Whatever the reason for Canada being one of the world’s oldest and most stable democracies, it is not that Canadians understand exactly how it works. Since Confederation we have had 17 changes in the party controlling government at the federal level, and hundreds provincially – all of them peaceful, some of them consequential. But as the end of the 2019 federal campaign made clear, Canadians can be quite confused about how governments are chosen.

While citizens of France and the United States vote for who their president will be (in the U.S. case, if we ignore the Electoral College), Canadians vote for their prime minister only if he or she happens to be running in their riding. Instead, we elect a federal House of Commons or provincial legislative assembly. The partisan makeup of the house determines who gets to wield executive power. When one party wins a majority of seats, how this occurs is pretty straightforward: the leader of that party becomes the first minister – that is, prime minister or premier – and selects a cabinet.

But things are not so clear if there are more than two parties and none of them gets a majority of seats. If one party gets the most seats, does that party automatically get to form the government? Or is it legitimate for the other parties to agree among themselves and depose the government without an election? What happens when the incumbent party does not get the most seats, but the former opposition does not have a majority? Should the incumbent first minister give way to the leader of the opposition, or can he or she try to stick around and put together a working majority? When a government is defeated on a matter of confidence, when does that mean it must give the reins over to someone else and when can it “go to the people” in an election?

All of these issues have been matters of partisan debate in Canada in the last decade. In some cases, there is an expert consensus, but sometimes controversy arises among the coterie of constitutionalists – a group with no formal membership qualifications, no principle of accountability to anyone and no clear way of resolving disputes.

A Murky Area

Constitutional issues other than government formation are legal matters – which, while sometimes uncertain, can at least be authoritatively resolved by the courts. But the courts have refused to step into government formation – although even that principle took a beating in the United Kingdom when its Supreme Court held Boris Johnson’s advice to prorogue Parliament unlawful and the resulting prorogation of no force and effect. Canada’s system is based on the U.K.’s, but if courts were to step in here, it would be revolutionary and inevitably controversial. And of course, since what is at stake is power, these disputes are not going to be conducted disinterestedly.

To be sure, most of the time, the system works whether it is universally understood or not. In practice, the “conventions of responsible government” – however mysterious they may be to the laity and even sometimes the clerisy itself – give a clear result about who is supposed to occupy 24 Sussex Drive (assuming it is ever made habitable). Contrary to semi-informed opinion, the representatives of the Crown – the governor general at the federal level and the lieutenant governors in the provinces (collectively, the governors) – rarely have any real choice in what to do.

But there are real question marks. In Canada, there is a particular question about whether an incumbent government that gets fewer seats than one of its rivals but can see a way to put together a working majority must give the party with the most seats a shot at governing. Since this eventuality almost happened in both of the last two federal elections – and in fact occurred in New Brunswick in September 2018 – we really should have some clarity about it.

Unfortunately, the answer requires some nuance, which partisan politics and media regard the way cats regard baths. A governor would – and should! – let an incumbent first minister, no matter how many seats his or her party got, put a throne speech to the house. If the incumbent government lost a confidence vote at that time or shortly afterwards, then, and only then, the governor would call on the leader of the party with the most seats.

However, this does not mean the first minister who decides to do this is off the constitutional hook. First ministers are supposed to give governors the right advice. So even if the governor lets the first minister meet the house, that leaves open the question of whether the first minister ought to put the question to the governor in the first place.

In my view, an incumbent first minister whose party (or pre-election coalition) does not get a plurality should advise the governor to call on his or her more successful rival to take the first crack at governing. In this respect, Andrew Scheer was right in October 2019 to argue that there is a “modern convention” that the party with the most seats has a right to try to govern.

However, Conservative partisans were wrong to suggest – either in 2019 or in 2008 – that their opponents are obliged to leave them in office. On the contrary, if a parliamentary majority supports the old government, then the right thing to do would be to let the plurality party give a throne speech, but move an amendment that the house has no confidence in the new government. If that passes, the old government has every right to come back and govern as long as the new parliament lasts.

Put that in your 30-second ad buy.

Election 2019 and the Constitution

Election 2019 was not all about the Prime Minister’s more or less youthful ventures in racially insensitive costuming or the Leader of the Opposition’s inability to keep straight his qualifications to practise as an insurance broker or to get a U.S. passport. In addition to such substantive issues as climate change, tax policy and the prospects of a national pharmacare plan, the campaign briefly touched on the mysteries of the constitutional principles governing the formation of executive government.

In the end, we had a fairly boring result from the perspective of an enthusiast of Westminster system arcana. As a result of the much greater efficiency of their vote compared to that of the slightly larger portion of the electorate that voted Conservative, Prime Minister Justin Trudeau’s Liberals, while denied a majority, received a strong plurality of seats in the October 21 election. Since Jagmeet Singh’s New Democratic Party made it abundantly clear in the campaign that it would never support a Conservative government, and since many of the NDP’s policy objectives overlap with those of the Liberals, no one doubts that Trudeau can continue as Prime Minister.

Pundits can of course speculate on how long a Trudeau government will last before another election, but the Liberals clearly have the authority to remain in government with the cooperation or acquiescence of the smaller parties. The Liberals have ruled out formal cooperation, but they will be able to get any measure passed as long as they have the support of one of the Conservative Party, Bloc Québécois or NDP. A premature end to this Parliament forced by the opposition seems unlikely.

But if things had turned out slightly differently, we might be in the midst of a constitutional crisis. Shortly before voting day, Conservative leader Andrew Scheer created a ruckus by claiming that if his party received the most seats, “modern convention” meant he should get the first chance at being prime minister after the election. He could point to a similar statement by Justin Trudeau before the 2015 election, when it appeared quite likely that the Liberals would get a plurality at the expense of the Harper Conservatives, but before their last-minute momentum delivered a majority.

Scheer’s advisers appear to have thought that raising hypotheticals about what might happen after an election was a strategic misstep and he quickly turned to emphasizing the benefits of a Conservative majority. The only party leader who directly engaged Scheer’s constitutional claim was Elizabeth May, leader of the Green Party, who claimed that Westminster tradition gives the first right to form a government to the incumbent party, regardless of how many seats it gets.

The unceasing electronic bar fight / seminar that makes up our contemporary public sphere briefly filled the gap. Constitutional law professors, media talking heads and partisan trolls with five Twitter followers debated questions of “convention”, “precedent” and “principles of responsible government.” And just as quickly, the election was decided and the bar fight / seminar turned to new entertainments.

It might be worth thinking about the fact that this leaves an unexploded landmine in our political garden party. Sooner or later, the scenario Scheer raised will happen. The controversy was reminiscent of the debate that followed the non-Conservative parties’ brief attempt to replace the Harper government at the end of 2008. Lovers of Canadian party politics can point to numerous earlier examples of party conflicts over the rules of the Canadian political road, most memorably the King-Byng crisis of 1926.

In these situations, media speak loosely. Partisans argue partisanly. Once upon a time, perhaps, there were universally recognized constitutional experts such as the late Senator Eugene Forsey who could upbraid imprecise punditry and silence the hacks. But in today’s flattened opinion environment and general distrust of expertise, who will play that role when a future electorate steps on the landmine?

The Mysteries of Westminster Government

It would be nice to clarify beforehand how power would peacefully be transferred. So what can we say for sure? What are the fundamentals of how Canadian governments are chosen?

For most purposes, if we are interested in how the right to exercise executive power gets determined, we can fairly simply divide democracies up into those with a parliamentary system and those with a presidential system. In a Madisonian or presidential system like the United States, the legislature and executive are elected separately, and they frequently have different partisan alignments. In the United States, as I write, the Democratic House of Representatives is about to impeach Republican President Trump. Whether he is removed from office by the Republican-controlled Senate (which seems unlikely) or not, whether he is reelected or not, and whoever replaces him, we can expect American politics to be dominated by conflict and occasional compromise between the executive and legislative branches for the foreseeable future.

In parliamentary systems, this is not supposed to happen (although the Brexit imbroglio, touched on elsewhere in this issue of Inroads, shows it sometimes can). The executive is not separately elected. Instead, the right to exercise executive power depends on being able to get the support or acquiescence of the legislature. If the executive and legislative branch come into serious conflict, then one of them must go: either the executive by a change in government or the legislature by a new election.1

Parliamentary systems differ in arcane and technical ways over how it is decided who has the right to be in government when the will of the legislature is not clear – differences that do not matter when there is a clear majority for a party or coalition, but can make a big difference when there is not. Many parliamentary systems provide for an explicit “vesting vote”: after each election and a transition period, the legislature votes for who the new executive will be.

For example, section 46 of the Scotland Act gives the Scottish Parliament the power to nominate one of its members as first minister during a transition period after an election or the fall of an old government. While that person is technically appointed by the Queen, in effect the Scottish people elect the legislature and the legislature elects the executive. Most democracies in the world that have avoided the United States’s separation of executive and legislative authority provide for some similar process.

But this is not how it works in the United Kingdom as a whole or in countries, like Canada, that have adopted its specific form of parliamentary government. While executive power depends on the “confidence” of the legislature, this is not determined by the relatively straightforward method of a vote at the outset of a parliament, but through conventions about when first ministers are supposed to resign and when governors are supposed to dismiss them.

Our Misleading Constitution

Canada’s written constitution, although it purports to declare “the Nature of Executive Government,” is in fact completely misleading on the subject. If you just read Canada’s constitution, you would think executive power belongs to the Queen (section 9), that she delegates it to a Governor General who serves at her pleasure (this went without saying), who in turn appoints members of the Privy Council (section 11), decides on judges (section 96) and has to agree to all legislation (section 91). The prime minister is not mentioned at all in the 1867 constitution – he or she is just an unnamed member of the Privy Council chosen and removed by the obviously more important governor general “from Time to Time.”

The prime minister only plays a cameo role in the rest of our written document: under section 35.1 of the Constitution Act, 1982, he or she gets the neocolonial power to decide what representatives of Aboriginal people will be invited to constitutional conferences about amendments that affect them and must attend such conferences. That’s it for textual attention to the most powerful official in the country. As a written document, the Canadian constitution is about as deceptive a guide to what is really going on as Stalin’s 1936 Constitution of the Soviet Union. Where the USSR was a personal dictatorship pretending to be a democracy, on textual evidence Canada is a democracy pretending to be a personal dictatorship.

One thing the constitution does make clear is that the executive power and legislative authority are legally distinct. This principle predated the loss of effective authority by the monarch personally and was inherited both in the United States (where the executive became directly elected) and in Westminster systems. The legislature is sovereign, except as limited by the constitution: the executive only has the powers the law gives it and in particular must always abide by statutes the legislature enacts. This legal superiority of the legislative branch can create a cynical contrast with the effective power of the executive in a system of party discipline – but it can also sometimes bite governments when they don’t expect it.

But despite the importance of this legal principle of legislative sovereignty, it would be unwise to rely very much on the text of the constitution to understand the link between “supreme executive power” and a “mandate from the masses” (in the words of Dennis the Peasant in Monty Python’s classic take on the British constitutional tradition). The unwritten “conventions of responsible government” provide the missing link between Canada’s monarchical written constitution and its representative reality. With very limited exceptions (the “reserve powers”), the governor must always do what he or she is told, whether by the first minister (for example, in appointing a cabinet), by cabinet (in passing an order in council) or by the legislature (in giving assent to legislation). Since the first minister decides who the cabinet is, the key question is who gets to be first minister.

The rule is not, as in Scotland, the positive one that the legislative body elects or nominates the first minister. Rather, the rule is a negative one. Canadian governments do not die of natural causes: they must be killed or commit suicide. A government continues until the first minister resigns or (much more rarely) is dismissed. The rules about when these things are supposed to happen are therefore all that keep Canada democratic. Governments are not elected: the provincial legislative assemblies and the federal House of Commons are the only elected bodies in our system.

Some of the conventional rules are clear. If a government loses the confidence of the elected house, then the first minister must either resign (in which case the governor will call on the leader of the opposition party with the most seats) – or ask for a new election. The governor will accede to a first minister’s request for a new election if the parliament has been around for a while – about six months – but not otherwise (unless it has been demonstrated that forming a stable government is impossible). In 1981, the Supreme Court of Canada added that there is a convention that if the opposition obtains a “majority” at the polls, the government must resign “forthwith” (in practice, whenever the new government is ready to be sworn in).

These bare-bones rules are enough to say what will happen with the current Parliament. Justin Trudeau has neither resigned nor been dismissed, so he remains Prime Minister. No opposition party obtained a majority at the polls, so there is no convention that requires him to resign “forthwith.” He will have to ask the Governor General to appoint a new cabinet. While it is conceivable that he might nominate cabinet members from other parties, he is under no obligation to do so.

The government will put forward a throne speech. The other parties will have the opportunity then or later to vote nonconfidence, but they will not do so unless they see the advantage of an alternative government or a new election. If the Liberals lost a confidence vote, they would not be entitled to an election for the first six months or so, but after that they could have one any time the Prime Minister decided it was in his political interests. (Canada has a fixed-date election law, but it has an exemption in these circumstances and the courts have already made it clear that they will not get in the way.)

How Minority Governments Govern

That does not mean it would make sense for the Liberals to try to rule as if they had a majority, as Joe Clark rashly promised to do when he had a minority government in 1979. In the last Parliament, the Liberal Party controlled every legislative committee and the government could prevent any legislation passing against its will if it was willing to whip its own caucus. This is no longer the case. The Liberals have said they will not try to get a coalition or even a confidence and supply agreement with another party: they will have to rely on the desire of other parties to avoid elections to get budget measures and other matters of confidence through. But there is very little doubt about what is supposed to happen.

In 2008, when a coalition of the Liberals and NDP supported by a confidence and supply agreement with the Bloc Québécois declared its readiness to unseat Stephen Harper’s minority Conservative government, the Conservatives argued that this arrangement was illegitimate. Their argument was as constitutionally unfounded as it was politically effective.

In the current Parliament, the opposition parties could, in principle, vote nonconfidence in the government – so long as they do so early in the Parliament – and the Governor General would call on Andrew Scheer to be Prime Minister. This is what occurred in Ontario in 1985 (when Bob Rae’s third-place NDP supported David Peterson’s second-place Liberals) and in British Columbia in 2017 (when the Green Party supported the NDP). However, as a matter of political reality, it seems next to impossible to imagine the federal Liberals facing any similar effort by the opposition parties this time, even if the NDP had not explicitly ruled out cooperation with the Conservatives.

There Is a Modern Convention …

But what if the 2019 election had been slightly different? What if the mysteries of voting efficiency and strategic voting had resulted in the Conservatives obtaining more seats than the Liberals? What would convention call for then? Was Scheer correct that “modern convention” implies that he would have had the first chance at governing?

If we go back far enough in Westminster history, the answer would have been a clear no. Trudeau would remain Prime Minister unless and until defeated on a confidence vote, and so the situation would not be materially different from the one that actually unfolded. This is because responsible government emerged out of a system of dual confidence: in the 18th century, for example, the government of the day required both the confidence of the monarch personally and the confidence of the House of Commons, so that the monarch could continue to tax and spend (“enjoy supply”). Just as a minister could continue on so long as the monarch had not announced that this was no longer his or her pleasure, so too he could continue so long as there was no affirmative denial of confidence or supply by the lower house. The Hanoverians eventually lost the practical ability to dismiss governments for policy reasons, so the principle of responsible government became in effect that the Crown hired governments and the House fired them.

But it would be wrong to end the development of the principles of responsible government at the accession of Queen Victoria. In Canada, the aftermath of the 1896 election created a new principle. The Conservatives, led by Charles Tupper, lost that election to Wilfrid Laurier’s Liberals, who obtained a majority of seats in the new Parliament. Tupper took the perfectly orthodox view that he remained Prime Minister until the House met. He hoped, no doubt, to do some kind of deal with some Liberal MPs – a greater possibility in the late 19th century than it has since become. But the Governor General, Lord Aberdeen, refused to take instructions from Tupper on appointments, forcing Tupper to resign. Aberdeen asked Laurier to take office as prime minister.

Tupper complained about this breach of the principles of responsible government for the rest of his political life, but Aberdeen’s actions seem obviously correct to us now. Two conventions come out of this event: first, if another party obtains a majority, an incumbent first minister must resign effective as soon as the other party’s leader desires, and second, in such circumstances, the government must act as a “caretaker” – a role that has now been expanded to the entire period from when the election is called until it is affirmatively established who has the right to govern (in a minority, by the acceptance of the throne speech).

In the early-19th-century model, while the House decided when a government came to an end, the Crown had a great deal of discretion about whom to call on and whether or not to give a defeated government the option of going to the electorate. Scholars like Forsey and, more importantly, actual practice have reduced that discretion to a number of rules.

In particular, it is now widely (if not universally) accepted that when a government falls, the Crown must call on the leader of the party with the next-most seats and that it is wrong for the governor to use his or her own sense of who could command a working majority. It is also now accepted, on the basis of both scholarship and practice, that an election request will be denied early in a Parliament (with exceptions where it is beyond reasonable dispute that no government can function) but will be granted after six months or so. These are all additions to the original rule that governments continue until they lose the confidence of the house. They make sense based on practice and on the principles that the Crown should avoid controversial partisan decisions and parties should be treated with symmetry.

Viewed in this light, Scheer’s claim (earlier made by Justin Trudeau) – that an opposition party that obtains the most seats in an election should get the first opportunity to meet the house – would appear to have merit as a “modern convention.” It is what has in fact happened federally in every minority parliament after 1925.

In 1925, Mackenzie King’s Liberals initially held on with the support of Progressive and Labour MPs despite getting fewer seats than the Conservatives – this constitutional fact has long been overshadowed by the more famous decision of Lord Byng to deny King an election when the Progressives pulled their support. After the 1926, 1957, 1963, 1979 and 2006 federal elections, incumbent governments stepped down when they received fewer seats, while no incumbent government at the federal level has ever stepped down when it received a plurality of seats in a minority parliament.

Precedents can be read in multiple ways. If a course of action has made political sense in the past, this does not mean it is a convention. A convention requires both that the course of action be viewed as binding and not merely strategic and that it make sense in principle. Here too, Scheer’s claim seems vindicated. The prime ministers who gave way all those times no doubt believed, without exception, that their own policies were better for the public interest than those of their opponents; they gave over power because they thought they were obliged to. In many cases, it is easy to imagine means they could have used to avoid a nonconfidence vote.

Moreover, Scheer’s approach is supported by principle. Having the first-mover advantage in a minority parliament carries the significant benefit that the other parties must affirmatively displace you – after six months or so, at the risk of an election. Therefore, if the system is to be symmetrical, this benefit should be allocated in a way that minimizes Crown discretion and reflects, as much as possible, how the people voted. That is why it is now more or less universally accepted that after a defeat, the governor should go to the party with the next most seats to form a government. The number of seats, while not determinative of the ability to command the confidence of the house, is an objective fact based on how people voted, and not a subjective decision of the governor or existing first minister.

… But It Has a Caveat

While the Conservatives were right this time, there was some suggestion that they were going to take the position they took in 2008 as well: that if they got the most seats, they not only had first crack at government in the new Parliament, but also last crack – that any arrangement between the Liberals and one or more of the third parties would be illegitimate. I was unable to track down any example of Scheer himself making this claim, but some of his surrogates hinted at it.

This would of course turn a reasonable position into the entirely unreasonable notion that governments, like parliaments, are chosen on a first-past-the-post basis. This is unreasonable because the whole point of responsible government is confidence of the majority of the legislature that can pass laws and approve taxes. If the Conservatives had received the most seats, Scheer would have the right to meet the House, but the House would have the right to vote him out and put Trudeau right back in office.

For this reason, there is undoubtedly a caveat to the modern convention. If an incumbent first minister could arrange for an agreement that guaranteed confidence and supply very quickly (say, in the first few days after the election), then calling on the leader of the party with the most seats would be pointless, since it would just lead to a nonconfidence vote and restoration. So, if the election had resulted in a situation where the Liberals came second, but there was a more or less instant promise of support from, say, the NDP, and the Liberals and the NDP combined could constitute a majority, then it would be legitimate for Trudeau to stay on. If it included this exception, Scheer’s “modern convention” could arguably even encompass the 1925 election, after which the Progressives and J.S. Woodsworth’s Labour group quickly supported the incumbent King government.

Many of those in the opinion sphere who argued that Trudeau could simply continue on if he failed to get the most seats cited Philippe Lagassé, a Carleton professor and expert on the Westminster system of government formation. In fact, Lagassé recognizes that, in Canada, incumbent governments that receive fewer seats in minority parliaments have not tried to stay in power since 1925 and that there is a norm that supports Scheer’s claim. He insists on calling this norm a “custom” rather than a “convention.” Since conventions are not laws and derive from the accepted norms of political actors, it is hard to see how this line can be successfully maintained.

To be fair, I would agree with Lagassé that a governor would probably not actually dismiss an incumbent first minister who tried to continue after getting fewer seats in a minority parliament. Brian Gallant tried to do exactly that after the 2018 New Brunswick election, for example, and was not dismissed; he resigned only after losing a confidence vote in the legislature.

But where I disagree is that this is because there is no convention. A governor should only dismiss a first minister in the clearest of circumstances, where there is absolutely no doubt that the first minister ought to resign. For that reason, as long as everything is running correctly, a dismissal should never be necessary. A first minister should resign when convention dictates, and if first ministers regularly do resign – and feel themselves obligated to resign – in certain circumstances, then that is sufficient for there to be a convention. It is the first ministers themselves who are the first line of defence of conventions, although governors sometimes have to stand up for them independently.

If Trudeau had tried to continue with fewer seats and no agreement to get an effective majority – something he never said he would actually try to do – the Governor General might not have dismissed him on grounds of lack of clarity, but that would not itself mean he was acting appropriately. Of course conventions, like all norms, can ultimately cease to have force if they are violated enough, although they can sometimes get greater strength if they are violated but the violator is punished. This is true of Scheer’s modern convention – and also of all the other norms that are essential to the operation of the system.


Communism failed because it ignored human nature. The question the current era presents to us – the question that underlies the crises represented by the words Brexit, Trump and gilets jaunes, but will also outlast them – is whether liberalism has the same problem. Communism could not handle humans’ individual and familial self-interest. Can liberalism handle their inherent need to be part of a group that defines itself against other groups?

Groups define themselves against other groups not only in the sense that they distinguish themselves on the basis of what they are not but also, unfortunately, in the sense that they compete with those other groups for status and resources. This is inevitable. Liberalism is, at bottom, the conviction that state coercion – also inevitable – should be neutral between fundamental aspects of the identity of the citizens that are subject to it. Coercion can be justified, if at all, only if it serves the equal freedom of all who are subject to it.

As a result, liberalism in the 21st century has to advance an idea that will undoubtedly meet with resistance. Since modern states are as bound to be ethnically pluralistic as they are to be religiously pluralistic, liberalism must advocate separation between nation and state, just as it earlier fought for separation of church and state. The social fact that nations and states overlap but do not coincide leads inexorably, for liberals, to the normative conclusion that no state should belong to a single people. The ability of past liberals to avoid this implication of their basic principles depended on historical circumstances that are now passing away.

If this is right, then it is not surprising that liberalism is facing resistance, or that the triumphalist narratives of globalization and democratization from the 1990s look hollow. The concept of democracy and sovereignty that we inherit is bound up with the nation-state. If nations and states are to be separated, who is the we that makes democratic decisions? How does the state retain the loyalty needed to fulfill its functions? How do the ethnicities that have identified with that state for centuries understand themselves when their countries become postnational?

It has taken liberalism a long time to get to this principle, and as a pragmatic movement comfortable with power it will naturally try to soft-pedal the implications. But liberalism now faces a global countermovement and needs to get its foundational commitments straight. And pragmatism increasingly pulls in the same way as principle: the coalitions that could put liberal parties in power in the West do not belong to a single “people,” and they will want the policies they vote for to reflect that.

Many liberals have proposed “civic nationalism” as a halfway house. But this will not work. If civic nationalism involves no real ideological commitments, it is too weak to count as nationalism. But if it is strong enough to have any real ideological weight, civic nationalism is no more compatible with liberalism than the ethnic version.

While there is no doubt that problems and struggles lie ahead, liberalism does have resources to address this problem. Liberalism is the ideology best equipped to deal with “intersectionality,” the principle that one has multiple identities and that the way each identity is experienced depends on the presence or absence of the others. Intersectionality is usually associated with a radical moralism that does not fit well with liberalism, but this is a contingent fact that can be changed. With a less individualistic and a more intersectional understanding of why states need to be limited and pluralistic, liberalism could be an appealing philosophy for younger people in the West and could regain enough vigour to put up a fight against its populist enemies.

Liberalism and Nationalism

People need to belong to groups bigger than themselves or their immediate families, but smaller than humanity as a whole. And those groups necessarily define themselves by the fact that they are not part of another group. This is a phenomenon familiar to anyone who engages in political speech online and, indeed, to anyone who went to high school. According to paleoanthropologists, it was true of our ancestors on the East African savannah. Everyone has particularistic loyalties to “their own” – a phrase characteristic of George Grant, English Canada’s leading critic of liberalism – just because it is their own.

This is a problem for certain traditional liberal theories that focus only on the rights of the individual and the need for a state to define and protect those rights. The essential goal of that form of liberalism is to figure out how to constrain the state from becoming so powerful that it threatens the individual, while ensuring that it is powerful enough to protect individuals from one another. The classical liberal solution was a state governed by the rule of law and representative democracy, appropriately constrained by guarantees of individual rights.

In the 20th century, most liberals recognized that negative rights needed to be supplemented by progressive taxation and social insurance. In the English-speaking world this recognition was notably expressed in the 1942 report of Sir William Beveridge, a British aristocratic liberal whose work was enthusiastically embraced by the democratic socialist movement.1 They also recognized that the state needed to play a role in regulating total demand to avoid periodic economic crises, as taught by John Maynard Keynes, another liberal toff who became a source of intellectual inspiration for labour politicians. But the key point about the whole picture is that it did not specifically refer to any groups other than the state as a whole. Individuals would react primarily to economic incentives. States would be insurance companies with navies.

To be sure, liberals always emphasized the importance of freedom of association and freedom of religion as ways of guaranteeing group loyalties defined in contrast to the state. The foundational struggle for liberalism was to detach the state from a particular church, but the coalition in favour of doing this rested fundamentally on the social community of minority churches. Liberals welcomed voluntary communities as a source of meaning and of loyalty to their “own.” The only price of membership in the liberal state was that these groups must not coerce members who seek to leave and must not threaten the state itself.

There may be doctrinaire cosmopolitan rationalists somewhere who are offended by any claim of a community less inclusive than humanity itself. These people are bound to be disappointed by humanity’s tribalism, just as a doctrinaire communist would be bound to be disappointed on realizing that real proletarians were never going to be the “new socialist man.” People differ in how groupish they are, a measure of personality that psychologists label “openness” and can quantify as one of the five basic dimensions of personality. There has indeed been an evolution in the “WEIRD world” – Western, Educated, Industrial, Rich Democracies – toward higher and higher levels of openness with each generation. But no one – not even an Esperanto-speaking world federalist – can exist without a tribe. Liberalism prides itself on being a pragmatic way of thinking that does not seek to coercively impose a utopian vision on people, but rather to give them institutional space to decide for themselves. It therefore has to learn to live with this fact about human beings.

The trouble begins with the question of what sources of group identity legitimately hold the state together. Groupishness, as a universal human phenomenon, is not on its own enough to explain nationalism, which is not universal. For most of human existence, the groups that commanded loyalty and defined themselves against others were small enough for everyone to know one another. The territorial state as part of an international system is a product of European modernity, along with wage labour, the world market and the colonial empire.

Territorial states gradually undid the overlapping secular and ecclesiastical jurisdictions of medieval Christendom and replaced them with a single sovereign authority defined against other, similar sovereign authorities. These absolutist states needed to channel universal human groupishness into identities that secured their own cohesion. Modern institutions of public education were developed to try to reeducate the inhabitants of France and England (and later Italy and Germany) to be citizens of a country, in priority to all other smaller or larger loyalties. As the transformation of the Renaissance Kingdom of England into the 19th-century United Kingdom of Great Britain and Ireland suggests, this process could not occur without violence and exclusion of stubborn attachments within.

In the 19th century, liberalism and nationalism were assumed to be allies. The “self-determination” of peoples seemed to be consistent with the self-determination of individuals. The idea of liberal nationalism was that each people would get its own state, once the “artificial” borders of traditional multinational empires had been broken up. The high point of this vision was Woodrow Wilson’s Fourteen Points. While it inspired many people at the end of World War I, it was fundamentally compromised by the reality of the Versailles Treaty and the use of the language of self-determination by the Nazis in their designs on multiethnic Czechoslovakia in 1938.

The trouble is that peoples do not conveniently locate themselves exclusively within contiguous borders. As a result, a state for one people is necessarily a state defined against some of those who live within it. Moreover, since history does not end and powerful forces drive peoples to move across borders, or cause them to have different rates of demographic increase, the ethnic relationships within the territory of the state will constantly change.

This problem could be ignored as long as those outside the ethnos but within the state could be ignored. This was never an option for countries like Lebanon, Belgium or Canada where no ethnic group could really triumph, but it could work as a matter of realpolitik in countries with numerically smaller ethnic minorities.

However, one of the features of liberalism is that it encourages internal critique, as the limit of the circle of equal, autonomous persons is expanded on the demand of those left outside it. Enlightenment liberalism was simultaneously a project of white bourgeois males and one making claims based on the situation of all human beings. This contradiction can be the basis for presentist condemnations of the racism and sexism of the Enlightenment project, but its more important consequence was that it provided rhetorical space for the excluded to demand change in the ruling elite’s own terms. The demands of equal liberty made by bourgeois white men have been rejected by leftist intellectuals, but actual progressive social movements embraced these demands while insisting that the scope of equality and liberty be expanded. Once this happened for peoples whose existence does not correspond to an existing (or even possible) border, the liberal answer to empire can no longer be nation, but rather some messy multicultural federation – a federation being a democratized empire.

Canada, for example, originated as the federal union of British North America. The Victorian conception of Britishness was complicated, involving racial mythology about Anglo-Saxons, the political economy of free trade, the science of the industrial revolution and the redescription of the common law as an instrument of individual freedom. “Britishness” meant different things to George Brown and to George-Étienne Cartier. But no matter what its exact connotations, the idea that any part of the world, no matter how distant geographically from the original islands, could be made British was an unmistakably imperial idea.

However, already in 1867, it was necessary to separate state and nation to accommodate the reality of a Catholic, French population that could neither be given full authority over a particular territory nor denied a share of political power altogether. This need to accommodate was not a given. It contrasted with how the British Empire treated the Acadians conquered in Queen Anne’s War in the early 18th century and with the hopes for assimilation expressed by Lord Durham – an English radical – in his 1839 report. After Durham, however, it was clear to Baldwin and LaFontaine that Canada could only be democratic on a binational basis.

This accommodation was originally offered primarily to French Canadians and to a lesser extent English-speaking Catholics, primarily of Irish descent. But compromises were also made with the Métis with the Manitoba Act and with the Indigenous groups of the west, at the nadir of their strength, with the numbered treaties. Although these promises were disregarded by the Canadian state with the full flowering of settler colonialism, they were not forgotten by those to whom they were made.

Since the 1960s, partly in response to Quebec nationalism and partly in response to its own increased diversity, English Canada has largely abandoned any British identity in favour of a “multicultural” one. At that time, English Canada expressed a nationalism directed primarily at the United States of America, and this nationalism remained politically salient up until the free trade election of 1988. But since then, urban English Canada has identified too closely with “Blue” America to be really nationalist, while the conservative belt of rural Canada has been more fertile ground for a populist nationalism that is ethnic in a broad sense. This has caused a counterreaction in urban Canada, which has basically rendered the idea of a Canadian identity based in a peoplehood untenable.

Francophone Quebec’s initial response to secularization and modernization was a thinly disguised ethnic nationalism inspired by anticolonialism, and formulated now in terms of language rather than religion and descent. Left Quebec nationalism has obviously not disappeared, but it is no longer the beating heart of progressive Quebec. As in English Canada, it is issues of immigration and assimilation that have the most resonance with populist nationalism.

Much of Indigenous Canada has embraced an anticolonial nationalism of its own. Some have disclaimed any identification with the Canadian state at all. But for pragmatists, at least, the real objective is to be integrated into the Canadian federal structure with an alternative source of sovereignty to that of the federal and provincial governments, along with tacit or explicit acceptance that the sovereignty so claimed is one that is shared with the transethnic institutions of the Canadian state.

Canada’s situation in these respects is not simple and is the consequence of its own history. But it also echoes developments throughout the world, where the cause of liberalism and the cause of postnational states have become more closely identified. Moreover, the struggle over the nonidentification of nation and state has increasingly replaced the 20th-century struggle between labour and capital or over the amount of government redistribution as surely as that struggle displaced earlier ones about the place of the throne and the established church.

Brexit is the perfect example. Its proponents see themselves as protesting against a federal Europe displacing the sovereignty of the United Kingdom. But its main obstacle has been how it has disrupted quasi-federal arrangements within the United Kingdom itself, particularly in Northern Ireland, unfortunately the laboratory of the identity conflicts of modernity from the 17th century to the 21st.

To the extent that the separation of nation and state becomes a core liberal value, it will face a backlash, which will not disappear with better economic times. National identity fits well with basic human groupishness. It has been central to both personal identity and state formation in the West, and in the world influenced by the West, for centuries. It is therefore hard to imagine that declining ethnic majorities will abandon nationalism, and the pretense that it is nonethnic will become increasingly thin. But since the “people” as defined by the populists will never really be all the people in the state, and since those excluded will easily perceive this fact, the ethnic majoritarian coalition will inevitably give rise to a countercoalition.

Since liberalism is fundamentally defined by the idea that the state should not enforce one particularist conception of the good against dissenters, it cannot really be neutral in this conflict, which is going to define politics for the foreseeable future. Just as liberalism emerged as a pragmatic response to religious diversity, while often having to manage unwieldy coalitions of dissenters from the dominant religion, now it must do the same with ethnic identity. Contemporary liberalism’s demand must be that nation and state be separated. Its base consists of those who are threatened by uniting them. Principle and strategy leave no retreat: liberalism allows ethnic identity, but it must deny that identity state power.

The Mirage of Civic Nationalism

But is there a compromise between modern liberalism and nationalism that we can live with once ethnic nationalism is excluded? Is there a “civic nationalism” that is demanding enough to represent an alternative to the ethnic variety, while being consistent with liberal principles? A number of writers worried about the threat of nationalist populism to liberal institutions – including Yascha Mounk, Francis Fukuyama and John Judis – hope so.2 Mounk, Fukuyama and Judis are all liberals and can all see that if nationalism defines itself by claiming that membership in the state should be coincident with membership in the nation, illiberal results follow. But they intervene to ask the “left” to embrace a “civic nationalism,” arguing that without a thick sense of national identity, there will not be the will to put together projects like the social welfare state.

If “civic nationalism” means nothing more than that it is good if the citizenry identify with the state they live in as a common enterprise, and reasonable that they expect it to look out for their interests, then it is consistent with liberal principles. In this sense, though, the “civic nation” plays no greater role than the “civic province” or “civic municipality.” Public-spirited Torontonians or Manitobans expect their local or provincial governments to look out for their interests. A patriotism about a country that is similar to that felt for one’s city can certainly be a benign sentiment that no liberal would quarrel with. But patriotism is an emotion, while nationalism is an ideology. An ideology must define itself against something. So “civic nationalism” – if it is worthy of the name – must define the nation in a way that excludes some “civic” perspectives.

As Americans, Fukuyama and Judis want something like a commitment to the Declaration of Independence, the Constitution and an optimistic, entrepreneurial attitude to life as an identity substitute for blood and soil. The dilemma is that any ideological identity that is thick enough to fulfill the emotional needs met by nationalism will be as exclusionary as an ethnic identity – and even more in conflict with the liberal commitment to free debate of ideas. The phrase un-American has a nasty connotation for a reason. For all the flaws of old Europe, and for all the problems with its essentially ethnic understanding of national identity, at least the concept of an “un-Dutch” idea makes no sense.

Let us take Canadian examples of the problem with an ideological conception of national identity. In Lament For A Nation, George Grant claimed that Canadian identity depended on a less individualistic and more deferential approach to social life than prevailed in the United States.3 As a result, he wrote the Liberal tradition off as hostile to Canada – even though it had been the dominant tradition since Laurier and, in the 19th century, had led to the development of responsible government and had been part of the grand coalition leading to Confederation. Grant had to distance himself from the obviously individualistic strains in the Diefenbaker Conservatism that he was defending. Not surprisingly, since Canada has always been a pretty individualistic place, Grant had to conclude that Canadian identity was doomed before it could start.

Grant’s Lament foreshadowed numerous attempts to tie public policies about which there should be debate in democracies to national identity, about which there cannot be debate. Grant did this with federal Crown corporations, a theme that their CEOs have taken up ever since. Liberals and social democrats did it with the Canadian model of medicare and, after 1982, with a Charter and model of judicial review borrowed from the United States. Conservatives responded relatively harmlessly by tying national identity to peewee hockey and coffee-and-donut chains and less harmlessly to a more militaristic foreign policy. The liberal objection to all this is that it makes support for or opposition to certain contingent public policies matters of loyalty to the state.

It is not clear that there is actually a positive relationship between a strong sense of national identity and social welfare. Countries that have long struggled with a common national identity – like Canada and Belgium – do not seem to differ in any important way on this dimension from countries that have not, like France and the United States. If the welfare state is an efficient means of delivering social insurance – and it is – then it is not clear why it would not be enough for its citizens to recognize this. It is an empirical question, and the empirical evidence is not very strong that a specific ideological commitment is necessary for people to be public-spirited.

More fundamentally, though, civic nationalism faces the strategic and political problem of having no constituency. The resistance to Trump and Brexit, for example, comes primarily from the people who feel most excluded from their definitions of “American” or “British.” An opposing coalition must consist of people who have a wide variety of incompatible identity commitments. Negotiating such a coalition requires bracketing various commitments and promising them some space in the public policy that will result if the coalition succeeds. Liberalism is good at creating that kind of space. One possibility would be to define the common denominator of the antiethnic coalition as the “true” civic national identity. But this would just further enrage the majoritarian populists: they would not be “true” citizens of their own country! It is better to recognize the inherent asymmetry in the two contending coalitions.

The Intersectional Liberal Federalism of George-Étienne Cartier

We cannot put the question off any longer. If neither a single ethnic identity nor a single political identity for a state is compatible with liberalism, then how does liberalism learn to live with groupishness? Are we stuck with the pessimistic conclusion that the principles of John Stuart Mill and Benjamin Constant are for a species with a different evolutionary history from our own, possibly descended from solitary gibbons? As Edward O. Wilson, one of the world’s experts on ants, said of communism, “Great idea, wrong species.”

One response is that no one ever said it would be easy. Liberalism, like socialism, has a tendency to think of its success as guaranteed by history, so that when inevitability is put in doubt, the alternative seems to be despair. A more realistic approach would be to keep normative commitments separate from short-term success or disappointment.

Still, liberals do need a strategy. An alternative might start with the observation, banal on the cultural left, that identities are “intersectional.” This much-mocked word contains two insights that are useful and undeniable. The first is that every person is subject to multiple particularist loyalties and experiences: we are not just women/men or Canadians/Americans, but Canadian women/American women/Canadian men/American men – and so on, in exponentially more specific intersections of these sets. The second is that the experience of being part of the same group differs depending on the other groups to which one belongs: African-American women differ from African-American men not only in their gender identity but also in how they experience their racial identity. This example can be generalized indefinitely. Intersectionality is, in this sense, an undeniable fact.

And it is a problem for nationalism of any kind. That is because nationalism needs to elevate one identity cleavage to supreme importance while diminishing all the others. For a consistent nationalist, one must be an American or a Pole, but not a Polish-American. At minimum, such fractures are threatening to the national identity and to the idea of one people. From liberalism’s perspective, however, this is good news. Its enemies have a problem with the species they belong to as well, since, in fact, people do not spontaneously keep to a single identity.

But liberals have generally been suspicious of intersectionality. One problem is that those employing intersectional vocabulary tend to confuse oppression with virtue. They will often treat every identity distinction as a vertical one of oppressor and oppressed, and never a horizontal one of groups that must share a common space. Moreover, they will explicitly say that the oppressor can never judge – or even understand – the claims of the oppressed. If taken to the extreme, this would mean that differences could never be justly resolved or even effectively negotiated. Liberals have always differed with radicals in that they doubt that a politics in which the perspective of the “oppressor” can be ignored entirely would be either just as a moral matter or likely to succeed as a prudential matter.

But many liberals who object to intersectionality fail to recognize that an acknowledgement that everyone’s identity is complicated constitutes the best argument against radicalism. Very few people would fit into all the “oppressor” boxes, and even fewer would be “oppressed” in every respect. Even at the individual level, everyone has to find more or less principled compromises. An intersectional radical cannot pretend that there will be a single revolutionary subject, like Marx’s proletariat.

Another problem for liberals is the worry that a focus on intersectionality will lead to despair about the possibility of communication and collective action. If we cannot talk across identity categories, or if statements must simply be accepted, then a virtually infinite proliferation of such categories would create a hyperindividualized nightmare of noncommunication. On this point, it is precisely the liberal tradition, which has long been focused on the problems of common governance across divides of commitment to comprehensive worldviews, that has the resources to be useful to those concerned with the intersectional nature of identity.

Canadians should be more familiar than they are with George-Étienne Cartier’s 1866 speech in favour of Confederation, in which he called for a new “political nationality” with which neither the “national origin” nor the “religion of any individual” would compete. In Cartier’s vision, this political nationality would be shared by people of all parties and was not intended to replace ethnic or religious loyalties. In Cartier’s exposition on the new federal scheme, the political nationality of being a Canadian would serve those interests where religious, linguistic or ethnic identity was irrelevant. Cartier accepted that English-speaking Protestant Upper Canadians, French Catholics, Irishmen and Maritimers would all need to be represented in the councils of the political nation; were he alive today, he would no doubt modify his list to include women, Indigenous people and visible minorities.

Cartier’s concept of a political nationality should be distinguished from a civic nationalism dependent on allegiance to a substantive political ideology. Just as ethnic, linguistic and religious nations would meet in the institutions of the new political nation to hammer out their differences, so too would ideological groups. No doubt Cartier’s approach presupposed that anyone engaging in sedition against the state order would be suppressed. But it did not require any greater commitment than a willingness to work with the institutions as they existed.

One advantage the Confederation generation had over us today is that the socially and economically dominant English-speaking Protestants of Canada West were able to conceive of themselves not only as the true British North Americans but also as a section within British North America. As such, they advanced their interests through their representatives in a framework that implicitly accepted that others would also advance their interests. This did not prevent various identity panics on the part of this group, from the Riel rebellion through the Manitoba Schools controversy to the Conscription Crisis in the First World War. But through all of this, a framework remained in which Protestant Ontarians participated as one (loud) voice among many.

By contrast, neither left nor right is comfortable viewing the declining demographic “majorities” of the West today as one identity group among many – with legitimate interests, but also with an obligation to compromise those interests with the interests of others. For right populists, these groups just are “the people” and their identity demands are the demands of the nation as such. “Race” or “ethnicity” is something that only the Other has, which implies that the majority is raceless and without ethnicity. If challenged, the declining majority identity points to its acceptance of the principle of colour-blindness in law and its openness to the support of members of minorities willing to assimilate unreservedly to the majority.

The left sees through this and is understandably reluctant to acknowledge the legitimacy of a majoritarian identity politics. However, the left then goes on to insist that the majority just accept the moral untenability of its own identity as the corollary of accepting that its identity is just one among many. Instead of just making a principled demand for the separation of nation from state, the left in effect asks one people to cease to exist altogether.

The challenge is how to turn declining majorities into participants in multicultural compromise. Only on the racist and fascist far right is the contradiction resolved in favour of an explicit advocacy of “white” interests, but of course any use of multicultural language from this corner is just cover. This problem is insurmountable so long as the majority ethnicity defines itself as “white,” or as “French,” “Dutch,” etc. The political entrepreneurs who seek to redefine majoritarian concerns in terms that speak to identity will probably continue to claim to speak for “the” people, as Trump and the Brexiteers do. From a liberal perspective, this is no better than an explicitly racist appeal, because it dissolves the universal into the particularities of one group.

While denying the concept of human nature, communism did speak to some perennial human characteristics: a longing for collective action, a dislike of hierarchies of rank and status. Its failure was its inability to integrate these aspects of human nature with others. Nationalist populism speaks to the need to be part of a group, and the need for that group to be “one,” but it suffers from the reality that our identities are multidimensional. On the left, this has been understood as “intersectionality,” but this insight has suffered from the left’s lack of attention to institutional realism. Liberals should give up on trying to renew their tradition by going back to the well of a single national identity and instead embrace the multiplicity they are best placed to reconcile with social order.


Liberalism, broadly understood, is on the defensive. As political scientist Larry Diamond has pointed out, while the number of liberal democracies increased from the early 1970s to the turn of the millennium, since then we have been in a “democratic recession” with global measures of freedom – understood in a liberal sense – in decline.

Twenty years ago, economic determinism seemed to be on liberalism’s side. When the 20th century ended, it seemed that free markets, political democracy and a liberal version of the rule of law were the secret of economic success. It was widely thought that this had been demonstrated by the collapse of the Soviet bloc and by the success of the newly democratic east Asian tigers like South Korea and Taiwan. But today, the continued economic rise of the People’s Republic of China and the apparent stability of its one-party system of “socialism with Chinese characteristics” have made that claim pretty hard to sustain.

At the beginning of the new millennium, the conventional wisdom was that new information and communications technology would empower people in authoritarian countries to overthrow tyrants while deepening democracy at home. While there are some examples that have vindicated this hope, few people using Facebook or Twitter today feel these are unmitigated blessings. The reversal of democratic advance in the developing world, the success of Vladimir Putin’s Russia in pushing public opinion in Europe and the United States toward a nationalist right or antimarket left, and the deepening epistemic closure of the various political tribes in the rich countries make any technologically determinist optimism increasingly implausible.

In the 1990s, it seemed as if freer movement of goods, people and capital did not even have to be argued for. It was inevitable. The idea that “globalization” was an irresistible force was shared by those who favoured it and those who trashed downtown Seattle to protest it. But since September 11, 2001, borders have become harder and religious and civilizational identities sharper. And since October 2008, the faith that markets should be left alone to increase wealth has been shaken, leading both to a healthy rethinking of global finance and a revival of mercantilist ideas that in trade one nation can win only if another loses. Never mind that Adam Smith and David Ricardo showed two centuries or more ago that voluntary transactions usually leave both parties better off. In these populist times, who is going to listen to dead white males who were also globalist elites?

The ideological tendencies that have been the pillars of the Western liberal consensus since the Second World War – social democracy and Christian democracy – appeared perfectly healthy when the world woke up to find out that the Y2K panic was overblown. Today, both are in electoral decline, losing ground to populist nationalists on the right, hard-line Marxists on the left and idiosyncratic personality cults in the “centre.” Broadening our perspective to democracies in the global South complicates the picture, but also provides reasons for disquiet. I write shortly after the first round of the Brazilian presidential election, in which Jair Bolsonaro – a right populist long considered marginal in the political scene – obtained 46 per cent of the vote (Bolsonaro was elected President in the October 28 runoff).

Not long ago, the English-speaking world seemed different. Granted, it had been through some unwinnable wars and a financial crisis. But anglophone elites could smugly reassure themselves that a foundational liberal consensus, spanning the electable left and the electable right, would not be seriously threatened in the lands of John Locke, Thomas Jefferson and John Stuart Mill. But then came Brexit – and Trump.

To be sure, there is nothing intrinsically illiberal about leaving the European Union. Some Brexiteers argued that a fully sovereign Britain could recapitulate the liberal Little England dreams of Richard Cobden and John Bright by developing its own tradition of rights protection and entering into freer trade relations with the world. But polling evidence suggests that few Leave supporters are interested in a more open Britain, as opposed to preserving what they see as its historic identity. Moreover, leaving Europe has complicated the greatest liberal achievement of the Tony Blair years: finding a political accommodation for the contending Unionist and Nationalist identities in Northern Ireland. While electoral politics in the eras of John Major, Blair and David Cameron were dominated by a broadly liberal consensus around a civic definition of national identity and support of markets mitigated by social insurance, the major parties in Brexit-era Britain are dominated by a nationalist and nostalgic right and a left that is profoundly suspicious of business, markets and the institutions of the liberal international order.

As for Trump, as I write (in October), it seems unlikely that his populist nationalism will radically change America’s institutions. While he has made immigration enforcement nastier, for the most part he has left policy to conventional congressional Republicans who favour lower taxes and less regulation. But Trump clearly has transformed the rhetoric of the American right in a way that does not seem obviously reversible. Ronald Reagan and the Bushes rhetorically embraced the conservative conception of America as an idea – one of democratic politics, personal freedom and free markets. Trump instinctively rejects this bourgeois-liberal view of human nature, and his emotional connection to the Republican base (which includes most politically active white Christian Americans) shows, to my mind, that they instinctively reject it as well. Trump has consistently refused to claim that America is, or should aspire to be, morally superior. Trump values America solely because it is his, and he identifies its interests with his own. While all American presidents have failed to live up to liberal democratic ideals, he is the first in living memory to reject them.

The relationship between Trump’s Twitter stream and actual public policy is unclear. What is obvious is that he can, to the approval of approximately 40 per cent of the American electorate, deliberately dehumanize ethnic and religious groups and rage against norms constitutive of American liberal democracy, such as the independence of criminal prosecution from partisan politics. It is hard not to worry about how far a more disciplined leader of the same authoritarian coalition might get in future.

Mounk: Sensible proposals from the centre-left

Yascha Mounk, The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It. Cambridge, MA: Harvard University Press, 2018. 400 pages.

The decline of support for the institutions of liberal democracy is not confined to a single country or a single age group. A number of depressing statistics are laid out in gory detail in Yascha Mounk’s The People vs. Democracy. Across western Europe and North America, trust in democratic institutions has been declining since the 1950s and is now at all-time lows. Each age cohort is less committed to democracy than the previous one: while 71 per cent of Americans born in the 1930s told pollsters it is “essential” to live in a democracy, only 29 per cent of those born in the 1980s gave the same answer. Similar results can be shown in every wealthy democracy, including Canada. More people support military rule (16 per cent of Americans in 2011) and a “strong leader who does not have to bother with elections” (32 per cent) than ever before. While older voters are more likely to support democracy in the abstract, they are also more likely to express racial resentment and, at least in the United Kingdom and the United States, to vote for right populists like Donald Trump.

The democratic recession is now undeniable and can no longer be dismissed as a blip. It requires rethinking the certainties of the 1990s. Rethinking is something liberals are good at. For the liberal intelligentsia – very much including those on the right primarily motivated by free market economics and keeping the liberal world order secure – the Trump election in particular has finally destroyed whatever complacency survived 9/11 and the 2008 financial crisis. Naturally, a demand for “big think” books to tell us what this all means has never been greater, and a supply has followed.

Mounk’s contribution to this literature approaches the problem from the antipopulist centre-left. His analysis begins by reminding us of the tension between democracy (a system of majority rule) and liberalism (a system of limitations on government). To be sure, some limits on what governments may do in repressing opposition and competitive sources of power are necessary for democracy to continue. But there is no guarantee that the majority will want these, or any other, liberal guarantees.

Mounk grants that liberalism can restrict democracy in questionable ways. Institutionally, judicial review, independent central banks, trade and investment treaties and other international institutions – most dramatically, the European Union – have reduced the choice set of elected politicians compared with the postwar era. These golden handcuffs were put in place out of a legitimate fear of illiberal demagogues. But such unaccountable institutions foster a sense of learned helplessness in the public. If all the important decisions are going to be made by the central bank or constitutional court or in Brussels, how much does a vote matter anyway?

While democracy in its most minimal sense merely requires that there be reasonably competitive elections in which the people can freely choose among contending elites to govern them, as an ideal it aims at equality of political influence. But in real democracies, it is the concerns of those with access to wealth, status and education that most sway public policy. While globalization has vastly increased the incomes of people in poor countries since 1980, it has also brought greater inequality of wealth and income to the rich world, especially its English-speaking sector. As a result, the rich countries have become less substantively democratic. Mounk notes all of this and, as a social democrat, he has proposals to improve the well-being of the population in the bottom half of the income distribution.

Mounk notes the risk of liberal institutions restricting democracy but, in light of his diagnosis of the dangers of populism, he is not willing to support reversing this. It is the danger of illiberal democracy that lies at the heart of his analysis. Although Mounk occasionally mentions the increasing strength of the pro-Russia left, he (correctly, in my view) focuses on the populist anti-immigrant right.

Like the far left, this tendency is generally supportive of (and reportedly assisted by) Vladimir Putin’s Russia. It defines “the people” in ethnic terms and is hostile both to cosmopolitan elites and to immigrants as outsiders. In Hungary and Poland, the populist right has taken power and has “deconsolidated democracy.” Mounk sees similarities in these European developments to those in Turkey, Russia and India. It remains to be seen how far things go in Italy. For Mounk, as for Steve Bannon and (in some moods) Trump himself, the 2016 U.S. presidential election was the first step in a similar deconsolidation in America itself.

Mounk’s comparative approach is welcome context for North Americans marinated in the latest newsflash about the Trump administration, but with no comparable connection to events in Europe, let alone Turkey and India. There is no doubt that the European populist right and the Trump wing of the Republican Party have inspired each other.

At the same time, like any comparative enterprise, Mounk’s runs the risk of throwing together very different national situations. It is possible to argue that “authoritarianism” is a single thing that either “will happen here” or will not. But liberalism and democracy are both things we can have more or less of. Digging into Mounk’s discussion of what is happening in individual countries, it becomes clear that except for places like North Korea, authoritarianism, populism, democracy, corruption and even liberalism are not all or nothing. This seems to be true in North America as well. Antiterrorism panics have made us more illiberal in some ways. But victories by minorities have made us more liberal in others. Is America under Trump really less liberal than under McCarthy and Jim Crow? Or even than it was in the immediate aftermath of 9/11?

Quite properly, Mounk will not be diverted by any easy optimism, even one based on reminding people how bad the good old days really were. He can show that support for democracy has been declining everywhere in the West as memories of fascism and even Communism fade. He attributes the problem to social media, economic stagnation and “identity” – by which he means a feeling of threat and resentment among ethnic-racial majorities against outsiders. I find the simplest explanation best: the rise of the populist right is the result of identity threat. While social media make the situation more visible, I doubt they have independent causal force. And Mounk’s own data show that there is no real link between economic prosperity and support for the populist right. The populist right is strongest in countries like Hungary and Poland that have had the most economic growth. In general, it has been steadily increasing in strength, and the 2008 financial crisis does not seem to have made a particular difference. By contrast, Angela Merkel’s decision in 2015 to suspend enforcement of the EU’s Dublin policy and stop returning asylum seekers to their first port of entry in the EU in the wake of the Syrian refugee crisis appears to have been a turning point.

We might as well face the reality that ethnic identity is a powerful political motivator, and that the populist right can most convincingly tell threatened or resentful ethnic majorities that it will fight for them. Greater inclusion for other citizens of the country is thus coded as a threat. Liberals can and should try to frame greater inclusion as a net benefit to everyone, and viewed from the perspective of economic interest, this is largely true. Social democrats should put forward proposals about how to minimize economic disparity. But the very fact that societies with all levels of redistributive institutions and economic growth are facing the same phenomenon shows that economic solutions are not enough.

Mounk proposes a very sensible set of principles for centre-left reform of the welfare state, including coordinating taxation of the internationally mobile, ensuring that people who own property in a country pay taxes there, increasing housing supply and decoupling benefits from work. Western politics would be improved to the extent that we focused on these issues. But the real problem is still how to defang the identity threat felt by ethnic majorities. Mounk supports reinvigorating civic nationalism, in recognition that nation-states remain the locus of democratic decision-making.

If this civic nationalism is of the kind that public-spirited people feel for their cities and towns or subnational units, then I am all for it. But this is too weak a brew for genuine nationalists. I am sceptical that a “creedal nationalism” of the kind found in the United States, Canada and Australia is really much better than an ethnic nationalism. The problem with identifying citizenship with a set of beliefs is made obvious by the phrase “un-American” beloved of Joe McCarthy or, less seriously, the tendency of the Liberal Party of Canada to identify its own shibboleths with being Canadian. For all the problems with national identity in Europe, at least you cannot imagine someone getting hounded out of a job for “un-Dutch” opinions.

Of course people will always have particularistic loyalties broader than their families and smaller than the human species as a whole. But I see no reason that liberals should concede that these loyalties have to be targeted on one single entity. Liberals can support various kinds of federal approaches to inevitable identity conflicts, while preaching the truth of the cosmopolitan insight that a person’s moral worth does not depend on where they are born. That truth may not be popular, but showing the courage of your convictions – as Mounk urges liberals to do – means taking lonely stands.

Despite Mounk’s half-hearted support for civic nationalism (and if it could be quarter-hearted, I might even go along!), I consider his book an excellent guide to our current perilous state. Well-written, factual and with sensible proposals for orienting the resistance to right populism, it should be on the secular wintertime gift list for anyone with a liberal cosmopolitan in their life open to big-picture rethinking.

Goldberg: The battle for the true meaning of conservatism

Jonah Goldberg, Suicide of the West: How the Rebirth of Tribalism, Populism, Nationalism and Identity Politics is Destroying American Democracy. New York: Crown Forum, 2018. 464 pages.

Sadly, I cannot say the same for Jonah Goldberg’s Suicide of the West. My inability to endorse the book is sad because Goldberg, despite becoming wealthy and well known as a happy warrior for the American right (his Liberal Fascism: The Secret History of the American Left from Mussolini to the Politics of Change was a bestseller), has been politically orphaned by the Trump phenomenon. He acknowledges that Trump’s rise in the Republican Party demonstrates that he got it wrong. The American right, he now realizes, is not currently a coalition held together by a commitment to free markets, family values, a vision of foreign policy or a strict reading of the U.S. Constitution. Rather, its glue consists in the identity politics grievances of white Christians (in an ethnic, if not doctrinal, sense). This sense of grievance and the way it is expressed has obvious analogies to the left-wing “identity politics” activists that Goldberg targeted, but without the justification of any genuine history of oppression or subordination.

In the end, Goldberg finds that this right-wing identity politics is an understandable-if-regrettable response to the excesses of the left. I find him unconvincing, but I do not rule out the possibility that a well-presented argument for this thesis by a conservative writer could make for an interesting book. Unfortunately, Suicide of the West is too unfocused to do the job. Goldberg likes serious ideas and discusses various theories of the origins of the Industrial Revolution and the Enlightenment, the thought of Rousseau and the influence of Romanticism on Hollywood, with a long detour on Woodrow Wilson and the origins of the American administrative state.

Unfortunately, Goldberg is obviously out of his depth and should have focused on the postwar American conservative movement that he knows extremely well. He starts by saying God does not appear in the book, but he immediately attributes providential qualities to what he calls “the Miracle,” a combination of the Scientific Revolution, English common law, laissez-faire capitalism and the American constitution as it was before Woodrow Wilson and Franklin Delano Roosevelt ruined it. Rousseau (although clearly a leading figure in the Enlightenment) and Romanticism-influenced Hollywood (although clearly a major part of what made American capitalism great) are the bad guys. The British Empire and pre-Roosevelt America had their faults (slavery and the dispossession of Indigenous people), but according to Goldberg these were inessential. What was essential were the benefits of longer life expectancies and greater personal freedoms that we enjoy today but are on the verge of losing if we do not show enough gratitude for the Enlightenment and eschew Romanticism and all its works – which include the aforementioned Woodrow Wilson, gender studies and Donald Trump.

Goldberg is unable to say how the combination of economic and technological progress and liberal values got started in western Europe in the first place. Fair enough: experts argue about this and no one really can say. But by contrast, he is sure why they are threatened: their beneficiaries are not “grateful” enough for what they have brought to us.

The European Enlightenment and the United States of America are, like all human things, a mixed bag. They are not a “choice” and they are not going to commit suicide merely because people are not reverential enough towards them. Goldberg is aware of the paradox that the greatest critics of the institutional legacies of the Enlightenment and liberalism are the ones who have most thoroughly accepted its demand that authority be justified in light of the equal freedom of all. But he fails to see that this paradox cuts in both directions.

Colonialism, slavery and racism were just as essential to the Enlightenment and the U.S. Constitution as science and rights. As Orlando Patterson has argued, ideas of freedom and redemption have always been understood in terms of slavery and manumission. Or as Dr. Johnson put it, “How is it that we hear the loudest yelps for liberty among the drivers of negroes?” When setting out his thesis that the identity politics of the populist right is a response to the left, Goldberg never even considers the obvious progressive rejoinder that the identity politics of racial minorities was a response to the identity politics of white (originally, white Protestant) America.

To be sure, there is a sense in which the abolition of slavery was the “truth” of the Declaration of Independence. But that sense is a retrospective sense, made possible by the clash of the Civil War and the rhetoric of Lincoln. This was not a cheap truth, but Lincoln recognized that if the price were “all the wealth piled by the bondsman’s two hundred and fifty years of unrequited toil” and “every drop of blood drawn with the lash … paid by another drawn with the sword” the redemption would still be providential. This was identity politics with a vengeance, and out of it a genuine idea of freedom was born, and then betrayed with the defeat of Reconstruction. The civil rights movement of the 1960s was yet another attempt at redemption – and Goldberg’s movement, in its modern Goldwater-Reagan form, regained the majority status it had lost with the New Deal by opposing this attempt.

In his battle with the Trumpites over the true meaning of the Goldwater-Reagan movement, Goldberg should also realize that this truth is also defined rhetorically and retrospectively. Goldwater and Reagan won over the base of Dixiecrats and George Wallace followers to a vision of creedal nationalism compatible with de jure racial equality and more universal values. Many on the left have failed to see the moral progress implicit in this transformation, a moral progress exemplified by the room that the coalition now makes for black conservatives. At the same time, many on the more ideological and cosmopolitan right have been in denial about the historic roots of their coalition or the attitudes of its Trumpite followers about social insurance and globalization.

The ideological right was blindsided by the attraction felt by the “base” for a protectionist former Democrat with zero interest in conventional virtues, the Atlantic Alliance or free market orthodoxy. Trump presented as a tough guy who would fight for “real” Americans against foreigners and “unreal” Americans. The conservative intelligentsia exemplified by Goldberg were surprised by the true feelings of their own movement in a way that the most knee-jerk American progressive was not. In effect, the National Review crowd made the same mistake about the “base” that they made in invading Iraq: because they think of themselves as people who put ideas above identity, they assumed others would as well.

Those who feel left behind by progressive changes need representation too. It was inevitable that this group would sooner or later rebel against being voting cattle for a project of ultramarketization that was never the reason they joined in the first place. If we agree that all politics is identity politics of some kind, the problem becomes how to represent their interests in a civilized way, make appropriate compromises and bring home the bacon. Some of Goldberg’s colleagues, such as Ross Douthat, Reihan Salam and Yuval Levin, have started down this road. But Goldberg has not. He is correct that liberalism, in its left and right forms, is a creed that cuts against the tribal aspects of human nature. But it should not ignore them.

Fukuyama: Equal or superior recognition?

Francis Fukuyama, Identity: The Demand for Dignity and the Politics of Resentment. New York: Farrar, Straus and Giroux, 2018. 240 pages.

Standing between Goldberg and Mounk is Francis Fukuyama, whose Identity: The Demand for Dignity and the Politics of Resentment ploughs much of the same ground. A former neoconservative who remains critical of the American left for failing to connect, Fukuyama gets pride of place when it comes to “rethinking,” if for no other reason than that his 1989 article “The End of History?” (and his 1992 book The End of History and the Last Man) crystallized the post–Cold War liberal optimism now being rethought.

To be sure, The End of History was not the triumphalistic book it has been caricatured as being since it first came out. When Fukuyama referred to the End of History, he was not claiming that, after the collapse of the Soviet bloc – which occurred after the publication of the article and before the book – there would be no more events. Fukuyama was using the word history in a distinctive sense that owed its meaning to the early-19th-century German philosopher Georg Hegel, as interpreted by the mid-20th-century Russo-French philosopher and framer of the European Union, Alexandre Kojève.

For Hegel, as understood by Kojève, “history” as a coherent narrative can be contrasted with a more or less random sequence of events to the extent it is a development of ideas of freedom. Hegel saw in the aftermath of the French Revolution the generalization of the idea that everyone is a rights-bearing free subject. State authority can no longer be justified as the natural right of the strong to rule, but as rationally justifiable in light of this equal freedom. For Hegel – at least as Kojève told it – once the powers of the world gave even lip service to this idea, history was over. It did not matter that the rise of America and Russia and the world wars – all of which Hegel predicted – lay ahead. The abolition of slavery, universal suffrage, the rise of the labour movement, women’s legal equality, the end of the traditional European imperial dynasties and the fall of colonialism were all – from this Olympian perspective – details about how to work out this revolutionary idea and therefore did not count as history at all.

As the Soviet bloc fell, Fukuyama argued that this showed Hegel had been right after all. Although Fukuyama has often been interpreted as saying 1989 represented the end of history, he was in fact suggesting that it showed Hegel was right when he said that 1806 had that paradoxical result. Marx’s claim that history would end after the revolution was no longer believable. There was no longer an appealing potentially universal idea that could compete with liberal democracy in the sense of a mixed economy with guaranteed rights for individuals and competitive elections based on universal suffrage. Fukuyama did not dispute that there were many particularistic ideas that would continue – loyalty to family, clan, sect or nation among them. But his assumption at that time was that particularistic ideas were ultimately no match for universalistic ones, so if Marxism was no longer a competitor with liberalism in that space, history had indeed ended along with the Holy Roman Empire.

While Fukuyama thought that a modern economy required price signals and therefore some play for market forces, he was not making the argument beloved by the Economist magazine of the era that liberal capitalism would sweep all before it as a result of economic forces. He noted that authoritarian development in Asia (including China under Deng Xiaoping) was perfectly consistent with rapid technological development and some degree of market mechanisms. Fukuyama followed Hegel in believing that history is not primarily about economic forces, but about struggles for recognition, dignity or status (what he called thymos). Fukuyama noted that the demand for recognition can be either the demand of the subordinate for equal recognition (isothymia) or of the dominant or would-be dominant for superior recognition (megalothymia). He thought that both are deeply rooted in human nature. The advances of democracy, from the overthrow of the remnants of European fascism in Greece, Spain and Portugal in the mid-1970s through the realization of democracy in South Korea and Taiwan in the 1980s to the dramatic revolutions of eastern Europe in 1989 and the collapse of apartheid in South Africa in the 1990s, were expressions of the demand for “isothymia,” equal recognition, a demand that had taken liberal ideological form since the time of the American and French revolutions.

Marxism shared the basic Hegelian belief that it was possible to make sense of history, and that its coherent narrative could be explained in terms of the expansion of freedom through the struggle of the “slave” to obtain equal recognition. By contrast, after the mindless carnage of the First World War, most non-Marxist intellectuals became persuaded that the idea of any meaning to history at all was merely a secularized version of Christianity. Fukuyama had a point in noting that the spread of revolutionary ideas across the Soviet bloc in a short time suggested this was too quick. Ideas of global scope could still give history a meaning, if only after the fact and from the perspective of the present.

In addition to the criticism that Hegel-style history found too much sense in what he himself knew was a slaughterhouse with no one in charge, another criticism was that it was crudely Eurocentric. There is no denying the truth of this criticism as applied to Hegel himself: he dismissed Africa and pre-Columbian America outright and saw the civilizations of China and India as simply preparations for Greece and Rome. Hegel was the product of his time, of course. Paradoxically (or dialectically), the demand that institutions reflect isothymia unleashed by the European Enlightenment has been turned against the exclusions of the Enlightenment itself. John Locke, the avatar of the equal natural rights of all and toleration of religion, was an apologist for slavery and dispossession of Indigenous people. Immanuel Kant, author of “Idea for a Universal History from a Cosmopolitan Point of View,” was also the author of several tracts of “scientific” racism. From a postcolonial perspective, this was not just a matter of the failings of particular individuals. Rather, the scientific and liberal intellectual achievements of the Enlightenment both enabled and were rooted in European domination of the rest of the world.

For the End of History–era Fukuyama, it was sufficient to respond to these criticisms by noting that they were phrased in an ideological vocabulary rooted in the demand for isothymia that was itself a product of the Eurocentric process of history Hegel had described. “Eurocentric” can only be felt as a criticism if equal recognition is accepted as an ideal. This in itself demonstrated the universal implications of the specific and interrelated historic developments of Western science, economics and politics. While ideas that originated in the West could be turned against Western domination, in doing so the critics were acknowledging that their ultimate aspiration remained “getting to Denmark.” As non-European societies absorbed an increasing proportion of this aspiration, the contradiction inherent in the European origins of the ideal of equal recognition and its cosmopolitan implications would fade. Cultural relativism, like the Marxist state after the revolution, would wither away.

In 1989, Fukuyama was writing against a tradition of deep intellectual pessimism about the prospects for bourgeois liberal democracy dating back to the First World War. His claim that a demand for equal recognition is rooted deeply in human nature and that this demand’s only sustainable ideological expression was liberal clearly had comparatively optimistic implications. But The End of History also contained considerable discussion of what he considered the weaknesses and instabilities of the “post-historical” order. Some of these discussions seem prescient now. For example, Fukuyama thought that, if economic growth in America and Europe faltered and the West’s cultural cohesion, viewed from East Asia, continued to disintegrate, the relatively recent embrace of liberal democracy by economically successful east Asian countries might give way to a new deferential authoritarianism legitimizing itself on the basis of Confucian values. Fukuyama also saw the threat that refugee and migrant crises from the “still historical” worlds of the Middle East and Africa posed to “post-historical” Europe. He could see that the desire of the European public to keep migrants out would be in tension with the principle of equal recognition, and that Europe had no good answer to this dilemma.

Fukuyama was most troubled by the prospect that a liberal “post-historical” order could not tame the innate human desire for more status than others. In the late 19th century, Friedrich Nietzsche ridiculed the well-behaved product of egalitarian liberalism as the “last man” and instead celebrated the “over man” (superman) who would not shy away from explicitly trying to dominate other people. In Fukuyama’s view, it was this critique of 19th-century bourgeois society – and not the Marxist one – which led to the near-destruction of liberalism in the trenches of the First World War and in the rise of fascism in its aftermath. For Fukuyama, the danger to liberalism is not material deprivation, but boredom and the lack of an outlet for the domineering aspect of human nature.

The solution, if there is one, is to channel these drives away from politics and violence and toward making money or pastimes like extreme sports. It is in this context that Fukuyama discussed Donald Trump in The End of History. At that time, Trump was a metonym for esthetically vulgar capitalism. Fukuyama adopted John Maynard Keynes’s attitude that it was “better that a man tyrannise over his bank balance than his fellow citizens.” In other words, one of the advantages of capitalism is that it allowed instincts of domination the relatively harmless outlet of commercial success and consumerist one-upmanship. Fukuyama still worried that this would not be enough, and that the drive for megalothymia would lead to wars and an internal revolt against the constraints of bourgeois liberalism from those conceiving themselves as the strong.

In Identity, Fukuyama returns to the themes of The End of History now that the Donald is no longer content to tyrannize over his – possibly exaggerated – bank balance. Canvassing the intervening decades, Fukuyama makes a convincing argument that the demand for equal recognition continues to lead people to push up against existing structures of authority, as with the 2011 Arab Spring. While these revolts often do not lead in a liberal direction, there is still no coherent alternative universal idea to compete with liberalism, broadly understood. Fukuyama acknowledges that in emphasizing liberalism’s advantages over its universalistic rival, communism, he understated the appeal of particularistic alternatives. He continues to take the approach of viewing intellectual history as primary with politics ultimately being about the working through of ideas whose expression is most developed by philosophers and other intellectuals.

One story Fukuyama adds to Hegel’s account of how the primordial conflict between master and slave ultimately leads to the ideal of universal recognition of equal freedom owes a lot to Canadian philosopher Charles Taylor. This story describes how our contemporary idea of personal identity came to be. Premodern societies simply assumed that it was the job of the individual to conform to social norms and that failure to do this was obviously evidence of bad character. This was first challenged by Martin Luther and the Protestant Reformation, which introduced the idea that God worked through the individual conscience and that if a properly inspired individual was in conflict with the society, then society should change, not the inspired conscience. (Arguably, Fukuyama is oversimplifying here by identifying premodernity with Aristotle and Confucius, while ignoring counterexamples like the Hebrew prophetic tradition, the cynics or Jain and Taoist sages.) Rousseau secularized Luther into the idea, made familiar by modern popular culture, that everyone has a natural self that is repressed by society. After Freud, this discovery of the true self became conceived as a therapeutic process and this idea of therapeutic self-actualization has either replaced religion or has restructured it (as with many versions of evangelical Christianity or modernized Buddhism, both of which tend to use therapeutic idioms).

On Fukuyama’s current analysis, “identity politics” in the modern sense combines the struggle for isothymia with the therapeutic discovery of the true self as suppressed by society. To the extent that this is a claim for equal recognition, it fits within the Hegelian story and a liberal society simply needs to widen the circle of recognition to include new ways of being. The political problem, according to Fukuyama, is that the connection to deeply personal issues of psychological well-being makes it difficult to engage in the kind of compromises that are the key to democratic politics. In this respect, he contrasts these issues with the economic issues of redistribution and class relations that dominate “materialist” politics.

I do not doubt that this analysis is useful in understanding feminism and the liberation movements of gender and sexual minorities, as well as phenomena on the right like the prosperity gospel and Jordan Peterson’s Jungian self-help retelling of biblical stories. But I do not think it is actually useful in understanding the forces that have given rise to the democratic recession. The “identity politics” that has mattered is the traditional one of ethnic differences drawn around racial, linguistic and religious lines. It is the last of these, religion, that is fuelling the rise of the populist right in Europe and America. Indeed, in Europe especially, but also in North America, the anti-Muslim right will use the rhetoric of progressive expressive individualism as an ethnic marker between enlightened native Europeans and foreign invaders. It is difficult to argue that there is anything postmodern, or even post-Reformation, about ethnic politics. Human tribalism is as old as humanity, and managing it is something democracies have always had to do and something they have often failed at.

While there are obviously conflicts about abortion, gay rights and transgender washrooms in Trump’s America, it seems to me that these sorts of questions are manifestly not threatening democracy and are less salient than they were when Reagan was President. What is new since then is the fear of whites with an ethnic Christian identity that they are becoming a minority in America. In 2000, George W. Bush tried to reach Hispanic and Muslim voters on a shared social conservatism. Trump represents the abandonment of that strategy, and his overwhelming popularity among white evangelicals demonstrates that ethnic identity “trumps” any allegiance to the sexual morality of traditional religion.

Fukuyama acknowledges the legitimacy of demands for equal recognition by historically marginalized ethnic groups and the need to address their grievances (most saliently in North America, the overcriminalization of young black, Hispanic and Indigenous men). But he says redressing these grievances should take place within the context of a shared civic national identity and agreement that immigrants should assimilate to the norms of liberal democracy. While he will no doubt get grief from some campus activists for this, I frankly do not see any politically significant group in the relevant communities that would disagree with him. Certainly, Barack Obama had no trouble articulating an aspirational postracial American identity to be united by civic morality.

The trouble is that it is precisely this settlement that is threatening to the traditional ethnic majority. The further trouble is that while an objective observer might consider “red” Americans’ identity politics an exercise of megalothymia, they themselves would view it as a demand for isothymia (not in those terms, of course). Just as Protestant America saw itself becoming a minority in the 1920s and reacted by reviving the Klan and shutting down immigration, the broader (but still exclusive) white Christian ethnic identity forged after the Second World War also sees itself as losing equal recognition, regardless of whether this is true. Unfortunately, there are no neutral adjudicators in the struggle for recognition. Even more unfortunately, in Trump, ethnic majoritarian identity politics found a man whose genius is in combining the threatening dominance of Hegel’s master with the sullen resentment of the slave.

In other words, the problem is not that identity politics is inherently any more resistant to compromise than economic issues. The problem is that the political system has not developed a civilized form of ethnic brokerage politics that both includes traditional white ethnic majorities and requires them to see themselves as merely one interest among many. This is the problem that the identity politics left has labelled the problem of “white privilege” or “white fragility,” and it is a real one that could use someone of Fukuyama’s dialectical abilities and equanimity to unravel. He could also have updated the hints in The End of History of the contradictions between a global posthistorical order and national orders that remain historical in his sense, contradictions he saw as at the root of a potential migrant crisis in Europe, which came to pass some 20 years later. Unfortunately, Identity fails to do this, and so is a bit disappointing.

Faced with an increasingly ascendant populist right, backed by Putin’s rabid petrostate, liberals cannot afford complacency or fatalism. There is still no alternative universalistic vision that competes with limited government based on equal individual rights, competitive elections and a mixed economy. Liberalism’s strong point is that it recognizes the limits of the political in answering the fundamental questions of eternity and identity, and it allows people to optimize their own life chances based on their own decisions. But these are its weak points too. While rethinking will not be the answer to a fierce enemy, it is good to have Mounk and Fukuyama’s analyses; hopefully, another movement conservative can do what Goldberg failed to, and seriously rethink the history of the American classically liberal right.

Every area of study has its classic puzzles: the “anomalies” that theorists pay their dues by proposing explanations for. For the biology of sexual selection, it might be the peacock’s tail. For early-20th-century physicists, it was the black body radiation problem. For comparative political sociology, it is, in German historical economist Werner Sombart’s phrase, “Why Is There No Socialism in the United States?” For over a century, the absence of a mass socialist or labour party has been a defining aspect of “American exceptionalism.” But what if that were no longer true? What if socialism were to become a major force in American politics, even as it declined in Europe?

Since the First World War and the Bolshevik Revolution, almost every major democratic country has had a self-proclaimed labour, socialist or communist party as a major contender for power. Most of the undemocratic world either had a self-proclaimed socialist government or underground insurrectionary movement (and, not infrequently, both).

The United States was different. It exited the 20th century with the same Democratic and Republican parties it has had since the 1860s, and without mainstream politicians rhetorically proposing an alternative to capitalism. The fact of an exceptional American aversion to socialism was undisputed, with leftists and academics alike arguing about the reasons: the racial legacy of Jim Crow and slavery, the immigrant experience, the frontier, Protestant revivalism or the canny political instincts of Franklin Delano Roosevelt.

But looking around in 2018, we might wonder whether this classic contrast makes sense any more. In Europe, these are gloomy days for the successors of August Bebel, Keir Hardie and Jean Jaurès, with the traditional parties of the left wiped out in France and Italy, in apparently terminal decline in Germany, and riven by serious internal crisis in the United Kingdom. On the other side of the Atlantic, things are looking up for the estate of Eugene Debs and Norman Thomas. In 2016, an untelegenic self-proclaimed “democratic socialist” almost won the Democratic nomination for president. Arguably (although, of course, controversially), in an anti-establishment election decided in the Rust Belt, Bernie Sanders would have won.

The election of Donald Trump has, naturally enough, led to increased attention to right-wing populism and the racist “alt-right.” But it is at least possible that developments on the left will be of longer-term significance. Trump’s support is overwhelmingly among older Americans, while the even older Sanders won big among younger voters regardless of race and gender. A 2017 YouGov poll showed that 44 per cent of millennials (defined in this case as people born after 1987) would prefer to live in a “socialist” country, compared with 42 per cent opting for a “capitalist” one. Other polls with other questions consistently show more positive associations with “socialism” than with “capitalism” among younger Americans.

Polls of inchoate public attitudes are one thing; organizational power and intellectual influence are another. Here too something is happening among millennials outside the visible parts of mainstream American discourse. The once moribund Democratic Socialists of America (DSA) have received a remarkable “Trump bump”; membership has gone from under 7,000, when Sanders’s campaign began, to its present 30,000. This growth in membership has occurred along with a sharp turn to the left, as the DSA in 2016 cut its ties with the Socialist International of mainstream social democratic parties, ties that its founder, Michael Harrington, worked hard to build in the early 1980s.

A larger ecosystem of a millennial socialist left – not to be confused with mainstream Democratic progressives or liberals – including the DSA, a “podcast” scene led by the popular and profane Chapo Trap House, “red rose Twitter” and Jacobin magazine, has spread beyond its native university milieu. Common to all of these is the combination of a millennial cultural vibe with a remarkably “old left” orientation around class (as opposed to an “identity politics” primarily oriented to race and gender), Marxist theory, traditional activism and the internecine debates of left history.

It is important not to get carried away. The organized off-campus socialist left might be growing rapidly, but it is still tiny. The DSA is small, compared not just with the German SPD or the British Labour Party but even with other fringe American organizations like the Libertarian Party. High abstract support for “socialism” among young Americans might turn out to be a lifecycle phenomenon they grow out of rather than a cohort phenomenon presaging future political realignment – the old cliché that a person who is not a socialist at 20 has no heart while one who is not a capitalist by 30 has no brain may be relevant here. The DSA is also small compared with Eugene Debs’s Socialist Party, which obtained almost a million votes in the presidential election of 1920, 3.4 per cent of the total, before it split into Communist and anti-Communist factions. Like other ideologues, American socialists are undoubtedly overrepresented online.

Still, given the vast attention lavished on the alt-right that everyone ignored until Trump came along, it may be worth asking whether the left might also be able to mount a challenge to American consensus values. No generation before the millennials has ever reported a preference for socialism over capitalism to pollsters. DSA is already bigger than any overtly socialist organization since Students for a Democratic Society (SDS) imploded in 1969. According to John Michael Colon, DSA, unlike SDS, consists primarily of former university and college students, who are often facing downward mobility and large student loans. This is interesting, given Peter Turchin’s evidence that internal social conflict is correlated with the “overproduction of social elites”: in the modern world, this typically occurs when many more people have postsecondary educations than can use them in the economy.

Trump proves that the longstanding certainties of American politics have become less reliable. At a minimum, it is quite possible that socialism, in some form or other, might be on the verge of a breakthrough in the United States. If this happens, since a substantial proportion of the country will continue to view socialism in apocalyptic colours, the already bitterly divided American political culture will become even more polarized.

Straddling the Left Traditions

Characterizing this new trend is a difficult task, if we want to avoid both inaccurate generalization and a level of detail about obscure disputes that would induce eye bleeding in even the most tolerant reader. As Monty Python’s Life of Brian illustrated, the overeducated/underemployed in general, and Marxists in particular, have a love for nuanced theoretical-programmatic differentiation. To get a handle on things, and at the risk of offending anti-individualist principles, we need a representative figure.

Bhaskar Sunkara, the editor of Jacobin, will do as well as anyone. One way in which the 28-year-old Sunkara is typical is that, politically, he tries to straddle the social democratic, Leninist and anarchist traditions that characterized the 20th-century left. Sunkara defines this mission of Jacobin in explicitly generational terms, as “the product of a younger generation not quite as tied to the Cold War paradigms that sustained the old leftist intellectual milieus like Dissent or New Politics.” Sunkara endorsed Bernie Sanders’s purely social democratic program and points to Scandinavian countries as models of the kind of change he hopes for in the United States. He says he is not opposed to markets in principle, although Jacobin never supports the free-market side in any controversy.

At the same time, as the name Jacobin suggests, Sunkara uses revolutionary imagery and has published sympathetic articles about the Russian Revolution and the Communist tradition. The DSA combines Democratic Party elected officials with a “tankie” fringe of Leninists who retroactively support the Soviet military suppression of democratic working-class rebellions against Communist rule in Hungary, Czechoslovakia and Poland.

Sunkara’s generation of left activists is defined in reaction not only to post-9/11 U.S. military interventionism and the post-2008 financial crisis and Great Recession, but also to what they rightly perceive as fundamental failures in the movements against those things. The anti–Iraq War movement essentially disappeared when Barack Obama was elected president, despite the substantial continuity in policy with the Bush administration when it came to the war in Afghanistan, the surveillance of Muslim Americans, drone strikes around the world and disastrous regime change policies. Neoconservatism gave way to a functionally similar liberal internationalism insisting on “U.S. leadership,” but any mass movement dried up with a Democrat in the White House.

While Obama expressed some disagreement with the interventionist foreign policy establishment and may have blunted their most bellicose instincts, he never expressed any interest in spending political capital in transforming U.S. foreign policy. Despite the anti-interventionist instincts of the American public, especially younger Americans, he faced no political pressure on foreign policy from the left. The political left just ignored these issues after Bush left office, while the intellectual left either recycled sixties anticolonial ideology or was sympathetic to liberal internationalism. The most interesting and hardheaded critiques of American hegemonism tended to come from conservative realists such as Stephen Walt and John Mearsheimer, writers at the American Conservative and antiwar libertarians.

Occupy and Identity

The financial crisis and its aftermath of low employment rates hit millennials harder than any other age cohort. The immediate leftist response, the Occupy movement, is a target of particular ire among millennial Marxists. Occupy combined complete inflexibility on its chosen tactic of creating ungoverned camps in urban public spaces with hostility to programmatic responses to the Great Recession. The leaders of Occupy, influenced both by anarchism and by traditional American hostility to telling anyone what to do, went out of their way to discourage the movement from posing any specific demands or analysis. No doubt this was a way of resisting Leninist entrism, but it also reflected a basically antipolitical refusal to debate alternatives. One common feature of the Jacobin circle is their disgust at this aspect of Occupy, which they analyze as an internalization of the post–Cold War narrative that “there is no alternative” to “neoliberalism.” Sunkara (and others like him) saw Marxism as a hardheaded and systematic alternative to this disgustingly New Agey “anarchist” antipolitics.

From my own perspective, Marxism has very little interesting to say about periodic financial crises and the business cycle, beyond the (salutary) emphasis that they are endemic to capitalism. Marx himself was never happy with his crisis theory, which Engels published after his death. With its dependence on the labour theory of value and a self-contradictory account of the determination of profit rates, it has little to offer now. It would be more useful to read the 20th-century American economist Hyman Minsky or various post-Keynesians.

But for young activists looking for something less soupy than Occupy was able to supply, the old left tradition seemed refreshingly hard-edged. Occupy, Obama and the antiwar left all celebrated feelings and moralism; old-line socialism told underemployed-but-overeducated young activists to open up books and argue about economics, philosophy and history as well as show up for demonstrations. That might appeal to intelligent young people with high student debts and suddenly limited job prospects.

Even more important, old leftism might liberate young people weighed down by the fraught world of identity culture but unwilling to embrace a right-wing backlash narrative. By now, anyone of any age knows how dangerous the online “call out” politics of race, gender and sexuality has become. The Jacobin–Trap House milieu gets to be the moderate middle here, a position that appeals to many millennials. They can parody or analyze both the moral posturing of the “social justice warrior” crowd and the anti–social justice warrior industry.

The 2016 Democratic primaries pitted Sanders’s class-based appeal against Hillary Clinton’s promise that the first female president would be transformative. Clinton explicitly made identity-based appeals to defeat Sanders. In her stump speech in the primaries, Clinton asked, “If we broke up the big banks tomorrow, would that end racism? Would that end sexism? Would that end discrimination against the LGBT community?” From a Marxist perspective, this looks like weaponizing identity politics in defence of “neoliberalism.” Sanders’s male supporters were labelled as (and sometimes acted like) “Bernie bros,” holding Clinton to sexist standards. One major intellectual influence (albeit himself a curmudgeonly Slovenian baby boomer), Slavoj Žižek, went so far as to endorse Donald Trump as the true proletarian candidate in the general election. While the millennial Marxist milieu certainly supports broadly feminist and antiracist positions, it also provides space for criticisms of identity politics that many on the centre-right would agree with.

The old left was often opposed to racism and sex oppression, a tendency that can be dated back to Marx and Engels, but beyond that the Marxist tradition never worked out its relationship to gender and national-racial inequalities. At its worst, Marxism engaged in genocidal politics, a thread that runs from Engels’s call to eradicate Slavic national identities after the 1848 revolution through the racist as well as murderous policies of Stalin, Mao and Pol Pot. Even at its best, Marxism never resolved how class analysis and what it called “special oppression” fit together.

It would be insane, in the era of Trump, to discount the influence of racial and gender loyalties on how people vote. But gender or racial polarization seems like a dead end for the left politically and rarely of much use as a lens into policy solutions. Even crucial racial issues like police violence and mass incarceration turn out to affect a numerically larger, albeit proportionately smaller, group of whites. The resistance to social democratic or even liberal solutions to America’s problems around access to healthcare and reasonable-quality education has everything to do with race. However, sensible solutions would redistribute power and resources primarily along lines of class, not race.

Moreover, class gaps objectively deepened over the decades between the 1970s and 2016, even as America made cultural progress in its representations of nonwhites, women and sexual minorities and the business and professional elite endorsed, at least symbolically, the principle of racial, gender and sexual inclusion. Trump’s election, which puts this cultural progress in doubt, can be seized on as evidence that a failure to address class will ultimately undermine even this progress.

For the millennial Marxists, the glory of the Sanders movement was that it challenged the “neoliberal” consensus they believe has prevailed since the end of the Cold War. In some ways, the rise of right-wing populist nationalism challenges neoliberalism as well. Now that the inevitabilist “end of history” illusions of the 1990s are finally shattered, it becomes possible to engage again with the Marxist tradition, hopefully without the dogmatism and hostility to civil liberties that disfigured it. The millennial Marxists explicitly see themselves as in continuity with those on the socialist left who tried to find a way between Leninism and social democracy, including the Eurocommunists, the British Labour left and strands of the New Left.

Defining the Enemy

Assuming, then, that millennial Marxism really is a “thing,” is it a good thing or a bad thing? My own perspective is that of a Generation Xer radicalized in the 1980s moment of solidarity with Nicaragua, anti-apartheid activism, zines and punk rock. It was a lesser moment for North American leftism than the 1960s or the present, but I have to be careful to avoid paternal condescension. I was in the minority of my generation in being attracted to orthodox Marxism precisely because it seemed to provide a hardheaded analysis as against anarchism, postmodernism and identity politics.

I certainly despised Clinton and Blair when they were elected. But I gave up on any emotional identification with the far left during the travelling antiglobalization protests of the late 1990s. It seemed to me then, and seems to me now, that the benefits of freer international trade for the global poor between 1989 and 2001 had to be prioritized if internationalism was to be meaningful. It also seemed to me that the dominant wing of capital was open to pushing for a postethnic West with a more egalitarian sexual morality. In other words, capitalism had not stopped playing the “most revolutionary” part Marx and Engels spoke of in the Communist Manifesto. I briefly thought there might be something to the “Third Way” of Blairite social democracy, although I was disappointed by Blair’s embrace of the Iraq War and then shaken by the financial crisis.

From this somewhat idiosyncratic perspective, there are some things to welcome in the development of millennial Marxism. The mainstream left may once have been too exclusively focused on the concept of class, but for many decades it has paid that concept too little attention, even as disparities of wealth and income in the West have grown. No decent person should regret the enormous reduction in global poverty as a result of globalization or the relative opening of cultural space to women, racial and sexual minorities as a result of the logic of commodification breaking down older patriarchal structures. But the Marxist tradition has special insight into the dialectical nature of these developments. Capitalism has both its brutal progressive side and its tired conservative side.

Any movement can be understood by how it understands its enemy. For the millennial Marxists, that enemy is “neoliberalism.” This core concept is a slippery one, both intellectually and politically. It basically includes everybody the millennial Marxists disagree with, other than right-wing nationalist populists and Stalinist tankies. “Neoliberalism,” in the hands of the millennial Marxists, becomes an oddly ahistorical and idealist concept, a spectre haunting not only Europe but the world, bewitching people into supporting policies clearly contrary to their interests, in very different political contexts and affecting movements with very different social bases, simply because it is the spirit of the (post–Berlin Wall) age.

Politically, it throws together every mainstream politician in the Atlantic democracies over the last two generations – from Mitterrand, Reagan and Thatcher to Obama, May and Macron. Intellectually, it includes the mainstream economists behind the Washington Consensus of the 1990s and Clinton- and Blair-style centre-left governments, as well as economists such as Friedrich Hayek and Milton Friedman who rebelled against the postwar mixed economy from a libertarian direction. Neoliberalism explains not only the Iraq war and the financial crisis but also why movements that the millennial Marxists like (Syriza in Greece, Chavismo in Venezuela) have ended in disaster. Sunkara sometimes puts forward Scandinavia as a model, without perhaps realizing the extent to which it has combined high taxes and social spending with a more rigorous commitment to free-market liberalism in many areas than prevails in the United States.

Millennial Marxists need to develop a more historically grounded analysis of the limits of the liberalism/social democracy of the era of Clinton, Blair and Obama, one that starts from the historical problems the forces associated with those names had to solve. By the 1970s, as the public sector increased, it became increasingly difficult to simultaneously satisfy the producer interests of public sector workers, meet the demands on already-established public services, not frighten off middle-income taxpayers and keep the positive-sum spirit of “les trente glorieuses” (the 30 years of relative prosperity after the Second World War). High levels of aggregate demand led to widespread strikes and inflation. In the Anglo-Saxon world, Thatcher-Reagan-style conservatism appeared to provide a way out of these problems, with tight money, a harder attitude toward unions and a limit on the growth of the tax-and-transfer state (although no real attempt to actually reduce it in size).

During this time, Michael Harrington’s project of ideologically sorting the major parties actually succeeded: conservative southern Democrats left, and the Democrats became the unquestioned party of labour and minorities, organizing themselves programmatically around filling in the clearest hole in the U.S. welfare state, the lack of universal healthcare coverage. Republicans, by contrast, became firmly committed to opposing any tax increases, even as market incomes diverged and tax increases became necessary to pay for the federal government’s commitments to Social Security and Medicare. But after the defeats of Walter Mondale and Michael Dukakis in the 1980s, the Democrats tried to move to the centre culturally (to attract white working-class voters) and on taxes and spending (to attract upwardly mobile middle-class voters). This more or less corresponded to similar moves within the British Labour Party under Tony Blair and the German SPD under Gerhard Schroeder, so while disappointing from a left perspective it did not really falsify Harrington’s bet that the Democrats were becoming a social democratic force in all but name.

Outside the English-speaking world, so-called “neoliberalism” was not an ideological phenomenon at all, but a reaction to the reality that postwar social democracy faced the limits of the nation-state as a structure. The program of Mitterrand’s Socialists to “change life” had to be abandoned not for ideological reasons but because it necessarily implied a devaluation of the franc against the Deutschmark. The ultimate solution for these problems, the European monetary union, cannot be reconciled with an effective social democracy until and unless there is a European working class that thinks of itself as such. And that does not seem forthcoming. The structures of internationalism, and even of Europeanism, do not seem capable of being democratized the way the nation-state was in the 20th century.

If a breakthrough for the left does not seem to be coming from Europe, what about the United States? The difficulties are different. One is the nature of the U.S. Constitution, with its multiple veto points, which would render a coherent social democratic program hard to introduce. As the millennial Marxists rail against the failure of Clinton and Obama to accomplish more, they tend to ignore these structural problems. Another difficulty, which gets more analysis, is the way in which group status competition – around race, religion and education – fails to correspond to economic class, but is far more motivating. In the very long run, generational changes may make this less important, but as Keynes pointed out, we do not live in the long run.

Even more important to whether the end of this particular kind of American exceptionalism leads to good or bad consequences is the extent to which millennial Marxists avoid reproducing the illiberalism of the Leninist tradition in a desire to appear radical. The American left has often felt it can avoid the moral ambiguity of the often oppressive legacy of 20th-century socialism precisely because any extremism on the left will inevitably be so marginal to American politics as to be harmless. But that will no longer be true if socialism is no longer marginal in America. Even if they avoid a Leninist ancestor cult, American leftists will not get anywhere unless they embrace the pragmatic nature of their country, as well as create roots at the state and local level. But if they do these things, they might promote a better society at home and give some impetus to the left internationally. In any event, they are something to watch out for.

Angela Nagle, Kill All Normies: The Online Culture Wars from Tumblr and 4chan to the alt-right and Trump. Alresford, England: Zero Books, 2017. 120 pages.

It is easy to have dark, foreboding intuitions about what apocalyptic movements might be developing on the internet and social media. It is far harder to observe the birth processes of those rough beasts slouching towards Twitter to be born.

Of course, this is the nature of the development of extremist movements. To any outside observer, the 1903 Congress of the Russian Social Democratic Labour Party would have presented nothing more than serious men in beards arguing incomprehensibly over party organization and dialectical materialism. Nor would watching angry veterans get drunk in Bavarian beer halls in 1921 have been much more enlightening about the threat to civilization steeping there.

It would have been helpful, when 20th-century totalitarianism was developing, to have someone like Irish journalist Angela Nagle around. The author of Kill All Normies: The Online Culture Wars from Tumblr and 4chan to the alt-right and Trump is a sensitive and critical observer with the stamina to wade through enormous quantities of dreck. She has studied the tiresome and combative worlds of the online alt-right and identitarian left, managing to balance empathy, analysis and common sense. With the election of Donald Trump, her Marxisant publisher Zero Books recognized that this research was onto something big, and rushed to get this book out. In some places, the hurry shows: names are misspelled, minor errors abound and some chapters seem more finished than others. But overall this is an indispensable work of reporting and analysis.

Nagle starts by situating the dystopic worlds she describes within the “cyberutopianism” that periodically characterizes discussions of new communications technologies. The likely ur-text of cyberutopianism is John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace,1 which announced to the “Governments of the Industrial World, you weary giants of flesh and steel” that they had no sovereignty in the pure land of cyberspace, a “world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth,” in which the only law would be the Golden Rule.

In her first chapter, Nagle traces how this trope of cyberutopianism, despite apparently being buried by the absurdities of the dot-com era, was revived around 2011 with the Arab Spring and Occupy movements. She documents the romanticization of the “leaderless network” enabled by technology, pointing out that this ideology prevented Occupy from developing either programmatic form or tactical flexibility. She then turns to the development of the current online right. Kill All Normies shows more compellingly than any other book I am aware of how a nerdy male online subculture oriented toward video games became an increasingly dangerous alt-right.

Nagle describes an older generation of paleoconservatives around Pat Buchanan, who have long viewed politics as a battleground of racial and religious identity and despised the neoconservative project of turning the United States into a “proposition nation” in which anyone, regardless of religious or ethnic background, could be American so long as they believed in free markets and the Declaration of Independence. For the neoconservatives, this possibility was precisely what made the United States superior to “Old Europe.” For the “paleocons,” America is and should be a nation rooted in a white Christian majority, threatened as much by economic globalization and the neoconservative project of “perpetual war for perpetual peace” as by the identity politics of minorities.

The Buchananites recognize their dilemma in supporting a white, Christian American identity that is tied to capitalism and to the valorization of the U.S. military. In Nagle’s Marxist analysis (which many Buchananites share), the traditional white Christian identity of the United States is among the apparently solid things that the logic of capitalism “melts into air” in favour of the depersonalized logic of market exchange. Although Nagle does not point this out, the U.S. military serves both as a critical cultural marker for the white working-class culture the Buchananites champion and as the backstop for the liberal, global order they despise. The paleocons share with the identity left a belief that universal, Enlightenment norms of liberalism and modernism are a delusion, and indeed view themselves as warriors in an identity struggle for America’s traditional majority.

The Buchananites have been purged by the mainstream right a number of times, most recently during the Iraq war. But they developed a presence online and a policy of “no enemies to the right” – extending even to the small fascist groups that have always existed in America.

The “alt-right” as we now know it comes from a meeting of these Buchananite intellectuals and pseudo-intellectuals with a younger generation marinated in the hypermasculinist subcultures of video gamers and “pickup artists” (men who claim to study techniques for sexual success). The match is hardly a natural one. The denizens of 4chan do not live traditional Christian lives but delight in a deracinated, pornography-centred lifestyle and a nihilistic will-to-power worldview. In Nagle’s analysis, they are the product of the cultural left’s traditional valorization of “transgression” and irony. Nagle describes how the gamer subculture interacted with the “manosphere,” a “leaderless network” of men trading complaints about women, pop evolutionary psychology theories and tips on how to be sexually successful in the Hobbesian 21st-century dating market. The “manosphere” shades into a “race realist” (i.e. racist) subculture, which focuses these psychosexual anxieties on concern about low fertility among white women.

Nagle documents the bizarre “gamergate” controversy, in which a feminist game designer who created “Depression Quest” (described by Nagle as “a terrible game featuring many of the fragility and mental illness–fetishizing characteristics of the kind of feminism that has emerged online in recent years”) was repeatedly threatened with rape and death and harassed in real life and online. This criminal behaviour represented only an extreme example of the treatment regularly visited on women who for some reason inspire the ire of male internet “trolls.” As Nagle pointed out before the world became aware of it with the deadly riots in Charlottesville in August 2017, at the far fringes of this movement are individuals clearly primed for terrorist violence. At the same time, they (apparently with some assistance from Vladimir Putin’s FSB) constituted the “troll army” supporting Donald Trump in the 2016 election.

Nagle’s account combines clear-headed journalistic description of these dangerous developments with an analysis that goes beyond moral denunciation – one that clearly has Marxist provenance but avoids academic jargon. I appreciate her account of how the cultural production of “irony” can allow plausible deniability and how “transgression” can easily lead to reactionary politics. Her analysis of how the logic of 21st-century capital is undermining structures of family, nation and race that were central to the development of capitalism is hardly novel, but is used nicely. From Nagle’s perspective, neither the centre nor the left is providing a positive vision to young white men threatened by demographic transition, transactionalized sexual relationships and the loss of meaningful masculine work. 4chan does not purport to have any answers to these developments either, least of all a traditional religious one, but like a dematerialized Munich beer hall it provides a sense of identity and agonistic struggle.

The other virtual subculture Nagle reports on is left-Tumblr, the “microblogging” home of gender fluidity. As with the racist-masculinist-nationalist right, there is nothing new about the transgender subculture. Drag queens were instrumental in the 1969 Stonewall riot that was a seminal event in the development of the modern gay rights movement. The idea that gender is distinct from biological sex, and more fluid and performative, has been part of feminist theory since Simone de Beauvoir’s 1949 The Second Sex and is the fundamental theme of Judith Butler’s 1990 Gender Trouble. What is new is Tumblr as the “subcultural digital expression of the fruition of Judith Butler’s ideas.”

According to Nagle, there are now literally hundreds of genders being expressed on Tumblr, from Ambigender (“a feeling of two genders simultaneously, without fluidity”) to Xirl (identification as a nonbinary girl or nonbinary girl-adjacent). It is easy to be put off by the jargon inherent in these explorations, but in many ways gender-identity Tumblr is the ultimate expression of Barlow’s claim that cyberspace could liberate people from the matter- and history-bound hierarchies of “meatspace.” There can be no doubt that this has been a truly liberating experience for many people facing stigmatization and seeking social solidarity.

As a leftist, Nagle does not want to deny this and certainly expresses solidarity with people feeling stifled by gender conformity (or who are part of other groups that have found networks and identity online). But she notes that “amid all the vulnerability and self-humbling, members of these subcultures often behaved with extraordinary viciousness and aggression, like their anonymous Pepe-posting counterparts behind the safety of the keyboard.”

“Pepe the Frog” is a cartoon mascot of the “race realist” alt-right, so Nagle is drawing a parallel that is in many ways unfair. The identities found on Tumblr can be life-changing in positive ways, while the same can obviously not be said for awkward white males radicalized by 4chan or the manosphere into politicized bigots. Gender-fluid Tumblr is not known for death and rape threats. But it certainly is known for calling out individuals for humiliation and shaming. Examples of mob behaviour and fanaticism include the widely followed Twitter user “Brienne of Snarth,” who in 2016 criticized the grieving father of a toddler killed by an alligator for “white privilege.”

Nagle introduces the concept of a “scarcity of virtue.” Status can be gained by self-righteously or snarkily denouncing others for racism, sexism, homophobia or transphobia. Many minor celebrities built followers on Tumblr or Twitter by doing just this. But they can, in their turn, be exposed for the same sins by those with a more secure position of intersectional virtue. The dynamics are familiar to people who have encountered left-wing sectarianism in the predigital age, or to those aware of the Christological debates of the fourth century.

But the immediacy of social media does seem to spread moral panics faster and claim surprising victims. Nagle points to campaigns against feminist icon Germaine Greer and gay activist Peter Tatchell. Secular ex-Muslims have been particularly targeted. While progressives are vulnerable to these tactics, alt-right performance artists like Milo Yiannopoulos made careers out of provoking “social justice warriors,” generating publicity and material rewards from right-wing fans (in Yiannopoulos’s case, until he went over the line in making supportive comments about relationships between pubescent boys and adult men).

One question I have about Nagle’s story here is whether what she is describing is simply the inevitable excesses of an essentially beneficial movement for greater social inclusion. Every reform movement, including those we all commend in retrospect, has had ideologues and enforcers, and has inspired backlash in the broader society – which rarely wants to be disturbed. It is easy to imagine oneself as an abolitionist or fighter for women’s suffrage in the early days of those movements, but it is quite possible we would have found them – as most of their contemporaries did – to be tiresome, self-righteous monomaniacs.

On the other hand, we also have the examples of the Jacobins, Communists and left-wing terrorist groups to show that egalitarian ideology can be consistent with a soul-destroying militant conformism and authoritarianism. I suspect that gender fluidity and intersectionality are unlikely to lead to anything beyond the occasional overreaction and waste of energy, since they seem inherently implausible as a basis for real political power. But it is hard to know.

Nagle points to the revival of a class-based economic left as a way out of the clash of right and left identity politics. This is not a new idea: reading Second International Marxists, it is hard not to be struck by how many of them valued class politics as a way to get away from the nationalist passions they could see pulling European civilization apart (and putting Jews, in particular, in danger), even more than as a way of redressing the inequities of capitalism. The idea of class as the solution to the “national question,” the “woman question” and other such questions can be found in different ways in Engels, Luxemburg and Lenin.

In the end, the reactionaries who thought nation would always beat class turned out to be right. It is fascinating how strong the pull of the idea of class polarization as a cure for identity polarization is among millennial intellectuals like Nagle. This promise of an exit helps explain the appeal of wooden old-leftists like Bernie Sanders and Jeremy Corbyn to people born after Marxism was apparently definitively buried.


It isn’t easy keeping either of America’s big coalitions together. In the opening months of the Trump administration, Republicans divided over foreign policy (neoconservative hegemonists vs. Jacksonian isolationists), trade policy (traditional Republican business interests and libertarians vs. Trumpian protectionists), immigration (ditto), health care (everybody against the Ryan plan) and, it now seems, even taxes. The Democratic Party was better able to unify around opposing Trump and the Republican desire to repeal Obamacare, but the division between its social-democratic and neoliberal wings – exposed by the surprisingly effective primary challenge of Bernie Sanders – reemerged in the struggle for Democratic National Committee chair between Keith Ellison and Tom Perez. The Trump era looks set to realign the partisan and ideological system that has characterized American politics for the last half-century.

But one issue was an exception. The nomination of the mild-mannered Neil Gorsuch as junior justice of the U.S. Supreme Court united each party internally and totally polarized them against each other. The U.S. Senate – long a bulwark of individualism and cross-party back-scratching, with at least ten Republican members who have publicly feuded with President Trump – divided almost perfectly on party lines. The minority Democrats took the politically risky step of filibustering a nominee whose professional qualifications were difficult to question. And then the majority Republicans took the equally perilous step of rewriting the body’s rules to eliminate the ability of a minority to filibuster a Supreme Court nominee (the so-called “nuclear option”).

The United States has never had the disciplined party structure of parliamentary democracies like Canada and the resolution of the impasse undeniably reduces the overall power of senators. For this reason, the Republican majority has ruled out taking the same step of eliminating the filibuster for ordinary legislation or budget issues, and Democrats would be unlikely to force such a result if they could avoid it. On these other matters, the institutional interests of senators are more powerful than their ideological loyalties. But for Supreme Court nominees, neither party’s base would tolerate such a set of priorities. If another justice dies or retires before the next presidential election – and three of them were born in the 1930s – this disciplined partisanship is very likely to be repeated.

To an outsider, it is strange that the most perfectly partisan issue in American politics is nomination to a purportedly nonpolitical office. On further investigation, it just gets stranger – and says a lot about the dangers to the rule of law of constitutionalizing moral and political conflicts.

From Scalia to Gorsuch via Trump

The fight that led to Gorsuch’s nomination began almost a year before Trump took office. On the night of February 12, 2016, after a day of quail hunting, Justice Antonin Scalia died in his sleep. For someone who made his name as an expert in administrative law and statutory interpretation, Scalia was a colourful and controversial character. A conservative Catholic who was acerbic both in oral argument and in writing, he typically voted for results congenial to the political right – although he was proud of the exceptions, such as his vote in Texas v. Johnson that flag burning is constitutionally protected free speech, as well as his numerous decisions limiting the scope of criminal statutes and protecting what he thought of as rights of criminal defendants.1

Scalia cut his teeth as a free-market critic of federal regulation, particularly of telecommunications. He was thus a minor player in the “law-and-economics” movement of the 1970s that analyzed law in terms of neoclassical microeconomic theory. But his more important role was in defining the specifically legal views of the American right, views that at least purport to eschew any connection with market economics or traditional morality.

Scalia propounded constitutional “originalism” (the idea, refined by Scalia, that the text of the constitution should be read in terms of the “public meaning” it had when adopted), Chevron deference (the idea that the executive branch should get to interpret its statutory powers when they were ambiguous) and statutory textualism (a refusal to pay attention to the political history of a law, combined with a relatively literal reading of its meaning). Although there are certainly conservative lawyers who disagree with one or more of these ideas, these ideas have largely defined the conservative mainstream since the Reagan era – in no small part as a result of Scalia’s energetic advocacy of them. Originalism and textualism in particular stood in contrast to the mainstream of American legal thought, which (as in Canada) prioritizes pragmatism and judicial discretion over the highly formalist and positivist approach Scalia advocated.

The relationship between Scalia’s legal commitments and political conservatism is as controversial as the legal ideas themselves. In any particular case, the results of reading constitutions historically, deferring to presidential administrations or reading congressional enactments textually could be liberal, conservative or neither depending on the constitutional provision, the administration or the Congress. Many of Scalia’s critics have argued that he applied his legal ideas inconsistently when they conflicted with his political biases.2 Scalia did not always deny this, but argued that formalism created an objective standard by which he could be judged, while pragmatism made a virtue out of political bias.3 At the same time, in a larger sense, Scalia probably thought his legal views supported an overall populist conservatism, because he thought that the progressive agenda was promoted by judges reading the modernist prejudices of their class into constitutions and statutes.

However, another view of the “original” meaning of Fourteenth Amendment phrases like “equal protection of the laws” and the prohibition on states depriving any person of “life, liberty or property without due process of law” is that these words enacted principles capable of new application as social understanding changes. From this “new originalist” perspective, the original force of the Fourteenth Amendment comes from the underlying revolutionary principles of the post–Civil War Reconstructionist era, not the specific views of the majority of the population at the time. It is perfectly consistent with these principles and with the text to apply “equal protection” to gay and lesbian people and “liberty” to their freedom to marry, regardless of how fringe a view this would have been in the 1860s. Recently, an increasing number of progressive judges and legal theorists have come to see that this is not a dispute about whether the original public meaning of constitutional texts is binding, but about how meaning should be understood. Jack M. Balkin’s 2011 book Living Originalism4 sets out this progressive originalist position most coherently, and Elena Kagan, President Obama’s former Solicitor General whom he appointed to the Supreme Court, set it out most pithily at her confirmation hearing when she said, “We are all originalists now.”

While Scalia was on the court, the median justice in ideologically charged cases was Anthony Kennedy, a moderate Republican appointed by Ronald Reagan when the more fearsomely conservative Robert Bork proved unacceptable to the Senate. Kennedy clashed most memorably with Scalia on gay rights issues,5 but was often aligned with the conservative wing of the court in other respects. Scalia’s death during the presidency of Barack Obama appeared to open up the possibility of a Democratic-appointed liberal majority. Obama nominated Merrick Garland, by all accounts a moderate liberal generally disinclined to interfere with the elected branches of government, whether for conservative or progressive reasons. Garland was about as favourable a justice as Republicans could expect from a Democratic president. However, while Democrats controlled the presidency, Republicans controlled the Senate, and they refused to allow Garland’s nomination to come to a vote while Obama remained in office.

Trump’s nomination as Republican candidate for president came as a shock to the leaders of the conservative movement – who believed they had established a lock on the Republican Party since Reagan. The “Reagan coalition” was mobilized around three ideological blocs: social conservatives motivated by a desire to preserve traditionalist virtues, economic libertarians motivated by a pro-market ideology and neoconservatives who advocate an assertive military and foreign-policy posture by the United States. Trump’s obvious personal failings and lack of religious convictions, ingrained protectionism, support for popular spending programs and inconsistent isolationism did not faze the Republican primary electorate, which warmed to his ultranationalist identity politics and authoritarian persona. But little Trump said was consistent with movement conservatism as it had been understood for 40 years.

Nevertheless, while some conservative thought leaders – especially those for whom foreign policy was particularly salient – abandoned Trump altogether, most social and economic conservatives were won over by the time the election rolled around. And indeed while much postelection punditry has focused on the small number of traditionally Democratic rust belt voters who switched from Obama, Trump avoided the electoral annihilation midcampaign polls predicted because traditional Republicans voted for him.

Instrumental to this consolidation was Trump’s commitment to appoint a replacement for Scalia from a shortlist of judicial conservatives prepared by leaders of the Federalist Society, a group of conservative and libertarian lawyers and law professors (while Gorsuch was not on Trump’s original list, he was on a supplemental list delivered later in the campaign). More than any other single thing Trump did on clinching the nomination, this commitment brought the majority of the “movement” behind him – enabling him to win in November despite the well-founded doubts of many movement leaders about his ideological reliability, competence and character. Delivering on this commitment, shortly after being inaugurated Trump nominated Judge Neil Gorsuch, appointed to the federal appellate courts by George W. Bush, to the Supreme Court.

The right-wing legal counterculture

As political scientist Steven Teles documents in his 2009 book The Rise of the Conservative Legal Movement, the Federalist Society is a fascinating case study in long-term institution-building by the American right. Founded by Reaganite students at elite law schools in 1982, the Federalist Society advocates no positions and submits no briefs. It organizes conferences and student chapters for libertarian and conservative legal types to talk to one another and sometimes debate progressives. By creating a network, it built up a right-wing legal counterculture. Of course, there have always been conservative lawyers, but when the Federalist Society was founded, there was little in the way of right-wing legal theory, especially in constitutional or administrative law. But since 1982, whole intellectual movements – law-and-economics, originalism, textualism, Thomist natural law thinking – have developed, been refined and to some extent migrated over to the progressive legal academy.

The political moment of this more ideological brand of right-wing lawyering arrived as a result of conservative movement figures’ disappointment with the records of Republican appointees. Eisenhower appointed both Chief Justice Earl Warren and liberal icon William Brennan. Nixon appointed the very conservative William Rehnquist, but also Harry Blackmun, author of the Roe v. Wade decision that created a constitutional right of access to abortion. George H.W. Bush appointed Clarence Thomas, who has proven to be more doctrinaire even than Scalia, but also David Souter, who quickly became aligned with the liberal wing of the court and who voted along with Blackmun, Kennedy and Sandra Day O’Connor (another Reagan appointee) to uphold Roe in the 1992 decision Planned Parenthood v. Casey. Casey was the final straw for conservative activists, who adopted the slogan “No More Souters” and broke with George W. Bush when he tried to appoint the insufficiently ideological Harriet Miers to the Court in 2005.

The result is that the American right has demanded ideological reliability, not just Republican partisanship. And, as Lenin and Gramsci realized long ago, this requires ideological institutions engaged in more abstract theorizing. Gorsuch is a hereditary conservative: his mother was an official in the Reagan administration and he is a product of the debates in the Federalist Society and similar circles. As a law professor at the University of Chicago, Scalia was an early mentor of the Federalist Society, and he undoubtedly had an outsize influence on the people who have gone through its ranks. But Scalia was fully formed intellectually before the Federalist Society began, as were Clarence Thomas and Samuel Alito. Chief Justice John Roberts is a lawyer’s lawyer, with an instrumental view of theory. It is widely speculated that he avoided ruling Obamacare unconstitutional because he did not want to embroil the Court, as an institution, in the no-win partisan disputes about health care.

Gorsuch differs from his predecessors in being a generational product of a much more theoretical culture on the legal right. This will likely mean both that he can be relied on from the conservative perspective, and that he will be able to put forward his perspective in a less acerbic way than Scalia did. (It also means a departure from Scalia’s stance of deferring to executive agencies when interpreting statutes, a position now unpopular on the legal right.) As is now traditional, Gorsuch did not say much of substance in his testimony before the Senate Judiciary Committee. His record as a court of appeals judge showed he was a better than average writer, with results that were not unusual for a Republican appointee.

Commentators turned to his doctoral dissertation, The Right to Receive Assistance in Suicide and Euthanasia, which examined the issue from various perspectives including common law, constitutional law and contemporary moral philosophy. The dissertation is a solid if unremarkable academic work, addressing the arguments of pro-euthanasia thinkers like Peter Singer and Richard Posner without rancour or hyperbole. But it ends up with a conservative conclusion, defending traditional distinctions between foreseeably causing death with pain medication and deliberately causing it, or between a competent person dying by refusing hydration or treatment and active medical assistance.

Gorsuch’s thesis supervisor, Oxford legal philosopher John Finnis, is a leading exponent of a neo-Aristotelian natural law approach to legal and moral philosophy – ostensibly secular but definitely influenced by a tradition in Western philosophy that had a major impact on the moral theology of the Catholic Church. Gorsuch borrowed from this tradition the idea of human life as an “intrinsic good” that cannot be reduced to a utilitarian calculus or equated with a right of personal choice. While Gorsuch steered clear of abortion politics, and regarded Casey as authoritative, activists on both sides of the abortion debate clearly decided he would support overturning it if the opportunity arose, and he will probably vote like Scalia, while employing more measured rhetoric.

It should be said, though, that Aristotelian virtue ethics and natural law theory are not necessarily right-wing, and that consistent utilitarians and libertarians come to extremely unpopular and counterintuitive moral positions. Singer thinks it is wrong to eat meat or give your child birthday presents if the money could help save someone from malaria, while Posner guaranteed that he would never reach the U.S. Supreme Court by advocating auctioning off rights to adopt babies. This does not mean those positions are wrong (Jeremy Bentham counterintuitively opposed sodomy laws and slavery), but it should put some of the hyperventilating in context. The truly alarming thing is that we need to dissect a dissertation in moral philosophy for clues about the future of public policy in a democracy of more than 300 million people. This is the consequence of the superempowerment of final courts of appeal.

The balance holds – for now

The immediate effect of Gorsuch’s nomination will not be huge. Anthony Kennedy will remain the median vote on the Supreme Court of the United States, as he has been since Sandra Day O’Connor retired in 2005. Since the ideological space between Kennedy and O’Connor is not that great, the Court will effectively remain where it has been since Gorsuch was a teenager. But the next justice to die or retire is likely to be either Kennedy or one of the Court’s liberals, in which case the balance could shift dramatically.

It is surely unfortunate that so much of significance to American public policy turns on actuarial accident. Too much that should be left to politics is judicialized. The inevitable result is that, once the most salient issues are decided by a Supreme Court, the law becomes hostage to politicization. Scalia’s answer to this was to find formalist approaches to adjudication that would transcend the judges’ own biases, but the actual result is that these approaches just become shibboleths for one political coalition and anathema for the other. The political debate is channelled into jurisprudential abstraction, but at the end of the day the judicial balance depends on the brass-knuckle politicking of senators like Mitch McConnell and Chuck Schumer.

Canada does not yet have the problem of polarization around judicial choice, and our Supreme Court enjoys widespread acceptance, if not widespread understanding. But the same logic may eventually triumph here, since Canadian elite legal circles have no greater faith than their American counterparts that the results in constitutional cases are independent of the ideologies and perspectives of the justices. This was exemplified by the questionnaire for appointment to the Supreme Court of Canada filled out by our most recently appointed justice, Malcolm Rowe, and published by the Trudeau government on the internet. Reflecting the academic consensus since the 1930s, Rowe stated quite frankly that “Supreme Court of Canada judges ordinarily make law, rather than applying it” and that the legitimacy of court decisions derived from the “wisdom and well-founded principles” of its judges.

The appointments of Justice Russell Brown in 2015 and Justice Rowe in 2016 both provoked a more ideologically polarized debate than we have been used to. We can hope that the United States is not the mirror of our future, but pessimism is probably the better way to bet.


Right now, Canadian law makes possession, trafficking and production of cannabis a crime with exemptions for medical marijuana. Simple possession is rarely prosecuted. From your perspective, what are the main problems with this system?

It might be counterintuitive for some people, but most of the problems we have with cannabis arise from its prohibition, not its use.

NORML wants to encourage safe and moderate use of cannabis – for people who want to use cannabis – because, as with alcohol, the majority of users have no problem and experience only its benefits. Prohibition means we leave controlled substances to organized crime, rather than regulating them for safety and purity – although only a minority of cannabis production is controlled by what we would seriously call “organized crime.”

Simple possession is rarely prosecuted? If you’re a kid from a minority community in an inner-city neighbourhood or on a reservation, you may have a different experience with the justice system. That’s one important reason to legalize: cannabis need not be the gateway to the criminal justice system that it is now.

If you talk with police – away from cameras and microphones – they will tell you that enforcement of cannabis prohibition is not a good use of their time and resources. And they’ve known this for a long time. It’s politicians, primarily, who have kept the fires burning under cannabis prohibition. In my conversations with law enforcement, they say they would have backed off years ago because, as police officers, they have much bigger problems with alcohol than with cannabis.

Do you see health or other risks in cannabis use?

Yes, there are risks, but these have to be compared with alternatives. People with a preexisting mental illness, or a family history of it, should avoid cannabis and all other psychotropic substances. People should not drive or operate heavy equipment under the influence of cannabis. The evidence about the effect of cannabis alone on driving is inconclusive, but we are against driving while impaired by anything. Alcohol and cannabis are a particularly bad mix. We can expect bad reactions to things like mould and pesticides – and we should pay close attention to prevalence and severity.

Most of the concern today turns on the neurodevelopment of the adolescent brain when exposed to regular high doses. No serious person argues against an age limit, and most scholars of cannabis science would prefer to delay initiation into the late teens or early twenties. What is less certain at this stage is the direction of the causal arrows between use of cannabis and poor school performance or learning outcomes. Does cannabis use at higher doses undermine school performance, or are children who are performing poorly at school drawn to self-medication? The end of prohibition will, among other things, enable a rigorous examination of causes and effects and other correlates among users and nonusers.

But in any event, these risks pale in comparison to the risk of being criminalized for possession, which has always been the greater threat to life prospects under prohibition. Moreover, we can do more to deal with the risks in a legalized system with evidence-based regulation than under prohibition. Despite almost 90 years of criminal prohibition, Canadian youth have among the highest rates of cannabis use in the world. Regulated legal vendors could do better, as they have for tobacco.

In November, the Task Force on Marijuana Legalization and Regulation provided its report. What do you think was good and bad about that report?

On balance, the report got the fundamentals right. The devil will be in the details, and we have not yet seen those details, but I suspect most casual users – and antiprohibitionists, who are NORML’s historic constituency – are comfortable with the Task Force Report. The issue turns on how one undoes bad policy, which turns out to be a more complex challenge than anyone realized. Almost 100 years of racist, punitive social engineering, deeply shrouded in myth and misinformation, is receding, but not without a fight. We’re at that point now, as Gramsci put it, where “the old is dying but the new is not yet able to be born.” The report does acknowledge that people will – and should be able to – grow their own. NORML argued that cannabis should not fall under the exclusive control of corporations. We’re also glad that the Task Force says cannabis and alcohol should not be sold from the same storefronts.

The Trudeau government has now introduced Bill C-45, which would take cannabis out of the Controlled Drugs and Substances Act and create a new Cannabis Act. What do you think of that legislation?

The act gets the essentials right, but some of the details are wrong or just not known yet.

Keep in mind that, though it may not seem like it, we are in “early days” where this psychotropically complex plant is concerned. There is much to be learned about the full range of therapies and, to be honest, harms associated with it. We have yet to see the regulations, so that limits our ability to pass judgement. From what I can tell, the government took the advice of the state of Colorado to start with stronger legislation on the expectation that it will be possible to loosen regulations with time and experience. The biggest challenge, in my view, was to create a regime that is able to learn and to feed that learning back into the regime’s design in real time. I am optimistic that this bill can do that.

Provincial regulation of sale makes sense in the context of our existing jurisprudence over the control of alcohol and tobacco. It is simple path dependency, given our constitutional division of responsibilities and our history of legislating the consumption of toxic substances.

For public health reasons, NORML endorses restrictions on promotion similar to those for tobacco. We have never endorsed or condemned the use of cannabis for any person or reason. We do not seek to grow the rate of use, nor to shrink it. We just want the criminal justice system out of the whole issue, through legislation informed by science and best practices rather than fearmongering and hysteria. This act moves toward that objective.

The bill is still too punitive for what it does prohibit. We have pretty good evidence that punishment for cannabis crimes does not deter, and that denunciation through criminal stigmatization is more harmful than use itself.

The worst thing is what the bill doesn’t do. It leaves in place the mandatory minimum penalties (MMPs) in the Controlled Drugs and Substances Act. Policymakers know – and will admit when away from cameras and microphones – that MMPs don’t deter and that certainty of punishment is more effective than severity. MMPs limit the discretion of judges – a feel-good measure – and sentence a “class” of crime rather than the individual offender. They’re a lazy means of appearing serious because they’re so easy to enact: a few changes to language. When former justice minister Rob Nicholson was in the Mulroney government, he opposed MMPs. Sooner or later some policymaker will say out loud what everyone knows: we should limit MMPs to the crimes for which the Law Reform Commission recommended them, murder and high treason.

Many critics have noted that the federal bill sets a legal age of 18, while the science suggests that cannabis use can damage developing brains.

There is no logical argument for setting a legal age different from that for alcohol and tobacco, given that both are more harmful and toxic substances. From a public health perspective, later initiation is preferable, but it is hard to enforce as a practical matter: Canadian youth are already among the heaviest consumers of cannabis in the world, and cannabis cultivation is no longer the province of nerdy horticulturalists but is available to anyone with an internet connection. All public policy has to balance the public good against the practically enforceable. We are long past being able to enforce a strict legal age when the plant is so easy to grow.

Are there any amendments to Bill C-45 you think Parliament should consider?

No prison for pot! Mandatory minimums should be repealed immediately – not a year from now – and the “conditional sentence order,” which allows a sentence of imprisonment to be served in the community, should be restored so that, at a minimum, no one actually serves prison time pending legalization. All indictable offences should be abolished, leaving only summary conviction offences, with a maximum of two years less a day of imprisonment for serious matters until legalization.

NORML endorses the view that there should be no imprisonment for cannabis offences and the focus should be on monetary penalties for infractions and violations. We have lots of experience and evidence to support the claim that imprisonment does not produce the deterrent effect advertised for it.

The report seems to contemplate federal supply management of cannabis production. This would obviously create valuable rents for those producers who get inside the system. Do you have worries about that approach?

Not really, because there exists a large and well-established cannabis culture dating back at least to the 1960s, which I predict will emerge out of the underground and assert its vitality and life force. That is, in effect, what is happening with the dispensaries now. Cannabis users will grow their own, trade and sell to one another as they have for decades, but without the threat of criminalization. Many people will prefer the federally sanctioned suppliers, and that’s fine, but at least as many will grow a handful of plants on their balconies or in their backyards or basements, or purchase from friends.

What lessons does the regulation of tobacco and alcohol have for legalized cannabis?

The important lessons derive from public health principles. No advertising to children is the big one, because delaying the age of initiation is desirable for all psychotropic substances. NORML wants to discourage use by people who drive or operate heavy equipment, just as we do with alcohol – and we also want to discourage texting and other driving distractions. We would like to see some of the tax revenues recycled into treatment and education to encourage safe and moderate use. And we feel strongly that permitting home production limits the incentive for corporations to “enhance” their products, perhaps to make them addictive on the tobacco model.

Do you think there are other “controlled drugs and substances” that could benefit from a similar approach to the one taken for marijuana? What would be the best next step?

Canadians should ask themselves, as a thought experiment, which drugs and controlled substances they would prefer to leave to the criminal underworld rather than have regulated for accessibility, safety and purity. I think we have to assume that we cannot eradicate demand, nor have we been able to eradicate supply. Given that, the best we can do is to apply a harm minimization strategy to both demand and supply. Prohibition, on the evidence and experience, turns out to be a harm maximization strategy. I think we can do better. The public health principles that will come to govern cannabis can be applied to all other controlled substances.

Demand creates its own supply. That supply is going to be controlled either by organized crime or by some government or quasi-governmental institution – like a not-for-profit NGO – that is able to regulate for purity, access and quality, and able to put a real, workable barrier between suppliers and children. I echo the conclusion of The Economist’s editors that legalization is the “least bad option.”

Does NORML have any thoughts on how best to address the ongoing surge of opioid dependence and overdose?

There is early – but promising – evidence that opioid use, and the overdoses associated with it, are lower in U.S. jurisdictions where cannabis is legal and easily available. We should pay close attention to this correlation, because it is consistent with what many cannabis users suspect: that cannabis is an adequate substitute for opioids in treating some conditions.