Jacob T. Levy, Rationalism, Pluralism, and Freedom. Oxford, England: Oxford University Press, 2015. 322 pages.

In Rationalism, Pluralism, and Freedom, McGill Professor Jacob Levy has made a major contribution to political theory by identifying and elaborating on a deep and enduring split within liberalism that unlocks much that is otherwise mysterious about its past disputes and future prospects. All liberals want to protect freedom and equality but, from the beginning, they have disagreed about the main source of threat.

For some liberals (whom Levy dubs “rationalists”), the main dangers are from traditionalist communities, whether local, sectarian or familial. For rationalist liberals, these traditional loyalties are the source of sexism, superstition and repression. They see centralized, impersonal nation-states – or, even better, supranational statelike organizations from the European Union to the United Nations – as at least potentially open to reform based on a reasoned understanding of the rights of all. In contrast, other liberals (Levy’s “pluralists”) see the bureaucratic state itself as the main enemy of freedom, constantly encroaching on voluntary and decentralized sources of social meaning. This has led pluralists to be suspicious of state-centred narratives of progress and often to form alliances with the modern state’s illiberal enemies.

Levy considers the consequences of this split for concrete contemporary disputes, for the intellectual history of the Western liberal tradition and for the larger future of liberal polities undergoing rapid cultural change and polarization. In all three areas, he develops complex variations on his simple theme of two liberalisms. After reading Levy’s book, it is hard to read a newspaper without seeing his dilemma play out. Is banning polygamy or the niqab liberation of women or oppression of religious minorities? Must evangelical colleges abandon their condemnation of sex outside heterosexual marriage if they want to train lawyers? Can European judiciaries require member states to accept unwanted demographic change in the name of common refugee protection obligations? How can indigenous systems of hereditary government be given formal, legal power in a modern state? In each case, both sides claim the mantle of freedom, choice and equal treatment. In each case, the issue is whether the main enemy of these values is traditional mores or rationalistic bureaucracies. These dilemmas cut across traditional left/right divides.

In the first part of his book, Levy considers the problem “intermediate groups” pose for liberal political theory. In its simplest form, liberalism focuses on the dyad of the state and the individual. For liberals, the equal autonomy of individuals is the source of both the state’s legitimate powers and the appropriate limits on those powers. Liberal states ought to maximize the freedom of everyone and can establish limits on that freedom only on impersonally justified grounds. Liberal documents from the American Declaration of Independence to the Canadian Charter of Rights and Freedoms follow this structure, which is deeply embedded in liberal theory.

The trouble, as conservative and radical critics of liberalism have long delighted in pointing out, is that societies are characterized by more than the individual/state dyad. People are embedded in all sorts of local and voluntary loyalties, institutions and organizations. The most benign might be defined by a common interest or hobby. More troublesome are those defined by common descent: from family and clans to ethnic and racial groups. Others are defined by adherence to a religion or other comprehensive ideology. Additional problematic intermediate groups include subnational governments and economic organizations like corporations and unions (although Levy excludes this last type of intermediate group from his discussion for reasons of scope).

Central liberal values such as freedom of association and of religion, privacy and local self-government get their meaning from human beings’ need to participate in these intermediate groups. But even at their best, these groups divide the world between insiders and outsiders, a division that any liberal must view with suspicion; at worst, intermediate groups are explicitly inegalitarian and hostile to individual autonomy.

Levy discusses, and rejects, two liberal theories of intermediate groups. The “pure” theory sees group existence as the simple product of individual choice, and concludes that the state should not interfere with groups so long as there is no explicit coercion to prevent individuals from leaving. On the pure theory, only a central state can violate autonomy: if a religion or an ethnic group is restrictive or sexist, then the individual affected always has the option to leave.

As Levy points out, there are two implicit assumptions in the pure theory: first that exit is realistic, and second that its possibility normatively justifies all internal restrictions on liberty and equality. The problem with the first assumption is that it holds only if the central state is in fact strong enough to guarantee exit, but provides no mechanism by which this strength can be maintained. As for the normative assumption that violations of autonomy and equality are fine so long as there is a possibility of exit, the pure theorists are inconsistent: they would not excuse the state, as such, from any violations of liberty and equality, as long as the affected individual could emigrate. The possibility of exit may diminish liberal unease, but should not eliminate it.

The opposite approach would be to require that every civil society group have the same relationship to its members that the liberal state has to its citizens. Levy calls this the “convergence” view. In some cases, this makes sense. Local governments and incorporated societies are required to have elections for leaders, articulate statelike impersonal justification of their strictures and provide procedural rights to those accused of violating them.

But when it comes to the intermediate groups centrally involved in creating meaning for people, the convergence view would require massive impersonal state involvement in our intimate lives. Descent-based groups do not have formal-bureaucratic organizational structures amenable to legalistic rights talk, and trying to enforce such a structure suggests an almost totalitarian project of atomizing and deracinating people. Religious organizations may or may not have formal structures, but when they do, it seems the most successful (think the Roman Catholic and Mormon churches) are precisely those that are most illiberal.

Levy thinks this is no accident, since religious institutions like those of mainline Protestantism or Reform Judaism that have most converged with liberal, secular values offer little that is distinctive to people who cannot get meaning in mainstream society. If this observation is correct, there will be strong selective pressures in a society globally committed to liberalism for robust intermediate groups to diverge from liberal values. For this reason, Levy does not think convergence is likely as a descriptive matter, and he rejects it, on liberal grounds, as a normative project.

If abstract, universalizing theories of intermediate groups do not work, it is still possible to look to traditions of relative suspicion. In Levy’s account the rationalist/pluralist views need not have the (implausible) deductive clarity of the pure/convergent theories. In the book’s second – and longest – part, Levy retells the intellectual history of liberalism in light of the tension between those who look at intermediate groups primarily as a source of liberal freedom and those who see such groups as a threat to it.

Levy combines a strong historical sense with close attention to theoretical distinctions historians often ignore. His story involves comparisons between familiar characters in succeeding eras, paired along rationalist/pluralist lines: from John Locke and more historically minded Whigs in the 17th century; through Voltaire as despiser of religion and admirer of enlightened despots and Montesquieu as opponent of royal absolutism and admirer of the estates in mid-18th-century France; to Thomas Paine and Edmund Burke during the era of the American and French revolutions; and culminating in the subtler 19th-century contrast of John Stuart Mill, for the rationalists, with his friend Alexis de Tocqueville, for pluralism.

Along the way, Levy introduces and situates a number of lesser-known figures, especially on the pluralist liberal side. Liberal theory developed during an era in which there was a real conflict between the emerging royal/democratic nation-state and the decentralized institutions of feudalism, and Levy shows that liberals were not always on the side of universal law and against feudal privilege. From the English Whigs through some of the French liberal critics of the Revolution to the pluralist new liberals in the 20th century, Levy reminds us of the ways in which liberals invoked the “ancient liberties” of medieval Europe against absolutist monarchies and then against the ideological nation-state. There is a line, running especially through Catholic thought, from these pro-medieval antistatist versions of liberalism to more recent decentralist and federalist theories. This line has special importance for Canada because it explains some deep resonances between the thinking of British new liberals like Viscount Haldane and Quebec thinkers influenced by post–Vatican II Catholic social thought: behind both looms the 19th-century figure of Lord Acton.

Levy is least confident in his third part, in which he discusses the implications of the pluralist/rationalist dichotomy for our present and future predicament in the multicultural West. Levy obviously has sympathies for the pluralist side, but is willing neither to cast his lot entirely with it nor to try to broker a synthesis.

There is something to admire about this unwillingness to accept an easy resolution. On the other hand, from my perspective, the fact that the balance between pluralism and rationalism more or less works in practice, at least for now, is itself in need of explanation. Levy seems to insist on a pessimistic or tragic conclusion that we cannot expect liberal values in a broad sense to suffuse civil society over the long run. But theoreticians should be interested in how a trick so difficult to pull off can be done at all. There must be some forces pulling toward equilibrium. In this respect, I think Levy is too dismissive of the forces of convergence, of how mutual accommodation in the political realm, at least sometimes, spills over into more tolerant and egalitarian approaches in the more intimate.

One possible conclusion from Levy’s discussion is that we should seek a golden mean: if either the bureaucratic state or particularistic identity groups get too powerful, then we are in trouble. A conception of political wisdom as requiring balance has a long tradition in the West, dating back to Aristotle and Cicero. It is also consistent with Francis Fukuyama’s observation that the historical basis of Western institutions lies in Western Europe’s uniquely strong-enough-but-not-too-strong central states.

Of course, as the investment advisers are required to say, past performance is no guarantee of future results, and Levy’s pessimism may be vindicated. The key issue is whether liberal societies are able to reproduce a core liberal culture: so long as that remains possible, pluralist concessions to illiberalism within voluntarily chosen intermediate groups probably strengthen liberalism. But there may be a tipping point beyond which the liberal nature of the central state could no longer be taken for granted. Canada has long benefited from a pragmatist streak in its liberal-rationalist ruling elite, but if that same elite loses an understanding of the historically contingent nature of this unstable arrangement along with the vocabulary to talk about how to preserve it, then the whole arrangement could be in danger.

Levy should be applauded for advancing a vital discussion within liberal theory, and doing so in a way that is informed by philosophical, historical and social-scientific perspectives. Political theorists should definitely read the book, but so too should lawyers, policymakers, journalists and others interested in grappling with these dilemmas from a more practical perspective.

In addition to the force of Justin Trudeau’s optimistic personality, an obvious explanation for the Liberals’ election victory is that they promised to spend more than they taxed. Focusing just on the amounts promised, there is no reason to fear any serious negative consequences as a result. But the lurking issue is how this victory changes political incentives in the future. The “deficit taboo” is gone – but with it, have we lost the force that keeps democratic politics from leading to a debt crisis?

The Liberal platform announced planned deficits of approximately $9.9 billion, $9.5 billion and $5.7 billion in the first three full fiscal years of the government’s mandate. In contrast, the NDP anticipated surpluses of between $3 and $4 billion per year. At least as these things are conventionally viewed, the Liberals thus outflanked their traditional rival on the left. It is hard to know for sure whether this manoeuvre won the Liberals the “change vote,” but there is no doubt that the conventional wisdom that deficit spending is electoral suicide has been reversed.

From a purely technocratic point of view, the differences between the Liberal and NDP fiscal platforms are not that big a deal. The total federal budget for fiscal year 2015–16 is $290 billion. Even the biggest deficit number is less than 3.5 per cent of the total, and on conservative estimates of economic growth, if the Liberal government sticks to its plan the debt-to-GDP ratio will continue to decline.
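To make the scale concrete, here is a minimal arithmetic sketch in Python. The deficit figures and the $290 billion budget come from the platform numbers quoted above; the starting debt, GDP and growth rate are round illustrative assumptions, not official projections.

```python
# Back-of-envelope check, not a forecast. Platform deficits and the
# $290-billion budget are from the text; debt, GDP and growth are
# assumed round numbers for illustration only.

budget = 290e9                    # total federal budget, 2015-16
deficits = [9.9e9, 9.5e9, 5.7e9]  # planned deficits, first three years

print(f"largest deficit as share of budget: {max(deficits) / budget:.1%}")  # ~3.4%

debt = 615e9    # assumed starting federal debt
gdp = 1990e9    # assumed starting nominal GDP
growth = 0.04   # assumed nominal GDP growth ("conservative" per the text)

for d in deficits:
    debt += d
    gdp *= 1 + growth
    print(f"debt-to-GDP: {debt / gdp:.1%}")  # the ratio falls each year
```

Under these assumptions the ratio drifts down from about 31 per cent to under 29 per cent over the three years, which is the sense in which the plan, if followed, keeps the debt burden declining.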

The main argument given for the deficits is that they get the economy moving again and generate jobs. Behind this slogan are models of the economy. If the total amount that people in the country wanted to spend were constant (a proposition economists describe as “Say’s law”), then more government borrowing would just mean an equal reduction in lending to businesses and individuals. While the mix of jobs would change, the total number would not. But cyclical booms and busts for the entire economy (at least when they are not caused by obvious external events like hurricanes or wars) show that the amount people want to spend is not constant. Sometimes there is a “general glut” where people and resources are left unemployed, even though there are clearly social needs to be fulfilled. At other times people are trying to spend too much relative to what the economy can actually produce, generating inflation.

The classic “old Keynesian” approach was to spend in recessions and tax in recoveries, thereby evening out the business cycle and making Say’s law that supply creates its own demand true by approximation. But Keynes wrote the General Theory of Employment, Interest and Money in the 1930s. At the time, countries had abandoned the gold standard, but fiat currencies, controlled by governments, were assumed to be temporary. Monetary policy was more or less fixed, and only fiscal policy could make Say’s law come true.

In a country with its own fiat currency, like Canada, it is important to recognize that fiscal policy can be counteracted by monetary policy. Governments can react to feared inflation by increasing taxes and reducing spending, and to feared unemployment by doing the opposite, but central banks can react by reducing base money (if worried about inflation) and by increasing it (if worried about unemployment). What politicians do to aggregate demand, central bankers can undo. Since the budget cycle is much longer than the time for a central bank announcement, the central bankers have the last word.

In the early 1990s, the Rae government in Ontario ran large deficits with explicit Keynesian justifications, and the Mulroney federal government did the same without them. But the Bank of Canada was engaged in a tight monetary policy. The result was an unusually severe recession in comparison to that in other economies – even as Canada’s debt-to-GDP ratio reached its highest level ever (government debt maxed out at 101.7 per cent of GDP in 1996). In contrast, in the second half of the 1990s, both the federal and provincial governments engaged in serious fiscal austerity for the first time in a generation. The Bank of Canada lightened up and let the dollar fall. The result was a recovery and lower unemployment rates.

Milton Friedman’s belief that the amount of demand in the economy depended only on a particular monetary aggregate turned out to be wrong. But through the Great Moderation between the 1980s and 2008, he appeared to be right that a central monetary authority can offset whatever shocks occur to aggregate demand. A sufficient increase in the monetary base can offset a decrease in velocity (and vice versa). Thus, the central bank can keep overall nominal spending (and therefore inflation) on an even keel if it wants to. The New Keynesian synthesis took this experience into account: while changes in fiscal position as a result of automatic decreased revenues and increased spending on means-tested programs have a useful countercyclical effect, there are limits to what fiscal policy can achieve in light of the monetary offset.
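The logic of monetary offset can be stated in one line: the equation of exchange, implicit in the monetarist story just described, where M is the money stock, V its velocity (how often a dollar turns over), P the price level and Y real output:

M × V = P × Y

Nominal spending is the left-hand side. If velocity V falls, a sufficiently large increase in M holds M × V – and therefore P × Y, total nominal spending – steady, which is exactly what it means for the central bank to offset shocks to aggregate demand.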

The 2008 financial crisis complicated this picture. When it hit, nominal interest rates were already very low. To the extent that monetary policy has an effect through lowering interest rates, it looked as if the Bank of Canada (along with other central banks) was close to the “zero lower bound”: since interest rates cannot go below 0 per cent, many thought monetary policy could do little if deflationary forces persisted even when money was being given away for free. Thus even Stephen Harper – whose master’s thesis was an attack on countercyclical fiscal policy on the grounds that it would inevitably be driven more by politics than by economic need – undertook some fiscal stimulus via increased deficit spending. But central banks were not as constrained as people thought. They reached for unconventional measures, such as quantitative easing, a new term for creating money, which was used to buy government bonds.

The mini-recession of 2015 is not a “general glut” as in 2008–09; it is a partial glut of energy commodities. Canada has to adjust somehow: the dollar has fallen, which makes manufacturing exports cheaper and imports more expensive. This is in effect another form of monetary stimulus. On the day of the election, the Bank of Canada announced that it is maintaining its overnight rate target of 0.5 per cent – which is low, but still leaves some room for conventional monetary policy, to say nothing of unconventional alternatives. In any event, the problem that budget cycles are too long to address fluctuations in aggregate demand is still there: by the time the new spending promised by the Liberal Party platform is put in place, the Canadian macroeconomy may be in a completely different position.

The second argument made by the Liberals for deficit spending builds on the point that interest rates are at an all-time low. If the government can borrow very cheaply, then it only needs a low return on its investment for borrowing to make sense. This is a good argument in principle, provided the value of a project’s benefits offsets the (currently low) cost of borrowing the necessary funds. Unfortunately, the Liberal platform refers to all public spending as “investments,” which denies any meaning to the word. We are told, for example, that a Liberal government will “invest” in the middle class by cutting taxes. To be fair, building infrastructure is investment, and it is too soon to know whether it will be infrastructure sufficiently useful to offset its costs. And, in any event, what really matters for our future fiscal position is the debt-to-GDP ratio – something the Liberal platform promises not to increase.
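The in-principle version of the argument is just a comparison of rates. A minimal sketch, with every number assumed purely for illustration:

```python
# Stylized "borrow cheap to invest" test: a project is worth debt-financing
# if its annual return exceeds the government's borrowing rate.
# All three numbers below are assumptions for illustration, not platform figures.

principal = 1.0e9      # amount borrowed for a hypothetical infrastructure project
borrow_rate = 0.02     # assumed long-term federal borrowing rate
project_return = 0.03  # assumed annual economic/social return on the project

net_annual_gain = principal * (project_return - borrow_rate)
print(f"net annual gain: ${net_annual_gain / 1e6:.0f} million")
# Positive (here $10 million a year), so the borrowing passes the test.
# A tax cut has no comparable return stream, which is why calling all
# spending "investment" empties the word of meaning.
```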

But the technocratic economic perspective may not be the most important. Politicians get rewarded for pleasing voters, and most voters are not macroeconomists. It is puzzling that democracies can maintain any fiscal balance at all. Spending is popular and taxes are unpopular. Between the mid-seventies and the mid-nineties, that simple dynamic led to an explosion in public debt. The cycle was finally ended two decades ago when the political incentives changed, and deficits became political taboos.

By definition, a taboo is a prohibition observed without a reasoned justification. But taboos often function as cognitive shortcuts to a (misunderstood) good. Whether or not avoiding pork in the Bronze Age Middle East was a good way of escaping the wrath of God, it was a good way of escaping the risk of trichinosis.

In an ideal world, “bad deficits” would continue to be political poison, while “good, technocratic deficits” would be acceptable. But making the correct distinction places a big bet on the sophistication of the electorate’s political psychology. No one can intuitively think in terms of billions of dollars, whereas everybody knows what it means to spend more than you bring in.

The generational consensus that deficits are basically bad is now over. What we need to worry about with this election result is not so much Trudeau’s budget numbers as what he and other politicians conclude from his poll numbers.

Ronald Beiner, Political Philosophy: What It Is and Why It Matters.
New York: Cambridge University Press, 2014. 247 pages.

Historians and anthropologists have long been fascinated with extreme cultures: warlike Spartans, fiercely independent mountain peoples, societies obsessed with asceticism and mystical experience. But we – the kind of people who read or write for Inroads – are typical specimens of the most atypical human culture of all.

Educated citizens of developed Western countries are unlike any other people who have ever walked the earth. UBC psychologists Joe Henrich, Steve Heine and Ara Norenzayan have labelled us “WEIRD” (for Western, Educated, Industrialized, Rich and Democratic) and have demonstrated that we are statistical outliers in the way we think.1 We have very little intuition about the natural world, but work well with abstractions. We detach means from ends. We are unusually willing to be fair to and to trust strangers. We value romantic love and individual choice, and pay little attention to extended family and inherited tradition. Our morality celebrates ever-expanding circles of concern and peaceful and equal treatment and respect for those different from ourselves, at the expense of honour, purity, authority or sanctity. On all these dimensions, Westerners are WEIRDer than non-Westerners, educated and affluent Westerners WEIRDer than working-class ones, and later generations WEIRDer than their parents.

This new kind of person is quite clearly the product of post-medieval historical developments in western Europe and then its settler colonies: the scientific revolution, the Reformation, the Enlightenment and the widespread disgust with honour and tribalism after the World Wars. Going back further, we are ultimately heirs of the interaction of biblical religion and Greek philosophical insight: the meeting of Athens and Jerusalem. We are also products of those who used the logic and rhetoric of the post-Enlightenment West to oppose religious, racial, gender and other hierarchies that characterized the West in its rise: among others, the abolitionist, anticolonial, civil rights and feminist movements.

WEIRD societies have advantages that are hard to deny (tempted though we may be to try by the WEIRD aversion to tribalist boasting). Any number of statistics about health or wealth would make the point, but the most dramatic is life expectancy at birth, which rarely exceeded 40 years anywhere before 1900, but today is converging on 80 in developed countries. The exponential growth of scientific knowledge is equally hard to relativize away: we just know more than our forebears did, and WEIRD habits of mind both reflect and promote this. For now, at least, the WEIRDer parts of the planet are also the most powerful.

However, reasonable people can disagree about whether this trend towards WEIRDness represents moral progress. Indeed, the striking thing about WEIRD habits of mind is they make it difficult even to talk about moral progress. Scientific development has tended to make us think that real knowledge comes from quantitative and experimental methods that do not give rise to moral insight. And the universalist values of liberalism paradoxically undermine themselves once they are (correctly) seen as the products of a specific, historically developed culture. If the most important thing is to be fair to people who are different, then the very idea that our own moral beliefs are truer than those of people who disagree with us, or come from other historical traditions, becomes suspect.

No culture anywhere consists entirely of virtuous and excellent people. But WEIRD culture seems unique (and, arguably, uniquely bad) in that it appears to abandon even the aspiration to any virtue other than tolerating others and living peacefully with them. Have we given up on the quest for rationally understanding how best to live? Have our immense powers of technological reasoning and our comparative tolerance meant that we must abandon the task at the heart of philosophy as Socrates, Plato and Aristotle would have understood it?

In Political Philosophy: What It Is and Why It Matters, University of Toronto professor Ronald Beiner takes up these questions by reviewing central texts of fourteen 20th-century intellectuals: Sigmund Freud, Max Weber, Hannah Arendt, Michael Oakeshott, Leo Strauss, Karl Löwith, Eric Voegelin, Simone Weil, Hans-Georg Gadamer, Jürgen Habermas, Michel Foucault, Alasdair MacIntyre, John Rawls and Richard Rorty. For Beiner, reflection on the value of modernity is political philosophy: there is virtually nothing in this book about institutional forms, racial/ethnic conflicts, gender, issues of peace and war or economic distribution, and remarkably little about politics, democracy or the state. Even when the writers he talks about addressed these issues, Beiner is always interested in what they thought of the merits of liberalism, modernity and value-pluralism, which he for the most part treats as synonymous. Beiner’s contribution is to put together, in an accessible and even entertaining way, a conversation among those thinkers of the past century who most deeply accessed the resources of the Western tradition to consider the merits of its WEIRD progeny.

Critics of modernity: Strauss, Weil, Voegelin and MacIntyre

Some on this list – Strauss, Weil, Voegelin and MacIntyre, especially – clearly thought the West had taken a wrong turn. Strauss and Voegelin were aligned with the political right; Weil was, and MacIntyre is, an unconventional leftist. But for all four, modern philosophy, of which the modern world is a product, represented a decline from ancient and medieval thought precisely because it abandoned the quest for rational truth about how best to live. Strauss is notoriously difficult to pin down because of his belief that philosophers cannot always responsibly say what they mean, but in the text Beiner discusses, Natural Right and History, he warns of the effects of submerging an objective right based on enduring human nature in a quintessentially modern historical relativism. Weil’s themes are familiar today in the public statements of Pope Francis: she believed modern capitalism and state socialism both contradicted the objectively true Christian message, understood in highly Platonic terms. Voegelin diagnosed modern ideologies of every stripe as “Gnostic” attempts to realize in the social world what classical Christianity and Platonism had understood to be transcendent and beyond history. MacIntyre relies on Aristotle and Thomas Aquinas for his critique of the modern state, market and morality: things go wrong when we no longer understand that there is a right way of being for humans and therefore stop trying to seek it out.

Beiner is sympathetic to these root-and-branch critiques of modernity in favour of an ancient conception of the objectivity and unity of virtue. But he recognizes that none of these thinkers actually delivers a firm rational foundation for the good, and all of their political projects are nostalgic or utopian. Beiner longs for an “epic” mode of philosophizing that would try to discover the true purpose of human life (the “summum bonum”), but in the end he admits that we are not likely to see it, and concedes that the attempt has dangers of fanaticism and totalitarianism.

Unless we are prepared to agree with Voegelin that the ancients discovered the science of the transcendent, and that everything that has happened since the Middle Ages was no more than a perverse forgetting of this, then I think we have to agree with Beiner’s conclusion, if not with his regret. In Aristotle’s biology, living things, like tools or other human artifacts, have a function and can be evaluated by how well they fulfill that function. A knife’s function is to cut, and a knife that cuts well is a good knife; a shark’s function is to hunt, and a shark that hunts well is a good shark. The goal of ethics is to find the true function (telos) of human beings. But Darwin showed that beneath all the amazing functionality of biological life lies an amoral and mechanistic process of natural selection.

There is no reason to think there is a summum bonum. Even if there were, how could we identify the subset of humanity who knew what it was? Conceptual reasoning, historical erudition and empirical knowledge of human psychology are all good things, but there is no evidence that they make people morally better or politically more astute. As Oakeshott and Gadamer emphasized, philosophers have no special access to knowledge of the good life. Voegelin’s analysis of the theological roots of modern ideologies remains valuable, as does the point common to these theorists that morality makes more sense when rooted in a theory of human nature and the virtues that make it the best it can be. But there is no single such theory that can claim clear rational superiority over all the others, and even if there were, it would not justify coercively imposing it on everyone.

Modernity’s defenders: Weber, Habermas, Rawls and Rorty

If philosophers cannot tell us what the good life is, then the task must be left to ordinary people, deciding either individually or collectively. In other words, we are left with some mixture of liberalism and democracy. Beiner puts forward Weber, Habermas, Rawls and Rorty as the defenders of modernity and the plurality of legitimate values. There is a risk of caricature in that framing, since none of the “defenders” are uncritical of liberal modernity. A central theme for both Weber and Habermas was that the growth of bureaucratic rationality (for Weber) or the “steering media” of the market and state (for Habermas) threaten more immediate relationships between people. Rawls and Rorty were highly critical of the unequal distribution of wealth and opportunity in a liberal capitalist economy. Still, none of this group was nostalgic for traditional authority or believed there was a unique “philosophical” path to the truth about what is good for human beings.

Weber believed that a genuinely scientific study of society must be value-neutral, while political commitment necessarily meant making a tragic, existential choice between values rather than rationally deducing the right one. Beiner argues that Weber contradicted his value-neutral stance in his lectures on science and politics as vocations, since he obviously engaged in normative rhetoric. This criticism (and a similar criticism of Foucault) misfires because Weber did not claim to speak as a scientist (and Foucault did not claim to speak as a genealogist) when making value statements. In claiming that Weber and Foucault demonstrated a philosophical commitment when they made moral claims, Beiner tacitly makes the assumption that normative language is the province of philosophy. But this assumption is precisely what those he is criticizing objected to. Weber provided no scientific or philosophical argument for liberalism, or for anything else, but he never said he did.

Beiner is on stronger ground criticizing Habermas and Rorty. Habermas insists on the non-subjectivity of truth, but he defines truth as whatever everyone would agree on in an idealized interaction. This is appealing in that it does not privilege special insights available only to philosophers, but the problem is that this idealized discussion is no longer about anything other than itself. In Habermas’s politics, we all have an obligation to try to agree on what it is that we would all agree on if we could really talk as equals. But this is just as circular as defining chemical truth as being about what perfect chemists would agree on if they discussed things long enough, rather than being about atoms and molecules. It is equally vacuous when applied to social and political debate: a debate about how best to defend a country or reform health care is not about what we agree on, but about the strategic situation or the most effective ways of preventing disease. For Habermas, we talk, but we don’t talk about anything.

Rorty was a pragmatist who regarded not only moral and political ideas, but even scientific and technical ones, as just “ways of talking” that should be evaluated for usefulness, not truth. For him, aerospace engineering is not in any fundamental sense truer – as opposed to more useful – than beliefs in witchcraft, so it was no serious knock on his social-democratic multiculturalism that it could not be rationally established as better than Thomist Christianity or Islamic fundamentalism. This sounds absurd, but instrumentalist theories of scientific knowledge are not crazy, and may even be right.

The difficulty for Rorty is that an instrumentalist philosophy of science is only plausible if it leaves scientific practice unchanged: if a philosopher came along and said theoretical physics had to change because of his or her views, or depended on their being correct, the philosopher would rightly be laughed out of the room. But Rorty presented his ironic, postmodernist attitude as necessary for a proper “culture of liberalism”: we will all be better off if we accept his redescription. As Beiner demonstrates, though, once we accept Rorty’s redescription, we can no longer consistently talk about being better off. Had Rorty claimed that his philosophical (or really antiphilosophical) views had no political implications at all, he would be as immune to Beiner’s criticisms as Weber and Foucault. But he made no such claim, so Beiner’s criticisms are on the mark. Rorty has a mandatory conception of the good after all, and it is a self-contradictory one.

Beiner raises a similar objection to the late Rawls of Political Liberalism. In that book, Rawls introduced (or perhaps reintroduced) the distinction between a political conception of justice and comprehensive conceptions of the good, the right or how the universe works. A liberal society is characterized by what Rawls calls “reasonable pluralism.” Many comprehensive conceptions of the good – each logically incompatible with the others – nonetheless live together because they are each compatible with the political conception of justice manifested in that state. Catholics, Buddhists, Kantians and devotees of Ayn Rand can all live under the same laws without oppressing the others, as long as those laws are governed by what Rawls calls “public reason,” and as long as all these groups accept this. This is only possible if the sphere of what the political concerns itself with is limited. Liberal politics therefore cannot provide answers to ambitious questions about the meaning of human life, or be premised on such answers.

Beiner thinks he has Rawls in the same kind of logical bind as Rorty. Rawls’s comprehensive view is not to have a comprehensive view, and this is self-defeatingly circular. He also accuses Rawls of unnecessarily limiting the ambitions of philosophy by conceding that no progress is possible in coming to a better substantive view of the human good.

But Rawls escapes these criticisms. Rawls, unlike Rorty, does not try to argue against comprehensive conceptions of the good, or try to establish that they are all equal, morally or logically. He just wants to isolate political structures from dependence on the controversial claims of those conceptions. Rawls places justice/fairness/right as the preeminent value for the ordering of the basic structure of society, and insists that the basic structure be fair between conceptions of the good. This is obviously inconsistent with the idea that “error has no rights.” But it does not involve a claim that there are no errors. For Rawls, as for John Stuart Mill, a free and just society is the way to find out what is good: the best conception is more likely to win out in a fair fight than in a fixed one.

Beiner cannot accept this because he bakes comprehensiveness into his definition of political philosophy. It is a “total horizon of moral, social and political existence in its normative dimensions.” This is refreshingly unfashionable, but it amounts to erasing the distinction between specifically political philosophy and philosophy in general. Dangerously, this presupposes no limits to the political sphere.

The “political” has always referred to specific institutions. For Plato and Aristotle, this was the Greek city. In the medieval world, it was the empire or realm. For moderns, it was the nation-state, as recognized in international law and politics from the Treaty of Westphalia through the United Nations Charter. One theme running through Western thought is that these institutions have their inherent limits and therefore, even at their best, will not be able to realize the most important values. Even Plato seems to have realized this after his negative experiences in Syracuse. Aristotle distinguished the active life of politics from the (better) contemplative life that politics could at most make possible. Post-Augustinian Christian political thought emphasized both the legitimacy of secular authority and its lack of ultimate significance. Liberalism did not invent the idea that the political good is less than transcendent, although it certainly depends on it. Beiner ignores the institutional, and with it the limits of political philosophy.

The communitarian challenge and the liberal response

For me, the most interesting chapter of Beiner’s book is his “short excursus” on the communitarian movement of which he was a part. The leading figures, in addition to MacIntyre, were Charles Taylor, Michael Walzer, Christopher Lasch and Michael Sandel. The common ground was a critique of liberalism as being based on a false notion of human beings as fundamentally detachable from the social context in which they grew up.

As Beiner realizes, communitarianism was never able to deal with an obvious liberal response. Since it is an undeniable and universal truth that selves are “situated,” it is not possible for liberalism to threaten this truth. Of course, people are inevitably shaped by the communities in which they grow up, and will inevitably seek to be part of communities once they have done so. Liberalism has no problem with this: community is what the thoroughly liberal principle of freedom of association protects. For liberals, the only issue arises if adults are coerced into remaining in “their” birth communities once they have chosen otherwise.

Liberalism allows, but does not require, single-minded adherence to particular communal identities. In contrast, illiberal identity politics inevitably requires people to abandon some communal identities in favour of others: most saliently for Canadians, membership in the Québécois people is said to require abandoning visible identification with a religious community. The most effective response a communitarian philosopher like Taylor can make to this sort of demand is the liberal rhetoric of freedom of religion and expression. Beiner observes this, and observes the general decline of communitarianism into a self-contradictory illiberal multiculturalism, in which identities are privileged just so long as they are defined as being outside the mainstream. But his analysis is weakened because his conception of political philosophy prejudges the issue of the role of the state in an illiberal way, and because he is ultimately uninterested in reflection on specific institutions.

Communitarian thinkers also had a bad habit of relying on their own ipse dixit about social trends, unchecked by any empirical methodology or modesty in the absence of one. Beiner quotes approvingly from Lasch’s early 1990s claim that liberalism was “crumbling” because of drugs, crime, gang wars, decay of the educational system and ever-increasing racial polarization. It is now pretty well known that crime and violence have declined since Lasch wrote. The educational system and racial polarization are less easy to quantify, but adjusted for demographic shifts, U.S. students do better on tests today than they did then, IQs continue to rise, and America elected a black president in 2008. Beiner does not talk about any of this, and to my mind is generally too nostalgic about the days when an erudite intellectual could rant about “current trends” without any empirical support. Arendt and Voegelin come to mind for me, but of all of Beiner’s post-Weberian exemplars, only Foucault, Habermas and Rawls felt any need to keep up with empirical social science.

Beiner’s ultimate conclusion is pessimistic. He cannot see how philosophy can rationally justify a single summum bonum. But acknowledging pluralism is, for him, surrender. For a student, his book is an excellent introduction to 20th-century debates about “modernity,” but it does not address more institutionally focused political philosophy. Beiner has failed to give sufficient weight to the autonomy of political philosophy and its dependence on focusing on the purposes of specifically political institutions.

Joseph Carens, The Ethics of Immigration.
New York: Oxford University Press, 2013. 384 pages.

Demographic politics are back. Migration from poor to rich countries may be the most polarizing issue of the 21st century for Europe and North America. We urgently need a way to think clearly about how a liberal-democratic state may treat people who move to it – and, more radically, on what basis, if any, it may stop them from coming in the first place.

While most people in the West think migrants have a right to be treated equally once they are here, there is a broad consensus that governments can, and should, keep people out if it is in their national interest to do so. In The Ethics of Immigration, Joseph Carens, a political theorist at the University of Toronto, argues that this widespread intuition is wrong, and that basic liberal democratic commitments require more or less “open borders.” His argument is provocative and very accessible. Carens has a knack for making technical immigration policy and even more technical philosophical debate interesting. Unfortunately, however, he embraces a style of “ideal theory” that renders his book much less useful for the genuinely tough issues of immigration policy than it could be.

There is no doubt about the timeliness of immigration as a burning moral and political issue. Between the world wars, anxieties about the fall of birth rates among European-derived people brought the open migration of the 19th century to an end. The defeat of fascism eventually resulted in a partial turnaround. In 1957, the Treaty of Rome established free movement of workers within what was then the European Economic Community. In the mid-sixties, Canada, the United States and Australia abandoned highly restrictive and racially discriminatory immigration policies. In the following half century, the demographics of the Anglosphere and, to a lesser extent, continental Europe changed dramatically. But the rich countries have never allowed in more than a fraction of the people who would like to come. While the global South, with the exception of parts of east Asia, did not approach the economic well-being of its former colonial masters, it did have a population explosion as access to modern health care dramatically reduced child mortality. The result was, and is, many more potential migrants than the West is prepared to accept.

Mass migration has never occurred without resistance. For decades, many European countries have had populist anti-immigration parties – sometimes explicitly racist or fascist. In the United States and the Commonwealth, nativism has expressed itself primarily through mainstream parties. For the most part, though, each new cohort in the native population has been more open to racial and ethnic diversity, while the descendants of migrants undeniably assimilated in the sense of becoming adept in the local language and popular culture. Business and political leaders have seen immigration as a way to bring in needed skills and mitigate the threat to the welfare state from aging native populations. In the 1990s, it was easy to imagine that the West would embrace a post-ethnic identity and provide a new home for ambitious people from poorer countries.

It is still possible, but the 21st century has been hard on inevitabilist illusions. As the last overwhelmingly white generation in the West, aging baby boomers have found a strange new respect for the demographic anxieties of the 1920s. September 11 reignited the West’s anxiety about the Islamic world. Muslim immigrants are particularly visible in western Europe. The 2008 economic crisis and the failure of Europe’s labour markets have pushed demographic politics back to the centre of controversy. As I write, most polls show Marine Le Pen as the most popular first-ballot choice in France’s 2017 presidential election. The United Kingdom Independence Party captured 13 per cent of the vote in Britain’s May 7 general election. Populist anti-immigration parties received more than 10 per cent of the vote in the most recent elections in France, Switzerland, Austria, Belgium, Finland, Norway, Sweden, Denmark and the Netherlands. Mainstream European parties are, with few exceptions, careful not to support increased legal immigration and are firmly in favour of suppressing illegal border crossing.

The anti-immigration wing of the Republican Party has been able to prevent any legislation giving legal status to the millions of foreign nationals, mostly Mexicans, working and living in the United States illegally. President Obama, resorting to executive action, has responded by not enforcing existing immigration laws against large categories of people. The legality of this action will be decided in America’s highly partisan federal courts. Democrats and pro-immigration Republicans (most prominently represented by Jeb Bush) insist that they will increase enforcement on the Mexican border and have very limited proposals for extending legal immigration. Canada has maintained a relatively strong consensus in favour of existing levels of legal immigration of around 250,000 people per year, although the Harper government has tried to shift the composition of legal immigrants away from the family and refugee classes and toward economic immigrants.

The tension between the economic forces for increasing migration and the political forces for slowing it is only going to rise. Lant Pritchett estimated that in 2002 a Salvadoran with a primary school education could make almost 15 times as much working in the United States as at home. The ratio is even higher between sub-Saharan Africa and Europe. While the global South is entering the final phase of the demographic transition as birth rates fall, it will be a century before this plays out. In the meantime the gap between a growing working-age population in the South and a declining one in the rich countries is going to become starker. We can expect an aging Western electorate in a time of diminished economic expectations and increased security concerns to vote against accelerating the demographic shift that began in the 1960s. The reforms of that decade are under threat: it is hard to imagine their being expanded. But by limiting immigration, we are leaving trillions of dollars of potential economic wealth on the table, while condemning millions to poverty and insecurity.

Economically, immigration increases the size of the pie. Immigrants and their families usually increase their incomes substantially. This is not surprising because economic improvement is usually their motivation for immigrating, and it has to be significant enough to overcome all the disincentives of moving to an unfamiliar culture. For the receiving country, economic theory tells us that mobility of people, like mobility of goods and capital, should result in gains from trade. But these gains are not distributed equally. The less you have in common with the average immigrant, the more you will benefit. Specialists on immigration economics like George Borjas have found that high-skilled immigration increases overall wealth in the receiving country substantially, while diminishing inequality within the native population. Low-skilled immigration to economically developed countries benefits the immigrants themselves and high-skilled natives (but not much), but results in lower wages or joblessness for less-educated natives.

The social effects for receiving countries seem to be mixed. On the one hand, more diverse societies allow for more options and thereby more freedom. More ways of living feed more perspectives into the social mind. On the other hand, as Robert Putnam has shown, diversity can lead to a sense of social isolation and a decline in trust. At existing levels, immigration does not seem to have a dramatic effect on political institutions: recent immigrants participate less in politics, but generally vote in patterns similar to older populations. Despite the hype, radical anti-Western politics are restricted to a tiny minority.

What does immigration do to poor countries? On the “brain drain” view, rich countries lure away much of the high-skilled elite that source countries desperately need and spend scarce resources educating. On the other hand, migrants to the West send money home and bring back skills. Remittances from expatriates exceed all foreign aid. Migrants provide a bridge for the transmission of Western technology and organizational knowledge.

If immigration implies winners and losers, how should we decide whom to let in, and how to treat those who are here? On a straightforward cosmopolitan utilitarian calculus, more immigration is a good thing: immigrants benefit and the net effects on others are uncertain and probably positive. From the narrower perspective of current citizens of rich countries, high-skilled immigration seems to be good, but large amounts of low-skilled immigration could create more unequal societies, transforming Europe and North America into versions of Brazil or South Africa. In Michael Walzer’s phrase, open borders will mean “a thousand petty fortresses” as the well-off separate themselves domestically. But on a global level, this dystopian vision will actually be more equal than the status quo, so maybe we are morally obliged to do it anyway.

Applying principles to immigration issues

Carens’s strategy is to answer the moral questions about immigration using principles that have near-unanimous support among citizens of democratic countries – albeit applied in ways the vast majority of those citizens would reject. In the first part of The Ethics of Immigration, he presupposes that states have a moral right to restrict immigration, and discusses the implications of generally accepted democratic principles for such issues as birthright citizenship, naturalization, assimilation, rights of permanent residents and temporary workers, rights of illegal (“irregular”) immigrants and the grounds on which legal immigrants should be admitted. In the second part, he takes the more radical path of challenging the right of the state to keep people out at all.

Some of Carens’s conclusions will not be controversial within mainstream liberal opinion, at least in Canada. He is in favour of automatic citizenship for children of foreign nationals born in a country. Those who grow up in a country should have an automatic right to its citizenship as well, regardless of birthplace or nationality of parents. Authorized permanent residents should have essentially the same legal rights as citizens, and should have a right to obtain citizenship after they have been in the country for a lengthy period of time. And while Carens acknowledges that public social norms will inevitably reflect the majority culture, he argues that they should also develop to accommodate minority interests.

What I found interesting about his discussion on these points is that many of the arguments turn on whether dual citizenship is something a state has a right to discourage. Carens compares making a person choose between two nationalities when they have close links to both to forcing a child to choose between parents. I find Carens’s arguments persuasive, but only because interstate war has become unthinkable, except perhaps involving pariah states that do not send many migrants. A century ago, this would have seemed utopian: of course any state needed to know whether it had its nationals’ loyalty in a war. Concerns about loyalty today are not really about loyalty to specific states but to revolutionary ideologies, so dual citizenship is no longer a big deal.

But I think the failure to examine the factual/historical presuppositions for his argument exposes a weakness in Carens’s approach. Carens claims he can avoid empirical controversy by talking only about what justice requires and is concerned that excessive pragmatism will confuse what we can endorse and what we must endure. The problem is that it makes no sense to ask what is right when faced with unavoidable necessity. When the prospect of interstate war was the most important issue in world affairs, dual citizenship would have been not only inexpedient but also unjust.

Not all of Carens’s conclusions in this section of the book are uncontroversial. Carens says naturalization tests are unjust because people inheriting their citizenship through birth or parentage do not have to demonstrate any particular knowledge. He argues that there are strong normative obligations on a host population to modify its culture to accommodate newcomers, and even argues for an asymmetry of obligation: migrants are morally entitled to associate only among themselves, but natives are not. Even history must change – according to Carens, “the history of the nation has to be imagined and recounted in a way that enables citizens of immigrant origin to identify with it.” Carens maintains that temporary workers must be given the same workplace rights as other residents, although he allows for differences in access to social insurance. With respect to “irregular migrants,” he argues for a firewall between immigration authorities and other state authorities, particularly police, social workers and schools, and says that irregular migrants obtain a moral right to stay after about ten years in the country. Enforcement of immigration laws should primarily target employers.

In relation to who should come in, assuming the state can morally make such decisions, Carens posits that family reunification is morally required but is uncertain whether “family” should be understood in terms of the culture of the migrants or of the receiving country. He says ethnicity and cultural affinity are not legitimate grounds to choose migrants, although linguistic ability and economic prospects are. Carens also says that the current refugee policy of almost every Western country is immoral, and that European and North American countries are morally obligated to make a serious effort to resettle the millions of Convention refugees currently in camps.

I am sympathetic to many of Carens’s conclusions, although I disagree about history: if it were up to me, history would not be “imagined” for contemporary political purposes at all. But I doubt that any of these conclusions follow from generally accepted normative principles. For cosmopolitans, all that matters morally is membership in the human species, so the state can’t rightfully prioritize the interests of its current citizens at all. Carens says he is not a cosmopolitan, and thinks the state should prefer those who are members of the society. But behind most of his conclusions is the premise that the only thing that matters to social membership is duration of residence within the country’s borders. Any other principle, he argues, would privilege the way of life of the native population over those of the migrants.

To me, this just seems to be a restatement of the cosmopolitan viewpoint. If an existing majority is allowed to prioritize what is “theirs” precisely because it is theirs, then Carens’s arguments do not follow. They can set the rules about social membership, and if migrants don’t like those rules they don’t have to come. This particularist intuition is at the root of most of the attitudes Carens objects to. Clearly, “love of one’s own” has deep roots in human nature. Classical liberals think the state and the law should not act on this premise, although voluntary associations may. Communitarians disagree. Carens wants to have his communitarian cake and eat it too.

Utopian theory or lifeboat morality?

The most controversial issues are those surrounding “open borders.” According to Carens, the basic democratic principle of liberty implies that coercion must be justified, and the basic principle of equality implies that differences in treatment, particularly those based on status inherited at birth, must be justified. Closed borders are enforced by men with guns, and are coercive. And they deny rights of entry to rich countries, and all the opportunities that go with those rights, on the basis of an accident of birth. It follows that we must either come up with special justifications for controlling immigration or abandon our commitment to liberty and equality.

The obvious response is that what justifies limiting immigration is the chaos that would ensue if all the people who saw it as in their economic interest to move did so at once. The number who might like to move from the global South to Europe and North America is conservatively estimated in the hundreds of millions. The experience of contemporary levels of immigration would be no guide to what would occur if all immigration controls were lifted. Carens acknowledges the point, but says it should not matter because the massive demand for immigration exists only as a result of the inequalities between rich and poor countries. If every country in the world had more or less similar levels of economic opportunity, only a few unusual people would want to leave their country of origin. Immigration at that level would be no threat to national identity, social order or any other good invoked to justify immigration restriction.

To my mind, this is a cop-out. The National Front’s Marine Le Pen and UKIP’s Nigel Farage could happily agree that if such a world ever came into existence, they would support open borders. The issue, surely, is what the obligations of rich countries might be in this world. To be sure, moral philosophers should not feel confined to the immediately practicable. But neither should they help themselves to an assumption of changed material circumstances that makes the very dilemma they purport to help us with disappear. Migration between Sweden and Switzerland is not a burning issue one way or the other. If economic opportunities were more or less the same everywhere, we could expect democracies to agree to let one another’s nationals move freely, on the expectation that few would. That was no doubt the mindset of the signatories of the Treaty of Rome in 1957. If there were no significant economic inequalities in the world, we probably would not have much in the way of immigration control – but if we did, it would be no big deal.

In our world, in contrast, many Africans risk their lives every year to get into Europe, as do Latin Americans to get into the United States. We cannot escape the moral seriousness of sending them back – even if they are “only” economic migrants, it means sending them to live a life we would never accept for ourselves. But by the same token, Carens cannot really escape the argument from necessity: if Europe or North America let everyone in who wants in, economic theory predicts that people would keep moving until there was no longer any benefit to doing so. There would be little welfare state left, and possibly little in the way of democracy and a market economy.

For Carens, as for John Rawls before him, anything that is a product of human action is subject to human agency: if it is a social fact, we could change it. Since the division of the world into states is a social fact, in his view it makes sense to question its justice, even if we have no real option of exiting a world of nation-states. He finds the idea that justice applies to the evaluation of options only if there is a mechanism to choose between them “puzzling.” But justice presupposes choice: as Carens and Rawls would concede, it makes no sense to say polio or cancer is “unjust” unless you posit an agent who could get rid of it. The historically given may be just as impossible for any particular agent to undo as the naturally given. Carens thinks it is obvious that feudalism or slavery was “unjust” before there was a way to change it.

But this is not obvious at all. Thinkers as diverse as Marx, Hayek and Burke would deny that it makes sense to talk about the “justice” of a utopian scheme without explaining the agency by which it would be realized. An immigration law can be changed because we have the mechanism of legislative amendment (itself a product of a historical development no one could have planned). But we have no similar mechanism to get rid of the division of the world into states or the economic differences between the rich countries and the poor ones.

Carens sometimes suggests that open borders would motivate Europe and North America to do the right thing and bring the global South up to their own economic level. But he does not try to establish that they could do this even if they wanted to. For most of human history, every society was in a Malthusian trap: economic improvement due to technological advance could lead to population growth, but it could not lead to sustained increases in living standards. Northwestern Europe escaped this trap in the late 18th century. The leftist explanation for this was colonialism and slavery, but it seems pretty clear that the causation ran the other way: Europe could have global colonial empires because it became rich, but it did not become rich because of global colonial empires. The West has a poor record of transplanting its institutions and, anyway, does not have the right to do so.

Of course, it is perfectly possible that other parts of the world will escape the Malthusian trap: east Asia has largely done so, and the Indian subcontinent and much of Africa have recently shown some signs of economic convergence. The biggest danger, particularly in Africa and the larger Middle East, is renewed civil conflict, although corruption and ethnic politics are certainly obstacles as well. But whatever happens, the West will at most play a supporting role. Carens has a remarkably old-fashioned view of Western agency acting on Southern structure.

In a very short passage, Carens provides a more useful analysis by comparing the rich countries to a lifeboat. The people on the lifeboat have no moral obligation to allow so many people into it that it sinks, but they do have an obligation to take as many as they can without endangering themselves. As Carens says, all Western societies could take more immigrants without endangering the institutions that make them desirable places to move to in the first place. Of course, there are difficult tradeoffs. Those most likely to drown without the lifeboat are the unskilled, but those who would be easiest to accommodate are the relatively fortunate. The lifeboat analogy allows for quantitative restriction on immigration, but it does imply that the number of immigrants should be as large as possible. The number of legal immigrants that is politically feasible will always be smaller than the number lifeboat morality demands. Hence Carens’s political project is sound, even if his utopian theorizing is not.

Brian Leiter, Why Tolerate Religion? Princeton, NJ: Princeton University Press, 2012. 208 pages.

In the late seventeenth century, the Sikh religion was at a crossroads. Indeed, it was not clear whether it could survive. The Muslim Mughal empire, reasonably tolerant at the height of its power under Akbar a century earlier, had decided to suppress the upstart faith. Guru Gobind Singh became the last of the Sikh living gurus at the age of nine when his father, the ninth guru, was executed on the orders of the emperor. In 1699, a few years after winning a major battle against the Mughals, he created the Khalsa order. On joining the Khalsa, all prior social distinctions of caste, race and even gender were to be eliminated. The Khalsa became the basis of the first Sikh state in the eighteenth century.

Today, the Khalsa are the visibly “observant” Sikhs. As in many such orders in various religious traditions, the inner spiritual meaning of the initiation was to be illustrated by exterior signs: the “Five Ks.” The most noticeable is the kesh, the uncut hair that requires Khalsa men to wear beards and their hair in turbans. The most troublesome for modern secular states is the kirpan, a short sword that initiates must keep on their person at all times for self-defence and, when required, for promoting justice.

Hyperventilating on the internet aside, the North Atlantic West does not today face an existential crisis comparable to the Mughal persecution. But it does face a crossroads. For the first time in centuries, issues of religious diversity and the limits of toleration take centre stage in the West. Traditionally Christian populations have become polarized between those who have become thoroughly secular (the majority outside the United States) and a remnant evangelized by Protestant and Catholic revivalists who bear little resemblance to the establishment clerics of the mid-20th century. Where these two groups are both numerous, as in North America, they do not get along well, and their differences have ignited a “culture war,” now into its third decade.

Moreover, since 2001, Western foreign policy has focused on the challenge to Western security and interests posed by militant Islamists. Mass immigration means every religious tradition in the world has significant representation in western Europe and North America. Feminism and sexual liberalism have increasingly become nonnegotiable commitments of the West, but at best, they are in tension with traditional religious commitments, and at worst, they represent the face of evil and decadence to orthodox believers. Religious diversity is perhaps the most unsettling result of mass migration, and certainly the least susceptible to traditional liberal modes of compromise.

Under traditional liberal assumptions, religious toleration requires that the state enact laws for secular reasons, that everyone obey them and that religion be a private matter. This understanding has been challenged within human rights and constitutional law by the idea that religious believers need exemptions from laws that may have been secular in their intent but interfere with religious practices or doctrines.

The most prominent case in Canada was that of Gurbaj Singh Multani, a devout Sikh and initiate of the Khalsa order. In September 2001, he was a 12-year-old attending public school in LaSalle on Montreal Island. He accidentally dropped his kirpan in the schoolyard. The school confiscated it, but soon agreed with his parents that he could continue to wear the kirpan as long as it was sewn into his clothes in a secure and inaccessible way. However, this decision was overturned by the school board as contrary to the school’s prohibition on the carrying of weapons. The school board’s decision was in turn reversed by the Supreme Court of Canada in 2006 on the grounds that it failed to reasonably accommodate his freedom of religion under section 2(a) of the Canadian Charter of Rights and Freedoms and section 3 of Quebec’s Charter of Human Rights and Freedoms. This decision sparked almost a decade of controversy about the accommodation of minority religions and the secular nature of the state throughout Canada, particularly in Quebec.

Multani posed an impossible dilemma for a society committed to religious pluralism. Modern states deliver security and justice by insisting on a monopoly of force governed by impersonal laws. Even a country as respectful of individual self-defence as the United States requires that some spaces be weapons-free, and high on the list of such spaces are schools. No one can suggest that the modern state made schools weapons-free out of hostility to Sikhs. But the vow that a member of the Khalsa takes always to carry a kirpan is not a mere interest or preference that can be traded off in the pluralistic give-and-take of politics. It is an oath to God. It seems the secular state must either force some of its citizens to violate commitments of fundamental importance to them for a diffuse and uncertain benefit or exempt those citizens from a common requirement based on beliefs that cannot make sense to the fellow citizens who must take up the corresponding burden.

In Why Tolerate Religion?, Brian Leiter, a professor of law and philosophy at the University of Chicago, asks whether there are any reasons to treat religious and secular conscience differently, when and whether religiously mandated behaviour should get special exemptions from general laws enacted for secular purposes, and how far a state may go in suppressing religious expression to assert its own secularity. He answers that there are no distinctive reasons to respect religious conscience that do not apply to conscience generally. Since he thinks there is no way to give every conscientious objector an exemption from general laws, he holds that there should be no such exemptions, at least when the effect of an exemption is to shift a burden onto those who do not share the objection. He approves of states asserting their secularity, and even of suppressing religious expression to the extent necessary to do so, but argues that France’s practice of banning the hijab (Muslim headscarf) and other overt religious symbols for recipients of state services goes too far.

Leiter’s idiom is English-speaking analytic philosophy, with the strengths and weaknesses of this approach. The strength is a clear and logical argument from first principles: when there are conceptual leaps, they are laid out for the reader to decide whether to jump along with him. The weakness is a lack of historical perspective and a particular lack of sympathy for the religious sense. The result for me is that I agree with Leiter’s principles, but find his application of them sometimes to be too doctrinaire.

A Canadian reader especially will appreciate Leiter’s comparative approach: although he is an American law professor, he uses the Multani case to frame his discussion, and he engages critically but sympathetically with France’s attempt to maintain its system of laïcité (secularism) in light of the increased demographic presence of Muslim minorities.

Leiter opens by setting out the facts of Multani and comparing them to a hypothetical story of a rural schoolboy who inherits a knife from his father and wants to keep it on him at school. Knowing how to handle knives and guns is an important aspect of manhood in the rural culture of the American South and West. According to David Hackett Fischer’s Albion’s Seed, the folkways of this American backcountry can be traced back to the often lawless premodern borderlands of the British Isles, Ulster and the English-Scottish border. This kin- and honour-centred way of being an English-speaking person carried over to Appalachia, and was reinforced by the Great Awakening of evangelical and often sectarian Protestant fervour, the War of Independence and the Civil War. In other words, these folkways have some historical resonances with the development of the Khalsa. But, much to Leiter’s approval, no Western country would consider this boy’s plea to keep his knife to raise any kind of human rights or constitutional question.

As Leiter notes, the Canadian Charter guarantees freedom of “conscience and religion”, and a number of other human rights instruments speak of both. Yet claims of secular conscience are very rarely considered by the courts (and, if they are, they tend to be subsumed within freedom of expression, rather than freedom of conscience). Requests for exemption from generally applicable laws are almost always based on specifically religious practices and commitments.

Leiter usefully distinguishes among indifference, toleration and respect. As Leiter points out, toleration is by definition grudging: no one would be happy if a neighbour told them, “We tolerate people like you.” Indifference, not toleration, is the appropriate liberal attitude toward the race, ethnicity and sexual orientation of those with whom we share a community. One uncelebrated advantage of indifference is that it can be absolute. A liberal secular state has no reason to care about its citizens’ metaphysical beliefs or ritual practices if they do not affect the public sphere or the rights of others. This contrasts with most premodern states, which could tolerate heretical and infidel views (as Akbar did) but could not really be indifferent to them.

The question of toleration, by contrast, arises precisely when there is a secular or liberal reason to wish a belief or practice were otherwise. The “accommodation” cases, like Multani, therefore raise the question of toleration, properly speaking, since they arise precisely when the secular state has a legitimate reason, by its own lights, to interfere with the practice. Toleration, unlike indifference, cannot be absolute: the secular state has to suppress human sacrifice and perhaps polygamy, certain types of ritual animal slaughter and circumcision.

Leiter successfully argues that there are no reasons internal to the liberal tradition to tolerate (in this sense) religious conscience that do not also apply to secular conscience. He refers to John Rawls and John Stuart Mill, but I believe the principle can be derived more straightforwardly: it cannot make sense to demand that the state be neutral between mutually contradictory religious beliefs and at the same time privilege religion as such over other sources of moral commitment. Any reason for a Catholic to tolerate a Buddhist is also a reason to tolerate a vegan atheist. The United States is a bit of an outlier within the West on this subject. When Dwight Eisenhower said after being elected President in 1952 that “our form of government has no sense unless it is founded in a deeply felt religious faith, and I don’t care what it is,” he was widely mocked, but his statement seems to have expressed an enduring mindset among the American people. A 2014 Gallup poll reveals that healthy majorities would consider voting for a gay or Jewish president, and 58 per cent would vote for a Muslim, but a bare majority would even consider voting for an atheist. Some of the members of the Supreme Court of the United States think the government can promote religion in general, even if not a specific religion.

Leiter easily shows this approach is not consistent with basic principles of liberalism: whatever toleration the state should extend to religious conscience, it should also extend to secular conscience. His next move is more problematic. He claims that giving everyone with a secular conscientious objection to a law an argument for an exemption from that law would be unworkable, and he therefore defends the “no exemption” rule in the 1990 American case of Employment Division v. Smith. In Employment Division, the Supreme Court denied Native American users of peyote in religious ceremonies an exemption from generally applicable drug laws, on the basis that those laws were not motivated by a desire to suppress their religion. The American Congress reacted by passing the Religious Freedom Restoration Act (RFRA), which requires accommodation of religious practices unless the government has a compelling interest that cannot be met with a less restrictive alternative. This rule is very similar to Canadian constitutional doctrine. Originally the subject of bipartisan support, the RFRA has become unpopular among American liberals since it was invoked by evangelical Protestant business owners who objected to paying for certain methods of birth control in the 2014 Hobby Lobby case.

Leiter is right that the RFRA and Canadian religious freedom jurisprudence discriminate on the basis of religion if they allow religious believers to shift burdens onto those who do not agree with them, without giving similar protection to nonreligious believers. At first glance, it seems compelling to say that this is not a rule that could be generalized to everyone with a claim of conscience, and it should therefore be abandoned altogether.

I think Leiter makes two mistakes in his argument at this point. First, he assumes that the relative absence of claims based on secular conscience is a product of legal doctrine, so that if the doctrine were relaxed there would be a flood of such claims, making law impossible. It seems more likely to me that the scarcity of “freedom of conscience” cases independent of religion reflects a lack of plausible and motivated plaintiffs. If it were otherwise, Canadian law would by now have confronted the question of when a secular conscientious objector gets an exemption.

While Leiter is right to say that allowing exemptions that impose significant burdens on nonbelievers (“burden-shifting exemptions”) is not consistent with the religious freedom of those nonbelievers, he misses the fact that the actual cases usually turn on whether a real burden has been shifted. The early cases in Canada involved paternalistic laws, such as involuntary blood transfusions for Jehovah’s Witnesses, motorcycle and construction helmet requirements for Khalsa Sikhs or Sabbath-closing laws said to protect workers. The peyote ban in Employment Division is similarly paternalistic.

John Stuart Mill famously thought paternalism was never a good reason for a law. The Canadian Supreme Court has rejected Mill’s “harm principle” as a constitutional doctrine, and no doubt other high courts would do likewise. But it is coherent to allow the state to coerce people for their own good when what is at stake is merely preference and convenience, while rejecting such coercion when it offends what is taken to be a categorical moral demand. It is certainly possible that the same logic would be extended to a person who objected to a paternalistic law on the grounds of a plausible claim of secular conscience.

Other exemptions may be justified on the basis that the burden shifted to the majority that does not share the conscientious belief is equalized by other burdens the majority is able to impose just by being a majority. These cases are more difficult because they tend to assume both that there is a stable majority and that minorities with concentrated interests can never get their way. This may have been a reasonable assumption when, for example, in English-speaking North America, an undemanding official Protestantism surveyed a significant Catholic other and insignificant non-Christian alternatives. But it will not do for the 21st century when there is no longer a religious majority to accommodate everyone else in return for recognition of its hegemony.

On the other hand, the idea that everyone should be encouraged to accept give-and-take in the distribution of burdens is a civilized one, and we can hope it will survive. In any event, if the burden shifted is perceived to be too great, current doctrine – whether in Canada, in Europe or under the American Religious Freedom Restoration Act – will not accept the claim for accommodation.

This leads me to my second complaint about Leiter’s approach: it ignores the details. The school and the Multani parents worked out a compromise: the kirpan would be on Gurbaj’s body, but sewn in such a way that he could not access it. Given the total absence of evidence of school violence involving kirpans in Canada, that solution does not seem to have shifted a real burden onto anybody. The pedestrian compromise demonstrated a civilized pragmatism, in contrast to the school board’s ideological and “principled” approach. Indeed, the dissent in the Supreme Court of Canada would have avoided profound questions of religious freedom altogether in favour of a banal administrative law requirement of reasonableness. Leiter ignores the stitches. He makes fun of the Court’s willingness to restrict kirpans in courts and on airplanes, but the sewn-in solution that worked in a schoolyard might not work on an airplane or in a criminal trial.

It might well be argued that the political system is better than the judiciary at compromising interests. When the judiciary purports to deduce its rulings in favour of particular interests directly from the basic values of society, it delegitimizes the losers in a way that an (always temporary) political loss does not. The reaction to the Multani decision is a good example of how little the Court is really able to foresee the consequences of what it does, and it supports the wisdom of the more restrained judges who would have found for Multani solely on administrative grounds. But at a broader level, the legal norm of accommodation is itself the product of a political process, one that has maintained more legitimacy among religious minorities than the more logical approach in France. I do not see why the same approach cannot be extended to secular minorities when they have conscientious objections to following the law.

Stitches won’t always do it, and the secular West will sometimes have to insist on its own most basic commitments. Leiter rightly makes us aware that toleration is a virtue we only need when we have reason to be in conflict with one another. But his reluctance to dig into the details of the cases leads him to neglect the possibility that, instead of grand theorizing, those conflicts can usually be resolved with a little common sense.

As the April 7 election showed, there are no sure things in Quebec politics. If the Parti Québécois had won, and if – as originally announced – the National Assembly had enacted Bill 60 (the “Charter of Values”) without using the notwithstanding clause, it would have been challenged in the courts, and would likely have been struck down.

With the victory of the Quebec Liberal Party, it is harder to make predictions – especially about the future, as Yogi Berra would say. The Liberals promised a more moderate version of the Charter of Values. Kathleen Weil, the new Quebec Minister of Immigration, Diversity and Inclusion, has been vague about what the legislation will contain.

Scope of any challenge

While the PQ’s Bill 60 had 52 articles, three in particular seemed vulnerable to court challenge:

  • section 5, which would have prohibited employees of public bodies from wearing headgear, jewellery or clothing which conspicuously indicate a religious affiliation;
  • section 6, which would have required employees of public bodies to keep their faces uncovered; and
  • section 7, which (subject to some unspecified exceptions) would have required anyone receiving a public service to keep their face uncovered.

Sections 6 and 7 were not the first legislative attempt to require public employees and people receiving public services to uncover their faces. While worded somewhat more obliquely, Bill 94, introduced by Jean Charest’s Liberal government in 2010, would have had the same effect. Weil has indicated that there will be some legislation on face coverings, both for those who provide and those who receive public services. The new government might also seek to legislate on the question of conspicuous religious symbols, although a Liberal bill would undoubtedly be less sweeping than the PQ’s Bill 60, and will surely not invoke the notwithstanding clause.

If legislation in this area faces a court challenge, as it very likely will, such a challenge will inevitably be brought under both the Canadian Charter of Rights and Freedoms and its Quebec counterpart. It might prove tempting for a number of judges on the Supreme Court to resolve the case under the Quebec Charter.

First stage: Does legislation infringe a Charter right?

Challenges under the Canadian Charter always involve (at least) two stages. First, the party challenging the law must show that it “infringes” one of that person’s guaranteed rights found in sections 2 through 23 of the Charter. But it is important to realize that even if this occurs, the law is not automatically struck down. The government can still justify the law as a reasonable limit that can be “demonstrably justified in a free and democratic society” under section 1 of the Charter.

Challengers would argue that legislation restricting religious dress violates freedom of conscience and religion (section 2(a)), freedom of expression (section 2(b)) and the right to equality and to the protection of the law without discrimination based on religion (section 15). The government, in turn, would claim – as the PQ stated in the preamble to Bill 60 – that the measures are necessary to ensure the separation of religions and the state, the religious neutrality of the state and equality between women and men, and are therefore justifiable under section 1. Weil has said that rather than secularism (laïcité), the government will invoke the religious neutrality of the state.

I do not think there is any doubt that the challengers would succeed in establishing that restrictions on religious dress, or rules requiring people to uncover their faces, infringe freedom of conscience and religion and freedom of expression, and probably the right to equality and the protection of the law without discrimination based on religion.

A law breaches freedom of religion if it interferes with a person’s ability to act in accordance with his or her sincere religious beliefs in a manner that is more than trivial or insubstantial. “Triviality” must be judged from the perspective of the believer. There really cannot be any doubt that restrictions on religious symbols and face coverings interfere with sincere religious practices.

Some defenders of the law might be tempted to argue that it interferes with religious practices only if the individual chooses to be a public employee or receive a public service. The Canadian courts do not accept this line of reasoning. The public employment context might make restrictions justifiable that would not be justifiable in other contexts, but those justifications still have to be made under section 1.

If such restrictions were upheld, therefore, it would be under section 1 of the Canadian Charter. What are the chances of that?

Second stage: Can the restriction be justified?

As a general matter, it is foolish to be confident regarding what the Supreme Court of Canada will decide under section 1. The formal structure of analysis is well understood, but the actual result is frankly dependent on the views of the majority of the Court. These views are not as polarized and ideologically predictable as in the United States. This is how Canadian lawyers like it, but it makes things hard to predict (as the federal government found out when two former Supreme Court justices and the author of the most influential treatise on Canadian constitutional law failed to anticipate the result in the reference concerning its abortive appointment of Marc Nadon to the Court).

The first question in a section 1 analysis is whether the objective of the legislation is sufficiently “pressing and substantial” to warrant limiting a constitutional right. The objective is distinguished sharply from the effects. Governments are usually taken at their word as to what the objective is, and rarely lose at this stage of the analysis. However, in the case of a restriction on religious dress, there would be some pressure on the Supreme Court to say that the very purpose of the legislation is invidious antireligious discrimination, and it cannot be ruled out that the Court would agree and invalidate the legislation on these grounds alone.

On balance, though, I expect the Court to be true to historical type and accept Quebec’s objectives as legitimate, especially if the Liberals do not try to assert a strong form of secularism. Obviously, the objective of equality between men and women is legitimate and is part of the Charter. And the position that upholding the religious neutrality of the state is not a pressing and substantial objective that warrants limiting religious freedom does not survive reductio ad absurdum: governments have to be able to stop a government worker from proselytizing on the job. Even if questioning the motives of Quebec governments is popular in English Canada (although perhaps less so for a federalist one), it would tend to undermine the legitimacy of any adverse decision in Quebec, and I doubt the Court would do that.

However, the Court might well say that the means chosen are not proportionate to the ends, and strike down the legislation for that reason. There are three elements to the “proportionality inquiry,” as Charter jargon would have it. The first is that there must be a rational connection between the law and the objective invoked to justify it. The second, and harder, test is that the law must “minimally impair” the right. In other words, if there is a way to accomplish the same goal that does not impair religious freedom as much, the law is unconstitutional. Finally, even if the law has benefits in terms of the legitimate objective, and even if these benefits could not be achieved in a less religious freedom–impairing way, the court could still decide that the bad effects outweigh the good ones.

The Liberals have a better shot at persuading the Court of the proportionality of the measures they propose than the PQ would have had with Bill 60.

The most vulnerable provisions would be the ones concerning the receipt of public services, because the impact on dignity and on the ability to participate in society is most intense there. It will be argued (in my view, correctly) that preventing women wearing veils from accessing public services undermines the gender equality it is supposed to promote by further isolating an already marginalized group of women. The Court has, in some circumstances, upheld government requirements that women with religious objections show their faces – on drivers’ licences and in courts. However, it has done so where there is an obvious utilitarian explanation of why removing a face covering is necessary or at least desirable. A broad ban might well be (and, in my view, should be) seen as too restrictive, although courts might give a lot of weight to government concerns about identification.

Restrictions on religious expression by those providing public services would be given more leeway, but should also be tied to a utilitarian justification. When Bill 60 was being debated, the opposition parties proposed restricting its application to relatively few state officials. While the Supreme Court reacts well to moderation, this seems hard to justify in principle. If it is inappropriate to stop observant Sikhs from being traffic cops, it is equally inappropriate to stop them from being judges.

The harms that legislative restrictions on religious dress would be intended to remedy seem, certainly to most English Canadian observers, to be symbolic at most. The denial of public services to women covering their faces for religious reasons seems particularly difficult to justify. Should the Liberals introduce legislation to the same effect, it will be argued (I think rightly) that this just isolates and punishes the women the law is supposedly intended to benefit.

The composition of the Supreme Court has changed since the last major religious freedom cases, and the issue has far more salience than it ever did. Past cases leave a lot of wiggle room, so this composition may make a difference. Social scientists suggest that high courts usually reflect the elite consensus at the national level. Since that consensus was strongly against Bill 60, the best bet was that it would have been struck down. Time will tell whether a similar national elite consensus develops on the Liberals’ alternative, but, if it does, expect invalidation.

While visiting London in the summer of 1857, the Baron Carl de Gleichen, a man of complex nationality and advanced views, was set upon by denizens of the Victorian underworld and robbed. His assailants were caught and brought before the Marlborough Street Police Court. However, because the baron would not say he believed in a future state after death in which he would be “rewarded or punished according to his deserts,” they were set free. At English common law, the baron could not take an oath if he did not think a supernatural force would punish him for breaking it,1 and since he was the only witness, there was therefore no evidence with which to convict.

In 1992, an Ontario high school student – known to us as “N.S.” – told a trusted teacher that she had been repeatedly raped by her cousin and uncle from the age of six. Her family did not want to take any action and the police did not lay charges. It is hard to imagine N.S. had the self-confidence generations of privilege and freethinking had bred into the baron. Fifteen years later, though, she tried again. A Crown prosecutor was sufficiently persuaded of the plausibility of her evidence to allow charges to proceed. By this time, N.S. had developed the religious conviction that she must wear a niqab, a veil that covers her entire face other than her eyes, when in the presence of men outside her direct family.

Although they attended the same mosque as N.S., the accused men asked for a court order that N.S. remove the niqab while testifying. They argued that only by seeing her face could the judge or jury tell whether she was lying. As a result of a complex decision released by a divided Supreme Court of Canada in 2012, we do not know whether N.S. will be compelled to choose between obeying her religious convictions and testifying against her alleged assailants.2 On the basis of the Supreme Court decision, an Ontario Court judge has decided that N.S. must remove her veil to testify; she plans to appeal. In any case, it is clear that some Muslim women will not be allowed to testify in Canadian courts if they will not show their faces.

Religious belief and the competence of witnesses

Google remembers Baron de Gleichen today because his treatment by the English criminal justice system outraged John Stuart Mill:

This refusal of redress took place in virtue of the legal doctrine, that no person can be allowed to give evidence in a court of justice, who does not profess belief in a God (any god is sufficient) and in a future state; which is equivalent to declaring such persons to be outlaws, excluded from the protection of the tribunals; who may not only be robbed or assaulted with impunity, if no one but themselves, or persons of similar opinions, be present, but any one else may be robbed or assaulted with impunity, if the proof of the fact depends on their evidence … Under pretence that atheists must be liars, admits the testimony of all atheists who are willing to lie, and rejects only those who brave the obloquy of publicly confessing a detested creed rather than affirm a falsehood.3

Outlawry was the gravest punishment of the Anglo-Saxon legal order because an outlaw could be subjected to violence with impunity, and Mill knew the term resonated with his Victorian audience. He pointed out that the protection of the criminal law does not extend to a person who cannot give evidence in court. He had enough imagination to see that a judge or jury drawn from mainstream society will discount the honesty of a person who conscientiously holds minority opinions more than they should, and that there is therefore no danger that such a person’s evidence will be given excessive weight. By holding on to this “relic of persecution,” society injured both the religious liberty of its freethinkers and its own safety.

On Liberty is a favourite of undergraduate education in political theory and philosophy. It is clearly written and reasonably short, and easily inspires classroom discussion about the implications of the harm principle for censorship of pornography and hate speech. As an undergraduate, I must have been assigned it at least three times. But I don’t remember anyone lingering over the case of Baron de Gleichen. Classroom time is scarce, and I suspect it would have seemed too easy: an aftereffect of religious bigotry without any plausible secular justification that was anachronistic even at the time. Debates about the limits of freedom of speech were burning ones in the social science and humanities faculties of the early 1990s, but debates about the relationship between religious beliefs and the competence of witnesses in court proceedings, if we had thought about them at all, would have seemed about as relevant as the divine right of kings.

N.S.’s case shows we would have been wrong: de Gleichen’s case was not an easy one, and when faced with similar issues today, there is no guarantee we will get them right. With the exception of Justice Rosalie Abella, who consciously or unconsciously echoed Mill, the judges of Canada’s highest court decided that juries cannot be trusted to weigh the evidence of a woman who braves the obloquy of wearing a niqab. In Mill’s terms, they have declared such women outlaws, excluded from the protection of the tribunals. Two justices would never allow the evidence of a veiled woman to go to a jury; the majority would give trial judges the discretion to allow it when the evidence is of marginal importance, but not when it is central to the case and credibility is in issue. The issues Mill dealt with remain living ones. Questions of who can give evidence go to the heart of who is included in a society, both because a person who cannot give evidence is outside the protection of the legal system and because evidence depends on a consensus on procedures for determining who can be trusted. We are still a long way from those procedures being rooted in science, which means they must depend on cultural expectations, which in turn are more rooted in religion than secular undergraduates are likely to realize.

Oaths and early modern English law

As is so often the case, condescension about the past blinds us to a more interesting story. In fact, easy as it is to forgive him, Mill was unfair to the authors of the common law rule he decried. They were relative cosmopolitans who acted not out of religious bigotry or hatred but from purely secular motives.

Like most legal rules in those days, the “rule in Omichund v. Barker” arose out of concrete litigation. Barker was an Englishman trading in Calcutta in the first half of the 18th century, before British dominance was established in India. He ran up a large debt with a local merchant, Omichund, and sailed home rather than pay it. In 1744, Omichund sought the assistance of the English courts in enforcing his debt. The English Court of Chancery allowed for written depositions under oath from residents of foreign countries, a process that gave it a lot of international commercial litigation. Omichund’s witnesses swore an oath in accordance with Hindu custom and Barker objected that only Christians could swear an oath that would be admissible in an English court.

The issue was of enough importance that the Lord Chancellor asked the chief justices of the other English courts to convene a panel to rule on it. Britain’s growing power depended in large part on the ability of its newly independent legal institutions to ensure commercial security, which would be undermined if they could not accept evidence in commercial matters from outside the Christian world. But secular law was still closely associated with Christianity. The Lord Chancellor himself, although no longer usually an ecclesiastic, still played a significant role in appointing bishops of the Church of England. More importantly, the law was extremely concerned about perjury and tightly restricted who could testify in a case. Parties and spouses of parties could not testify in their own case. People under 21 could not testify. Perhaps the most influential common lawyer of all time, Lord Coke, had specifically said that “infidels” could not testify.

The opinion that comes down to us was that of the Lord Chief Justice of the Court of Common Pleas, Sir John Willes. Willes was by all accounts a worldly Hanoverian gentleman. According to Horace Walpole’s catty but entertaining memoir, Willes’s passion for gaming “was notorious; for women, unbounded.” Willes recognized it was “greatly to the advantage of this nation to carry on a trade and commerce in foreign countries.” He also recognized how important it was that an oath be regarded as binding by the person swearing it. Willes had the education in classical and biblical literature to easily demonstrate that oaths are universal institutions long predating Christianity. What mattered was not whether Omichund’s witnesses’ beliefs were true, but whether they created a motive to be honest. Willes and his fellow chief justices told the Lord Chancellor that Omichund’s evidence should be admitted.

Indeed, Willes’s rhetoric sometimes reminds the modern reader of John Stuart Mill. Willes denounced the “little mean narrow notion that no one but a Christian can be an honest man.” Still, as a betting man he was quite concerned about probabilities, and he was prejudiced enough to give the prior probability of a Christian’s telling the truth a higher estimate. Willes said Christians were “under much stronger obligations to swear nothing but the truth” than those who subscribed to other religions. But he also assumed, no doubt correctly, that English judges and juries would think likewise and could safely be entrusted with the decision of how much weight to give infidel witnesses.

This was an important breakthrough in an era when the ability to give evidence was tightly restricted by rules based on reliability. Willes was the product of a legal system that took the human propensity for perjury for granted and did not particularly trust juries to figure things out: that is why the system would not let anyone with an interest in a case testify. Omichund actually represents a landmark in the development of the more modern principle of evidence, according to which anything relevant can usually be admitted unless there is good reason to think cognitive biases will lead people to give it much more weight than they should. Of course, common sense is imperfect and subject to prejudices, which no doubt sometimes put Hindu creditors at a disadvantage relative to their defaulting Christian debtors. But while they were at a disadvantage, they were not excluded: non-Christians could continue to use English courts, and British commerce could continue to build on reliable legal institutions.

Willes stipulated the limit that caught the baron a century later as follows: “Such infidels (if any such there be) who either do not believe in a god, or, if they do, do not think he will either award or punish them in this world or in the next, cannot be witnesses in any case.” As his parenthetical remark suggests, Willes was impressed by the universality of oath-taking and did not seem to think there was anyone who failed to give it supernatural sanction and would therefore be caught by the rule. In any event, he needed a limit because he needed the oath. The 18th century had no great faith in human honesty. The oath was both a ritual and a technology for distinguishing statements on the basis of which a person’s life or property could be taken.

By the time Mill wrote, the era of parliamentary legal reform had begun. Protestants in the Anabaptist tradition who believed Matthew 5:34 prohibited oath-taking4 were a loyal part of the Liberal-Radical coalition, which had introduced legislation to allow for “solemn affirmations.” This legislation excluded atheists and did not change the common law requirement of belief in supernatural enforcement in relation to oaths. However, it was relatively easy to imagine extending affirmations to freethinkers and atheists, and by the end of the 19th century, this had occurred.

It remained open to 20th-century juries and judges to give greater weight to the testimony of witnesses who appear to think they will be subject to divine wrath if they do not tell the truth. In fact, difficult as it is for me to admit as a secular person, this may be a valid statistical generalization; the question of the impact of religious belief on truthfulness and other prosocial behaviour is a complicated one. While empirical sociologists in the sixties seemed to show a lack of correlation between antisocial behaviour and lack of religious belief, subsequent results are, well, complicated.5 Suffice to say that while Mill and Willes were definitely right about a lack of any necessary relationship between religious belief and honesty, Willes may have been on to something about the way to bet. There is no real reason to think that ordinary people are systematically deluded about these effects and, even if they were, long before the Charter of Rights, no one thought it would justify denying secular people the protection of the tribunals. Richard Dawkins himself can visit Toronto secure in the knowledge that if he is mugged on Philosopher’s Walk, he will be allowed to testify about it. We have made that much progress.

The Canadian conception of freedom of religion from the eighties to the present

At this point, the reader will already be anticipating my argument: the solution our great-grandparents came up with for atheists and freethinkers in the 1890s ought to be extended to women whose interpretation of Islam requires them to wear the veil in court today. The reader may also have framed some objections. Some may think there is simply a more rational basis for worrying that testimony by veiled women will lead to inaccurate verdicts than was ever the case with enlightened barons. I will address that objection when I get to the Supreme Court’s own reasons; it turns out that the evidence, such as it is, says the opposite.

But a different objection demands a digression before we get there. A rule requiring a witness to swear to a belief in supernatural consequences for perjury differs from a rule requiring witnesses to reveal their faces when testifying. The question of supernatural consequences for perjury specifically refers to religious or metaphysical belief, while the question of whether one must reveal one’s face does not. On its own terms, the common law would not accept Baron de Gleichen’s evidence because of his sincere religious convictions. In contrast, the rule announced by the Supreme Court does not say N.S. must testify in a certain way because she is Muslim: rather, her interpretation of Islam means she cannot conscientiously follow the rule.

The most fundamental issue in the law of freedom of religion is whether this distinction matters. On one view, a law does not interfere with freedom of religion as long as it is written in general terms and is not motivated by religious bigotry: if a religious believer feels she cannot comply with it, that is her problem – the law is fine. In 1990, in Oregon v. Smith, a majority of the American Supreme Court endorsed this limited view.6

The case was brought by adherents of the Native American Church, which prescribes the use of peyote in its rituals. The adherents claimed an immunity from the general Oregon law prohibiting possession of peyote. The majority of the highest American court said the right of free exercise does not relieve an individual of the obligation to comply with a “valid and neutral law of general applicability on the ground that the law proscribes (or prescribes) conduct that his religion prescribes (or proscribes).” A more militant version of this objection holds that allowing exemptions from general rules based on religious beliefs is itself a violation of state religious neutrality – a concept associated with “strict separation” in the United States and laïcité in France.

Oregon v. Smith was immediately controversial. If it is right, then religious freedom has very little contemporary relevance, since laws almost never single out religious belief, but majoritarian practices often cause problems for believers. The U.S. Congress enacted the Religious Freedom Restoration Act (RFRA) in 1993 in specific response to Oregon v. Smith. The authors of the RFRA decided that the free exercise of religion needs to be protected by giving courts or other bodies the power to scrutinize general laws that have the possibly unintentional effect of interfering with sincere religious practice. A law stipulating that everyone must work on Saturday or accept blood transfusions is no big deal for a Methodist, but on this broader view it would render Orthodox Jews and Jehovah’s Witnesses less free.

Until the decision in N.S., it was uncontroversial in the Supreme Court of Canada, although not in the political sphere, that the broader view of religious freedom was the right one. If a general law or policy substantially interferes with a person’s ability to act in accordance with his or her sincere religious beliefs, then it offends that person’s freedom of religion under section 2(a) of the Charter of Rights and Freedoms. It cannot matter whether the sincere religious belief is a reasonable one from the point of view of a secular court and there need not be any religious motivation to the law or policy.

There is a catch, though – there has to be. Some sincere religious beliefs (for example, a belief in human sacrifice) would be completely intolerable if put into practice. Some in the media remarked that the N.S. case involved a “contest of rights” since the accused men complained that if N.S. could testify without showing her face, their right to a fair trial would be compromised. In fact, this problem is universal in freedom of religion cases. They are all contests of rights. Any law or policy has at least a perceived benefit, and therefore removing it, or even exempting a subsection of the population from its effect, has a corresponding cost. Someone must pay that price, and it is in the nature of freedom of religion cases that religious belief will be a determining factor in who that is. If my coworker cannot be required to work on Saturdays because she views them as sacred, then I must take her shift precisely because I do not share that belief. The problem is reasonably tractable in a society in which there is a secure and normative religious tradition that agrees to accommodate everyone else in exchange for its dominance. Canada is increasingly not that type of society.

Judges cannot solve this problem, but they can negotiate it. Canadian judges do so by invoking section 1 of the Charter, which affirms the protected rights and freedoms “subject only to such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society.” Courts take a look at the secular justifications for the policy and decide whether the burden imposed on the believer is greater or less than the burden on society of choosing some alternative. In practice there is no metric with which to measure the weight of a burden on a sincere religious belief, so the court really considers how compelling the secular justifications for the law or policy are.

The result is that freedom of religion cases do and should turn into grubby exercises in analysis of often technical policies. Those seeking grand principles will not find them. Sunday observance laws are not acceptable as criminal prohibitions, but are fine as labour regulation. Observant Sikhs need not wear hard hats on construction sites, and their children may wear ceremonial daggers in public schools, but Jehovah’s Witnesses under 16 cannot prevent medically necessary blood transfusions. Observant Jews can set up a succah on commonly owned property contrary to the terms of a declaration of co-ownership, but a Hutterite cannot refuse to provide a photo for a driver’s licence.7

The results in these cases are not easy to predict. Many of the decisions were accompanied by strong dissents, so that all we can say about the result is that it got the most votes in an electorate of nine. But any legal regime will have hard cases, and hard cases are the ones that will go to the highest court. The more fundamental objection comes from those who believe religious freedom requires a strict and formal theological neutrality from the state, a neutrality that is inconsistent with all this balancing and accommodation. But after being buried so overwhelmingly by the Supreme Court early on, is this alternative conception capable of revival?

The Supreme Court rarely explicitly refers to public controversies it may become embroiled in, but there is little doubt that it is aware of them. The basic framework of analysis for religious freedom in Canada dates back to the 1980s, when the issue had extremely low political salience. Economic forces and consumer demand pushed Canadians toward Sunday shopping faster than the courts did, and hardly anyone really cared about Sikh construction workers wearing hard hats.

Anxieties created by Canada’s changing demographics remained below the surface, at least from the perspective of elite opinion. September 11, 2001, brought to the fore an anxiety about Islam in particular that has always existed in Western civilization. Because the Canadian political system is not well designed for an open discussion of the optimal amount of assimilation of recent immigrants, the topic plays out in populist sociodramas.

The 2006 Multani decision permitting kirpans (ceremonial daggers) in schools triggered one such sociodrama, when the town council of the village of Hérouxville, Quebec, enacted a charter for new immigrants forbidding both the use of kirpans and the wearing of the niqab and hijab. Premier Jean Charest reacted to the subsequent controversy by appointing sociologist Gérard Bouchard and philosopher Charles Taylor to head a public inquiry into “Accommodation Practices Related to Cultural Differences.” When Bouchard and Taylor reiterated much of the orthodoxy about religious accommodation in their 2008 report, it met with a cool reception from the public. All three provincial parties immediately rejected its recommendation that the crucifix hanging in the National Assembly be removed.

While the Charter and multiculturalism in the abstract remain enormously popular in Canada, the low salience of accommodating religious minorities and apparent consensus in favour of such accommodation no longer seem to exist. A 2008 poll by the Institute for Research on Public Policy showed 53 per cent of respondents opposing accommodation of religious and cultural minorities in comparison with only 18 per cent supporting that approach.8 Although the heroic rhetoric of constitutional judicial review would hold that courts ignore public opinion, Multani remains a high-water mark for constitutional religious accommodation, which has not done as well in the highest court since then.

In N.S., the majority of the Supreme Court continues to analyze religious freedom issues in more or less the old way. The issue is still how big a deal it would be to change the secular rule, with the Chief Justice and three others deciding it would be too big a deal in most circumstances and Justice Abella concluding that it would not. However, two justices, LeBel and Rothstein, seem to me to abandon the historic Canadian analysis entirely, albeit not explicitly. Their decision does not follow the familiar (to Canadian constitutional lawyers) steps of justification of limitations of rights, but simply asserts that court proceedings are a communication process and “wearing a niqab … does not facilitate acts of communication.” Yet they do not really try to justify a new departure along the lines of Oregon v. Smith, a departure that would require backing away from three decades of decisions.

Of course, the reader of Inroads is not subject to the institutional constraints of Supreme Court justices, and can decide that we made a bad turn with the Sikh hard hat case. At least in the long run, supreme courts tend to follow national elite opinion, and I cannot claim that there is a drop-dead logical argument that compels us to avoid the path of Oregon v. Smith on pain of self-contradiction. But to me the lessons of Omichund v. Barker suggest that we should not follow that path.

It is obvious that Baron de Gleichen’s religious freedom was violated, as was the freedom of other skeptics who had to pretend to believe in a deity to get ordinary criminal and civil justice. It’s not that the rule that caught him up was explicitly theological in motivation; it’s just that the reasons for that rule were not really good enough when society was actually confronted with the need to think about them. And yet it’s only with the broader conception of religious freedom we have had since the eighties that the need to think about our reasons arises. And surely we owe N.S. very good reasons if we are to expect her to recognize the legitimacy of a system that allows violence against her to remain unpunished.

The broader conception of religious freedom at least provides a framework for requiring such reasons, but for there to be more than a framework, the court’s scrutiny needs to be a searching one. In cases like N.S., where the traditional rule was made by the courts themselves, we should be skeptical that such scrutiny will really be forthcoming. To see why, we need to turn to the decision itself.

Not necessarily removal, but removal if necessary

Frank Scott famously suggested that William Lyon Mackenzie King would not let his “on the one hand” know what his “on the other hand” was doing. Chief Justice McLachlin would perhaps not appreciate the comparison, but she is comfortable portraying her decisions as treading a middle path between two extremes:

A secular response that requires witnesses to park their religion at the courtroom door is inconsistent with the jurisprudence and Canadian tradition, and limits freedom of religion where no limit can be justified. On the other hand, a response that says a witness can always testify with her face covered may render a trial unfair and lead to wrongful conviction.

In the end, the Chief Justice gives trial judges the discretion to order the niqab removed or not, depending on the importance of the evidence and the possibility of a case-specific compromise. Not necessarily removal, but removal if necessary.

As I hope the digressions about Omichund v. Barker and past religious freedom jurisprudence show, the Chief Justice was exactly right about the problems with such a “secular response.” However, in light of this, and even more importantly in light of the fact that excluding N.S.’s evidence renders her an outlaw in precisely Mill’s sense of subjecting her to violence with impunity, the burden of proof is surely on the Chief Justice to show when testimony with a face covered poses a risk of unfair trials and wrongful convictions.

In fact, as the Chief Justice acknowledges, the cognitive science evidence is the other way. The evidence before the court at first instance, and the bulk of the articles submitted on appeal by various interveners, supports the basic finding that people are likely to overestimate how much they can tell about a witness’s truthfulness by looking them in the face. The Chief Justice noted that this evidence was fairly weak, and I sympathize with a demand by the Court for better social science before we dispense with common sense and tradition.

The problem, though, is that the Chief Justice ignores the possibility that a contextual case-by-case balancing can be done by the jury. Jurors are the ones chosen for common sense and the ability to distinguish lies and errors from truth. Juries are not perfect, and cognitive science can reveal biases that really do cause wrongful convictions. The most important is probably sincere but mistaken identifications. Lots of evidence suggests that juries and judges give far more weight to these than they should (unfortunately, the court system has been very slow to correct this bias). Another is forensic science, where juries (and, unfortunately, also judges) have a propensity to be bamboozled by confidently presented work that may be subject to all sorts of biases.

However, even if the Chief Justice was unwilling to accept the evidence that juries and judges are overconfident about their ability to tell whether someone is lying by looking at them (demeanour), there is absolutely no reason to think they err in the opposite direction. There is no reason to think that an ordinary jury, in a country where a majority of people would ban the niqab if they could, will give too much weight to the words of a woman who will brave the obloquy of publicly confessing to a detested version of a marginalized creed.

Defence lawyers can, and will, draw the attention of juries, or of judges sitting alone, to the fact that they could not see the complainant’s face in sexual assault trials. It is far more likely that this will raise a “reasonable doubt” when it should not than that it will fail to do so when it should.

As Justice Abella pointed out, we allow people whose facial features have been frozen by accident or illness to testify. The Charter itself guarantees the right to an interpreter, even though that obviously reduces the ability of juries and judges to “read” a witness. The Court has relaxed the traditional rules against hearsay, effectively allowing out-of-court testimony that cannot be seen at all when there is no better evidence. In the absence of a clearly demonstrated cognitive bias, none of this is unfair if the defendant can poke holes in it.

While I think we should be glad that the Court’s majority has left the door open for future courts to look at better cognitive evidence and let women wearing niqabs testify, I fear this will be unrealistic in sexual assault cases. The effect will be to render those women outlaws – cruelly ironic in light of the usual objection to the niqab as oppressing those very same people.

Chief Justice McLachlin is a reasonable, moderate person, as was Chief Justice Willes three centuries ago. In both cases, the decisions could have been much worse, but both have had the effect of creating outlaws based on religious convictions. I hope the next generation of legal leadership can do better.


Charles Murray, Coming Apart: The State of White America 1960–2010. New York: Crown Forum, 2012. 416 pages.

A person could be well into middle age and not remember it, but for most of the 20th century class was the central category of both social theory and practical politics.

From Lenin’s arrival at the Finland Station until some difficult-to-pinpoint moment in the late seventies or early eighties, anyone who purported to be an intellectual had to grapple with Marxism, a doctrine that famously reduced history to the history of class struggles. Grappling with Marxism was by no means restricted to those on the left. Conservative anticommunists such as James Burnham (ex-Trotskyist, mentor of William F. Buckley and therefore grandmentor of Ronald Reagan) and Milovan Djilas (early ally of Tito but ultimately his most devastating critic) developed theories of new bureaucratic classes battling capitalists and oppressing workers. Toward the end of this period, right-wing intellectuals developed the public choice school of political theory, which in many ways translated Marxist historical materialism into the language of game theory and neoclassical economics. Moderates considered how democratic institutions could reconcile the competing interests of capital and labour. Even leading existentialists, agonizing over individual choice and meaning in the face of death and seemingly distant from social or political concerns, felt an inferiority complex toward Marxism.

But class was not just an organizing concept for pointy-headed intellectuals. With the interesting exception of Canada, class dominated the day-to-day politics of the Atlantic democracies. Britain’s Labour Party, West Germany’s Social Democratic Party and France and Italy’s Communist parties were working-class in self-conception and sociological reality, and their opponents were clearly the parties of business and middle-class professions. The two major U.S. political parties were free of socialist ideology of any stripe, but in the decades after the New Deal the Democrats saw themselves as the party of labour while the Republicans saw themselves as the party of business. Labour leaders in the decades after World War II unquestionably had a seat at the table. Working-class political power corresponded to an era of growing social programs, relatively equal incomes and constrained managerial discretion in the workplace.

That was then.

To those of us who became politically active in the eighties, the politics of class was already an object of nostalgia. Environmentalism, pacifism and gender, ethnic/racial and sexual identity were far more compelling for youthful activists than class. Even before the fall of the Berlin Wall, Marxism had lost its intellectual cachet, and with the collapse of Communism as a real alternative to liberal capitalism, the importance of class as a category of analysis was enormously devalued.

Class also became less central for practical politics. Nixon and then Reagan successfully appealed to southern and northern Catholic working-class people, especially men, while Democrats made inroads among professional groups like engineers and lawyers, who had traditionally been Republican. Thatcher and then Blair diluted, at least partially, the class character of their respective parties. European social democrats largely retained their historic base, but in the process often came to represent a smaller portion of the left-of-centre electorate, with the alternatives increasingly “postmaterialist” (i.e., middle-class).1 As manufacturing employment inexorably declined in the face of changing technology and rising trade with newly industrializing countries, even the labour movement became less working-class. Public sector professionals, who had been slow to unionize, now dominate the union movement in the West, particularly in North America.

From the vantage point of 2012 and at the risk of greatly oversimplifying, the postwar history of the West can be split in two. During the first half (the French refer to these years as les trente glorieuses), class dominated how elites thought about politics and society. It was assumed that policy would reflect some compromise between business and labour. During the subsequent 30 years, class became a marginal part of our conceptual toolkit. And yet that second half has seen both diminishing racial and gender disparities and increasing polarization of wealth and income.2 We think less about class, but it matters more to how we live.

Perhaps a generation from now, the 2008 financial crisis will be seen as another turning point. It once again has increased the salience of class. The contours of the mass movements opposing austerity in Europe would not astonish a newly defrosted observer cryogenically frozen in the 1940s. In the United States, the Tea Party and then the Occupy movement each briefly aroused positive feelings among a majority of Americans. Each presented in some form a class analysis of the situation America has found itself in since the collapse of Lehman Brothers and the TARP bailout. The Tea Party is now an unpopular, sectarian and destructive tendency within the Republican Party and the Occupy movement never attained any effective influence in mainstream politics, but the initial popularity of these very different populist movements suggests that class is back in the West.

Clearly, the two movements do not mean the same thing when they propose class struggle. Tea Party supporters see themselves as snubbed by high-status educated elites, and believe these elites use their status to get public-sector preferment at outsiders’ expense. The Occupiers consider economic wealth and power to be the same, and to the limited extent that they have any coherent programmatic goals, they advocate government redistribution of wealth and income. In Max Weber’s terms, the Tea Party emphasizes stratification by status (what Weber called Stand or status group) as opposed to the political left’s emphasis on stratification by wealth (closer to Weber’s Klasse, group defined by market position, or Marxian class, although vaguer than either).

Today’s radical populists are representative of the larger coalitions in which they are embedded. Voting for Republicans is positively correlated with income and negatively correlated with education.3 Since education is the primary source of non-income-based status in modern America, Weber might see the Democrats as a coalition of people who want the status system to dominate the economic system, while the Republicans are the opposite. Since education and wealth are in turn highly correlated, politics at the elite level has become a narcissism of small differences as well-paid and highly educated liberal academics trade barbs with well-educated and highly paid conservative business executives. But whether it is Republicans denouncing out-of-touch elitists or Democrats calling for higher taxes on the wealthy, class is as important as gender and race to the politics of the one Western country that has never experienced a serious ideological challenge to capitalism.

While class is going to remain politically important for some time to come, its study currently lacks the intellectual dynamism it had two generations ago. This is particularly true on the intellectual right. We should welcome the fact that Charles Murray, author of Losing Ground and The Bell Curve and a bona fide right-winger, has written a bestseller on the subject of class. Coming Apart: The State of White America 1960–2010 purports to survey changes in American class structure over the last 50 years on the basis of sociological data. Its thesis is that those changes represent an “unravelling” of American culture.

With the exception of a single chapter, Murray restricts his focus to American whites. This is a defensible choice: class differences are most easily studied by keeping ethnicity constant. As Murray argues, in 1963 race was obviously the deepest cleavage in American society; by the end of the period, from a sociological point of view, ethnic differences in life chances could be explained by differences in the class composition of American ethnic groups. In other words, the social prospects of black, brown and white college-educated people with professional jobs are similar. So too are the prospects of black, brown and white Americans without high school diplomas. Of course, black and brown Americans are underrepresented among the college-educated and overrepresented among those without high school diplomas.

In one dimension, Murray sets his book up as a temporal comparison between November 21, 1963 (the day before President Kennedy was shot), and a present just before the financial crisis of October 2008. In another dimension, it is a spatial/social comparison between Fishtown, a white working-class neighbourhood of Philadelphia (in the book a composite of people who work, if at all, in blue collar and service industry jobs and have no education beyond high school), and Belmont, a wealthy suburb of Boston (doing duty as a composite of Americans with bachelor’s degrees and management or high-prestige professional jobs). Murray claims that Belmont and Fishtown diverged sharply over the 45 years, that the most significant aspect of the divergence is cultural as opposed to material, and that the divergence is threatening the American project as he understands it.

Murray finds a lot to criticize among both the upper and lower class. Belmont, he says, is out of touch. The managers, high-status professionals and cultural content providers constitute a “status group” in Weber’s technical sense, a circle for which “above all else a specific style of life can be expected” and who restrict nonfunctional interaction with everyone else.4 The food they eat, the wine and beer they drink, the vacations they take and the educational institutions they consider for their children are all designed to differentiate them from other Americans. Murray does not have a lot of sociological data, but illustrates his point with a test of his readers about their knowledge of NASCAR, family chain restaurants and popular television shows. Somewhat questionably, Murray argues that since his readers must be part of the upper class, if they do badly on these questions, it shows the upper class is more divorced from mainstream society than it used to be.

Murray concedes that rich people engaged in conspicuous consumption to get special status for themselves in 1963 as in 1863. The difference is that today’s upper class self-segregates with people cognitively similar to themselves. In 1963, as Murray tells it, America still had a high degree of status equality (within its dominant racial group), something that de Tocqueville celebrated in the 1830s and Weber remarked on in the early 20th century. Today, Murray suggests, elites are abandoning status equality; they are no longer rooted in a larger American society.

On the other hand, Murray establishes that the managerial-professional upper class works harder than ever and for the most part adheres to almost Victorian standards of bourgeois morality. The only major exception is that today’s bourgeoisie thinks premarital sexual relationships are sensible, so long as they are childless and broadly consistent with serial monogamy. The managerial-professional class gets divorced less, commits few crimes (Murray rightly wonders about white collar crimes and other abuses, but does not dig too deeply), and generally exercises and eats well – compared both to its working-class contemporaries and to the upper class of the seventies and eighties. Belmont conforms to a socially liberal ideology of tolerance and inclusiveness, but despite a little updating around the edges, its residents live as bourgeois a life as ever. Perhaps surprisingly, they go to church more than working-class and middle-class whites.

Murray has fewer positive things to say about Fishtown. Working-class white males are less likely to be employed, far less likely to be married and far more likely to be incarcerated than in 1963. In every postrecession recovery since the sixties, male labour force participation has failed to return to prerecession levels. Among women, workforce participation has not consistently declined, but it has remained flat since 1980 for those with high school or less, while educated women’s participation has steadily climbed in all but the worst economic times. Although Murray is unable to cite work less than a decade old, sociological studies in the eighties and nineties suggested cohabitation in America has not evolved into the de facto marriages seen in many other OECD countries. Instead, outcomes for children with two biological parents in a common law relationship in the United States were, as of the late 1990s, no better than for those in lone or step-parent families – which is to say, they were bad. Murray acknowledges improvements in the crime rate in the United States, but correctly points out that there has been no corresponding decrease in incarceration: an unprecedented portion of working-class men are in jail or otherwise under the supervision of the criminal justice system.

Murray has nothing particularly original to say as to why these trends have occurred. Globalization implies that market income disparities within countries will increase. Technological development has ambiguous effects in theory, but since 1970 it has tended to commoditize low-skill labour, while giving unprecedented opportunities to what Murray refers to as “someone with exceptional mathematical ability, interpersonal skills or common sense.” Combined with the crime and incarceration wave, and women’s greater choice as to whether to stick with the father of their children, this has led to less stable family formation among less educated Americans, which in turn has exacerbated poor intergenerational human capital development and other social problems. By contrast, educated Americans, with greater opportunities than ever if they can climb to the top, sublimate any need to depart from the lifestyles of the 1950s into a fondness for niche music, craft beer and peasant bread.

Unfortunately, Murray engages in almost no comparative analysis. But as the OECD has determined, while growing disparity in market earnings has affected all rich economies, whether this disparity turns into substantial divergence in post-tax/transfer wealth, income or life opportunities depends a lot on the generosity of tax and transfer policies. Even in the United States, almost all improvement in living standards of the working poor comes from more redistributive policy.5 In his discussion of Belmont, Murray dismisses any talk of more redistributive policy on the basis of an unconvincing “futility” argument that any such policy would be completely negated by increased tax avoidance. Here Murray confines himself to bald assertion and does not engage any of the specialized literature.

Later in the book, Murray rather surprisingly acknowledges that social democratic policies would work on their own terms: increased taxes and public spending would lead to less income inequality. Coming Apart, which up to this point claims to be descriptive and empirical, here takes a normative turn. Murray claims that recent “happiness science” supports Aristotle’s contention that the good life is measured not by satisfying preferences but by cultivating human capacities – “deep satisfactions” as Murray refers to them. For Murray, the four deep satisfactions are found in one’s relationship with family, with work (or avocation), with community and with God. On this view, Fishtown’s tragedy is not that it has declined in material terms but that its residents are less likely than 40 years ago to have jobs, have stable families, build civil associations or participate in religious institutions. Murray argues that social democratic solutions would worsen the loss of “deep satisfactions” because achieving them is premised on the possibility of material failure.

In the normative chapters, Murray finally reveals his dystopian conclusion: class polarization is threatening the “American project.” He refers to de Tocqueville’s classic observation that Americans tolerate more material inequality but less status inequality than Europeans. His highly questionable conclusion is that this contrast with Europe has largely eroded. He also holds a politicized idea of the American project as forbidding government policy designed to help those who have failed in the market.

Murray’s argument invites numerous methodological criticisms. His standards of evidence fluctuate wildly. With respect to residential concentration of rich and poor Americans, we get standard social science. Elsewhere, we must rely on a few lines from The Philadelphia Story to establish the proposition that moral norms against men sexually assaulting intoxicated women have deteriorated in the last 60 years. At the most basic level, the choice of 1963 as a starting point is arguably gerrymandered. It was a moment of unusual social and political consensus within white America. Ten years later, the class changes Murray notes were only beginning, but America was a lot more violent and white America more divided than it is today. The cosy image of 1963 also depends on excluding by definitional fiat the struggles around the rise of the civil rights movement. Nineteen sixty-three was the year of Martin Luther King’s “I have a dream” appeal to thousands assembled in Washington and of Governor George Wallace’s stand at the entrance to the University of Alabama in an (unsuccessful) attempt to block black students.

As Murray defines class, far more white Americans are “upper-class” in 2010 (21 per cent) compared with 1960 (6 per cent) and far fewer are now “working-class” (30 per cent) compared with 1960 (64 per cent). This complicates any story of growing apart.

A progressive response to Murray’s critique of Belmont might go as follows. The consumption and esthetic choices of the upper class are hardly worrisome. Early adoption of environmental awareness, acceptance of more equal gender roles and pursuit of healthier lifestyles are surely praiseworthy. The problem is that the upper class has taken almost all of the post-1963 gains in income and wealth.

People who confuse upper-class consumption habits with moral superiority do exist. We have all met idiots who think they are superior because they eat organic vegetables or drink craft beer. Even sillier are those who think lifestyle choices are blows against the “system.” Maybe all of us who have “Belmont” consumption habits can be tempted by these associations. The best response is satire. David Brooks did it well in Bobos in Paradise and Christian Lander’s website Stuff White People Like does it even better.

But surely it is also silly to think that a preference for craft beer and HBO television is a real social problem. The bourgeois have always been early adopters of new cultural developments, as of new technologies. If highly educated Americans are no longer willing to drink mass-produced beer and are unenthusiastic about stock-car racing, it is hard to blame them. Murray cannot demonstrate that such choices have any social significance comparable to the polarization of wealth and income.

Indeed, Murray fails to demonstrate that there is a trend toward cultural class polarization. Yes, hardly anyone in the professional-managerial class smokes today, while a third of working-class whites still do. But smoking has declined throughout the population: the cultural shift started with the educated and has diffused downwards. Other cultural developments, including ones Murray complains about such as tattoos and demotic dress codes, have diffused in the opposite social direction. Murray creates an unfalsifiable thesis: when the bourgeoisie develops its own cultural trends, it is evidence of growing apart, but when it adopts working-class trends, it proves the collapse of American civilization (Murray literally argues this).

No doubt, America is more culturally fragmented than it used to be. In 1963 network television was a cultural product to which almost everyone was exposed simultaneously; there are no such cultural products today. But nothing like that existed before the development of movies and radio in the twenties, and the fact that cultural products are niche marketed now does not necessarily imply that these niches pose a dangerous social problem. People choosing social groups and lifestyles rather than inheriting them is as American as Mount Rushmore. It is at least arguable that the increase in subcultures has, if anything, given Americans a broader array of social hierarchies to participate in, and thereby undermined the prospect of a single status structure, such as England traditionally had and Americans rejected.

Murray is on stronger ground arguing that there has been a change in the basis for upper-class membership. Human capital, particularly human capital well aligned with an economy geared around abstract reasoning and self-discipline, is much more important than it used to be. One can compare Yale’s freshman class of 1964 (most famous alumnus, George W. Bush) with Harvard’s of 2002 (most famous alumnus, Mark Zuckerberg). Murray’s argument that the human capital most rewarded in a modern economy is primarily genetically inherited is dubious. The “Flynn effect” – that raw IQ scores increase by a standard deviation every generation, especially for the most abstract tasks – cannot be explained by genetics. Nevertheless, there is no doubt that choice of parents is almost as important in determining a child’s acquisition of human capital as it once was in determining acquisition of land, stocks and bonds. The result is a new elite, almost as closed as the hereditary aristocracies of the past, convinced that its success is a result of its own intelligence and hard work, and arguably therefore less inclined to noblesse oblige or even to a basic loyalty to the polity in which it lives. Murray is probably right that more redistributive taxation would not really alter these developments, but it would of course generate revenue.

Murray’s diagnosis of Fishtown’s problems makes a lot of sense, but we need not accept his fatalism. As Murray would no doubt accept, it is not up to the liberal state to directly promote his “deep satisfactions,” although it certainly makes sense to try to find policy solutions to declining labour-force participation for working-class males. Murray argues that any welfare state has the tendency to reduce “deep satisfactions” by mitigating risk, but this argument fails to account for the obvious fact that it is far easier to fail in Fishtown than in Belmont. If the children of the rich are doing better at sustaining jobs and marriages, it is not because they face a greater likelihood of material deprivation if they do not.

To the extent that social programs and taxes are designed in a way that punishes people for getting jobs, they will indeed make things worse. Heavily means-tested benefits or benefits provided without work requirements eliminate the reward from work. For some groups this perverse effect is a crucial consideration; for others, encouraging work is not as important as transferring income. Universal public services funded by broad taxation tend to mitigate inequality and can be consistent with a broad expectation that everyone will do their part. Murray’s preference – an unconditional basic income, clawed back as earnings rise – really does have the potential to drive those with marginal skills out of the workforce and to undermine the sense of reciprocity that political support for public services requires.

Murray does not distinguish between what I consider a truly social democratic model of relatively high but broadly based taxes combined with broad and honest provision of universal public services and a welfarist model of no-strings-attached entitlement to means-tested benefits. He is able to get away with this through a cliché-laden discussion of the “European” model, which entirely fails to distinguish between closed labour markets combined with handouts and open labour markets combined with generous social services. From a social democratic perspective, commitments to universal health care or education do not degrade community or “deep satisfactions,” but enable them.

In certain respects, America has done better in maintaining labour force participation than many European countries because it has kept its labour market relatively open and has supplemented the earnings of the working poor. Since the 1970s, macroeconomic policy across the OECD has tended to favour investors (by keeping inflation low) over workers (by tolerating high unemployment). In the United States, at least, monetary policy has also targeted unemployment; Europe has a completely unaccountable central bank in thrall to sado-monetarism.

The neoliberal response to globalization and technological change has always been to promote education. The OECD has shown that education spending can promote equality. Murray usefully points out that not everyone is equally able to take advantage of education that relies on abstraction and self-discipline, and that working-class males may be the most likely to drop out of school, with negative results for their families, communities and the broader society. North America has failed to provide high-quality education that works with the strengths of working-class men. With some justice, Murray can claim that this problem is given less attention by the public-policy-oriented elite because the people affected are so unlike the elite. At the same time, Murray and his ideological allies fail to acknowledge how little the free market is likely to improve things.

Perhaps the economic crisis will finally precipitate a class-oriented political movement focused on the contemporary world. Such a movement will need to reach outside the current framework of left and right. It should work both to reduce economic disparity and to reverse declining cultural capital. While Coming Apart is flawed and the flaws reflect the preoccupations of the American right, it is a serious attempt to grapple with class and deserves consideration – including from those who reject Murray’s ideological assumptions.


Canadians are a fortunate people who live in a successful country. And despite our self-deprecating image, we do not tire of telling ourselves so at high school commencement addresses or viceregal functions. But when it comes to public policy books for the general reader, we prefer the tone sombre and the narrative declinist. After all, George Grant wrote Lament for a Nation, not Audacity of Hope. Recent additions to the Canadian foreign policy genre have been dominated by complaints of decline in Canadian influence, seriousness and power.1

Many critics have taken up Grant’s narrative of a supine Canadian Establishment bending to the whims of the American imperial colossus. Others – centre-right complainers like Andrew Cohen and J.L. Granatstein – look back to a golden age peopled by the muscular Atlanticist liberal internationalists Grant hated. Either way, though, Canadian foreign policy writing eschews Mosaic rhetoric of new covenants in favour of Jeremiah’s language of lamentation and captivity. True, our lamentation is a relatively comfortable and therefore slightly comic one, provoked not by disaster and ruin but by uncertainty and loss of purpose. But it is the rhetoric we are comfortable with, and it is hard to sell anything else.

In Getting Back in the Game: A Foreign Policy Playbook for Canada, former Ambassador to the United Nations Paul Heinbecker gives an Establishment answer to all this doom and gloom. Canada is still influential and a force for good. Our fundamental goals of a close-but-independent relationship with the United States and a strong international system based on law are sound. What we need are more diplomatic and military resources, more confidence, a few strategic tweaks and less symbolic policy driven by domestic politics. The world needs more Canada and should get it. Canada, in turn, should put its game face on and give multilateralism 110 per cent.

My expectations of a foreign policy book by a long-time Foreign Affairs mandarin were not high. I worried I would have to read a turgid narrative of acronyms and state dinners, which carefully avoided any clear or interesting opinion. Fortunately, Getting Back in the Game is much better than that. So long as you don’t mind sports metaphors (two in the title alone), you won’t have stylistic complaints about this book. Heinbecker tells the story of Canadian foreign policy since Mackenzie King. He manages to be simultaneously critical, balanced and supportive of the basic enterprise. His opinions are not surprising, but he argues for them in clear, forceful language. What he does not do is question the basic assumptions of Canadian diplomacy or analyze why those assumptions have such trouble obtaining a consensus among the broader Canadian public today.

Part 1 of Getting Back in the Game exhorts Canadians to wallow less in self-doubt and project their values and interests more confidently on the world stage. Although it will never again have the relative significance it did in the 1940s, when most of the world was either recovering from war or still colonized, Canada is more powerful – certainly potentially – than we give it credit for.

How persuasive readers find this will depend on their expectations in the first place. “Worthwhile Canadian Initiative” is a proverbially boring headline because there have been a multitude of such initiatives. As I write, NATO has launched an air war against the Gaddafi regime in Libya under the leadership of a Canadian general and based on the “Responsibility to Protect” doctrine, promoted by Canada at the 2005 UN Summit.2 But it would be unwise to push the point too far. Canadians depend on a world order that they can only affect on the margins. The same may be true of the United States, but Canadians cannot really fool themselves on the subject. Mostly, we have to hope our luck holds.

Part 2 is a history of Canadian foreign policy. The tale is Prime Minister–centric. Heinbecker’s biases are clear, if nonpartisan. There are two heroes, Pearson (who gets credit for the Saint-Laurent years) and Mulroney. There are two antiheroes, Mackenzie King and Stephen Harper. Grant’s own hero, Diefenbaker, can have nothing nice said about him, so almost nothing is said at all. Trudeau and Chrétien get mixed reviews.

King was of course politically formed by World War I and the conscription conflict that divided Canada on ethnic-linguistic lines. He promoted Canada’s independence from the British Empire, while accepting that it was inevitably aligned with the major anglophone powers. He led Canada through its second great war, politely hosting Roosevelt and Churchill with no real ambition to influence what they decided. Independence was a means of avoiding traditional imperial entanglements, even if the biggest entanglement could not be avoided. For King, foreign affairs were primarily a source of danger – to our finances, to the lives of our young men and, most of all, to fragile Canadian unity.

The postwar era was far more heroic from Heinbecker’s perspective. Although King remained Prime Minister until November 1948, Louis Saint-Laurent, as his chosen successor and Secretary of State for External Affairs, made foreign policy despite King’s misgivings. Pearson, a seasoned diplomat, became Saint-Laurent’s External Affairs Secretary and successor in his turn. With the unfortunate interregnum of Diefenbaker, there was substantial continuity until Pearson left office in 1968.

Canada, although clearly not in the same league as the United States and Soviet Union, was a significant military and economic power. Europe and Japan remained devastated by war; Africa and most of Asia were still colonies. The best and brightest flocked to the federal bureaucracy and External Affairs in particular. Canadians played a leading role in building both the multilateral institutions around the United Nations and the Cold War institutions around NATO. To that generation, there was no contradiction between progressive globalism and muscular anti-Communism.

There was also no sense that Canadian independence – still conceived as independence from Britain – was at odds with a close relationship with the United States. The Pearson generation shared the progressive antitotalitarian and technocratic assumptions of the East Coast U.S. foreign policy establishment – although they no doubt found the isolationist and McCarthyite wings of the Republican Party as baffling as Heinbecker finds their Tea Party successors. Finally, they saw no conflict between Canadian interests and Canadian values, and were sure that they understood and embodied both.

Heinbecker agrees with contemporary critics like Andrew Cohen that this was a golden age, and longs for the untroubled synthesis of the Pearson era. Although he does not put it this way, he is in fundamental agreement with the Pearsonian impulse to accept and support American hegemony while binding that hegemony to a legal framework. We should support the Throne, but seek a constitutional settlement.

However, he is more nuanced than Cohen in his assessment of the subsequent Trudeau years (1968–84). Heinbecker makes the interesting point that Trudeau came to office convinced that foreign policy should emphasize Canadian interests more, and moralism less – much like Stephen Harper 40 years later. Trudeau’s enduring concern was fighting Quebec separatism and so he emphasized respect for state sovereignty and territorial integrity. This (possibly along with a desire to make life difficult for France) led him to support the central government of Nigeria in its bloody suppression of the Ibo secessionist government of Biafra. As Heinbecker points out, there is a parallel with Harper’s unconditional support of Israeli actions in Gaza and Lebanon. In both cases, Canada’s position was determined neither by morality nor by a hardheaded assessment of interests, but by projecting internal identity politics onto someone else’s conflict.

Unlike Harper, Trudeau had no appreciation of military values and generally had difficult relations with American presidents (Heinbecker points out the significant exceptions of Gerald Ford, who gave Canada the prize of a seat at the G7, and Jimmy Carter). Heinbecker correctly views his attempts to diversify Canada’s trading relationships away from the United States as quixotic and half-baked. Trudeau befriended a number of “progressive” Third World despots, but could not be taken seriously by other NATO leaders. For all his talents, Trudeau was ineffective on the international stage and his cherished final peace initiative never went anywhere. On the other hand, while critical of the way he conducted bilateral relations, Heinbecker would not fault Trudeau for maintaining a policy independent of the United States and credits him with building up a reservoir of warm feeling for Canada in much of the Third World.

Heinbecker’s lessons for today are that (as the right argues) Canada cannot expect to have influence unless it is a loyal ally to the Western countries that share its values and interests and pulls its weight, and (as the left argues) detaching interests from values is less smart than it sounds.

The warmest part of the history is Heinbecker’s account of the Mulroney years (1984–93). In Heinbecker’s telling, Mulroney’s achievement was at least equal to that of the Pearsonian golden age. Mulroney is of course widely distrusted by the Canadian public, and not without reason, but Heinbecker stoutly defends him. Mulroney certainly emphasized a close relationship with the United States, and was a confidant of both Reagan and the elder Bush, but Heinbecker denies that he was ever subservient to American interests. Heinbecker points to Mulroney’s close relationship with Nelson Mandela’s African National Congress, at a time when the Reagan administration viewed the ANC (with some reason, though it is now in no one’s interest to remember) as part of the Moscow-backed world Communist movement. Mulroney was also a strong supporter of multilateral institutions and global environmental agreements – his influence on Reagan and Bush was therefore a thorn in the side of neoconservatives who viewed both as threats to U.S. sovereignty.

Heinbecker cannot entirely decide what he thinks about the Chrétien years. On the one hand, Chrétien shared King’s caution and relative lack of engagement in foreign relations. The need to concentrate on deficit reduction led to a loss of resources for aid, diplomacy and the military. Canadian peacekeeping commitments withered. On the other hand, Heinbecker approves of the foreign ministers of that era and the calls Chrétien made – most notably concerning the decision not to participate in the second Iraq war.

Despite his lack of sympathy for declinism, Heinbecker is generally critical of Harper’s Conservatives. He thinks they are making a mistake in providing greater resources to the armed forces but not to diplomacy and aid. The decision to emphasize Latin America instead of Africa for aid priorities was boneheaded. Most of all, Harper is taken to task for emphasizing domestic identity politics in Canada’s foreign relations. He thinks (no doubt correctly) that the Harper government views international politics as an easy way to pick up votes among various traditionally Liberal diasporas. The obvious example is Israel/Palestine, but Heinbecker points to others.

In Part 3, Heinbecker makes his pitch for more resources and for a moderate policy. He marches through the major issues (Afghanistan, climate change, Security Council reform) in no-nonsense briefing-note style. There is much to agree with: for example, a North American Union on the EU model is a pointless distraction and success on bilateral issues with the United States will depend on finding allies among American interest groups and congressional barons. The fact that the United Nations is flawed hardly makes it dispensable. Peacekeeping often has a good cost-benefit ratio and we should do more of it.

The main difficulty is that Heinbecker takes a technocratic view of Canadian interests and values. For him they are givens, rather than the product of an internal political process. Indeed, his one disagreement with his hero Pearson is his dislike of the latter’s dictum that “foreign policy is really domestic policy with its hat on.” From Heinbecker’s perspective, domestic policy and, even more, domestic politics should have nothing to do with it.

Heinbecker realizes that different ethnic groups have different views of what constitutes Canadian values in lands to which they retain an emotional attachment. He also sees that Canadian politicians inevitably view foreign events through a domestic lens, as Trudeau did in Biafra. What he fails to consider is that it cannot be otherwise. An increasingly diverse nation is not going to have the common values that the WASP Pearsonian mandarins could take for granted. As a result, we should be cautious about an interventionist foreign policy, since we may simply import foreign conflicts into our own society. King’s nightmare now applies not just to the two “founding nations” but to almost every ethnic group on Earth.

One case in point is the contrast between angry Tamil demonstrators across Canada calling on Ottawa to denounce the Sri Lankan government’s destruction of the autonomous Tamil homeland and Sinhalese demonstrators wanting Canadian official support for Colombo’s fight against Tamil terrorists. Randomly looking at last weekend’s Globe and Mail (April 9, 2011), I read that the Conservative Party is proposing to make religious freedom in Egypt a priority – coincidentally, there is a significant Coptic Christian diaspora in a swing Mississauga riding.

Any particular example may be insignificant. Added together, they suggest we cannot assume that foreign policy will be motivated by shared values, since these often will not exist. Most Canadians will be uninterested in policy in a particular place, and those who are will have inherited a particularist narrative about what is going on.

Heinbecker also assumes, rather than demonstrates, that when Canadians want to help they know what to do. He points to the Mulroney government’s action during the Ethiopian famine of the mid-eighties without noting that this involved Canadian funding for a forced government resettlement of disfavoured ethnic groups. Canada has designated Bangladesh as one of 20 “countries of focus” for CIDA aid, but has said nothing publicly about the perverse, vindictive campaign waged by the Bangladesh government against Muhammad Yunus, head of Grameen Bank, the world’s premier microfinance institution. The mixed record of Canadian development aid and a decade in Afghanistan have made Canadians wary about their ability to improve the world. Perhaps we can never have enough local knowledge to avoid blunders that make things worse.

In the end, I agree with Heinbecker that pessimism/fatalism is too easy. We have a fortunate position in a dangerous world, and we should do what we can to promote peace, human rights and development. Despite everything, all three have made significant progress since the days of Pearson or even of Mulroney. Demographic pressure on domestic health care costs will make cutting back on globally oriented expenditure increasingly appealing to Canadian voters, but we cannot sustain our attractive mixed economy unless the rest of the world has at least the hope of progress. We should be willing to make some sacrifices to contribute to making that hope more realistic.

I also agree with Heinbecker that the poles of pro- and anti-American sentiment in this country have more to do with our own insecurity than with any real choice that faces us. We are better off with a strong America and we are also better off with a law-abiding one. We should use the opportunity of an American administration relatively open to the possibility that these are mutually consistent.

Laurier’s claim that the 20th century would belong to Canada has been rightly ridiculed, and even Heinbecker wouldn’t claim that Laurier was actually vindicated by Canada’s contributions to the postwar order. But the values we have sometimes painfully come to agree on – democracy, a free economy with generous social programs, ethnic and linguistic diversity, gender equality, the rule of law – are genuinely if imperfectly reflected in our institutions. It would be a mistake to think that everyone in the world aspires to those values, but Heinbecker correctly points out that more do than at any previous time in history. If Canada can bring a spirit of tough-mindedness and focus to its multilateral and development efforts, we might at least make the contribution to the next century that we made to the last.


How political are judges? How political should they be?

Jeffrey Toobin, The Nine: Inside the Secret World of the Supreme Court. New York: Doubleday, 2007. 480 pages.

Richard Posner, How Judges Think. Cambridge, MA: Harvard University Press, 2008. 400 pages.

Reviewed by Gareth Morley

These questions make Canadians nervous. Politicians recall the fate of Reform/Conservative MPs Randy White and Maurice Vellacott, both sentenced to political oblivion for running against the courts. Polls consistently show that judges enjoy high esteem, while the politicians who appoint them and the lawyers from whose ranks they are drawn do not. For decades, every law student has been exposed to the “legal realist” thesis that law is politics by other means,1 and every litigator is intensely interested in the perspective and background of the judges before whom he or she argues. Publicly, however, the legal profession reacts with outrage to any suggestion that the line between law and politics is permeable. As a result, public policy discussions about how best to appoint judges and the appropriate limits of their power are difficult to conduct without provoking the charge of disrespect toward the Charter of Rights and Freedoms.

When Stephen Harper required Marshall Rothstein, his nominee to replace John Major on the Supreme Court of Canada, to testify before an ad hoc parliamentary committee, the move was widely denounced for politicizing the judicial selection process. The assembled parliamentarians received a stern lecture from Professor Peter Hogg, a supporter of the new process, warning them that they were to consider only the nominee’s “professional and personal qualities” for the office – not ideological leanings or the likely political effect of his judgements. They complied. Few substantive questions were asked, and none were answered.2

Replacing Justice Michel Bastarache in what is widely regarded as the Court’s Atlantic seat has been more contentious. Initially, a Supreme Court selection panel made up of two MPs from the government and one from each of the opposition parties was to come up with a shortlist. However, the opposition nominees to the panel objected to the government’s representatives being cabinet ministers and, according to the government, refused to “consider substantive business.”3 The Prime Minister unilaterally nominated Justice Thomas Cromwell of the Nova Scotia Court of Appeal, subject to questioning before another ad hoc committee after the federal election.

Harper’s actions were constitutionally presumptuous. If the appointment was conditional on Justice Cromwell’s appearance before the ad hoc committee, then it had not been made. But if it had not been made, then Harper could only “announce” that Justice Cromwell would be the one on the assumption he would be Prime Minister after October 14 – an assumption one would think a politician facing the electorate should be cautious about making.

Newfoundland and Labrador’s Justice Minister has angrily announced that Harper treated his province with “disrespect” by not considering a judge from Newfoundland and Labrador – suggesting a punitive motive on Harper’s part in the ongoing conflict with Premier Danny Williams.4 Ideological, as opposed to regionalist, attacks seem less likely, since Justice Cromwell is widely respected within the liberal legal elite, and was appointed a Court of Appeal judge by Jean Chrétien.

While court appointments in the 19th and early 20th centuries, including appointments to the Supreme Court of Canada, were of considerable patronage interest, they did not figure in ideological or policy debates. The ultimate decisions were made in London by British judges sitting as the Judicial Committee of the Privy Council. In the 1930s, progressive Canadian intellectuals decried the Judicial Committee for striking down R.B. Bennett’s modest New Deal legislation – just as the American Supreme Court was reversing itself and allowing Roosevelt’s more ambitious equivalent. They succeeded in having appeals to the Privy Council abolished in 1949. After some signs of boldness in the 1950s, our highest court became extremely conservative and formalist in the 1960s, earning Ron Cheffins’s description of it as a “quiet court in an unquiet country.” The battles of the 1970s – over inflation policy, resources and constitutional change – differed from those of the 1930s and 1940s, but they continued to turn on decisions of politicians, not judges.

Entrenchment of the Charter of Rights in 1982 put the Court near the centre of Canadian public policy, and undoubtedly raised the stakes of appointment. Unquestionably a primary actor in criminal procedure, abortion, gay rights and Aboriginal policy, the Supreme Court has recently, and controversially, extended its bailiwick to medicare and public sector labour relations.5 Interest groups rate judges on the basis of their inclinations toward the issues the group cares about.6 And academic empirical work demonstrates some connection between party of appointment and the way intermediate appellate judges rule.7 We may not like to talk about judicial politics, but that doesn’t mean it doesn’t exist.

Moreover, it seems reasonable to think that the political salience of judicial appointment in Canada will only increase. It is difficult to find clear ideological differences between Supreme Court justices appointed by Mulroney and those appointed by Trudeau or Chrétien. Although more liberal and more conservative judges can be identified, on the highest court at least, these labels do not tend to track party of appointment. But ideological differences between the major parties, particularly on matters susceptible to “rights talk,” are greater than they used to be, and ideological orientations among judicial appointees may well loom large in the near future. We may look back at the sparring over the process of Justice Cromwell’s appointment as a harbinger of intense ideological conflict in the future.

Nine workaholics: The U.S. Supreme Court

To glimpse what that future will look like, it is tempting to look south. There can be no doubt about the political salience of the judicial system in the United States.

From the time the Democrats took control of Congress in the fall of 2006 to the financial crisis in the fall of 2008, neither they nor President Bush initiated much policy change. But in that same two-year period, the Supreme Court of the United States upheld laws banning “partial birth” abortion, thereby reversing its decision of seven years earlier; struck down capital punishment for rape of a child; abolished the automatic exclusion rule for unconstitutional searches; invalidated government attempts to actively desegregate schools; invalidated the Military Commissions Act on the grounds that it unconstitutionally suspended the right of habeas corpus; and overturned the District of Columbia’s gun control laws.8 The hot-button issues that move the partisans of both the Democratic and Republican coalitions always seem to end up in court – which is, notoriously, where the 2000 presidential election was decided.

In almost every one of these cases, a solid bloc of four liberal justices (John Paul Stevens, David Souter, Ruth Bader Ginsburg and Stephen Breyer) faced off against an equally solid bloc of conservatives (John Roberts, Antonin Scalia, Clarence Thomas and Samuel Alito), with Justice Anthony Kennedy providing the swing vote. The importance of confirmation battles is underscored by the fact that Kennedy became a Supreme Court justice because of the defeat of Ronald Reagan’s original nominee, Robert Bork – a fearsomely conservative jurist who would undoubtedly have voted with the right on every one of these cases. Had Bork been confirmed instead of Kennedy, it is highly likely that the constitutional right to abortion found in Roe v. Wade would have been undone in 1992.9

Jeffrey Toobin provides a solid journalistic guide to the battles that led to the current Roberts Court. Toobin is a legally trained writer for the New Yorker and analyst for CNN. His previous books addressed the O.J. Simpson, Monica Lewinsky and Bush v. Gore circuses. It is a greater challenge to make the usually buttoned-up jurisprudence of the U.S. Supreme Court interesting and accessible, but Toobin does a good job of providing a readable account.

Toobin’s book is modelled on Bob Woodward’s The Brethren, which skewered the Burger Court in the late 1970s, and Woodward helpfully contributes a blurb for Toobin. Unlike Woodward, Toobin is unable to break any stories that would be genuinely surprising to casual court-watchers – perhaps because the Rehnquist Court was more disciplined than its predecessors in the 1970s. So we learn that Clarence Thomas’s fellow judges disapproved of his interview with People magazine, that ideological opponents Antonin Scalia and Ruth Bader Ginsburg enjoy attending operas together, and that confirmed bachelor David Souter’s female colleagues have repeatedly tried to set him up with eligible women – with no lasting success. Either the private lives and animosities of this particular group of workaholics are exceedingly boring, or Toobin lacks Woodward’s sources.

Toobin may not have newsworthy stories to break, but he leaves us in no doubt about how political, and politically polarized, the Court is. He reports that Chief Justice Rehnquist eventually became completely cynical about legal reasoning. According to Toobin, Rehnquist came to the conclusion that all that mattered was which side had the votes. For Toobin, the story of the Rehnquist Court was one of a judicially conservative revolution gradually becalmed by the politicking skills of some of the liberals (particularly William Brennan, Stevens and Breyer) and by the ambivalence of its moderates, Justices Sandra Day O’Connor and Kennedy. Kennedy and O’Connor joined with the conservatives in giving George W. Bush the presidency in December 2000 – Toobin shows how eager Kennedy was to have the Court intervene.

Both O’Connor and Kennedy came to see themselves as part of (and perhaps leaders in) an international corps of judges somehow engaged in a common task of defining human rights, defending judicial independence and upholding the “rule of law.” This corps has emerged from the extension of U.S.-style judicial review of legislation throughout the democratic world, especially in countries that have democratized since 1990. O’Connor and Kennedy may have been on the “right” of this globalized judiciary, but they came to care about its fate and their reputation within it.

Nothing could be more alien, either intellectually or emotionally, to their fellow Republican appointees Scalia and Thomas. Through the Clinton Administration and George W. Bush’s first term, Scalia’s witty and acerbic rhetoric became increasingly directed at the moderates, who responded by drawing closer to the liberals, none of whom are radicals in any event. (Stevens and Souter were also Republican appointees; Breyer was one of the minds behind deregulation in the early 1980s and has many ties with moderate Republicans; Ginsburg, appointed by Clinton, was demonized by culture warriors like Pat Buchanan but was found acceptable by the Senate Republican leadership.) The result is a court that is neoliberal in economics, but with a fair degree of social liberalism. O’Connor’s retirement and Rehnquist’s death in 2005 have pushed the U.S. Supreme Court to its current configuration, in which a single moderate (Kennedy) holds the balance. Kennedy’s libertarian streak has obliged the Court to act as a significant check on the “war on terror,” despite its traditionally deferential approach to national security.

It is highly likely that there will be new vacancies, especially on the liberal side, during the next presidential term. No one can seriously think that the future evolution of U.S. constitutional law depends to any substantial degree on the cogency of argument or the specificities of evidence. Rather, it turns on which party controls the White House and the Senate and, in the latter case, by what margin. The U.S. Supreme Court is politics – and barely by other means.

Are judges like umpires?

While almost all honest observers would concede that the U.S. Court, and its future constitutional direction, have become part of the political process, there is much less agreement on the normative question of whether this is a good thing.

During his confirmation hearings, now–Chief Justice John Roberts claimed that judges ought to be like umpires. They should call cases as they see them, applying rules of the game that they are not empowered to change. The real action should be left to the long-dead framers and ratifiers of the Constitution and the elected lawmakers today. After all, “no one goes to a baseball game to see the umpire.”

Roberts’s folksy sports analogy was widely interpreted as a promise that he would be a moderate on the Court. That is not how leading American legal academics, on the left and right, interpret what Roberts is or should be doing. The claim that a particular method of judging is the only way to be faithful to The Law is made both on the judicial left and judicial right. It unites Judge Bork with his antagonist Professor Ronald Dworkin.

In How Judges Think, Seventh Circuit Judge Richard Posner ridicules Roberts’s analogy, all but accusing him of bad faith:

Neither he nor any other knowledgeable person actually believed or believes that the rules that judges in our system apply, particularly appellate judges and most particularly the Justices of the U.S. Supreme Court, are given to them the way the rules of baseball are given to umpires. We must imagine that umpires, in addition to calling balls and strikes, made the rules of baseball and changed them at will. Suppose some umpires thought that pitchers were too powerful and so they decided that instead of three strikes and the batter is out it is six strikes and he’s out, but other umpires were very protective of pitchers and thought there were too many hits and therefore decreed that a batter would be allowed only one strike.

In addition to being a sitting federal appellate judge, Posner is a prolific academic and no stranger to controversy. Best known in Canada for forcefully (and somewhat sarcastically) dismissing Conrad Black’s appeal and rejecting what he termed Black’s “no harm no foul” argument,10 Posner gained notoriety in legal circles for applying an explicitly reductionist neoclassical economic approach to law. Author of a widely used text (Economic Analysis of Law), he argues that the genius of the common law is that decisions evolve over time to advance economic efficiency. Enhancing economic efficiency is both empirically what courts do and the appropriate goal of law. Courts do not afford view rights to owners of existing property, for example, because such rights would impose unduly high transaction costs on future builders.

Posner’s positions are often controversial: rape should be criminalized so men do not inefficiently avoid the market for permanent sex partners; adoption would be better coordinated through an auction for babies.11 Appointed to the Seventh Circuit in 1981 as part of Reagan’s attempt to transform the ideological character of the federal judiciary, he has been a well-respected and collegial judge – distinguished by his productivity, but without the radicalism of his academic work. He has brought his wide-ranging, reductionist and contrarian intellect to bear on a range of issues. Like Toobin, he has written on Monica Lewinsky and Bush v. Gore (although not O.J. Simpson). In his writings, he has applied his economic approach not only to sex and baby auctions, but also to old age, jurisprudence, antitrust, intellectual property, law and literature, public intellectuals and national security.

In How Judges Think, Posner sticks closer to home: his principal subject is the U.S. federal appellate courts, although he considers elected state judges and the comparative careers of the judiciary in European civil-code countries and in Commonwealth common-law countries (principally Britain). Posner has no doubt that judges are, and should be, policymakers attempting to find the most practical and reasonable solution to the problems they are faced with. In doing so, they apply their moral and political loyalties and values. Politicians, interest groups, successful litigators and the judges themselves know this. Those who pretend otherwise, Posner thinks, engage in mystification and humbug.

Posner’s version of the legal realist position has been somewhat refined through responding to the neoformalist12 critique and through his own experience. He recognizes that lawyers and judges take traditional legal sources – precedents, the texts of laws, contracts and other legal instruments – seriously and have no trouble applying them in easy cases. However, easy cases are unlikely to be litigated to the appellate courts. Most of the cases that fill the law reports could not be straightforwardly decided on the basis of formal legal sources and, Posner argues, the reasons given in those reports do not reflect the real motives for the decisions. Such reasons are after-the-fact rationalizations, typically written (in the U.S. federal courts at least) by law clerks.

Posner does not think judges decide on the basis of personal animosity or sympathy with individual litigants or their lawyers – the form of bias most laypeople worry about. As a law-and-economics guru, Posner notes that judicial incentives to decide a particular way are kept deliberately weak. However, the very weakness of the incentives increases the importance of the selection (and self-selection) process. The key to understanding the decisions judges make is understanding the kind of person who gets (and accepts) a judicial appointment: hard-working, bright, risk-averse and politically mainstream. Posner argues that it is the sociological similarity of the people in the judiciary that makes law predictable, not some science of law known to the initiated.

For Posner, the Supreme Court of the United States in particular is a “political court.” While he doubts that the legal issues decided by ordinary appellate courts can be resolved using legalist sources alone, he also recognizes that most appellate decisions have low salience with the broader public. They are political, but usually not high-profile “big-P” Political. Even the Supreme Court docket contains many cases dealing with technical issues of procedure and construction of obscure statutes. But because it controls the cases it will take (a tiny proportion of those decided by the appellate courts below it), because it is very weakly bound by precedent and because it is “drawn moth-like” to the flames of hot-button social issues, the Supreme Court does deal with high-profile “big-P” Political issues. Posner argues it is best seen as an oligarchic legislature.

“Realist” accounts of judges’ functions are often attacked on the grounds that they permit a judge to do whatever he or she wants, thereby threatening democracy. Posner argues the opposite: a judge who understands the freedom he or she exercises should be more careful about using it. Pragmatic reasoning should be based on institutional as well as substantive consequences. In particular, a judge who understands that broad constitutional principles can justify any number of outcomes should be more willing to leave controversial decisions to legislatures. There is nothing inevitable about this inference, as Posner would presumably admit. If a realist or “pragmatic” judge doubts the wisdom of democracy and thinks that a countermajoritarian decision will stick, there is no reason internal to realism or pragmatism to stop that judge from overruling the legislative will.

Posner argues that judges obtain some legitimacy from their manner of appointment, but that the ultimate justification for judicial power is that it is necessary in a country with the decentralized and competitive democratic institutions and legalistic and individualist culture that prevail in the United States. Here Posner is undoubtedly trying to deflate the heroic self-image of lawyers and judges as defenders of human rights. (The most annoying chapter for his colleagues will undoubtedly be the one in which he argues against salary increases for federal judges.) Judges may not be champions of the downtrodden, but they have an important pragmatic role. The common-law model of the powerful appellate judge advances individual liberty and economic efficiency more than the bureaucratic civil-law judge, although the latter better reflects democratic principles.

Posner attacks two sets of issues for two different audiences, and one of the weaknesses of his account is a failure to distinguish between them. Most of Posner’s book concerns the ordinary decision-making of intermediate appellate courts on matters of statutory interpretation, procedure and common law. Posner is right that close cases (the kind that appellate courts get) inevitably involve policy. However, it does not follow that they only involve policy. The reason the case is a close one may be that the formal sources (precedent and text) point in one direction while policy points in the other – and judges do not always pick policy in these circumstances. Judges may well decide a statute or precedent is clear, even when they disagree with it.13

This debate is of interest mostly to legal professionals. Even when courts do rely on policy grounds for decisions (whether openly or covertly), the policy objectives at issue are generally not particularly controversial. As Posner himself notes, no one is opposed to enforcing solemn agreements or making those at fault pay for accidents. In the infrequent case where there is political salience to a matter of private law or statutory interpretation, the politicians are free to correct the courts – although it is easier to do this in the Canadian parliamentary system than in the American one of divided legislative authority.

But constitutional law raises different questions, and here Posner acts not only as legal academic and judge but also as a public intellectual. Although even constitutional decisions are susceptible to being “gotten around” by politicians, it is not easy. America still has the abortion regime created by Roe v. Wade in 1973, and looks to have it for a good while yet. Moreover, constitutional decisions have a much greater tendency to be about matters of political salience. And so it is in the constitutional sphere that the search for legitimacy in a disembodied law is most pressing.

There are essentially two approaches to trying to find this legitimacy. The first is to try to show how simple majoritarianism diverges from a more ideal form of democracy, and to suggest the judicial task is to bridge the difference. For the economically inclined, this approach is typically associated with public choice theory. Disappointingly, Posner has little to say about the extent to which failures in democratic government justify judicial intervention, although he rightly cautions that we should compare non-ideal democracies not to ideal judiciaries, but to real ones.

The other approach to justifying countermajoritarian judicial review is “originalism” – believed to be discredited a generation ago, but now firmly entrenched in both academic and judicial circles in the United States. At one time, “original intent” was a uniquely right-wing slogan, and it was vulnerable to several criticisms. Many issues that come before courts could not possibly have been thought about by the originators of a historic constitutional document: what could Madison or Hamilton have thought about whether references to an “army and navy” allow for an air force or whether a wiretap is a search and seizure? On those issues they did consider, it is quite possible that the “framers” disagreed with one another, and they may well have assumed that subsequent courts would use their own judgement on hard questions. (There is evidence from the Federalist No. 78 that Hamilton thought so, and it is beyond dispute that the authors of Canada’s Charter of Rights did.) Moreover, it is hard to see what the legitimacy would be in allowing the intentions of long-dead slaveowners to determine the law today.

However, over the last generation, a more sophisticated form of originalism that focuses on “original understanding” has arisen. The new originalists (who sometimes include Antonin Scalia himself) distinguish between the meaning of a legal concept and its “expected application” in concrete decisions. While the meaning remains the same, the expected application is correctly subject to change as new evidence, arguments and judges arise. In this view, “cruel and unusual punishment” means the same now as it did in the 18th century – punishment that is in excess of what a civilized society should tolerate. What has changed is not the meaning of the words, but what we are prepared to tolerate.

There are difficulties with the new, more sophisticated originalism. What it gains in plausibility, it loses in determinacy. Almost any decision – even the invalidation of all death penalty statutes or an order requiring a more progressive income tax – could conceivably be “originalist” on this approach. But in dismissing originalism as nonsense, Posner does not consider its currently most influential form.

Posner’s descriptive theory is also weakened by its ahistorical quality. There is too little about how the Reagan Administration moved away from the traditional patronage-based approach to appointment and decided to appoint more ideological judges (thereby giving Posner his position) and how the Democrats ultimately responded by transforming the confirmation process into a negotiation over the ideological complexion of federal judges. Interestingly, the very politicization of the selection process (in the sense of greater attention to the ideology of the nominees, as opposed to simply their party connections) may be a good thing for the system’s overall legitimacy and effectiveness. After the Democrats defeated Bork, it became necessary for the president and the senators in the opposition party to compromise somewhat on appointments. The result may be bloc voting, but it is also somewhat democratic – albeit with a lag. And even the lag may be a good thing, because it helps stabilize public policy.

Judicial politics: Open or closed?

No Canadian Toobin has arisen to air the internal politics of our court. A Canadian Posner is still unthinkable: our judges are unlikely to engage in spirited jurisprudential polemics with our Supreme Court. But Posner’s fundamental points are as valid for Canadians as for Americans. Our courts are just as unable to resolve difficult legal questions with reference to legal sources alone. They are just as dependent on their conceptions of what good policy would be.

The objections to an open selection process “politicizing” the judiciary are misguided: a judiciary that can strike down laws is necessarily politicized. The real question is whether the politics involved will be open or closed. It is crazy for politicians with a vision of where they want the country to go not to be interested in the ideology of appellate judges, particularly on the highest court.

While no one wants mindless partisan bickering, we should want to know how our own Nine will decide major issues. Our elected politicians should ask more of potential Supreme Court justices than that they be personally and professionally competent: they should ask those nominated to what is now our real chamber of “sober second thought” how they think.

We should also be concerned about the notwithstanding clause falling into desuetude. Politicians will find other ways to avoid court decisions they do not like, but those other ways will be less transparent than resort to the notwithstanding clause. Court review is not inherently bad: as Plato argued two and a half millennia ago, and as subsequent experience has demonstrated, democracy is capable of making terrible mistakes. The difficulty is that intelligent, hard-working elite lawyers are equally capable of making mistakes, and there are fewer mechanisms to correct them. Canada pioneered a compromise in which the courts would be able to review legislation, but the last word would be left with the representatives of the people. Unfortunately, that compromise has suffered two and a half decades of denigration and disuse at the federal level, from Trudeau through Mulroney and Chrétien to Martin.

If we look south of the border, we can see that the attempt to construct and give power to institutions that are supposedly isolated from politics simply brings politics into appointments to those institutions.
