The Dream of Enlightenment — The Rise of Modern Philosophy
by Anthony Gottlieb (2016)
Gottlieb, A. (2016). The Dream of Enlightenment: The Rise of Modern Philosophy. London: Penguin.
In a relatively short period of time — from the 1640s to the start of the French Revolution in 1789 — Descartes, Hobbes, Spinoza, Locke, Leibniz, and Hume all lived, thought and, with hindsight, profoundly influenced how we think and operate today (whether or not we know that it stems from them). Gottlieb’s Dream of Enlightenment tells the stories of these seminal characters and of the birth of modern philosophy. The questions considered include: what exactly does the advance of science mean for our understanding of ourselves, and for notions of there being a god? How should a government balance individual liberty and the common good? And what is government actually for: the enforcement of ethics and the mandating of morality? The questions they pondered back then remain our questions today. According to Gottlieb, however, while it is tempting to think these philosophers speak our language and essentially live in our world, to understand them properly we must step back into their era and the contexts that both limited and liberated them. Dream of Enlightenment puts readers in the minds of these (frequently misinterpreted) characters in order to explain, in a mostly engaging way, their arguments and legacies.
On a critical note, it has been said that this book is very much a history of Western philosophy of the “‘dead white man’ variety.” So the argument goes, Gottlieb apes Hegel (who devoted 800 pages to the ancient Greeks in his history of philosophy and 400 pages to the moderns) in his choice of subjects. The Dream of Reason — the first volume of two (thus far) — has been deemed less a history of philosophy up ‘to the Renaissance’, as the subtitle states, than an introduction to Greek thought, with the majority of its time spent on the giants Socrates, Plato and Aristotle. The second volume — The Dream of Enlightenment — covers the early modern philosophers from Descartes to Hume. In both volumes, Gottlieb combines brief biographical sketches of each thinker, in order to place them in their social context, with an in-depth discussion of two or three of their main philosophical arguments. As Anthony Skews points out, “the total absence of women or any thinker from outside Western Europe is glaring.”
The Dream of Enlightenment: is philosophy over?
By Jonathan Rée for The Guardian
There was a time when every self-respecting egghead had to keep up with the latest developments in philosophy; not any more. Today’s intellectuals, if they do not ignore philosophy entirely, can content themselves with reading one or two books about its past. Hundreds of histories of philosophy are available, and they are all much the same: they tell the same basic story, with the same cast of leading characters.
Act one: ancient Greek philosophy, where Socrates postulates an ideal world of which our own reality is but a shadow. Act two: modern European philosophy, which begins in the 17th century when René Descartes tried to cast doubt on everything, thus precipitating a civil war between rationalists who thought that knowledge is based on reason, and empiricists who said that it depends on experience. Act three: professional philosophy, in which Immanuel Kant’s investigations into the logic of philosophical disagreement set it on the path to becoming an introverted technical specialism, increasingly subservient to the natural sciences. The details of the plot may be vague but the message is clear: philosophers are very clever, but very stupid too, promising much and delivering little. Philosophy is history, LOL.
Anthony Gottlieb will have none of this. He is on a mission to show that the great dead philosophers have been misunderstood and that they deserve to be taken seriously. “It is because they still have something to say to us,” he says, “that we can easily get these philosophers wrong.” In 2000 he published The Dream of Reason, a brilliant retelling of the story of ancient Greek philosophy which brought out the lasting relevance of Plato’s idea that truth, happiness and virtue are inseparable, while vindicating Aristotle as a serious thinker about nature, art and society. The Dream of Reason is now joined by this much-anticipated sequel, which picks up the story with Descartes and carries it forward to the beginnings of the French Revolution.
If rationality was the theme of the earlier volume, the present one focuses on novelty: in the 17th century, as Gottlieb puts it with characteristic panache, philosophy started to be dominated by “the new idea that all old ideas are suspect”. Descartes is famous for trying to make a fresh start with his slogan “I think therefore I am”, but no one is sure what he meant, and according to Gottlieb he has been “widely misunderstood”. Gottlieb takes issue with Prince Charles and Pope John Paul II, among others, for presenting Descartes as a “subjectivist”, who got modernity off to a bad start by trying to make the “I” the foundation of everything.
He also rejects the view of Descartes as a “rationalist” who refused to acknowledge the significance of the empirical sciences: he was, on the contrary, a bold advocate of the principle that physical phenomena are generated by the mechanical interactions of tiny particles. On top of that, Gottlieb dismisses stories about a battle between rationalists and empiricists as a “myth”, peddled by 19th-century philosophers with a definite agenda of their own.
Dozens of other myths get their comeuppance in this well-written and fast-moving book. If you assumed that Thomas Hobbes was an atheist who took a gloomy view of human nature, you will have to think again, and if you thought that Jean-Jacques Rousseau believed in a peace-loving “noble savage” you could not be more wrong. John Locke too is “often misunderstood”: he has a reputation for preferring “experience” to “innate ideas”, but his main point was that the mind is not a passive receptacle but an independent agent in the construction of knowledge; and in spite of being celebrated as the architect of modern liberties, he was far from being a liberal, “even by the standards of his own times”.
Meanwhile Baruch Spinoza’s attempt to equate God with nature comes out looking both attractive and plausible, and we learn that when Gottfried Leibniz spoke of everything being “for the best in the best of all possible worlds” he was not denying that life can be bloody awful, but simply reminding us that there is not much we can do about it.
Gottlieb concludes with an affectionate portrait of David Hume, who, as he observes, has become the role-model of choice for philosophers in the 21st century. Hume was a “naturalist”, it seems, who took pleasure in presenting human beings as little more than animals with an inflated sense of their own importance. He also had an enviable talent for “disturbing the peace”, philosophically speaking. But his principal achievement was that he never took himself too seriously: he performed high-risk philosophical manoeuvres with unflagging good humour, and was always willing to concede that his hard-won theoretical convictions might turn out to be ridiculous foibles. If you are upset by abstract arguments, he said, then you should get out a bit more and engage with “common life”, and after a while you will be able to relax as you watch them all “vanish like smoke”.
Gottlieb has got Hume’s geniality down to a T. “Every philosopher likes to think he has reached his conclusions via rigorous reasoning,” he says, with a collusive wink to his readers; in the 17th century, indeed, “falling in love with geometry seems almost to have been an occupational hazard”. Take Descartes: he was notable for “his faith in his own unusually bright light of reason”, but “perhaps he did not have all the answers”. As for Hobbes, he was so “bedazzled” by a priori geometry that he “got rather carried away” and ended up “over-egging his pudding”. But Leibniz is the one you really have to watch, since, poor fellow, he “did, in effect, tend to confuse his own mind with that of God”.
After a while, the jovial put-downs start to sound mean-spirited. It is fine to be diffident about one’s own powers, but there is something sneaky about being diffident on behalf of others, and Gottlieb risks undermining his defence of the great philosophers by depicting them as incorrigible fantasists, lost in their dreams of reason and enlightenment.
He tells us he is already at work on the final volume of his trilogy, which will run from Kant to the present, and the book will be eagerly awaited. But let’s hope that he will rein in his Olympian irony, and start treating his heroes with a little more respect.
Enlightenment without end
By John Gray for New Statesman
If we no longer seek virtue and salvation, we should blame the triumvirate of Machiavelli, Hobbes and Adam Smith.
According to David Wootton, we are living in a world created by an intellectual revolution initiated by three thinkers in the 16th to 18th centuries. “My title is, Power, Pleasure and Profit, in that order, because power was conceptualised first, in the 16th century, by Niccolò Machiavelli; in the 17th century Hobbes radically revised the concepts of pleasure and happiness; and the way in which profit works in the economy was first adequately theorised in the 18th century by Adam Smith.” Before these thinkers, life had been based on the idea of a summum bonum — an all-encompassing goal of human life. Christianity identified it with salvation, Greco-Roman philosophy with a condition in which happiness and virtue were one and the same. For both, human life was complete when the supreme good was achieved.
But for those who live in the world made by Machiavelli, Hobbes and Smith, there is no supreme good. Rather than salvation or virtue they want power, pleasure and profit – and they want them all without measure, limitlessly. Partly this is because these are scarce and highly unstable goods, craved by competitors and exposed to the accidents of fortune, hard to acquire and easily lost. A deeper reason is that for these thinkers human fulfilment is something that is pursued, not achieved. Human desire is insatiable and satisfaction an imaginary condition. Hobbes summarised this bleak view pithily: “So that in the first place I put for a general inclination of all mankind a perpetual and restless desire of power after power, that ceaseth only in death.” As Wootton notes, Mick Jagger and Keith Richards voiced a similar view of the human condition in their song “(I Can’t Get No) Satisfaction”. Whether they knew it or not, the lyric captured the ruling world-view of modern times.
Wootton is an innovative historian of ideas who has written illuminatingly about the rise of modern science. His book The Invention of Science: A New History of the Scientific Revolution (2015) showed how a world shaped by rapidly accumulating knowledge differs fundamentally from earlier, pre-modern worlds in which it was believed that everything truly important was already known. For Wootton, the rise of science was a paradigm shift in our view of the world even more far-reaching than those discussed in Thomas S Kuhn’s The Structure of Scientific Revolutions (1962). Kuhn argued that rather than being incrementally increased as new knowledge becomes available, scientific theories tend to be maintained and extended until they are suddenly overthrown in a revolutionary upheaval and a new paradigm installed.
For Wootton, the rise of science was the biggest paradigm shift of all – a cognitive revolution that altered the way humans think about the world completely and irreversibly. Between the 16th and 18th centuries the writings of Machiavelli, Hobbes and Smith established “an Enlightenment paradigm”, a system of beliefs about human beings as largely selfish creatures, sociable insofar as they feel sympathy for one another and realise that their welfare is intertwined, but essentially governed by the pursuit of their own desires.
In this system, the only rational goal of society is the maximum satisfaction of wants, and the only way of achieving this is a commercial society based on a market economy, private property and limited government. In this view, goodness is simply a set of strategies for pursuing whatever human beings most desire. Moral reasoning of the sort practised by Aristotle and Aquinas depended on the belief that there is an objectively good way of life. Once that belief has been given up, the pursuit of goodness is meaningless. All that remains is a utilitarian cost-benefit calculus.
Wootton presents the conceptual shift that gave birth to our life today in a book that is ambitious and impressive in its sweep. Nearly a third of Power, Pleasure and Profit’s 400 pages consist of scholarly notes and appendices. Yet Wootton’s vividly written narrative never loses momentum. Few academic books tell such a gripping story of how ideas can change the world. Yet it is a story that leaves out an enormous amount, and the view of “the Enlightenment paradigm” that Wootton presents is both parochial and anachronistic. He does not suggest that Enlightenment thinkers promoted a homogeneous set of ideas. “‘The Enlightenment’ is a problematic term,” he writes, “because it is easy and fruitful to multiply enlightenments.” Enlightenment thinking was riddled with “bitter disputes”, with radicals and conservatives adopting diverging views of the limits of human sociability. For all these caveats, Wootton’s Enlightenment paradigm is extraordinarily narrow:
When, at the end of the 20th century the collapse of communism seemed to herald a new world order, that order was to be based on Enlightenment principles: free markets, freedom of speech, the separation of religion from the state. When we talk about Western values, the values we have in mind are the values of the American Founding Fathers, which are Enlightenment values. When we describe what is good about our societies and when we criticise their failings, we are mobilising arguments developed within the Enlightenment paradigm… In the West, Enlightenment values, free markets, and political liberty are intertwined and interdependent.
Not only does this define the Enlightenment in terms of an American hegemony that is now plainly in the past. Focusing on a narrow subset of liberal ideas, it also excludes other ideas and movements that form part of the Enlightenment on any longer and wider view. Marx’s thought consisted in large part of a criticism of what Wootton defines as the Enlightenment paradigm. But Marx’s was an immanent critique of Enlightenment thinking, and the movements he inspired always regarded themselves as continuing an Enlightenment project.
When communists in Russia and China attempted to remake their countries on a new model, they believed they were replacing backward and spent civilisations by one based on Enlightenment ideas of progress and rationality. Yet Marx is mentioned by Wootton only once and Russia not at all. When China makes an appearance it is in the context of a discussion of the price of precious metals in the 18th century. Marxism surfaces near the end of the book, but as one item in a long list of highly disparate movements – “religious revivalism, idealism, socialism and Marxism, social Darwinism, the emergence of moral philosophies based on altruism, and in the 20th century Freudianism and postmodernism” – all of which Wootton describes as “attacks on the Enlightenment framework”.
In fact most of these movements represented extensions of Enlightenment thinking rather than rejections of it. Social Darwinists believed they were developing an Enlightenment science of human nature based on physiology. The vastly influential 19th-century positivist Auguste Comte – who invented the word “altruism” in order to define his secular ethic – saw himself as working in the tradition of the 18th-century French encyclopedists. Freud insisted to the end that he was applying scientific rationalism to the study of the human mind. Even 20th- and 21st-century postmodernists take their lead from Nietzsche, a lifetime admirer of Voltaire who situated himself at the end of Enlightenment thinking not outside of it. None of these movements can be plausibly described as simply attacking the Enlightenment.
Enlightenment thinking has continued in a variety of forms that Wootton’s narrow paradigm would exclude. At the same time powerful movements have arisen that promote a variety of counter-Enlightenment projects, of which he is dismissive:
The Enlightenment has resisted all efforts to kill it off. Over and over again garlic and crosses have been held out to defeat it; again and again a stake has been driven through first one and then another vital organ. Yet, vampire-like, it returns to life. When at the beginning of the 20th century Max Weber described the disenchantment of the world and said we are trapped within the iron cage of instrumental reasoning, it was Enlightenment values that he was describing… Weber was right – no matter how we try to escape, we remain within the cage.
Like Weber, Wootton exaggerates the rationality of modern life. It is true that regimes promoting counter-Enlightenment ideologies have so far been defeated. The Nazi regime, which rested on counter-Enlightenment ideas of blood and soil as well as invoking a racist biology inherited from some Enlightenment thinkers, was less efficient as a war economy than democratic Britain. Isis failed in its project of establishing a territorial state as a result of superior Western weaponry. But it is also true that Western governments have been profoundly influenced by modes of thinking that reject instrumental reason.
If our world was ruled purely by rationality of the kind promoted by Machiavelli or Hobbes, some of the errors and follies of recent times might well have been avoided. Could anyone deploying cost-benefit analysis have dreamt up the ruinous invasion of Iraq? Ten minutes of instrumental reasoning would have shown that the impact of regime change was at best unpredictable and would most likely be disastrously chaotic. More than by bungling realpolitik, the Iraq adventure and its yet more disastrous rerun in Libya were inspired by strands in Enlightenment thinking that Wootton neglects – notably the persisting influence of ideas derived from Western religion.
A major ingredient in the intellectual melange from which these “wars of choice” sprang was the ideology of democracy promotion – the belief that human beings everywhere dream of being delivered from tyranny. Originating in core Enlightenment thinkers such as John Locke and Immanuel Kant – whose political thinking was shaped by the universalistic evangelism of Christianity – this secular faith has been at least as influential in modern politics as utilitarian cost-benefit analysis.
The third of Wootton’s intellectual revolutionaries, Adam Smith, also relied on the formative ideas of monotheism. His argument for free trade deployed a Christian belief in a divine providence that placed peoples with different skills in different parts of the world so that they could trade with one another productively. Similarly, the Enlightenment of the American Founding Fathers depended on theistic assumptions about human rights being grounded in duties to God. When 21st-century liberals piously defer to Enlightenment values, it is this theistic inheritance they unwittingly invoke.
Earlier types of thinking did not altogether disappear with the rise of the Enlightenment; they mutated. Here the Enlightenment resembles modern science. Wootton uses the idea that the rise of science was a cognitive revolution as the model for the shift he claims has occurred in ethics and politics, and without doubt science has changed the world profoundly.
Our lives are daily transformed by technological spin-offs from the accelerating advance of scientific knowledge. But this has not meant the end of myth-making or magical thinking. Quite the contrary: science has itself become a vehicle for myth and magic. The belief that science can abolish immemorial evils is plainly magical thinking, and yet it continues to be widely accepted. How often have we been told that science can banish famines? No doubt new technologies can make physical shortage of food a thing of the past, but science cannot prevent catastrophic famines of the kind now engulfing Yemen, for example. The causes of such famines lie not in physical scarcity but in human behaviour. If millions starve to death in that unfortunate country, it will be because of a reckless war. The growth of scientific knowledge does not make human beings more reasonable. It merely gives some of them more power to do what they want. Rather than irrational behaviour being eliminated, science has magnified the scale and consequences of human crime and folly.
There never was a revolution in the human mind of the sort Wootton believes occurred with the rise of science, and nor was there a paradigm shift of the kind he imagines happened with the Enlightenment. Weber’s iron cage – the irreversible triumph of instrumental reason – was a mirage. There was a good deal of conceptual change in early modern times, but it was far from being the all-encompassing shift that Wootton postulates. Nor is the current dominance of Wootton’s Enlightenment – which he greatly exaggerates – irreversible. It is a feature of Kuhn’s theory that paradigms are regularly overthrown. Might not the Enlightenment paradigm suffer this fate? Wootton considers the question only very briefly:
The future may be very different. One day robots may do all the hard work, energy may come from sources which make it effectively cost-free, and genetic modification may make disease and pain things of the past. In that world the Enlightenment paradigm, which originates in a recognition of scarcity, in the realisation that pleasure and happiness are in short supply, will come to seem irrelevant.
Here Wootton seems to allow that we might escape the iron cage after all. Advances in technology could give us the key. It is an unlikely prospect. Far more plausibly, scarcity will not be ended and new labour-saving technologies will become weapons in future power struggles. As in the past, science will be used as a tool in human conflict. If Wootton’s Enlightenment is discarded in another paradigm shift, it will be not because of the advance of technology but because of the imperatives of politics. The liberal Enlightenment will be ditched when it no longer gives people enough of what they want.
Gray, J. (2019). Seven Types of Atheism. London: Penguin.
Wootton, D. (2018). Power, Pleasure and Profit: Insatiable Appetites from Machiavelli to Madison. Cambridge, Massachusetts: Harvard University Press.
Do atheists think too much like believers?
By George Scialabba for The New Republic (2018)
Our hominid ancestors first appeared around six million years ago. They started to use symbols around 150,000 years ago, and the first of the major religions began 5,000 years ago. What are we to make of this? Did humans have souls before then? If not, how did we acquire them? If so, why didn’t God reveal Himself throughout 99.9 percent of humanity’s life span? What was He thinking? And God’s puzzling silence didn’t end with the advent of religion. The God of the Old Testament was fairly communicative, and the gods of the Hindu pantheon made frequent appearances, at least for a while. But since Jesus ascended to heaven (or, if you prefer, since the angel Gabriel finished dictating to Muhammad), transmissions have all but ceased.
This would seem to call for some explanation. As the infidel Tom Paine scoffed: “A revelation which is to be received as true ought to be written on the sun.” The devout Cardinal Newman agreed but thought it had been: “The Visible Church was, at least to her children,” he wrote in 1870, “the light of the world, as conspicuous as the sun in the heavens, and the Creed was written on her forehead.” Unfortunately, the Church’s radiance has dimmed somewhat since then, and many unbelievers have wondered why God can’t write “YES, I EXIST” across the night sky in mile-high flaming letters visible (to each viewer in her own language, of course) everywhere on earth, each night for a week, once a year. Is that too much to ask of an omnipotent, infinitely loving Being?
God’s inexplicable reticence has always made life difficult for theists. John Gray thinks that such problems with theism shouldn’t make most atheists any more confident about their own outlook. Gray is professor emeritus of European thought at the London School of Economics, a prolific author (Seven Types of Atheism is his 22nd book), and a columnist for the New Statesman. He was briefly a Thatcherite, then became a critic of free-market fundamentalism, then (briefly, again) a New Labourite, though he strongly opposed (and was acutely prescient about) the Iraq war. Since around 2003 he has turned from political theory and current affairs to a more philosophical, even prophetic, vein, producing numerous short books that take a very long—and glum—view of Western intellectual history.
A similar argument runs through all these later books, including Seven Types of Atheism. The secular, progressive, rationalist ideologies of the West are so much “spilt theology.” The expectation that science, or more generally knowledge, will transform the human condition is a form of Gnosticism, the esoteric doctrine that the world is ruled by an evil demiurge, whom only those in possession of secret, saving knowledge can defeat. The belief that humankind will eventually achieve lasting peace and happiness merely recapitulates Christianity’s salvation history, in which the People of God will be redeemed at the end of days. Very few, mostly marginal figures, in either East or West, have achieved the detachment and disenchantment that would signal a genuine break with religious thinking. Most atheists have instead “searched for a surrogate Deity to fill the hole left by the God that has departed.”
The archetype of this quest was the Enlightenment, with its confident efforts to fashion a science of man. Unfortunately, these efforts issued in the racist pseudo-science of Voltaire and Hume (or so Gray claims), while all attempts to inaugurate the rule of reason have resulted in bloody fanaticisms, from Jacobinism to Bolshevism, that equaled the worst atrocities attributable to believers. Perhaps this should have come as no surprise. As Carl Becker argued 85 years ago in The Heavenly City of the Eighteenth-Century Philosophers (still “the best book on the Enlightenment,” in Gray’s opinion), the philosophes “demolished the Heavenly City of St. Augustine only to rebuild it with more up-to-date materials.” Gray’s verdict is even harsher: “Racism and anti-Semitism are not incidental defects in Enlightenment thinking. They flow from some of the Enlightenment’s central beliefs.”
Seven Types of Atheism does not offer a rigorous or exhaustive taxonomy of nonbelief. The seven sections mainly provide a convenient way of organizing Gray’s likes and (more often) dislikes. He starts with a chapter on the New Atheists, who have poured scorn on the more obvious logical difficulties and historical implausibilities of dogmatic religion. Even the New Atheists’ admirers must admit that they sometimes display more zeal than finesse, and that they give a general impression of punching down. Gray’s contempt for these contemporary would-be philosophes is such that he can barely bring himself to refer to them by name. The likes of Richard Dawkins and Sam Harris are, he judges, “mostly a media phenomenon and best appreciated as a type of entertainment.”
Instead he sets out some intellectual scaffolding. “There is no such thing as ‘the atheist worldview,’” he argues, because “atheism simply excludes the idea that the world is the work of a creator-god.” Some people identify atheism with scientific rationalism, but science cannot dispel religion—not least because religion is not a set of hypotheses to be disproven. Rather, it is anything—myths, rituals, even illusions—that makes sense of our passage through life. Others equate atheism with disbelief in the omnipotent God of Christianity and Islam; Gray counters that this notion falls short, since “religion is universal, whereas monotheism is a local cult.” Still others imagine that religion was simply a stage in human evolution, now left behind, to which Gray responds: “The human mind is programmed for survival, not for truth.” (Gray is much given to such lapidary pronouncements, perhaps because he is an ardent admirer of the brilliantly witty philosophers Arthur Schopenhauer and George Santayana.)
Gray’s next category, secular humanists, includes Mill, Marx, and Bertrand Russell, who for all their differences are alike in their “vast hopes for social transformation.” Atheists of this sort think they have left religion behind, but they are wrong. The history of Christianity is shot through with millenarian movements promising the end of history. After the Reformation, humanists dropped this apocalypticism in favor of gradual progress and swapped the aim of reaching the Heavenly City for the goal of building a utopia in this world through human effort. What Christianity and secular humanism share is more important than their differences: No other religious tradition—Jewish, Greek, Indian, Chinese—envisions history as linear rather than cyclical or conceives of humanity as a unitary collective subject. The very idea of utopia—a place where everyone is happy—could not have occurred to people who took for granted that individuals have irreconcilable desires and ideals, and that conflict is therefore impossible to eliminate. Western universalism, Gray scoffs, is very provincial indeed.
The same pattern appears again and again, Gray finds, as a mode of thought overthrows religion, only to imitate some of its characteristic intellectual moves. Evolution had no sooner vanquished Christian theology than outcroppings of “evolutionary theology” began appearing. Gray rebukes Darwin, who wrote: “As natural selection works solely for the good of each being, all corporeal and mental endowments will tend to progress to perfection.” Natural selection does not work solely for the good of each being, as Darwin himself acknowledged often enough elsewhere. But the impulse to identify evolution with progress has proved hard to resist, as has the temptation to lend evolution a hand with eugenics. “Evolutionary humanism” birthed some dehumanizing attitudes in the writings of Herbert Spencer, Ernst Haeckel, Julian Huxley, and H.G. Wells, who tended to view ordinary people as merely grist for the production of “men like gods,” in Wells’s famous (or infamous) phrase.
A subset of science-based atheists are the “transhumanists,” who believe that we are destined to become gods. Such prophecies were numerous in the twentieth century: The illustrious scientist J.D. Bernal imagined humans becoming creatures of pure light, Arthur C. Clarke foresaw a similar end for humanity in his 1953 novel Childhood’s End, and even Trotsky predicted that, after the Revolution, “the average human type will rise to the heights of an Aristotle, a Goethe, or a Marx. And above this ridge new peaks will rise.” Ray Kurzweil is the most confident exponent of transhumanism today, certain that by genetically enhancing ourselves and melding our minds with machines, we will produce a qualitatively new version of Homo sapiens, perhaps in the twenty-first century. Yuval Noah Harari is more ambivalent, pointing out that this new version of humanity, which he calls Homo Deus, may not see much point in keeping old and unimproved specimens of its predecessor species around. Gray, as usual, finds these supposedly daring speculations to be merely variations on an ancient tune: the age-old dream of transcending physical limitations and historical contingency and uniting with the Absolute.
Not all modern atheists are unwitting Christians. Some are unwitting Gnostics. In that ancient mystery religion, remember, the earth was created by a malevolent demiurge, while a transcendent God dwells, inaccessible, in a realm of light, unknowable except by those who receive a special, secret revelation. The baneful lure of esoteric knowledge—ideology—is, Gray argues, responsible for the modern political religions. Jacobins, Positivists, Bolsheviks, Nazis, and Maoists all featured an elite stratum of intellectuals whose mastery of some body of liberating ideas—Rousseauist republicanism, Comtean “social science,” Marxism-Leninism, Aryan race science, Mao Zedong Thought—entitled them to rule the uninitiated. That each of these movements was irrational, intolerant, authoritarian, and apocalyptic—hence religious, on one view of religion—no one can dispute. Yet whether it is useful, given the absence from all of them of a malevolent demiurge and a transcendent God, to call them “Gnostic” is less certain.
Most of Gray’s subjects are rationalists, who thought their way (or so they believed) out of religion. But he also contends with passionate or existential atheists, rebels who cannot forgive God for the horrors of the world or the miseries of their own natures. The dark prince of these “misotheists” (God-haters) is the Marquis de Sade. Finding himself beset by impulses to cruelty and sexual domination, he ascribed them to Nature, which, after the eighteenth-century French fashion, he equated with God. Sade’s distinction, however, is to have disenchanted Nature, which until then had been almost universally reverenced but which he saw as a cesspool of violent and lustful drives. Of course, as Gray points out, “Sade was mistaken when he imagined he had left monotheism behind. Instead he changed one unforgivable deity for another.”
More appealing misotheists include Dostoyevsky’s Ivan Karamazov, who “hands back his ticket” to heaven because an Almighty God allows innocent children to suffer, and William Empson, whose classic Seven Types of Ambiguity suggested the title of Gray’s book. Empson, a literary critic, derived an intense revulsion against Christianity from studying Paradise Lost, in which God is an all-powerful tyrant who created Hell and consigned to it a large part of human- (and angel-) kind. “The Christian God the Father, the God of Tertullian, Augustine, and Aquinas,” he wrote, “is the wickedest thing yet invented by the black heart of man.” Nevertheless, Gray finds Empson, too, insufficiently emancipated. “By invoking an idea of metaphysical evil, Empson showed he remained wedded to a Christian worldview.” A world in which God is the Devil is a Christian world, albeit with all the signs reversed.
At this point the reader, especially if she has encountered similar arguments in Gray’s previous books, may find a question arising in her mind: So what? Why does it matter that Bolshevism and Nazism both have certain structural and psychological resemblances to Christianity? Christianity has, after all, had benign as well as malign consequences; and the murderousness of Nazism and Bolshevism surely had far more to do with both societies’ history of absolutism than with those ideologies’ prophetic and millenarian character. Christianity has pervaded Western culture for over 1,000 years; its traces are bound to be everywhere—even in atheisms.
In another example of guilt by somewhat far-fetched association, Gray writes that Kant and Mill “believed that a universal moral law could be grounded in reason,” giving rise to an “evangelical liberalism” that has led modern Western governments, “possessed by chimerical visions of universal human rights,” to disastrous interventions in Afghanistan, Iraq, and Libya “in order to promote a liberal way of life in societies that have never known it.” Leaving aside the fact that these interventions had nothing whatever to do with promoting a liberal way of life and that the Western governments in question (especially our own) cared not a fig for universal human rights, does all this really call into question Kant’s and Mill’s theories or the Universal Declaration of Human Rights? Can Kant and Mill really be held responsible for Dick Cheney and Hillary Clinton?
In fact, some of the resemblances Gray claims to see between Christianity and various types of atheism are less than compelling. In a devastating critique of Becker’s Heavenly City, Peter Gay coined the phrase “the fallacy of spurious persistence” to name a tendency to claim false or exaggerated continuities. When Auguste Comte issued a “Catechism of Positive Religion,” the continuity with Roman Catholicism was clear. That Mill’s liberalism “aimed to replace monotheism even as it continued monotheistic thinking in another guise,” as Gray claims, is much less plausible—Mill’s thinking is “monotheistic” only in a very strained sense. “If you want to understand modern politics,” Gray writes, “you must set aside the idea that secular and religious movements are opposites.” Secular and religious people may beg to differ, but Gray knows better.
A close student of Isaiah Berlin, and the author of a notable book about him, Gray has clearly absorbed Berlin’s sensitivity to the dangers to liberty posed by ideologies, both rationalist and irrationalist, as well as by styles of thought, irrespective of content. Hence his concern with the extreme, unworldly, uncompromising character of the thinking of many who believe they have emancipated themselves from Christianity. It is a useful interpretive approach, from the viewpoint of mental hygiene—even if he sometimes takes aim at largely blameless thinkers like Hume, Kant, Mill, Marx, Darwin, and Russell.
At last, just as many readers will have begun to wonder if any Western thinkers ever succeeded in freeing themselves from monotheism, millenarianism, and Gnosticism, Gray introduces us to his favorite atheists: the anti-progressives and the mystics. George Santayana was a philosopher of amiable imperturbability. He wrote fluently but entirely unsystematically and without the slightest concession to the interests of academic philosophers, so he is largely forgotten. Instead of staging his exit from religion as a kind of cosmic melodrama, he simply “stepped out of monotheism altogether.” In a charming irony, he passed the last decade of his life in Rome, at the Convent of the Blue Nuns, producing countless exquisite sentences like this: “A mind enlightened by skepticism and cured of noisy dogma, a mind discounting all reports, and freed from all tormenting anxiety about its own fortunes and existence, finds in the wilderness of essence a very sweet and marvelous solitude.” Gray calls him “an atheist who loved religion.”
Joseph Conrad was as fatalistic and disillusioned as Santayana, but without Santayana’s lightheartedness and sense of mischief. If Santayana was an Epicurean, Conrad was a Stoic, certain that Fate would eventually come for each individual and that all that mattered was how she met it. Gray quotes Conrad’s famous letter to Bertrand Russell, who had asked his opinion of “international socialism” and its prospects: “I have never been able to find in any man’s book or any man’s talk anything to stand up for a moment against my deep-seated sense of fatality governing this man-inhabited world.” Conrad thought reason grossly overrated; competence and courage were enough to see one through. Santayana played with ideas; Conrad mistrusted them, sure that some fool would get hold of them and wreak havoc.
Schopenhauer, whom Gray exalts above Hegel and Nietzsche, was a “mystical” atheist. His philosophy of the will furnished Freud with many mordant apothegms. He studied Indian philosophy, which supported his belief that selfhood was an illusion, and that destroying this illusion was the only possible salvation. Curiously, Schopenhauer lived a far more sensual and worldly life than his ideal of salvation might suggest. With his poodles, his concert-going, and his “carefully managed hedonism,” he was sublimely selfish and wholly bourgeois. And yet, Gray writes, “for anyone weary of self-admiring world-improvers, there is something refreshing in Schopenhauer’s nastiness.”
Spinoza, on the other hand, was notably unworldly, and the least self-assertive of philosophers. He was a hero of the philosophes, partly because he was one of the last thinkers to be actively persecuted for his atheism, but also because he wrote an influential treatise on politics denying the legitimacy of censorship and advocating democratic republicanism. Gray admires him and Lev Shestov, a twentieth-century Russian existentialist, both “negative theologians” who predicated a God about which nothing could be said. Spinoza was a pantheist, but not in the same sense as most of his predecessors of that description. He did not conceive of countless individual gods dwelling in every separate object, but rather of a single God diffused through and in fact identical with the universe. He was also an early specimen of that recurring paradox (which is not really a paradox): the philosophical determinist who is also a passionate champion of freedom.
Gray is at his best in these sketches of writers he admires, as well as in the many similar sketches scattered through his previous books: Varlam Shalamov, Stanislaw Lem, J.G. Ballard (Straw Dogs); Sigmund Freud, Joseph Roth, Norman Lewis, T.E. Hulme, Llewelyn Powys (The Silence of Animals); Giacomo Leopardi, T. F. Powys, Philip K. Dick (The Soul of the Marionette); among others. They earn Gray’s highest praise: “Not looking for cosmic meaning, they were content with the world as they found it.” Such all-too-rare detachment is the beginning of wisdom. Addressing readers directly (at the end of Straw Dogs), Gray asks us to do likewise: “Other animals do not need a purpose in life.… Can we not think of the aim of life as being simply to see?”
With considerable respect for Gray (and for Conrad, Santayana, et al.), I would answer no. As long as so much of what we see is unnecessary suffering, we cannot be content with the world as we find it. Of course we should keep Gray’s cautions well in mind. The catastrophic revolutionary ideologies of the past were ersatz religions. Scientific utopias and promises to transform the human condition deserve the deepest suspicion. Moral and political progress are always subject to reversal. Humans are animals; human nature is riven with conflicts; reason is a frail reed. But even if we can’t set the cosmos right, we can’t leave our corner of it the way it is. Whatever else may be an illusion, other people’s suffering is not.
What is Wrong with Liberalism?
By David Wootton for History Today, Vol. 68 No. 6 (2018)
‘The greatest good for the greatest number’ flounders when society cannot agree on what is ‘good’ – or ‘bad’.
In 1826 the 20-year-old John Stuart Mill had a nervous breakdown. He had been raised by his father, James, as a utilitarian. Consequently, he had believed that all that mattered in life was pleasure and pain. Suddenly, nothing gave him pleasure anymore. Having been taught that his purpose in life was to spread happiness, he now realised, as he later reported in his Autobiography, that making other people happy would not bring about his own happiness. He emerged from this crisis when he realised that happiness is peculiar: it is a byproduct of doing something you care about, something you believe in. Paradoxically, he was now free to devote himself once more to making other people happy. His recovery began when he read the historian Jean-François Marmontel’s account of the death of his father and wept. Mill, having imagined the death of his own father, had begun to think and feel for himself.
This story has something important to tell us about what John Maynard Keynes called modern civilisation’s moral decay. For what Mill discovered is that utilitarianism alone cannot enable us to make sense of our lives or give us a purpose for living. Mill had been educated in an intellectual tradition which made no distinction between pleasure and happiness (though we know that plenty of people are happy in the face of adversity, while others are miserable when indulging every pleasure money can buy). It maintained that all pleasures are equally good. Good and evil, Hobbes, Locke, Hume and Bentham had all taught, are simply pleasure and pain. From the Huguenot refugee Pierre Bayle onwards, people wrote about pleasure and pain as if they were entries in an account book; reason, it was claimed, was simply a process of calculating how to maximise pleasure. Hobbes had pointed out that the word ‘reason’ derives from the Latin for ‘calculate’, while Bentham invented the word ‘maximise’. Hobbes was the first to insist that all pleasures are equally good, which implied they could be quantified; Locke’s psychology explained how we pursue happiness; Hume argued that moral judgements are simply judgements regarding pleasure and utility; and Adam Smith explained how a hidden hand ensures that individuals, pursuing their own selfish interests, benefit those around them.
This is the tradition out of which Bentham constructed utilitarianism: radically individualist and ahistorical. Although it acknowledged that not all human behaviour is rational, it insisted that, if people would only learn how to think straight, they would become both rational and happy. Looking back from 1938 to the days of 1914, Keynes diagnosed the contradiction. ‘Bertie [Bertrand Russell] in particular sustained simultaneously a pair of opinions ludicrously incompatible. He held that in fact human affairs were carried on after a most irrational fashion, but that the remedy was quite simple and easy, since all we had to do was carry them on rationally.’
In the complaints of those who seek to defend liberalism against populism we hear over and over again this same incompatible pair of opinions: those who vote for Brexit, or Trump, or Alternative for Germany are irrational, ignorant, uneducated; if only they had a proper grasp of their own interests, they would vote for Remain, or Clinton, or Merkel. But where the two sides disagree is precisely on what it means to be rational and, more fundamentally, on whether human beings can or should approach life as a series of profit and loss calculations, as if there might be some calculus that enables us to choose happiness.
Almost wherever one looks in the pre-1989 democracies, there are signs of crisis. There are varied descriptions of what is going wrong: Patrick J. Deneen has written Why Liberalism Failed, Michiko Kakutani The Death of Truth. Neither focuses on an obvious question: what is the connection between the present crisis and immigration? From a Benthamite perspective, immigration is irrelevant: all economists agree that it is economically beneficial and unemployment is, by historical standards, low in all the countries where populism has taken hold. But opposition to immigration is strongest, not where immigrants are most numerous, but where people believe it will increase in the future. There is a tendency to think that hostility to immigration is about race: sometimes it is; often it isn’t. Hostility to white, middle-class incomers and gentrifiers mirrors hostility to immigrants. The issues raised by immigration are not just about income or race, but identity. Populism marks a new phase in identity politics for the simple reason that people fear not only poverty, but also identity deprivation.
What people fear is change and with good reason, for change is difficult and unsettling. Even where there are benefits, there are usually unfortunate, often unintended, consequences. Hence the division between people from ‘somewhere’, who feel threatened by the pace and direction of change, and the people from ‘anywhere’, who welcome change, or rather (for, in the absence of significant incentives, most people dislike change) who have already lived the change that others fear. The divide between Brexiters and Remainers, for example, is a divide between two different understandings of how best to keep things more or less as they already are.
For utilitarians, aversion to change, in and of itself, is simply irrational: they can make no sense of nostalgia, of the affection for the familiar, or of the complex ways in which people construct a sense of identity. A utilitarian, who assumes that one pleasure can as easily be exchanged for another, as a pound can be exchanged for a euro, must mock the idea that a certain sort of pleasure, or a certain sort of identity, has some special added value attached to it, just because it conjures up memories, or fits like an old shoe. Yet people resist change and they are much more prone to mourn losses than to celebrate gains.
The Benthamite understanding of human nature and human behaviour, which drew on intellectual developments over the previous three centuries, from Machiavelli to Mill, always was, as the latter recognised, a profoundly unsatisfactory account of who we are. The errors are obvious: the conviction that human beings are, or can easily become, good and rational; that there is no arguing with someone who says ‘this is what gives me pleasure’, whether this (in Bentham’s example) is push-pin or poetry; and the presumption that we are all, as it were, in business as individuals, that the ties which bind us to family, friends, community, nation are purely instrumental arrangements of convenience. No one in this tradition was aware, to quote Keynes again (who was writing in September 1938, under the shadow of the coming war), ‘that civilisation was a thin and precarious crust erected by the personality and the will of a very few, and only maintained by rules and conventions skilfully put across and guilefully preserved’. We had, he wrote of his younger self and his circle, ‘no respect for traditional wisdom or the restraints of custom’.
Who now respects traditional wisdom and the restraints of custom? The pace, the depth and breadth of change in the years since the Second World War have made appeals to tradition and custom seem ridiculous, at least to intellectuals; even the British Conservative Party has abandoned them. But many people still live customary lives – indeed everyone constructs their own private customs – and, where people face the prospect of their traditions being endlessly eroded and dissolved, their response is one of opposition. They look for someone to speak for them. This, liberals consistently fail to do.
Who now declares themselves to be a Utilitarian? Almost no one, yet much of what has happened over the last 50 years can be understood as a working out of Benthamite principles. When I started out in academia, more than 40 years ago, universities took the value of their enterprise for granted. Each discipline represented a craft, with its own traditions, customs and values. Hardly anyone worried about whether History, or Philosophy, or Literature was really worth studying; and no one claimed that three years at university represented much of a preparation for life outside. Now courses have learning objectives and transferable skills. We tell students, for example, that they are learning time-management skills. Our arguments for the study of any arts subject have become fundamentally utilitarian. In many ways universities have improved; but departments are now cost centres, students are consumers. Every step forward has also been a step – sometimes two steps – backwards. Our gains have been gains in efficiency, transparency and utility; the gains that Bentham hoped to see by the construction of his Panopticon, in which a single guard would be able to look straight into every prison cell. Our losses have been losses in purpose and meaning.
Here it is worth going back to the birth of liberalism in order to understand why this has always been, and always will be, the case. The whole point of liberalism, as it was invented by Locke and defended by the Enlightenment, was to provide an alternative to religious bigotry. As wars between Protestants and Catholics had destroyed much of Europe in the century and a half before 1688, weakening people’s ties to religious fundamentalism, of whatever sort, was a noble cause; but what it required was precisely that people should abandon many of the customs and traditions, the purposes and the meanings, they held dear.
No place like home
The Enlightenment was from the beginning a cosmopolitan movement; its leading authors travelled abroad and read foreign books; they belonged to the ‘republic of letters’, not to any country. Rousseau, in order to demonstrate his hostility to the Enlightenment, kept insisting (for as long as he could get away with it) that he was a citizen of Geneva. Voltaire, in so many ways Rousseau’s antithesis, named himself ‘de Voltaire’ after a place that simply didn’t exist. He found happiness perched on the border between France and Geneva, where he could evade both Catholic and Protestant intolerance. He liked to describe himself as ‘English’. These were the original men and women from anywhere.
True to this tradition, the great modern liberal theorist John Rawls asks us to imagine that we know nothing about who we are, for only then can we make decisions about what sort of society we want to live in. Am I black or white, rich or poor, male or female, gay or straight, old or young, healthy or sick, Christian or atheist? Forget who you are, says Rawls, place yourself behind a veil of ignorance and only then can you make rational choices about the good life. You must not think of yourself as a person from somewhere, as someone with attachments, loyalties, customs and traditions, purposes and meanings. Liberalism requires you to put aside the very things that make you who you are, different from others.
Here lies the central paradox of liberal praise of diversity: as our cities become more ‘diverse’, they become more alike: there are McDonald’s restaurants in more than 100 countries. Increasing diversity goes hand in hand with increasing homogenisation. Liberalism is full of such paradoxes: affirmative action, for example, requires treating people according to categories (race, sex, gender, social background) which at the same time it insists it wants to erode, even eliminate. The EU aspires to ever closer union; but every step towards it reawakens the spirit of nationalism. Every effort to create a more liberal society seems to create problems as fast as it solves them: unintended consequences are an inescapable feature of planned social change. But there is a consistent pattern that appears as soon as you adopt the Benthamite logic of ‘maximisation’: every effort at social improvement substitutes meaningless quantities (‘value-added’, ‘impact’, ‘relative poverty’) for authentic qualities (competence, excellence, tradition, community).
I am a liberal. So was Mill; but his response to his moral crisis was to insist on the superiority of poetry to push-pin. So was Keynes; but his response to the rise of Nazism was to acknowledge the intellectual and moral failure of the liberal tradition. A similar response is called for now. At such moments historians have a particular responsibility. Keynes, in order to think about what was going wrong in 1938, felt obliged to think about the history of philosophy from Bentham to ‘Bertie’, just as I have felt obliged to think about the history of moral, political and economic thought from Machiavelli to Bentham, the eventual outcome of which has been what Weber called the ‘iron cage’ of modern economic and bureaucratic rationality. Only when we are prepared to acknowledge, as Keynes was in 1938, that our inherited presuppositions have become obstacles, not assets, can we hope to refashion our cage. If we fail to engage with those who disagree with us, if we fail to understand what matters to them, then our failings are both intellectual and moral. For humanity’s mental and moral incapacities there are, alas, no permanent cures, but we can aim to do better than we are doing right now.
After the Second World War liberalism acquired a new lease of life; only recently some were celebrating ‘the end of history’ as a result of the universal triumph of liberal values. Well, history is back and the historians need to play their part in raising the quality of the resulting debate. Since liberalism is by its nature individualist and ahistorical, it is a fundamental obligation on historians to explore and expose the weaknesses of its traditions. History is always about a particular time, a particular place; it is always about groups more than it is about individuals; it is always the history of somewhere. The Enlightenment was much better at conjectural history than at real history; and the task of the historian is to show that even the most enlightened thinkers were the products of local circumstances. History is, in short, the best antidote to the intellectual failings of liberalism; the historian’s job is to tear down the veil of ignorance and capture the inescapable embeddedness of human experience. This, though, is not how all historians view their enterprise: some write, often unconsciously, in praise of liberal values and postmodern assumptions. They can’t speak to the crisis because their books are symptoms of it, not solutions to it.
Who are we?
Take, for example, an essay about Brexit by Linda Colley entitled Can History Help?, published in March 2018 in the London Review of Books (see below). If one were to summarise the essay in two words they would be ‘embrace change’. Colley looks forward to a more multilingual, more global Britain. She attacks populism without even mentioning immigration. The problem with the UK, she maintains, is that old structures still persist. ‘Parochialism’ is always a bad thing. That the persistence of old structures might be a source of strength and resilience, that rootedness in a local community might be a source of security and confidence has apparently not occurred to her. Colley’s Britons (1992) brilliantly evoked the construction in the 18th and early 19th centuries of a distinct British identity – anti-French, anti-Catholic, indeed anti-European. The unspoken assumption of the book, written when Britain was a member of the European Exchange Rate Mechanism, was that this identity was one that we could now finally outgrow; and yet, it seems, many British citizens were and remain attached to it. ‘Can History Help?’ Yes it can; but not if it speaks for only one side of the argument over liberalism.
Can history help?
By Linda Colley for The London Review of Books, Vol. 40 No. 6 (2018)
We are, all of us, saturated with information on change. There is 24-hour news. Twitter, Facebook and other online platforms transmit the latest occurrences across the globe. Those of us old-fashioned enough still to want newspapers can scan their online versions at any time. Yet this blizzard of material easily produces a sense of overload, even powerlessness, a feeling that we are simultaneously being told too much, yet can grasp too little. One vital respect in which history can help is by encouraging us to look away from the blitz of ever shifting news stories, and to consider instead what has proved genuinely significant in the past. Once we do this, we are immediately reminded that most really game-changing transformations have happened slowly. Minute by minute change is a media illusion.
To be sure, there have been a few genuinely world-altering events that seem to have happened in an instant. The men who dropped an atomic bomb on Hiroshima on 6 August 1945 were deploying technology that had taken decades to develop. Nonetheless, in carrying out that act, these US airmen did effect an almost immediate transformation in the nature of warfare and in attitudes towards it. Many momentous changes, however, have taken centuries to work through. Consider the terrible outbreak of plague in the 14th century known as the Black Death. Europe suffered disproportionately, losing perhaps 50 per cent of its total population. One result of this, however, was that the living standards and wages of many of those who survived seem to have improved. This, it has been suggested, led in time to a marked increase in Europeans’ food consumption and demand for consumer goods. And this rise in demand may well in turn have contributed to the increasing number of European trading voyages across the world’s oceans in the 15th, 16th and 17th centuries. Even traumatic shifts in human history can have mixed and sometimes useful consequences.
So epic changes are very occasionally rapid, but sometimes stretch over centuries. Most commonly, though, major changes become apparent within the canonical span of a human lifetime: three score years and ten. Consider the dramatic changes that have occurred in Germany in the seventy or so years since the Second World War, and the multiple, multinational consequences of this. Or the postwar experience of Sweden. Eighty years ago, Sweden was in some respects on the margins. In 1930, more of its people worked in agriculture than in industry, and many of them were poor. The country had long since lost its empire, and in the 19th century it had been forced to give up some of its territory. In the Second World War, its reputation suffered from the Nazi sympathies of sectors of its population. Yet look at Sweden now, in the early 21st century: a place of ultra-modernity in the arts, technology and design, affluent, comfortably in the top ten, year after year, in tables of the happiest countries in the world, a place that is adroit and effective in its exercise of soft power, as can be seen in its use of the Nobel Prizes, and its championing of neutrality and ecological good causes.
Of course, as the Scandi-noir thrillers remind us, Sweden is not perfect; and it’s possible that Russian armies may challenge it in the future as they have in the past. Nonetheless, its progress since 1945 has been startling, and this alerts us to the potential helpfulness of history in a double sense. First: looking hard at countries that have successfully reinvented themselves, and studying the tricks and adaptations that have enabled them to do so, is a useful and cheering thing to do, not least in this particular country, at this particular time. We can and should learn from others. Second: the degree of transformation that some societies and peoples have achieved within the span of a single human lifetime offers a powerful corrective to the essentialism so often preached by populist politicians and commentators.
Populists, a widespread breed at present, often like to represent particular territories and sets of people in terms of an unchanging and finite set of characteristics, either out of boosterism, or as a means to marginalise and condemn. Thus, Sarah Palin, one-time Republican governor of Alaska, used to refer to her supporters as ‘real Americans’, as though such unadulterated beings existed, and as though her opponents were somehow not ‘real’. By the same token, the leading populist party in Finland used to call itself the True Finns, as though other Finns were not part of an essential Finnish nation.
Given the current bitter polarisation of political allegiances, it is important to remember that national groupings have never been homogeneous and are rarely static. Of course, there are some persistent habits and patterns of thought and behaviour in all long-standing states. But countries and their populations are not just mixed in terms of ethnicity, politics, religion and much more, they also change over time, sometimes rapidly and radically. Swedes today are very different from Swedes in 1940. So whenever you hear people saying ‘the British people are …’, followed by a list of asserted characteristics, or ‘Americans have always been …’ etc, a sense of history will help to summon up a tonic dose of scepticism.
What are the triggers of dramatic episodes of change? Savage outbreaks of disease can be a trigger; so can significant alterations in climate, like the so-called Little Ice Age that began in the 17th century. Some leaps forward in technology, such as the invention of printing in China, have precipitated long-drawn-out, transcontinental changes; so have economic crises, and major shifts in the nature of belief and ideology, such as the Reformation. But perhaps the most recurring and paradoxical trigger of change in human society has been war.
It is a cliché of political science that states make war and that war in turn has the capacity to make and remake states. It isn’t always true. Wars sometimes destroy states and peoples altogether. Nonetheless, outbreaks of major warfare have very often obliged states to reconfigure themselves, sometimes in productive and beneficial ways. Think of the world wars of the 20th century, unquestionably horrendous and lethal episodes, but in some ways constructive. In 1914, no woman in Britain could vote in national elections. Even as far as men were concerned, this country had one of the lowest levels of enfranchisement in Europe. After the Representation of the People Act was passed in 1918, however, virtually all men over 21 and most women over 30 gained the vote. The First World War wasn’t the sole or a straightforward reason for this change, but it was a major factor. By the same token, the Second World War transformed levels of welfarism on both sides of the Atlantic, helping to bring about the National Health Service in the UK and the GI Bill in the US, which eased mass access to further education. The impact of these wars outside Euro-America was still more portentous. Agitation against European and non-European empires was on the rise well before 1914. But the two world wars’ weakening of the power and finances of the various maritime empires – British, French, German, Dutch, Portuguese, Japanese and others – allowed decolonisation to advance far faster than it would otherwise have done.
Unsurprisingly, the countries that were invaded or defeated tended to be the ones that underwent the most fundamental changes. Germany, Japan and France all gained new constitutions after 1945. By contrast, neither the UK, which was seemingly a victor power, nor the US, which was certainly a victor power, changed its political system in the wake of the Second World War. Instead, in both countries, victory served for a while to burnish and strengthen the existing political order.
Over the centuries both the United Kingdom and the United States have indeed been almost too successful in their recourse to war, and this has had mixed repercussions for their political systems and democracy. In the United States, success against the British in the Revolutionary War led to the drafting of the American constitution of 1787, a brief but remarkable document. Thereafter, there were many more American victories: a further repulsion of the British in the War of 1812; ruthlessly successful expansionist wars against Native Americans and Mexicans; a civil war which, though bloody, did not result in the fragmentation of the country; and yet more victories in overseas colonial wars and in the two world wars – a record only really marred by the Vietnam War and Iraq, both limited and strictly overseas struggles.
This conspicuous success rate on the battlefield helped to cement the political system established in 1787, which has been subject to only a limited number of amendments. The US now possesses the oldest written constitution still in operation in the world, which is an achievement to be sure, but also by now a source of some difficulties. The 1787 constitution said nothing about the operation of political parties. This lacuna was manageable so long as the main US parties were similar in outlook and prepared to abide by certain ‘gentlemanly’ conventions. Today, these conditions no longer apply, and gridlock has ensued. Similarly, the second amendment, passed in 1791, allowing US citizens access to arms, was manageable when most firearms were muskets that took minutes to load. Obviously, this is no longer the case. So while it may be tempting to attribute current political dysfunction in the US to particular personalities, some of the root causes are long-term, structural and connected to America’s experience of war. Military success has helped to foster constitutional stasis and complacency.
The UK exhibits comparable problems, but to a more pronounced degree. Like the US, but over a longer period of time, the UK has been both a markedly warlike state and generally a successful one. In the mid-17th century, England and its contiguous nations experienced a bloody civil war that briefly made it a republic and gave it a codified constitution. In 1688, it experienced a successful Dutch-led invasion, a crisis that ultimately strengthened the power of the Westminster Parliament vis-à-vis the Crown. But, thereafter, the island of Britain, as distinct from the island of Ireland, experienced no enduringly successful foreign invasions; and, with the exception of the American Revolutionary War, no really major overseas military defeats. As a result, in the United Kingdom even more than in the United States, old structures of politics were able to persist. It’s true that the electorate steadily widened, though only slowly. But the House of Commons, the House of Lords, the monarchy, the pre-eminence of London, and certain conventions of political and electoral practice: these things endured. Nothing has happened that might have forced a major process of political reconfiguration, as distinct from ad hoc adjustments. And this raises a set of questions and possibilities.
Could it be that Britain’s political stability has become too pronounced? That, by not having to adjust and alter its political system as so many other countries have had to do, the UK has stored up unaddressed problems and unhelpful stagnancies? If so, might the convulsions and divisions over Brexit have some tonic effect? Might this bitterly divisive and presumably long-lasting change turn out to be the painful moderniser that military defeats and invasions have sometimes proved to be for other countries?
Exactly what Brexit will entail remains unclear, and if anything is becoming more so. Some believe that, after preliminary discomforts, the results will be very positive. Others believe we are doomed. A growing number of pundits and activists argue that the decision to leave the EU may be reversed, or that some sort of compromise solution may be hammered out. Some assert that Brexit won’t make much difference at all. This lack of clarity has been exacerbated by an overly narrow focus on economic and commercial questions.
I’d like to suggest a somewhat different perspective on Brexit. By instinct I am a Remainer, but I think that some form of Brexit may now be unavoidable. If that does turn out to be the case, I suspect that the resulting disruptions and realignments will affect far more than the economy: the trick will be to see if this can be turned to the good, or at least to something halfway productive.
In a recent pamphlet on the constitutional ramifications, Vernon Bogdanor has hinted at ways in which Brexit might conceivably have some constructive, albeit unpredictable, effects. As is becoming clear, and as Bogdanor sets out, Ireland represents a major challenge, and not just for reasons of cross-border trade. The Good Friday Agreement promised Northern Ireland parity of rights with the Republic. But if the UK does pull out of the EU, Northern Irish rights will no longer be protected by Brussels and the European courts, but will come back substantially to Westminster. And by the doctrine of parliamentary sovereignty, Westminster would be free in the future to modify those rights. Indeed, these challenges extend to all of the UK. The British government has undertaken to incorporate relevant sections of EU law and rights in statute law. But the same caveat applies: such incorporation would mean that these transferred rights and laws could be altered in the future by a sovereign parliament.
As Bogdanor remarks, some thoughtful politicians, such as Dominic Grieve, are proposing a new British Bill of Rights in the event of Brexit so as to protect vital rights against such legislative tinkering. This would be a good idea. It would also be valuable if more UK citizens and all political parties shifted some of their focus away from purely economic matters, and devoted more attention to the political, structural and legal vulnerabilities and quandaries that have been exposed by this crisis, and to the question of how these could be addressed. Taking back control sounds alluring. But we need to think hard about who exactly is going to be doing the controlling, and how these potential controllers are themselves to be better controlled.
As I have said, because of Britain’s relative immunity to invasion and defeat, the longevity of some of its political structures has been unusually marked. Brexit, however, may well prove a tipping point, sharpening issues to do with federalism, with undue executive power, and with the need for clearer written rights. We have to give thought to these matters. Brexit is likely to resemble Pandora’s box: all sorts of new and disruptive things will emerge to which our political masters have so far given only scant attention, in public at least.
Consider the prospect, much vaunted by some, of a new ‘global Britain’. If this is to be anything more than a slogan, a fig-leaf for a new parochialism, all sorts of changes will be necessary, and not just in the realm of commerce. For instance, if there is to be a more global Britain, people in England, Wales, Scotland and Northern Ireland will have to become multilingual. Membership of the EU has been an easy ride in the sense that English is its main language of business and exchange. You can get by with English in most European cities. But monolingual English will be a problem if people want to interact more extensively with cities in China, India, South America, parts of Africa and Japan. Consequently, many more children in the UK are presumably going to have to do what their counterparts in other parts of Europe and elsewhere already do: conquer two languages at school, ideally more. History can offer some help with this.
Any notion that Britons are somehow intrinsically bad at learning foreign languages is a recent and bogus invented tradition. In the 19th century, many politicians in this country laid confident claim to several languages. William Gladstone, the four-time Liberal prime minister, according to his biographer Roy Jenkins, was convinced that ‘an educated Englishman ought to be able to communicate in all the principal languages of civilised Europe. So he did.’ Gladstone’s ‘attitude to modern languages’, Jenkins goes on, ‘was reminiscent of a tank cutting its way through undergrowth’. Actors in Britain’s overseas empire needed more than just European languages. Sir John Bowring, the fourth governor of Hong Kong, claimed to speak a hundred languages and dialects, and was certainly adept at more than thirty. Such men, you may protest, were exceptional elite figures. True enough, but there is plenty of evidence of British and Irish plebeian multilingualism too. Sailors, for instance, were famous for picking up conversational skills in multiple languages, because they moved about and needed them for their trade. We may be moving back to that kind of world.
Because, even leaving Brexit aside, given the current advance of robotics, and the degree to which this will radically alter the nature, volume and location of employment opportunities, as many people as possible are going to need to learn how to operate in different places, using different languages. It will help them enormously if they also know something about the history of these places. Recently, a government spokesman argued that UK universities should charge less for arts and social science courses since these subjects ‘do little to boost careers’ or the economy. Such a view is wrong and short-sighted even in utilitarian and economic terms. If there is going to be any kind of globally-involved Britain, people are going to need to study history, just as they will need to study languages, and a whole lot more. You cannot hope to do effective business with other people if you do not understand their language or if you have no knowledge of their societies and how they evolved.
As Richard Evans has pointed out, 20th-century British-based historians were conspicuous for producing important work on the history of other countries, especially European ones. But in recent decades there has sometimes been a contraction of scope and range; and anyway it is not any more just European history that demands attention. The history of China, Japan, Africa, Central Asia, South America, India, Indonesia and other regions is becoming ever more vital.
Yet there are reportedly fewer historians in UK universities working on South America, for instance, than there were in the 1980s. An informal survey published in 2013 claimed that three-quarters of historians employed in UK universities worked only on the European past, of whom a disproportionate number focused on the home islands. In schools, getting children to learn something of the past of the wider world is even more challenging, in large part because of the nature of the curriculum. In Britain, unlike in many other parts of Europe, history is a compulsory subject only up to the age of 14. Given that the prevailing cult of over-testing takes up a great deal of lesson time, children have only limited opportunities for learning about how different parts of our world have evolved.
There are some not overly expensive steps that might be taken to redress this. History could be made a compulsory subject for longer. Testing could be cut back, allowing more time for language-learning, as well as for history. A future new monarch’s reign could be commemorated by the founding of three or four new regius chairs in global history. These would ideally be situated not in Oxford, Cambridge or Edinburgh, but in universities in cities that once played important global roles, and might do so again: Birmingham, Glasgow, Sheffield, Cardiff, Manchester, Belfast. It could be made a condition of holding such a chair that the holder would present a course of accessible lectures on the non-European past, which would be made available online. It might also be a condition that these professors would help schools to develop a workable syllabus in the subject.
With imagination, thought and will – and it is more a matter of these than of money – this and much more might be done. If Brexit happens [it has :)], the impact of that change is likely to take a long time to work out. We cannot remotely know all the consequences, though we can be sure that they will be wider than most of the current political and media buzz is suggesting. And we can make plans and projects to meet some of the changes that are likely to ensue. If we are to do this properly, history won’t just help. It will be indispensable.