Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, by Steven Pinker

Enlightenment Now
Steven Pinker
Psychology
Penguin
2018
556 pages

Feeling like everything has gone to $%#!? Worried about the future? This is excellent medicine. Pinker takes an analytical approach using data to show that quality of life, wealth, safety, peace, knowledge, and happiness are on the up across the globe!

I would read Thinking, Fast and Slow first. It will help with understanding various biases.

There are a LOT of notes. I might try to trim them down by removing some of the stuff that only I would note. There are lots of history and economic history bits.

“Why should I live?”

In the very act of asking that question, you are seeking reasons for your convictions, and so you are committed to reason as the means to discover and justify what is important to you. And there are so many reasons to live! As a sentient being, you have the potential to flourish. You can refine your faculty of reason itself by learning and debating. You can seek explanations of the natural world through science, and insight into the human condition through the arts and humanities. You can make the most of your capacity for pleasure and satisfaction, which allowed your ancestors to thrive and thereby allowed you to exist. You can appreciate the beauty and richness of the natural and cultural world. As the heir to billions of years of life perpetuating itself, you can perpetuate life in turn. You have been endowed with a sense of sympathy—the ability to like, love, respect, help, and show kindness—and you can enjoy the gift of mutual benevolence with friends, family, and colleagues. And because reason tells you that none of this is particular to you, you have the responsibility to provide to others what you expect for yourself. You can foster the welfare of other sentient beings by enhancing life, health, knowledge, freedom, abundance, safety, beauty, and peace. History shows that when we sympathize with others and apply our ingenuity to improving the human condition, we can make progress in doing so, and you can help to continue that progress.

The Enlightenment principle that we can apply reason and sympathy to enhance human flourishing may seem obvious, trite, old-fashioned. I wrote this book because I have come to realize that it is not. More than ever, the ideals of reason, science, humanism, and progress need a wholehearted defense.

We ignore the achievements of the Enlightenment at our peril.

The ideals of the Enlightenment are products of human reason, but they always struggle with other strands of human nature: loyalty to tribe, deference to authority, magical thinking, the blaming of misfortune on evildoers.

Harder to find is a positive vision that sees the world’s problems against a background of progress that it seeks to build upon by solving those problems in their turn.

“The West is shy of its values—it doesn’t speak up for classical liberalism,”

The Islamic State, which “knows exactly what it stands for,”

Friedrich Hayek observed, “If old truths are to retain their hold on men’s minds, they must be restated in the language and concepts of successive generations”

What is enlightenment? In a 1784 essay with that question as its title, Immanuel Kant answered that it consists of “humankind’s emergence from its self-incurred immaturity,” its “lazy and cowardly” submission to the “dogmas and formulas” of religious or political authority.1 Enlightenment’s motto, he proclaimed, is “Dare to understand!”

David Deutsch’s defense of enlightenment, The Beginning of Infinity.

All failures—all evils—are due to insufficient knowledge.

It is a mistake to confuse hard problems with problems unlikely to be solved.

The thinkers of the Enlightenment sought a new understanding of the human condition. The era was a cornucopia of ideas, some of them contradictory, but four themes tie them together: reason, science, humanism, and progress.

If there’s anything the Enlightenment thinkers had in common, it was an insistence that we energetically apply the standard of reason to understanding our world, and not fall back on generators of delusion like faith, dogma, revelation, authority, charisma, mysticism, divination, visions, gut feelings, or the hermeneutic parsing of sacred texts.

Others were pantheists, who used “God” as a synonym for the laws of nature.

They insisted that it was only by calling out the common sources of folly that we could hope to overcome them. The deliberate application of reason was necessary precisely because our common habits of thought are not particularly reasonable.

That leads to the second ideal, science, the refining of reason to understand the world.

To the Enlightenment thinkers the escape from ignorance and superstition showed how mistaken our conventional wisdom could be, and how the methods of science—skepticism, fallibilism, open debate, and empirical testing—are a paradigm of how to achieve reliable knowledge.

The need for a “science of man” was a theme that tied together Enlightenment thinkers who disagreed about much else, including Montesquieu, Hume, Smith, Kant, Nicolas de Condorcet, Denis Diderot, Jean-Baptiste d’Alembert, Jean-Jacques Rousseau, and Giambattista Vico.

They were cognitive neuroscientists, who tried to explain thought, emotion, and psychopathology in terms of physical mechanisms of the brain. They were evolutionary psychologists, who sought to characterize life in a state of nature and to identify the animal instincts that are “infused into our bosoms.” They were social psychologists, who wrote of the moral sentiments that draw us together, the selfish passions that divide us, and the foibles of shortsightedness that confound our best-laid plans. And they were cultural anthropologists, who mined the accounts of travelers and explorers for data both on human universals and on the diversity of customs and mores across the world’s cultures.

The idea of a universal human nature brings us to a third theme, humanism. The thinkers of the Age of Reason and the Enlightenment saw an urgent need for a secular foundation for morality, because they were haunted by a historical memory of centuries of religious carnage: the Crusades, the Inquisition, witch hunts, the European wars of religion. They laid that foundation in what we now call humanism, which privileges the well-being of individual men, women, and children over the glory of the tribe, race, nation, or religion.

We are endowed with the sentiment of sympathy, which they also called benevolence, pity, and commiseration. Given that we are equipped with the capacity to sympathize with others, nothing can prevent the circle of sympathy from expanding from the family and tribe to embrace all of humankind, particularly as reason goads us into realizing that there can be nothing uniquely deserving about ourselves or any of the groups to which we belong.

A humanistic sensibility impelled the Enlightenment thinkers to condemn not just religious violence but also the secular cruelties of their age, including slavery.

The Enlightenment is sometimes called the Humanitarian Revolution, because it led to the abolition of barbaric practices that had been commonplace across civilizations for millennia.

With our understanding of the world advanced by science and our circle of sympathy expanded through reason and cosmopolitanism, humanity could make intellectual and moral progress.

Government is not a divine fiat to reign, a synonym for “society,” or an avatar of the national, religious, or racial soul. It is a human invention, tacitly agreed to in a social contract, designed to enhance the welfare of citizens by coordinating their behavior and discouraging selfish acts that may be tempting to every individual but leave everyone worse off. As the most famous product of the Enlightenment, the Declaration of Independence, put it, in order to secure the right to life, liberty, and the pursuit of happiness, governments are instituted among people, deriving their just powers from the consent of the governed.

The Enlightenment also saw the first rational analysis of prosperity.

Specialization works only in a market that allows the specialists to exchange their goods and services, and Smith explained that economic activity was a form of mutually beneficial cooperation (a positive-sum game, in today’s lingo): each gets back something that is more valuable to him than what he gives up. Through voluntary exchange, people benefit others by benefiting themselves;

He only said that in a market, whatever tendency people have to care for their families and themselves can work to the good of all.

“If the tailor goes to war against the baker, he must henceforth bake his own bread.”

doux commerce, gentle commerce.

Another Enlightenment ideal, peace.

Together with international commerce, he recommended representative republics (what we would call democracies), mutual transparency, norms against conquest and internal interference, freedom of travel and immigration, and a federation of states that would adjudicate disputes between them.

The first keystone in understanding the human condition is the concept of entropy or disorder, which emerged from 19th-century physics and was defined in its current form by the physicist Ludwig Boltzmann.1 The Second Law of Thermodynamics states that in an isolated system (one that is not interacting with its environment), entropy never decreases.

It follows that any perturbation of the system, whether it is a random jiggling of its parts or a whack from the outside, will, by the laws of probability, nudge the system toward disorder or uselessness—not because nature strives for disorder, but because there are so many more ways of being disorderly than of being orderly.
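As a minimal formalization of that counting argument (standard statistical mechanics, not quoted from the book): Boltzmann defined entropy as

    S = k_B \ln W

where W is the number of microscopic arrangements consistent with a macrostate, and the Second Law states \Delta S \geq 0 for an isolated system. A disorderly macrostate corresponds to vastly more arrangements than an orderly one, so a random perturbation almost surely moves the system toward larger W.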

Law of Entropy.

Life and happiness depend on an infinitesimal sliver of orderly arrangements of matter amid the astronomical number of possibilities.

The Law of Entropy is widely acknowledged in everyday life in sayings such as “Things fall apart,” “Rust never sleeps,” “Shit happens,” and “Whatever can go wrong will go wrong.”

“The Second Law of Thermodynamics Is the First Law of Psychology.”4 Why the awe for the Second Law? From an Olympian vantage point, it defines the fate of the universe and the ultimate purpose of life, mind, and human striving: to deploy energy and knowledge to fight back the tide of entropy and carve out refuges of beneficial order.

Until Darwin published in 1859, it was reasonable to think that organisms were the handiwork of a divine designer—one of the reasons, I suspect, that so many Enlightenment thinkers were deists rather than outright atheists. Darwin and Wallace made the designer unnecessary. Once self-organizing processes of physics and chemistry gave rise to a configuration of matter that could replicate itself, the copies would make copies, which would make copies of the copies, and so on, in an exponential explosion.

Organisms are open systems: they capture energy from the sun, food, or ocean vents to carve out temporary pockets of order in their bodies and nests while they dump heat and waste into the environment, increasing disorder in the world as a whole.

Nature is a war, and much of what captures our attention in the natural world is an arms race.

the third keystone, information.8 Information may be thought of as a reduction in entropy—as the ingredient that distinguishes an orderly, structured system from the vast set of random, useless ones.
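One standard way to make “reduction in entropy” precise (Shannon’s definition; my addition, not Pinker’s wording): a source with outcome probabilities p_i has entropy

    H = -\sum_i p_i \log_2 p_i \quad \text{bits}

and the information gained by an observation is the drop in uncertainty, I = H_{\text{before}} - H_{\text{after}}. A fair coin toss has H = 1 bit; learning the outcome drives the entropy to 0, yielding exactly one bit of information.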

The information contained in a pattern depends on how coarsely or finely grained our view of the world is.

Information is what gets accumulated in a genome in the course of evolution. The sequence of bases in a DNA molecule correlates with the sequence of amino acids in the proteins that make up the organism’s body, and they got that sequence by structuring the organism’s ancestors—reducing their entropy—into the improbable configurations that allowed them to capture energy and grow and reproduce.

Energy channeled by knowledge is the elixir with which we stave off entropy, and advances in energy capture are advances in human destiny. The invention of farming around ten thousand years ago multiplied the availability of calories from cultivated plants and domesticated animals, freed a portion of the population from the demands of hunting and gathering, and eventually gave them the luxury of writing, thinking, and accumulating their ideas. Around 500 BCE, in what the philosopher Karl Jaspers called the Axial Age, several widely separated cultures pivoted from systems of ritual and sacrifice that merely warded off misfortune to systems of philosophical and religious belief that promoted selflessness and promised spiritual transcendence.

(Confucius, Buddha, Pythagoras, Aeschylus, and the last of the Hebrew prophets walked the earth at the same time.)

The Axial Age was when agricultural and economic advances provided a burst of energy: upwards of 20,000 calories per person per day in food, fodder, fuel, and raw materials. This surge allowed the civilizations to afford larger cities, a scholarly and priestly class, and a reorientation of their priorities from short-term survival to long-term harmony. As Bertolt Brecht put it millennia later: Grub first, then ethics.19

And the next leap in human welfare—the end of extreme poverty and spread of abundance, with all its moral benefits—will depend on technological advances that provide energy at an acceptable economic and environmental cost to the entire world.

The first piece of wisdom they offer is that misfortune may be no one’s fault. A major breakthrough of the Scientific Revolution—perhaps its biggest breakthrough—was to refute the intuition that the universe is saturated with purpose.

Galileo, Newton, and Laplace replaced this cosmic morality play with a clockwork universe in which events are caused by conditions in the present, not goals for the future.

Not only does the universe not care about our desires, but in the natural course of events it will appear to thwart them, because there are so many more ways for things to go wrong than for them to go right.

Awareness of the indifference of the universe was deepened still further by an understanding of evolution.

As Adam Smith pointed out, what needs to be explained is wealth. Yet even today, when few people believe that accidents or diseases have perpetrators, discussions of poverty consist mostly of arguments about whom to blame for it.

Another implication of the Law of Entropy is that a complex system like an organism can easily be disabled, because its functioning depends on so many improbable conditions being satisfied at once.

So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment.

And the second decade of the 21st century saw the rise of populist movements that blatantly repudiate the ideals of the Enlightenment.1 They are tribalist rather than cosmopolitan, authoritarian rather than democratic, contemptuous of experts rather than respectful of knowledge, and nostalgic for an idyllic past rather than hopeful for a better future.

The disdain for reason, science, humanism, and progress has a long pedigree in elite intellectual and artistic culture.

The Enlightenment was swiftly followed by a counter-Enlightenment, and the West has been divided ever since.

The Romantic movement pushed back particularly hard against Enlightenment ideals. Rousseau, Johann Herder, Friedrich Schelling, and others denied that reason could be separated from emotion, that individuals could be considered apart from their culture, that people should provide reasons for their acts, that values applied across times and places, and that peace and prosperity were desirable ends. A human is a part of an organic whole—a culture, race, nation, religion, spirit, or historical force—and people should creatively channel the transcendent unity of which they are a part. Heroic struggle, not the solving of problems, is the greatest good, and violence is inherent to nature and cannot be stifled without draining life of its vitality. “There are but three groups worthy of respect,” wrote Charles Baudelaire, “the priest, the warrior, and the poet. To know, to kill, and to create.”

The most obvious is religious faith.

Religions also commonly clash with humanism whenever they elevate some moral good above the well-being of humans, such as accepting a divine savior, ratifying a sacred narrative, enforcing rituals and taboos, proselytizing other people to do the same, and punishing or demonizing those who don’t.

A second counter-Enlightenment idea is that people are the expendable cells of a superorganism—a clan, tribe, ethnic group, religion, race, class, or nation—and that the supreme good is the glory of this collectivity rather than the well-being of the people who make it up. An obvious example is nationalism, in which the superorganism is the nation-state, namely an ethnic group with a government.

Nationalism should not be confused with civic values, public spirit, social responsibility, or cultural pride.

It’s quite another thing when a person is forced to make the supreme sacrifice for the benefit of a charismatic leader, a square of cloth, or colors on a map.

Religion and nationalism are signature causes of political conservatism, and continue to affect the fate of billions of people in the countries under their influence.

Left-wing and right-wing political ideologies have themselves become secular religions, providing people with a community of like-minded brethren, a catechism of sacred beliefs, a well-populated demonology, and a beatific confidence in the righteousness of their cause.

Political ideology undermines reason and science.7 It scrambles people’s judgment, inflames a primitive tribal mindset, and distracts them from a sounder understanding of how to improve the world. Our greatest enemies are ultimately not our political adversaries but entropy, evolution (in the form of pestilence and the flaws in human nature), and most of all ignorance—a shortfall of knowledge of how best to solve our problems.

For almost two centuries, a diverse array of writers has proclaimed that modern civilization, far from enjoying progress, is in steady decline and on the verge of collapse.

Declinism bemoans our Promethean dabbling with technology.9 By wresting fire from the gods, we have only given our species the means to end its own existence, if not by poisoning our environment then by loosing nuclear weapons, nanotechnology, cyberterror, bioterror, artificial intelligence, and other existential threats upon the world.

Another variety of declinism agonizes about the opposite problem—not that modernity has made life too harsh and dangerous, but that it has made it too pleasant and safe. According to these critics, health, peace, and prosperity are bourgeois diversions from what truly matters in life.

In the twilight of a decadent, degenerate civilization, true liberation is to be found not in sterile rationality or effete humanism but in an authentic, heroic, holistic, organic, sacred, vital being-in-itself and will to power.

Friedrich Nietzsche, who coined the term will to power, recommends the aristocratic violence of the “blond Teuton beasts” and the samurai, Vikings, and Homeric heroes: “hard, cold, terrible, without feelings and without conscience, crushing everything, and bespattering everything with blood.”

The historical pessimists dread the downfall but lament that we are powerless to stop it. The cultural pessimists welcome it with a “ghoulish schadenfreude.” Modernity is so bankrupt, they say, that it cannot be improved, only transcended.

A final alternative to Enlightenment humanism condemns its embrace of science. Following C. P. Snow, we can call it the Second Culture.

The Second Culture persists today: many intellectuals and critics express a disdain for science as anything but a fix for mundane problems. They write as if the consumption of elite art is the ultimate moral good.

Intellectual magazines regularly denounce “scientism,” the intrusion of science into the territory of the humanities such as politics and the arts.

Science is commonly blamed for racism, imperialism, world wars, and the Holocaust.

Intellectuals hate progress.

It’s the idea of progress that rankles the chattering class—the Enlightenment belief that by understanding the world we can improve the human condition.

A modern optimist believes that the world can be much, much better than it is today. Voltaire was satirizing not the Enlightenment hope for progress but its opposite, the religious rationalization for suffering called theodicy, according to which God had no choice but to allow epidemics and massacres because a world without them is metaphysically impossible.

In The Idea of Decline in Western History, Arthur Herman shows that prophets of doom are the all-stars of the liberal arts curriculum, including Nietzsche, Arthur Schopenhauer, Martin Heidegger, Theodor Adorno, Walter Benjamin, Herbert Marcuse, Jean-Paul Sartre, Frantz Fanon, Michel Foucault, Edward Said, Cornel West, and a chorus of eco-pessimists.

Psychologists have long known that people tend to see their own lives through rose-colored glasses: they think they’re less likely than the average person to become the victim of a divorce, layoff, accident, illness, or crime. But change the question from the people’s lives to their society, and they transform from Pollyanna to Eeyore.

Public opinion researchers call it the Optimism Gap.

The news, far from being a “first draft of history,” is closer to play-by-play sports commentary.

The nature of news is likely to distort people’s view of the world because of a mental bug that the psychologists Amos Tversky and Daniel Kahneman called the Availability heuristic: people estimate the probability of an event or the frequency of a kind of thing by the ease with which instances come to mind.

Availability errors are a common source of folly in human reasoning.

Vacationers stay out of the water after they have read about a shark attack or if they have just seen Jaws.12 Plane crashes always make the news, but car crashes, which kill far more people, almost never do.

How can we soundly appraise the state of the world?

The answer is to count. How many people are victims of violence as a proportion of the number of people alive? How many are sick, how many starving, how many poor, how many oppressed, how many illiterate, how many unhappy? And are those numbers going up or down? A quantitative mindset, despite its nerdy aura, is in fact the morally enlightened one, because it treats every human life as having equal value rather than privileging the people who are closest to us or most photogenic.

Resistance to the idea of progress runs deeper than statistical fallacies.

Many people lack the conceptual tools to ascertain whether progress has taken place or not; the very idea that things can get better just doesn’t compute.

A decline is not the same thing as a disappearance. (The statement “x > y” is different from the statement “y = 0.”) Something can decrease a lot without vanishing altogether. That means that the level of violence today is completely irrelevant to the question of whether violence has declined over the course of history.

The only way to answer that question is to compare the level of violence now with the level of violence in the past. And whenever you look at the level of violence in the past, you find a lot of it, even if it isn’t as fresh in memory as the morning’s headlines.

No, the psychological roots of progressophobia run deeper. The deepest is a bias that has been summarized in the slogan “Bad is stronger than good.”21 The idea can be captured in a set of thought experiments suggested by Tversky.

The psychological literature confirms that people dread losses more than they look forward to gains, that they dwell on setbacks more than they savor good fortune, and that they are more stung by criticism than they are heartened by praise. (As a psycholinguist I am compelled to add that the English language has far more words for negative emotions than for positive ones.)

One exception to the Negativity bias is found in autobiographical memory. Though we tend to remember bad events as well as we remember good ones, the negative coloring of the misfortunes fades with time, particularly the ones that happened to us.24 We are wired for nostalgia: in human memory, time heals most wounds.

The cure for the Availability bias is quantitative thinking.

Trump was the beneficiary of a belief—near universal in American journalism—that “serious news” can essentially be defined as “what’s going wrong.” . . . For decades, journalism’s steady focus on problems and seemingly incurable pathologies was preparing the soil that allowed Trump’s seeds of discontent and despair to take root. . . . One consequence is that many Americans today have difficulty imagining, valuing or even believing in the promise of incremental system change, which leads to a greater appetite for revolutionary, smash-the-machine change.

The shift during the Vietnam and Watergate eras from glorifying leaders to checking their power—with an overshoot toward indiscriminate cynicism, in which everything about America’s civic actors invites an aggressive takedown.

Sentiment mining assesses the emotional tone of a text by tallying the number and contexts of words with positive and negative connotations, like good, nice, terrible, and horrific.
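A minimal sketch of that tallying approach (my illustration; the tiny word lists stand in for the large curated lexicons real studies use, and real systems also handle negation and context):

    # Lexicon-based sentiment scoring: tally positive and negative words
    # and return a score in [-1, 1]. Word lists are illustrative only.
    POSITIVE = {"good", "nice", "improve", "progress", "safe", "better"}
    NEGATIVE = {"terrible", "horrific", "crisis", "war", "decline", "worse"}

    def sentiment(text: str) -> float:
        words = [w.strip(".,:;!?") for w in text.lower().split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        total = pos + neg
        return 0.0 if total == 0 else (pos - neg) / total

    print(sentiment("A terrible crisis, and things got worse."))  # -1.0
    print(sentiment("Good news: safety continued to improve."))   # 1.0

Averaging such scores over all articles in a year gives the kind of mood curve described below.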

Putting aside the wiggles and waves that reflect the crises of the day, we see that the impression that the news has become more negative over time is real. The New York Times got steadily more morose from the early 1960s to the early 1970s, lightened up a bit (but just a bit) in the 1980s and 1990s, and then sank into a progressively worse mood in the first decade of the new century.

And here is a shocker: The world has made spectacular progress in every single measure of human well-being. Here is a second shocker: Almost no one knows about it.

In the mid-18th century, life expectancy in Europe and the Americas was around 35, where it had been parked for the 225 previous years for which we have data.3 Life expectancy for the world as a whole was 29.

The life expectancy of hunter-gatherers is around 32.5, and it probably decreased among the peoples who first took up farming because of their starchy diet and the diseases they caught from their livestock and each other.

It returned to the low 30s by the Bronze Age, where it stayed put for thousands of years, with small fluctuations across centuries and regions.4 This period in human history may be called the Malthusian Era, when any advance in agriculture or health was quickly canceled by the resulting bulge in population, though “era” is an odd term for 99.9 percent of our species’ existence.

Progress is an outcome not of magic but of problem-solving.

Problems are inevitable, and at times particular sectors of humanity have suffered terrible setbacks.

Average life spans are stretched the most by decreases in infant and child mortality, both because children are fragile and because the death of a child brings down the average more than the death of a 60-year-old.
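A toy calculation of that averaging effect (numbers mine, not the book’s): in a cohort where half die at age 1 and half at age 70, life expectancy at birth is

    0.5 \times 1 + 0.5 \times 70 = 35.5

even though every surviving adult reaches 70; eliminating the infant deaths alone lifts the average to 70 without anyone living a day longer.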

Are we really living longer, or are we just surviving infancy in greater numbers?

So do those of us who survive the ordeals of childbirth and childhood today live any longer than the survivors of earlier eras? Yes, much longer.

No matter how old you are, you have more years ahead of you than people of your age did in earlier decades and centuries.

The economist Steven Radelet has pointed out that “the improvements in health among the global poor in the last few decades are so large and widespread that they rank among the greatest achievements in human history. Rarely has the basic well-being of so many people around the world improved so substantially, so quickly. Yet few people are even aware that it is happening.”13

In his 2005 bestseller The Singularity Is Near, the inventor Ray Kurzweil forecasts that those of us who make it to 2045 will live forever, thanks to advances in genetics, nanotechnology (such as nanobots that will course through our bloodstream and repair our bodies from the inside), and artificial intelligence, which will not just figure out how to do all this but recursively improve its own intelligence without limit.

Lacking the gift of prophecy, no one can say whether scientists will ever find a cure for mortality. But evolution and entropy make it unlikely. Senescence is baked into our genome at every level of organization, because natural selection favors genes that make us vigorous when we are young over those that make us live as long as possible.

Peter Hoffman points out, “Life pits biology against physics in mortal combat.”

“Income—although important both in and of itself and as a component of wellbeing . . .—is not the ultimate cause of wellbeing.”16 The fruits of science are not just high-tech pharmaceuticals such as vaccines, antibiotics, antiretrovirals, and deworming pills. They also comprise ideas—ideas that may be cheap to implement and obvious in retrospect, but which save millions of lives. Examples include boiling, filtering, or adding bleach to water; and washing hands.

The historian Fernand Braudel has documented that premodern Europe suffered from famines every few decades.

Many of those who were not starving were too weak to work, which locked them into poverty.

As the comedian Chris Rock observed, “This is the first society in history where the poor people are fat.”

Hardship everywhere before the 19th century, rapid improvement in Europe and the United States over the next two centuries, and, in recent decades, the developing world catching up.

Fortunately, the numbers reflect an increase in the availability of calories throughout the range, including the bottom.

Figure 7-2 shows the proportion of children who are stunted in a representative sample of countries which have data for the longest spans of time.

We see that in just two decades the rate of stunting has been cut in half.

Not only has chronic undernourishment been in decline, but so have catastrophic famines—the crises that kill people in large numbers and cause widespread wasting (the condition of being two standard deviations below one’s expected weight)

Figure 7-4 shows the number of deaths in major famines in each decade for the past 150 years, scaled by world population at the time.

The link from crop failure to famine has been broken. Most recent drought- or flood-triggered food crises have been adequately met by a combination of local and international humanitarian response.

In 1798 Thomas Malthus explained that the frequent famines of his era were unavoidable and would only get worse, because “population, when unchecked, increases in a geometrical ratio. Subsistence increases only in an arithmetic ratio. A slight acquaintance with numbers will show the immensity of the first power in comparison with the second.” The implication was that efforts to feed the hungry would only lead to more misery, because they would breed more children who were doomed to hunger in their turn.
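Written as equations (a textbook rendering of the claim, not the book’s notation): unchecked population grows geometrically, P_t = P_0 (1+r)^t, while subsistence grows arithmetically, S_t = S_0 + ct, so food per person

    \frac{S_t}{P_t} = \frac{S_0 + ct}{P_0 (1+r)^t} \to 0

as t grows, for any growth rate r > 0. That vanishing ratio is the “immensity of the first power in comparison with the second.”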

Where did Malthus’s math go wrong? Looking at the first of his curves, we already saw that population growth needn’t increase in a geometric ratio indefinitely, because when people get richer and more of their babies survive, they have fewer babies (see also figure 10-1). Conversely, famines don’t reduce population growth for long. They disproportionately kill children and the elderly, and when conditions improve, the survivors quickly replenish the population.13 As Hans Rosling put it, “You can’t stop population growth by letting poor children die.”14

Looking at the second curve, we discover that the food supply can grow geometrically when knowledge is applied to increase the amount of food that can be coaxed out of a patch of land. Since the birth of agriculture ten thousand years ago, humans have been genetically engineering plants and animals by selectively breeding the ones that had the most calories and fewest toxins and that were the easiest to plant and harvest.

Clever farmers also tinkered with irrigation, plows, and organic fertilizers, but Malthus always had the last word.

The moral imperative was explained to Gulliver by the King of Brobdingnag: “Whoever makes two ears of corn, or two blades of grass to grow where only one grew before, deserves better of humanity, and does more essential service to his country than the whole race of politicians put together.”

British Agricultural Revolution.16 Crop rotation and improvements to plows and seed drills were followed by mechanization, with fossil fuels replacing human and animal muscle.

But the truly gargantuan boost would come from chemistry. The N in SPONCH, the acronym taught to schoolchildren for the chemical elements that make up the bulk of our bodies, stands for nitrogen, a major ingredient of protein, DNA, chlorophyll, and the energy carrier ATP. Nitrogen atoms are plentiful in the air but bound in pairs (hence the chemical formula N2), which are hard to split apart so that plants can use them.

Fertilizer on an industrial scale.

Over the past century, grain yields per hectare have swooped upward while real prices have plunged.

In the United States in 1901, an hour’s wages could buy around three quarts of milk; a century later, the same wages would buy sixteen quarts. The amount of every other foodstuff that can be bought with an hour of labor has multiplied as well: from a pound of butter to five pounds, a dozen eggs to twelve dozen, two pounds of pork chops to five pounds, and nine pounds of flour to forty-nine pounds.

In addition to beating back hunger, the ability to grow more food from less land has been, on the whole, good for the planet. Despite their bucolic charm, farms are biological deserts which sprawl over the landscape at the expense of forests and grasslands. Now that farms have receded in some parts of the world, temperate forests have been bouncing back,

High-tech agriculture, the critics said, consumes fossil fuels and groundwater, uses herbicides and pesticides, disrupts traditional subsistence agriculture, is biologically unnatural, and generates profits for corporations. Given that it saved a billion lives and helped consign major famines to the dustbin of history, this seems to me like a reasonable price to pay. More important, the price need not be with us forever. The beauty of scientific progress is that it never locks us into a technology but can develop new ones with fewer problems than the old ones (a dynamic we will return to here).

(There is no such thing as a genetically unmodified crop.) Yet traditional environmentalist groups, with what the ecology writer Stewart Brand has called their “customary indifference to starvation,” have prosecuted a fanatical crusade to keep transgenic crops from people—not just from whole-food gourmets in rich countries but from poor farmers in developing ones.

“Poverty has no causes,” wrote the economist Peter Bauer.

History is written not so much by the victors as by the affluent, the sliver of humanity with the leisure and education to write about it.

Norberg, drawing on Braudel, offers vignettes of this era of misery, when the definition of poverty was simple: “if you could afford to buy bread to survive another day, you were not poor.”

Economists speak of a “lump fallacy” or “physical fallacy”: the belief that a finite amount of wealth has existed since the beginning of time, like a lode of gold, and that people have been fighting over how to divide it up ever since.4 Among the brainchildren of the Enlightenment is the realization that wealth is created.5 It is created primarily by knowledge and cooperation: networks of people arrange matter into improbable but useful configurations and combine the fruits of their ingenuity and labor. The corollary, just as radical, is that we can figure out how to make more of it.

The endurance of poverty and the transition to modern affluence can be shown in a simple but stunning graph. It plots, for the past two thousand years, a standard measure of wealth creation, the Gross World Product, measured in 2011 international dollars.

The story of the growth of prosperity in human history depicted in figure 8-1 is close to: nothing . . . nothing . . . nothing . . . (repeat for a few thousand years) . . . boom! A millennium after the year 1 CE, the world was barely richer than it was at the time of Jesus.

Starting in the 19th century, the increments turned into leaps and bounds. Between 1820 and 1900, the world’s income tripled. It tripled again in a bit more than fifty years. It took only twenty-five years for it to triple again, and another thirty-three years to triple yet another time. The Gross World Product today has grown almost a hundredfold since the Industrial Revolution was in place in 1820, and almost two hundredfold from the start of the Enlightenment in the 18th century.
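As a quick check on the arithmetic those tripling intervals imply (interval lengths rounded from the paragraph above; the annual rates are computed, not quoted):

    # A quantity that triples in n years grows at 3**(1/n) - 1 per year.
    for label, years in [("1820-1900", 80), ("next ~50 yrs", 50),
                         ("next 25 yrs", 25), ("next 33 yrs", 33)]:
        rate = 3 ** (1 / years) - 1
        print(f"{label}: triples in {years} yrs = ~{rate:.1%}/yr")
    # Prints roughly 1.4%, 2.2%, 4.5%, and 3.4% per year respectively.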

Indeed, the Gross World Product is a gross underestimate of the expansion of prosperity.

Adam Smith called it the paradox of value: when an important good becomes plentiful, it costs far less than what people are willing to pay for it. The difference is called consumer surplus, and the explosion of this surplus over time is impossible to tabulate.

This is what the economic historian Joel Mokyr calls “the enlightened economy.”8 The machines and factories of the Industrial Revolution, the productive farms of the Agricultural Revolution, and the water pipes of the Public Health Revolution could deliver more clothes, tools, vehicles, books, furniture, calories, clean water, and other things that people want than the craftsmen and farmers of a century before.

“After 1750 the epistemic base of technology slowly began to expand. Not only did new products and techniques emerge; it became better understood why and how the old ones worked, and thus they could be refined, debugged, improved, combined with others in novel ways and adapted to new uses.”

One was the development of institutions that lubricated the exchange of goods, services, and ideas—the dynamic singled out by Adam Smith as the generator of wealth. The economists Douglass North, John Wallis, and Barry Weingast argue that the most natural way for states to function, both in history and in many parts of the world today, is for elites to agree not to plunder and kill each other, in exchange for which they are awarded a fief, franchise, charter, monopoly, turf, or patronage network that allows them to control some sector of the economy and live off the rents (in the economist’s sense of income extracted from exclusive access to a resource).

The third innovation, after science and institutions, was a change in values: an endorsement of what the economic historian Deirdre McCloskey calls bourgeois virtue.12 Aristocratic, religious, and martial cultures have always looked down on commerce as tawdry and venal. But in 18th-century England and the Netherlands, commerce came to be seen as moral and uplifting. Voltaire and other Enlightenment philosophes valorized the spirit of commerce for its ability to dissolve sectarian hatreds:

“The Enlightenment thus translated the ultimate question ‘How can I be saved?’ into the pragmatic ‘How can I be happy?’—thereby heralding a new praxis of personal and social adjustment.”

In 1905 the sociologist Max Weber proposed that capitalism depended on a “Protestant ethic” (a hypothesis with the intriguing prediction that Jews should fare poorly in capitalist societies, particularly in business and finance). In any case the Catholic countries of Europe soon zoomed out of poverty too, and a succession of other escapes shown in figure 8-2 have put the lie to various theories explaining why Buddhism, Confucianism, Hinduism, or generic “Asian” or “Latin” values were incompatible with dynamic market economies.

Starting in the late 20th century, poor countries have been escaping from poverty in their turn. The Great Escape is becoming the Great Convergence.

Extreme poverty is being eradicated, and the world is becoming middle class.

In 1800, at the dawn of the Industrial Revolution, most people everywhere were poor. The average income was equivalent to that in the poorest countries in Africa today (about $500 a year in international dollars), and almost 95 percent of the world lived in what counts today as “extreme poverty” (less than $1.90 a day). By 1975, Europe and its offshoots had completed the Great Escape, leaving the rest of the world behind, with one-tenth their income, in the lower hump of a camel-shaped curve.20 In the 21st century the camel has become a dromedary, with a single hump shifted to the right and a much lower tail on the left: the world has become richer and more equal.

In two hundred years the rate of extreme poverty in the world has tanked from 90 percent to 10, with almost half that decline occurring in the last thirty-five years.

Also, an increase in the number of people who can withstand the grind of entropy and the struggle of evolution is a testimonial to the sheer magnitude of the benevolent powers of science, markets, good government, and other modern institutions.

“In 1976,” Radelet writes, “Mao single-handedly and dramatically changed the direction of global poverty with one simple act: he died.”

The death of Mao Zedong is emblematic of three of the major causes of the Great Convergence.

The first is the decline of communism (together with intrusive socialism). For reasons we have seen, market economies can generate wealth prodigiously while totalitarian planned economies impose scarcity, stagnation, and often famine.

A shift from collectivization, centralized control, government monopolies, and suffocating permit bureaucracies (what in India was called “the license raj”) to open economies took place on a number of fronts beginning in the 1980s. They included Deng Xiaoping’s embrace of capitalism in China, the collapse of the Soviet Union and its domination of Eastern Europe, and the liberalization of the economies of India, Brazil, Vietnam, and other countries.

It’s important to add that the market economies which blossomed in the more fortunate parts of the developing world were not the laissez-faire anarchies of right-wing fantasies and left-wing nightmares. To varying degrees, their governments invested in education, public health, infrastructure, and agricultural and job training, together with social insurance and poverty-reduction programs.35

Radelet’s second explanation of the Great Convergence is leadership.

During the decades of stagnation from the 1970s to the early 1990s, many other developing countries were commandeered by psychopathic strongmen with ideological, religious, tribal, paranoid, or self-aggrandizing agendas rather than a mandate to enhance the well-being of their citizens.

The 1990s and 2000s saw a spread of democracy (chapter 14) and the rise of levelheaded, humanistic leaders—not just national statesmen like Nelson Mandela, Corazon Aquino, and Ellen Johnson Sirleaf but local religious and civil-society leaders acting to improve the lives of their compatriots.38

A third cause was the end of the Cold War. It not only pulled the rug out from under a number of tinpot dictators but snuffed out many of the civil wars that had racked developing countries since they attained independence in the 1960s.

A fourth cause is globalization, in particular the explosion in trade made possible by container ships and jet airplanes and by the liberalization of tariffs and other barriers to investment and trade. Classical economics and common sense agree that a larger trading network should make everyone, on average, better off.

Radelet, who observes that “while working on the factory floor is often referred to as sweatshop labor, it is often better than the granddaddy of all sweatshops: working in the fields as an agricultural day laborer.”

Over the course of a generation, slums, barrios, and favelas can morph into suburbs, and the working class can become middle class.47

Progress consists of unbundling the features of a social process as much as we can to maximize the human benefits while minimizing the harms.

The last, and in many analyses the most important, contributor to the Great Convergence is science and technology.49 Life is getting cheaper, in a good way. Thanks to advances in know-how, an hour of labor can buy more food, health, education, clothing, building materials, and small necessities and luxuries than it used to. Not only can people eat cheaper food and take cheaper medicines, but children can wear cheap plastic sandals instead of going barefoot, and adults can hang out together getting their hair done or watching a soccer game using cheap solar panels and appliances.

Today about half the adults in the world own a smartphone, and there are as many subscriptions as people. In parts of the world without roads, landlines, postal service, newspapers, or banks, mobile phones are more than a way to share gossip and cat photos; they are a major generator of wealth. They allow people to transfer money, order supplies, track the weather and markets, find day labor, get advice on health and farming practices, even obtain a primary education.

Quality of life.

Health, longevity, and education are so much more affordable than they used to be.

Everyone is living longer regardless of income.55 In the richest country two centuries ago (the Netherlands), life expectancy was just forty, and in no country was it above forty-five.

Today, life expectancy in the poorest country in the world (the Central African Republic) is fifty-four, and in no country is it below forty-five.56

GDP per capita correlates with longevity, health, and nutrition.57 Less obviously, it correlates with higher ethical values like peace, freedom, human rights, and tolerance.

Between 2009 and 2016, the proportion of articles in the New York Times containing the word inequality soared tenfold, reaching 1 in 73.1

The Great Recession began in 2007.

In the United States, the share of income going to the richest one percent grew from 8 percent in 1980 to 18 percent in 2015, while the share going to the richest tenth of one percent grew from 2 percent to 8 percent.4

I need a chapter on the topic because so many people have been swept up in the dystopian rhetoric and see inequality as a sign that modernity has failed to improve the human condition. As we will see, this is wrong, and for many reasons.

Income inequality is not a fundamental component of well-being.

The point is made with greater nuance by the philosopher Harry Frankfurt in his 2015 book On Inequality.5 Frankfurt argues that inequality itself is not morally objectionable; what is objectionable is poverty. If a person lives a long, healthy, pleasurable, and stimulating life, then how much money the Joneses earn, how big their house is, and how many cars they drive are morally irrelevant. Frankfurt writes, “From the point of view of morality, it is not important that everyone should have the same. What is morally important is that each should have enough.”

Lump fallacy—the mindset in which wealth is a finite resource.

Since the Industrial Revolution, it has expanded exponentially. That means that when the rich get richer, the poor can get richer, too.

“The poorer half of the population are as poor today as they were in the past, with barely 5 percent of total wealth in 2010, just as in 1910.”8 But total wealth today is vastly greater than it was in 1910, so if the poorer half own the same proportion, they are far richer, not “as poor.”

Among the world’s billionaires is J. K. Rowling, author of the Harry Potter novels, which have sold more than 400 million copies and have been adapted into a series of films seen by a similar number of people.10 Suppose that a billion people have handed over $10 each for the pleasure of a Harry Potter paperback or movie ticket, with a tenth of the proceeds going to Rowling. She has become a billionaire, increasing inequality, but she has made people better off, not worse off (which is not to say that every rich person has made people better off).

Her wealth arose as a by-product of the voluntary decisions of billions of book buyers and moviegoers.

When the rich get too rich, everyone else feels poor, so inequality lowers well-being even if everyone gets richer. This is an old idea in social psychology, variously called the theory of social comparison, reference groups, status anxiety, or relative deprivation.

We will see in chapter 18 that richer people and people in richer countries are (on average) happier than poorer people and people in poorer countries.

In their well-known book The Spirit Level, the epidemiologists Richard Wilkinson and Kate Pickett claim that countries with greater income inequality also have higher rates of homicide, imprisonment, teen pregnancy, infant mortality, physical and mental illness, social distrust, obesity, and substance abuse.14

The Spirit Level theory has been called “the left’s new theory of everything,” and it is as problematic as any other theory that leaps from a tangle of correlations to a single-cause explanation. For one thing, it’s not obvious that people

Wilkinson and Pickett’s sample was restricted to developed countries, but even within that sample the correlations are evanescent, coming and going with choices about which countries to include.

Kelley and Evans held constant the major factors that are known to affect happiness, including GDP per capita, age, sex, education, marital status, and religious attendance, and found that the theory that inequality causes unhappiness “comes to shipwreck on the rock of the facts.”

The authors suggest that whatever envy, status anxiety, or relative deprivation people may feel in poor, unequal countries is swamped by hope. Inequality is seen as a harbinger of opportunity, a sign that education and other routes to upward mobility might pay off for them and their children.

People are content with economic inequality as long as they feel that the country is meritocratic, and they get angry when they feel it isn’t. Narratives about the causes of inequality loom larger in people’s minds than the existence of inequality. That creates an opening for politicians to rouse the rabble by singling out cheaters who take more than their fair share: welfare queens, immigrants, foreign countries, bankers, or the rich, sometimes identified with ethnic minorities.18

Investment in research and infrastructure to escape economic stagnation, regulation of the finance sector to reduce instability, broader access to education and job training to facilitate economic mobility, electoral transparency and finance reform to eliminate illicit influence, and so on.

Economic inequality, then, is not itself a dimension of human well-being, and it should not be confused with unfairness or with poverty. Let’s now turn from the moral significance of inequality to the question of why it has changed over time.

The simplest narrative of the history of inequality is that it comes with modernity.

Inequality, in this story, started at zero, and as wealth increased over time, inequality grew with it. But the story is not quite right.

The image of forager egalitarianism is misleading. For one thing, the hunter-gatherer bands that are still around for us to study are not representative of an ancestral way of life, because they have been pushed into marginal lands and lead nomadic lives that make the accumulation of wealth impossible, if for no other reason than that it would be a nuisance to carry around. But sedentary hunter-gatherers, such as the natives of the Pacific Northwest, which is flush with salmon, berries, and fur-bearing animals, were florid inegalitarians, and developed a hereditary nobility who kept slaves, hoarded luxuries, and flaunted their wealth in gaudy potlatches.

They are less likely to share plant foods, since gathering is a matter of effort, and indiscriminate sharing would allow free-riding.

What happens when a society starts to generate substantial wealth? An increase in absolute inequality (the difference between the richest and poorest) is almost a mathematical necessity.

Some people are bound to take greater advantage of the new opportunities than others, whether by luck, skill, or effort, and they will reap disproportionate rewards.
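A toy simulation of that near-necessity (assumptions mine: equal starting incomes, growth rates drawn at random per person):

    import random

    # Everyone starts equal; each decade, incomes grow by a random factor.
    # Everyone ends up richer, but the absolute rich-poor gap widens.
    random.seed(1)
    incomes = [1000.0] * 100
    for decade in range(1, 6):
        incomes = [inc * (1 + random.uniform(0.0, 0.06)) for inc in incomes]
        print(f"decade {decade}: min={min(incomes):8.2f} "
              f"max={max(incomes):8.2f} gap={max(incomes) - min(incomes):7.2f}")

Every income rises every decade, yet the gap between richest and poorest grows monotonically: absolute inequality increases even though no one is made worse off.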

As the Industrial Revolution gathered steam, European countries made a Great Escape from universal poverty, leaving the other countries behind.

What’s significant about the decline in inequality is that it’s a decline in poverty.

But then, starting around 1980, inequality bounced into a decidedly un-Kuznetsian rise.

The rise and fall in inequality in the 19th century reflects Kuznets’s expanding economy, which gradually pulls more people into urban, skilled, and thus higher-paying occupations. But the 20th-century plunge—which has been called the Great Leveling or the Great Compression—had more sudden causes. The plunge overlaps the two world wars, and that is no coincidence: major wars often level the income distribution.

The historian Walter Scheidel identifies “Four Horsemen of Leveling”: mass-mobilization warfare, transformative revolution, state collapse, and lethal pandemics.

The four horsemen reduce inequality by killing large numbers of workers, driving up the wages of those who survive.

But modernity has brought a more benign way to reduce inequality. As we have seen, a market economy is the best poverty-reduction program we know of for an entire country.

(Another way of putting it is that a market economy maximizes the average, but we also care about the variance and the range.) As the circle of sympathy in a country expands to encompass the poor (and as people want to insure themselves should they ever become poor), they increasingly allocate a portion of their pooled resources—that is, government funds—to alleviating that poverty.

The net result is “redistribution,” but that is something of a misnomer, because the goal is to raise the bottom, not lower the top, even if in practice the top is lowered.

Figure 9-4 shows that social spending took off in the middle decades of the 20th century (in the United States, with the New Deal in the 1930s; in other developed countries, with the rise of the welfare state after World War II). Social spending now takes up a median of 22 percent of their GDP.31

The explosion in social spending has redefined the mission of government: from warring and policing to also nurturing.32 Governments underwent this transformation for several reasons. Social spending inoculates citizens against the appeal of communism and fascism. Some of the benefits, like universal education and public health, are public goods that accrue to everyone, not just the direct beneficiaries.

Social spending is designed to help people who have less money, with the bill footed by people who have more money. This is the principle known as redistribution, the welfare state, social democracy, or socialism (misleadingly, because free-market capitalism is compatible with any amount of social spending).

The United States is famously resistant to anything smacking of redistribution. Yet it allocates 19 percent of its GDP to social services, and despite the best efforts of conservatives and libertarians the spending has continued to grow. The most recent expansions are a prescription drug benefit introduced by George W. Bush and the eponymous health insurance plan known as Obamacare introduced by his successor.

Many Americans are forced to pay for health, retirement, and disability benefits through their employers rather than the government. When this privately administered social spending is added to the public portion, the United States vaults from twenty-fourth into second place among the thirty-five OECD countries, just behind France.34

Social spending, like everything, has downsides. As with all insurance, it can create a “moral hazard” in which the insured slack off or take foolish risks, counting on the insurer to bail them out if they fail.

The rise of inequality in wealthy nations that began around 1980 is the development that inspired the claim that life has gotten worse for everyone but the richest.

A “second industrial revolution” driven by electronic technologies replayed the Kuznets rise by creating a demand for highly skilled professionals, who pulled away from the less educated at the same time that the jobs requiring less education were eliminated by automation. Globalization allowed workers in China, India, and elsewhere to underbid their American competitors in a worldwide labor market, and the domestic companies that failed to take advantage of these offshoring opportunities were outcompeted on price.

Declining inequality worldwide and increasing inequality within rich countries can be combined into a single graph, which pleasingly takes the shape of an elephant (figure 9-5).

The cliché about globalization is that it creates winners and losers, and the elephant curve displays them as peaks and valleys. It reveals that the winners include most of humanity. The elephant’s bulk (its body and head), which includes about seven-tenths of the world’s population, consists of the “emerging global middle class,” mainly in Asia. Over this period they saw cumulative gains of 40 to 60 percent in their real incomes. The nostrils at the tip of the trunk consist of the world’s richest one percent, who also saw their incomes soar.

Globalization’s “losers”: the lower middle classes of the rich world, who gained less than 10 percent. These are the focus of the new concern about inequality: the “hollowed-out middle class,” the Trump supporters, the people globalization left behind.

The rich certainly have prospered more than anyone else, perhaps more than they should have, but the claim about everyone else is not accurate, for a number of reasons.

Most obviously, it’s false for the world as a whole: the majority of the human race has become much better off. The two-humped camel has become a one-humped dromedary.

Extreme poverty has plummeted and may disappear; and both international and global inequality coefficients are in decline. Now, it’s true that the world’s poor have gotten richer in part at the expense of the American lower middle class, and if I were an American politician I would not publicly say that the tradeoff was worth it. But as citizens of the world considering humanity as a whole, we have to say that the tradeoff is worth it.

Today’s discussions of inequality often compare the present era unfavorably with a golden age of well-paying, dignified, blue-collar jobs that have been made obsolete by automation and globalization.

What’s relevant to well-being is how much people earn, not how high they rank.

Stephen Rose divided the American population into classes using fixed milestones rather than quantiles. “Poor” was defined as an income of $0–$30,000 (in 2014 dollars) for a family of three, “lower middle class” as $30,000–$50,000, and so on.46 The study found that in absolute terms, Americans have been moving on up. Between 1979 and 2014, the percentage of poor Americans dropped from 24 to 20.

Upper middle class ($100,000–$350,000),
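A sketch of the fixed-milestone method may help, since it is the crux of Rose’s finding: with fixed inflation-adjusted dollar cutoffs, every class can shrink or grow as people get richer, whereas quintiles are 20 percent of the population by definition. The $30,000/$50,000 cutoffs and the $100,000–$350,000 upper-middle band come from the notes above; the middle-class ceiling and the sample incomes are my assumptions.

```python
# Classification by fixed milestones (2014 dollars), per Rose's method as
# described above. The poor / lower-middle / upper-middle cutoffs are from
# the text; the middle-class ceiling and sample incomes are assumptions.

BRACKETS = [
    (30_000,  "poor"),                # $0-$30,000
    (50_000,  "lower middle class"),  # $30,000-$50,000
    (100_000, "middle class"),        # assumed ceiling
    (350_000, "upper middle class"),  # $100,000-$350,000
]

def classify(income):
    for ceiling, label in BRACKETS:
        if income < ceiling:
            return label
    return "rich"

for income in [22_000, 48_000, 75_000, 140_000, 400_000]:  # hypothetical incomes
    print(f"${income:>7,}: {classify(income)}")
```

Under quantiles the “bottom fifth” can never shrink; under fixed milestones it can, which is how the data can show the poor dropping from 24 to 20 percent.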

The middle class is being hollowed out in part because so many Americans are becoming affluent. Inequality undoubtedly increased—the rich got richer faster than the poor and middle class got richer—but everyone (on average) got richer.

A third reason that rising inequality has not made the lower classes worse off is that low incomes have been mitigated by social transfers. For all its individualist ideology, the United States has a lot of redistribution. The income tax is still graduated, and low incomes are buffered by a “hidden welfare state” that includes unemployment insurance, Social Security, Medicare, Medicaid, Temporary Assistance for Needy Families, food stamps, and the Earned Income Tax Credit, a kind of negative income tax in which the government boosts the income of low earners. Put them together and America becomes far less unequal.

The United States has not gone as far as countries like Germany and Finland,

Some kind of welfare state may be found in all developed countries, and it reduces inequality even when it is hidden.50

The sociologist Christopher Jencks has calculated that when the benefits from the hidden welfare state are added up, and the cost of living is estimated in a way that takes into account the improving quality and falling price of consumer goods, the poverty rate has fallen in the past fifty years by more than three-quarters, and in 2013 stood at 4.8 percent.

The progress stagnated around the time of the Great Recession, but it picked up in 2015 and 2016 (not shown in the graph), when middle-class income reached a record high and the poverty rate showed its largest drop since 1999.54

The unsheltered homeless fell in number by almost a third between 2007 and 2015, despite the Great Recession.55

Income is just a means to an end: a way of paying for things that people need, want, and like, or as economists gracelessly call it, consumption. When poverty is defined in terms of what people consume rather than what they earn, we find that the American poverty rate has declined by ninety percent since 1960, from 30 percent of the population to just 3 percent. The two forces that have famously increased inequality in income have at the same time decreased inequality in what matters.

Together, technology and globalization have transformed what it means to be a poor person, at least in developed countries.

The poor used to be called the have-nots. In 2011, more than 95 percent of American households below the poverty line had electricity, running water, flush toilets, a refrigerator, a stove, and a color TV.58 (A century and a half before, the Rothschilds, Astors, and Vanderbilts had none of these things.)

The rich have gotten richer, but their lives haven’t gotten that much better. Warren Buffett may have more air conditioners than most people, or better ones, but by historical standards the fact that a majority of poor Americans even have an air conditioner is astonishing.

Though disposable income has increased, the pace of the increase is slow, and the resulting lack of consumer demand may be dragging down the economy as a whole.62 The hardships faced by one sector of the population—middle-aged, less-educated, non-urban white Americans—are real and tragic, manifested in higher rates of drug overdose (chapter 12) and suicide

Truck drivers, for example, make up the most common occupation in a majority of states, and self-driving vehicles may send them the way of scriveners, wheelwrights, and switchboard operators. Education, a major driver of economic mobility, is not keeping up with the demands of modern economies: tertiary education has soared in cost (defying the inexpensification of almost every other good), and in poor American neighborhoods, primary and secondary education are unconscionably substandard. Many parts of the American tax system are regressive, and money buys too much political influence.

Rather than tilting at inequality per se, it may be more constructive to target the specific problems lumped with it.65 An obvious priority is to boost the rate of economic growth, since it would increase everyone’s slice of the pie and provide more pie to redistribute.

The next step in the historic trend toward greater social spending may be a universal basic income (or its close relative, a negative income tax).

Despite its socialist aroma, the idea has been championed by economists (such as Milton Friedman), politicians (such as Richard Nixon), and states (such as Alaska) that are associated with the political right, and today analysts across the political spectrum are toying with it.

It could rationalize the kludgy patchwork of the hidden welfare state, and it could turn the slow-motion disaster of robots replacing workers into a horn of plenty. Many of the jobs that robots will take over are jobs that people don’t particularly enjoy, and the dividend in productivity, safety, and leisure could be a boon to humanity as long as it is widely shared.

Inequality is not the same as poverty, and it is not a fundamental dimension of human flourishing. In comparisons of well-being across countries, it pales in importance next to overall wealth. An increase in inequality is not necessarily bad: as societies escape from universal poverty, they are bound to become more unequal, and the uneven surge may be repeated when a society discovers new sources of wealth.

THE ENVIRONMENT

The key idea is that environmental problems, like other problems, are solvable, given the right knowledge.

Beginning in the 1960s, the environmental movement grew out of scientific knowledge (from ecology, public health, and earth and atmospheric sciences) and a Romantic reverence for nature.

In this chapter I will present a newer conception of environmentalism which shares the goal of protecting the air and water, species, and ecosystems but is grounded in Enlightenment optimism rather than Romantic declinism.

It goes by many names: Ecomodernism, Ecopragmatism, Earth Optimism, Enlightenment Environmentalism, or Humanistic Environmentalism.3

Ecomodernism begins with the realization that some degree of pollution is an inescapable consequence of the Second Law of Thermodynamics. When people use energy to create a zone of structure in their bodies and homes, they must increase entropy elsewhere in the environment in the form of waste, pollution, and other forms of disorder.

When native peoples first set foot in an ecosystem, they typically hunted large animals to extinction, and often burned and cleared vast swaths of forest.

When humans took up farming, they became more disruptive still.

A second realization of the ecomodernist movement is that industrialization has been good for humanity.8 It has fed billions, doubled life spans, slashed extreme poverty, and, by replacing muscle with machinery, made it easier to end slavery, emancipate women, and educate children (chapters 7, 15, and 17). It has allowed people to read at night, live where they want, stay warm in winter, see the world, and multiply human contact. Any costs in pollution and habitat loss have to be weighed against these gifts.

The third premise is that the tradeoff that pits human well-being against environmental damage can be renegotiated by technology. How to enjoy more calories, lumens, BTUs, bits, and miles with less pollution and land is itself a technological problem, and one that the world is increasingly solving.

Figure 10-1 shows that the world population growth rate peaked at 2.1 percent a year in 1962, fell to 1.2 percent by 2010, and will probably fall to less than 0.5 percent by 2050 and be close to zero around 2070, when the population is projected to level off and then decline.

The other scare from the 1960s was that the world would run out of resources. But resources just refuse to run out. The 1980s came and went without the famines that were supposed to starve tens of millions of Americans and billions of people worldwide. Then the year 1992 passed and, contrary to projections from the 1972 bestseller The Limits to Growth and similar philippics, the world did not exhaust its aluminum, copper, chromium, gold, nickel, tin, tungsten, or zinc.

From the 1970s to the early 2000s newsmagazines periodically illustrated cover stories on the world’s oil supply with a gas gauge pointing to Empty. In 2013 The Atlantic ran a cover story about the fracking revolution entitled “We Will Never Run Out of Oil.”

And the Rare Earths War? In reality, when China squeezed its exports in 2010 (not because of shortages but as a geopolitical and mercantilist weapon), other countries started extracting rare earths from their own mines, recycling them from industrial waste, and re-engineering products so they no longer needed them.15

Instead, as the most easily extracted supply of a resource becomes scarcer, its price rises, encouraging people to conserve it, get at the less accessible deposits, or find cheaper and more plentiful substitutes.

In reality, societies have always abandoned a resource for a better one long before the old one was exhausted.

In The Big Ratchet: How Humanity Thrives in the Face of Natural Crisis, the geographer Ruth DeFries describes the sequence as “ratchet-hatchet-pivot.” People discover a way of growing more food, and the population ratchets upward. The method fails to keep up with the demand or develops unpleasant side effects, and the hatchet falls. People then pivot to a new method.

Figure 10-3 shows that since 1970, when the Environmental Protection Agency was established, the United States has slashed its emissions of five air pollutants by almost two-thirds. Over the same period, the population grew by more than 40 percent, and those people drove twice as many miles and became two and a half times richer. Energy use has leveled off, and even carbon dioxide emissions have turned a corner, a point to which we will return.
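A back-of-the-envelope check shows how strong that decoupling is, treating the quoted figures as rough multipliers (the exact baselines are not in the notes):

```python
# Rough decoupling arithmetic from the figures quoted above: emissions down
# about two-thirds since 1970, population up ~40%, total miles driven doubled,
# income per person up 2.5x. Multipliers are approximations, not exact data.

emissions  = 1/3               # ~one-third of the 1970 level
population = 1.4
miles      = 2.0               # total vehicle-miles
gdp        = population * 2.5  # population x income per person = 3.5x

print(f"emissions per vehicle-mile: {emissions / miles:.2f}x the 1970 level")
print(f"emissions per unit of GDP:  {emissions / gdp:.2f}x the 1970 level")
# ~0.17x and ~0.10x: roughly a 6-fold cleanup per mile driven and a
# 10-fold cleanup per unit of economic output.
```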

These reductions mainly reflect gains in efficiency and emission control.

Though tropical forests are still, alarmingly, being cut down, between the middle of the 20th century and the turn of the 21st the rate fell by two-thirds (figure 10-4).24 Deforestation of the world’s largest tropical forest, the Amazon, peaked in 1995, and from 2004 to 2013 the rate fell by four-fifths.25

Thanks to habitat protection and targeted conservation efforts, many beloved species have been pulled from the brink of extinction, including albatrosses, condors, manatees, oryxes, pandas, rhinoceroses, Tasmanian devils, and tigers; according to the ecologist Stuart Pimm, the rate of bird extinctions has been reduced by 75 percent.31 Though many species remain in precarious straits, a number of ecologists and paleontologists believe that the claim that humans are causing a mass extinction like the Permian and Cretaceous is hyperbolic.

One key is to decouple productivity from resources: to get more human benefit from less matter and energy. This puts a premium on density.36 As agriculture becomes more intensive by growing crops that are bred or engineered to produce more protein, calories, and fiber with less land, water, and fertilizer, farmland is spared, and it can morph back to natural habitats. (Ecomodernists point out that organic farming, which needs far more land to produce a kilogram of food, is neither green nor sustainable.)

All these processes are helped along by another friend of the Earth, dematerialization. Progress in technology allows us to do more with less.

Digital technology is also dematerializing the world by enabling the sharing economy, so that cars, tools, and bedrooms needn’t be made in huge numbers that sit around unused most of the time.

Hipsterization leads young people to distinguish themselves by their tastes in beer, coffee, and music.

Just as we must not accept the narrative that humanity inexorably despoils every part of the environment, we must not accept the narrative that every part of the environment will rebound under our current practices.

If the emission of greenhouse gases continues, the Earth’s average temperature will rise to at least 1.5°C (2.7°F) above the preindustrial level by the end of the 21st century, and perhaps to 4°C (7.2°F) above that level or more. That will cause more frequent and more severe heat waves, more floods in wet regions, more droughts in dry regions, heavier storms, more severe hurricanes, lower crop yields in warm regions, the extinction of more species, the loss of coral reefs (because the oceans will be both warmer and more acidic), and an average rise in sea level of between 0.7 and 1.2 meters (2 and 4 feet) from both the melting of land ice and the expansion of seawater. (Sea level has already risen almost eight inches since 1870, and the rate of the rise appears to be accelerating.) Low-lying areas would be flooded, island nations would disappear beneath the waves, large stretches of farmland would no longer be arable, and millions of people would be displaced. The effects could get still worse in the 22nd century and beyond, and in theory could trigger upheavals such as a diversion of the Gulf Stream (which would turn Europe into Siberia) or a collapse of Antarctic ice sheets.

A recent survey found that exactly four out of 69,406 authors of peer-reviewed articles in the scientific literature rejected the hypothesis of anthropogenic global warming, and that “the peer-reviewed literature contains no convincing evidence against [the hypothesis].”

Nonetheless, a movement within the American political right, heavily underwritten by fossil fuel interests, has prosecuted a fanatical and mendacious campaign to deny that greenhouse gases are warming the planet.47

The problem is that carbon emissions are a classic public goods game, also known as a Tragedy of the Commons. People benefit from everyone else’s sacrifices and suffer from their own, so everyone has an incentive to be a free rider and let everyone else make the sacrifice, and everyone suffers. A standard remedy for public goods dilemmas is a coercive authority that can punish free riders. But any government with the totalitarian power to abolish artistic pottery is unlikely to restrict that power to maximizing the common good. One can, alternatively, daydream
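The free-rider logic can be seen in a minimal public goods game. This is the standard textbook setup, not anything from the book: each of n players holds one unit; whatever is contributed is multiplied by r (with 1 < r < n) and split equally among everyone.

```python
# Minimal public goods game: each of n players holds 1 unit; contributions
# are multiplied by r and shared equally. Standard textbook setup, used here
# only to make the free-rider incentive concrete.

def payoff(contributes, others_contributing, n=4, r=2.0):
    pot = (others_contributing + (1 if contributes else 0)) * r
    kept = 0 if contributes else 1
    return kept + pot / n

for k in range(4):  # k = number of *other* players who contribute
    print(f"{k} others contribute: defect = {payoff(False, k):.2f}, "
          f"contribute = {payoff(True, k):.2f}")
# Defecting pays 0.50 more no matter what the others do (each unit you give
# returns only r/n = 0.5 to you), yet all-contribute (2.00 each) beats
# all-defect (1.00 each): individually rational, collectively worse.
```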

Most important, the sacrifice needed to bring carbon emissions down by half and then to zero is far greater than forgoing jewelry: it would require forgoing electricity, heating, cement, steel, paper, travel, and affordable food and clothing.

Escaping from poverty requires abundant energy.

Economic progress is an imperative in rich and poor countries alike precisely because it will be needed to adapt to the climate change that does occur. Thanks in good part to prosperity, humanity has been getting healthier (chapters 5 and 6), better fed (chapter 7), more peaceful (chapter 11), and better protected from natural hazards and disasters (chapter 12). These advances have made humanity more resilient to natural and human-made threats: disease outbreaks don’t become pandemics, crop failures in one region are alleviated by surpluses in another, local skirmishes are defused before they erupt into war, populations are better protected against storms, floods, and droughts.

The enlightened response to climate change is to figure out how to get the most energy with the least emission of greenhouse gases. There is, to be sure, a tragic view

Ausubel notes that the modern world has been progressively decarbonizing.

Annual CO2 emissions may have leveled off for the time being at around 36 billion tons, but that’s still a lot of CO2 added to the atmosphere every year, and there is no sign of the precipitous plunge we would need to stave off the harmful outcomes. Instead, decarbonization needs to be helped along with pushes from policy and technology, an idea called deep decarbonization.73

A second key to deep decarbonization brings up an inconvenient truth for the traditional Green movement: nuclear power is the world’s most abundant and scalable carbon-free energy source.

Nuclear energy, in contrast, represents the ultimate in density,

It’s often said that with climate change, those who know the most are the most frightened, but with nuclear power, those who know the most are the least frightened.

“The French have two kinds of reactors and hundreds of kinds of cheese, whereas in the United States the figures are reversed.”89

The benefits of advanced nuclear energy are incalculable.

An energy source that is cheaper, denser, and cleaner than fossil fuels would sell itself, requiring no herculean political will or international cooperation.92 It would not just mitigate climate change but furnish manifold other gifts. People in the developing world could skip the middle rungs in the energy ladder, bringing their standard of living up to that of the West without choking on coal smoke. Affordable desalination of seawater, an energy-ravenous process, could irrigate farms, supply drinking water, and, by reducing the need for both surface water and hydro power, allow dams to be dismantled, restoring the flow of rivers to lakes and seas and revivifying entire ecosystems.

A further step, removing the CO2 that is already in the air, is critical for a simple reason. Even if greenhouse gas emissions are halved by 2050 and zeroed by 2075, the world would still be on course for risky warming, because the CO2 already emitted will remain in the atmosphere for a very long time. It’s not enough to stop thickening the greenhouse; at some point we have to dismantle it.

The obvious way to remove CO2 from the air, then, is to recruit as many carbon-hungry plants as we can to help us. We can do this by encouraging the transition from deforestation to reforestation and afforestation (planting new forests), by reversing tillage and wetland destruction, and by restoring coastal and marine habitats.

Will any of this happen? The obstacles are unnerving; they include the world’s growing thirst for energy, the convenience of fossil fuels with their vast infrastructure, the denial of the problem by energy corporations and the political right, the hostility to technological solutions from traditional Greens and the climate justice left, and the tragedy of the carbon commons.

Despite a half-century of panic, humanity is not on an irrevocable path to ecological suicide.

PEACE

In The Better Angels of Our Nature I showed that, as of the first decade of the 21st century, every objective measure of violence had been in decline.

For most of human history, war was the natural pastime of governments, peace a mere respite between wars.2

(Great powers are the handful of states and empires that can project force beyond their borders, that treat each other as peers, and that collectively control a majority of the world’s military resources.)

It’s not just the great powers that have stopped fighting each other. War in the classic sense of an armed conflict between the uniformed armies of two nation-states appears to be obsolescent.

The world’s wars are now concentrated almost exclusively in a zone stretching from Nigeria to Pakistan, an area containing less than a sixth of the world’s population. Those wars are civil wars, which the Uppsala Conflict Data Program (UCDP) defines as an armed conflict between a government and an organized force which verifiably kills at least a thousand soldiers and civilians a year.

This recent uptick in the number of wars is driven mainly by conflicts that have a radical Islamist group on one side (eight of the eleven in 2015, ten of the twelve in 2016); without them, there would have been no increase in the number of wars at all. Perhaps not coincidentally, two of the wars in 2014 and 2015 were fueled by another counter-Enlightenment ideology, Russian nationalism, which drove separatist forces, backed by Vladimir Putin, to battle the government of Ukraine in two of its provinces.

The worst of the ongoing wars is in Syria,

“Wars begin in the minds of men.” And indeed we find that the turn away from war consists in more than just a reduction in wars and war deaths; it also may be seen in nations’ preparations for war. The prevalence of conscription, the size of armed forces, and the level of global military spending as a percentage of GDP have all decreased in recent decades.

Kant’s famous essay “Perpetual Peace.”19

As we saw in chapter 1, many Enlightenment thinkers advanced the theory of gentle commerce, according to which international trade should make war less appealing. Sure enough, trade as a proportion of GDP shot up in the postwar era, and quantitative analyses have confirmed that trading countries are less likely to go to war, holding all else constant.21

Another brainchild of the Enlightenment is the theory that democratic government serves as a brake on glory-drunk leaders who would drag their countries into pointless wars. Starting in the 1970s, and accelerating

The data support the Democratic Peace theory, in which pairs of countries that are more democratic are less likely to confront each other in militarized disputes.22

Yet the biggest single change in the international order is an idea we seldom appreciate today: war is illegal.

That cannot happen today: the world’s nations have committed themselves to not waging war except in self-defense or with the approval of the United Nations Security Council. States are immortal, borders are grandfathered in, and any country that indulges in a war of conquest can expect opprobrium, not acquiescence, from the rest.

War “enlarges the mind of a people and raises their character,” wrote Alexis de Tocqueville. It is “life itself,” said Émile Zola; “the foundation of all the arts . . . [and] the high virtues and faculties of man,” wrote John Ruskin.

Romantic militarism sometimes merged with romantic nationalism, which exalted the language, culture, homeland, and racial makeup of an ethnic group—the ethos of blood and soil—and held that a nation could fulfill its destiny only as an ethnically cleansed sovereign state.

But perhaps the biggest impetus to romantic militarism was declinism, the revulsion among intellectuals at the thought that ordinary people seemed to be enjoying their lives in peace and prosperity.34 Cultural pessimism became particularly entrenched in Germany through the influence of Schopenhauer, Nietzsche, Jacob Burckhardt, Georg Simmel, and Oswald Spengler, author in 1918–23 of The Decline of the West. (We will return to these ideas in chapter 23.) To this day, historians of World War I puzzle over why England and Germany, countries with a lot in common—Western, Christian, industrialized, affluent—would choose to hold a pointless bloodbath. The reasons are many and tangled, but insofar as they involve ideology, Germans before World War I “saw themselves as outside European or Western civilization,” as Arthur Herman points out.35 In particular, they thought they were bravely resisting the creep of a liberal, democratic, commercial culture that had been sapping the vitality of the West since the Enlightenment, with the complicity of Britain and the United States. Only from the ashes of a redemptive cataclysm, many thought, could a new heroic order arise.

Worldwide, injuries account for about a tenth of all deaths, outnumbering the victims of AIDS, malaria, and tuberculosis combined, and are responsible for 11 percent of the years lost to death and disability.

Though lethal injuries are a major scourge of human life, bringing the numbers down is not a sexy cause. The inventor of the highway guard rail did not get a Nobel Prize, nor are humanitarian awards given to designers of clearer prescription drug labels.

More people are killed in homicides than wars.

But in a sweeping historical development that the German sociologist Norbert Elias called the Civilizing Process, Western Europeans, starting in the 14th century, began to resolve their disputes in less violent ways.6 Elias credited the change to the emergence of centralized kingdoms out of the medieval patchwork of baronies and duchies, so that the endemic feuding, brigandage, and warlording were tamed by a “king’s peace.” Then, in the 19th century, criminal justice systems were further professionalized by municipal police forces and a more deliberative court system.

People became enmeshed in networks of commercial and occupational obligations laid out in legal and bureaucratic rules. Their norms for everyday conduct shifted from a macho culture of honor, in which affronts had to be answered with violence, to a gentlemanly culture of dignity, in which status was won by displays of propriety and self-control.

(Homicide rates are the most reliable indicator of violent crime across different times and places because a corpse is always hard to overlook, and rates of homicide correlate with rates of other violent crimes like robbery, assault, and rape.)

Violent crime is a solvable problem.

Half of the world’s homicides are committed in just twenty-three countries containing about a tenth of humanity, and a quarter are committed in just four: Brazil (with a homicide rate of 25.2 per 100,000 per year), Colombia (25.9), Mexico (12.9), and Venezuela. (The world’s two murder zones—northern Latin America and southern sub-Saharan Africa—are distinct from its war zones, which stretch from Nigeria through the Middle East into Pakistan.) The lopsidedness continues down the fractal scale. Within a country, most of the homicides cluster in a few cities, such as Caracas (120 per 100,000) and San Pedro Sula (in Honduras, 187). Within cities, the homicides cluster in a few neighborhoods; within neighborhoods, they cluster in a few blocks; and within blocks, many are carried out by a few individuals.17 In my hometown of Boston, 70 percent of the shootings take place in 5 percent of the city, and half the shootings were perpetrated by one percent of the youths.18

High rates of homicide can be brought down quickly.

Combine the cockeyed distribution of violent crime with the proven possibility that high rates of violent crime can be brought down quickly, and the math is straightforward: a 50 percent reduction in thirty years is not just practicable but almost conservative.
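The “straightforward math” can be spelled out: a 50 percent reduction in thirty years requires a compound annual decline r satisfying (1 − r)^30 = 0.5.

```python
# What "halve the homicide rate in thirty years" requires as a compound
# annual decline: solve (1 - r)**30 = 0.5 for r.

annual_decline = 1 - 0.5 ** (1 / 30)
print(f"required decline: {annual_decline:.1%} per year")  # ~2.3%
```

About 2.3 percent a year, which is modest next to the quick drops described above.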

This “Hobbesian trap,” as it is sometimes called, can easily set off cycles of feuding and vendetta: you have to be at least as violent as your adversaries lest you become their doormat. The largest category of homicide, and the one that varies the most across times and places, consists of confrontations between loosely acquainted young men over turf, reputation, or revenge. A disinterested third party with a monopoly on the legitimate use of force—that is, a state with a police force and judiciary—can nip this cycle in the bud. Not only does it disincentivize aggressors by the threat of punishment, but it reassures everyone else that the aggressors are disincentivized and thereby relieves them of the need for belligerent self-defense.

Here is Eisner’s one-sentence summary of how to halve the homicide rate within three decades: “An effective rule of law, based on legitimate law enforcement, victim protection, swift and fair adjudication, moderate punishment, and humane prisons is critical to sustainable reductions in lethal violence.”32 The adjectives effective, legitimate, swift, fair, moderate, and humane differentiate his advice from the get-tough-on-crime rhetoric favored by right-wing politicians.

Together with the presence of law enforcement, the legitimacy of the regime appears to matter, because people not only respect legitimate authority themselves but factor in the degree to which they expect their potential adversaries to respect it.

Thomas Abt and Christopher Winship

They concluded that the single most effective tactic for reducing violent crime is focused deterrence. A “laser-like focus” must first be directed on the neighborhoods where crime is rampant or even just starting to creep up, with the “hot spots” identified by data gathered in real time. It must be further beamed at the individuals and gangs who are picking on victims or roaring for a fight. And it must deliver a simple and concrete message about the behavior that is expected of them, like “Stop shooting and we will help you, keep shooting and we will put you in prison.” Getting the message through, and then enforcing it, depends on the cooperation of other members of the community—the store owners, preachers, coaches, probation officers, and relatives.

Also provably effective is cognitive behavioral therapy.

It is a set of protocols designed to override the habits of thought and behavior that lead to criminal acts.

These include therapies that teach strategies of self-control. Troublemakers also have narcissistic and sociopathic thought patterns, such as that they are always in the right, that they are entitled to universal deference, that disagreements are personal insults, and that other people have no feelings or interests.

Together with anarchy, impulsiveness, and opportunity, a major trigger of criminal violence is contraband.

Violent crime exploded in the United States when alcohol was prohibited in the 1920s and when crack cocaine became popular in the late 1980s, and it is rampant in Latin American and Caribbean countries in which cocaine, heroin, and marijuana are trafficked today. Drug-fueled violence remains an unsolved international problem.

“Aggressive drug enforcement yields little anti-drug benefits and generally increases violence,” while “drug courts and treatment have a long history of effectiveness.”

Neither right-to-carry laws favored by the right, nor bans and restrictions favored by the left, have been shown to make much difference—though there is much we don’t know, and political and practical impediments to finding out more.39

In 1965 a young lawyer named Ralph Nader published Unsafe at Any Speed, a j’accuse of the industry for neglecting safety in automotive design. Soon after, the National Highway Traffic Safety Administration was established and legislation was passed requiring new cars to be equipped with a number of safety features. Yet the graph shows that steeper reductions came before the activism and the legislation, and the auto industry was sometimes ahead of its customers and regulators.

In 1980 Mothers Against Drunk Driving was formed, and they lobbied for higher drinking ages, lowered legal blood alcohol levels, and the stigmatization of drunk driving, which popular culture had treated as a source of comedy (such as in the movies North by Northwest and Arthur).

The Brooklyn Dodgers, before they moved to Los Angeles, had been named after the city’s pedestrians, famous for their skill at darting out of the way of hurtling streetcars.

When robotic cars are ubiquitous, they could save more than a million lives a year, becoming one of the greatest gifts to human life since the invention of antibiotics.

After car crashes, the likeliest cause of accidental death consists of falls, followed by drownings and fires, followed by poisonings.

Figure 12-6 shows an apparent exception to the conquest of accidents: the category called “Poison (solid or liquid).” The steep rise starting in the 1990s is anomalous in a society that is increasingly latched,

Then I realized that the category of accidental poisonings includes drug overdoses.

In 2013, 98 percent of the “Poison” deaths were from drugs (92 percent) or alcohol (6 percent), and almost all the others were from gases and vapors (mostly carbon monoxide). Household and occupational hazards like solvents, detergents, insecticides, and lighter fluid were responsible for less than a half of one percent of the poisoning deaths, and would scrape the bottom of figure 12-6

The curve begins to rise in the psychedelic 1960s, jerks up again during the crack cocaine epidemic of the 1980s, and blasts off during the far graver epidemic of opioid addiction in the 21st century. Starting in the 1990s, doctors overprescribed synthetic opioid painkillers like oxycodone, hydrocodone, and fentanyl, which are not just addictive but gateway drugs to heroin.

A sign that the measures might be effective is that the number of overdoses of prescription opioids (though not of illicit heroin and fentanyl) peaked in 2010 and may be starting to come down.56

The peak age of poisoning deaths in 2011 was around fifty, up from the low forties in 2003, the late thirties in 1993, the early thirties in 1983, and the early twenties in 1973.57 Do the subtractions and you find that in every decade it’s the members of the generation born between 1953 and 1963 who are drugging themselves to death. Despite perennial panic about teenagers, today’s kids are, relatively speaking, all right, or at least better. According to a major longitudinal study of teenagers called Monitoring the Future, high schoolers’ use of alcohol, cigarettes, and drugs (other than marijuana and vaping) has dropped to the lowest levels since the survey began in 1976.58
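“Do the subtractions” can be made explicit. The single-number ages below are my readings of the decades quoted above (“early twenties,” “low forties,” and so on):

```python
# Peak age of poisoning deaths by year, minus the year, gives the birth
# cohort. Ages are single-number readings of the decades quoted above.

peaks = {1973: 22, 1983: 31, 1993: 38, 2003: 42, 2011: 50}
for year, age in peaks.items():
    print(f"{year}: peak age ~{age} -> born ~{year - age}")
# Every decade points at roughly the same people, born in the 1950s and
# early 1960s, matching the 1953-1963 cohort named above.
```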

Humanity’s conquest of everyday danger is a peculiarly unappreciated form of progress.

Just as people tend not to see accidents as atrocities (at least when they are not the victims), they don’t see gains in safety as moral triumphs, if they are aware of them at all. Yet the sparing of millions of lives, and the reduction of infirmity, disfigurement, and suffering on a massive scale, deserve our gratitude and demand an explanation. That is true even of murder, the most moralized of acts, whose rate has plummeted for reasons that defy standard narratives.

TERRORISM

It’s because terrorism, as it is now defined, is largely a phenomenon of war, and wars no longer take place in the United States or Western Europe.

A majority of the world’s terrorist deaths take place in zones of civil war (including 8,831 in Iraq, 6,208 in Afghanistan, 5,288 in Nigeria, 3,916 in Syria, 1,606 in Pakistan, and 689 in Libya), and many of these are double-counted as war deaths, because “terrorism” during a civil war is simply a war crime—a deliberate attack on civilians—committed by a group other than the government.

About twice as many Americans have been killed since 1990 by right-wing extremists as by Islamist terror groups.

Modern terrorism is a by-product of the vast reach of the media.

Terrorists kill innocent people, especially in circumstances in which readers of the news can imagine themselves. News media gobble the bait and give the atrocities saturation coverage. The Availability heuristic kicks in, and people become stricken with a fear that is unrelated to the level of danger.

The legal scholar Adam Lankford has analyzed the motives of the overlapping categories of suicide terrorists, rampage shooters, and hate crime killers, including both the self-radicalized lone wolves and the bomb fodder recruited by terrorist masterminds.14 The killers tend to be loners and losers, many with untreated mental illness, who are consumed with resentment and fantasize about revenge and recognition. Some fused their bitterness with Islamist ideology, others with a nebulous cause such as “starting a race war” or “a revolution against the federal government, taxes, and anti-gun laws.” Killing a lot of people offered them the chance to be a somebody, even if only in the anticipation, and going out in a blaze of glory meant that they didn’t have to deal with the irksome aftermath of being a mass murderer.

The historian Yuval Harari notes that terrorism is the opposite of military action, which tries to damage the enemy’s ability to retaliate and prevail.16

From their position of weakness, Harari notes, what terrorists seek to accomplish is not damage but theater.

Harari points out that in the Middle Ages, every sector of society retained a private militia—aristocrats, guilds, towns, even churches and monasteries—and they secured their interests by force: “If in 1150 a few Muslim extremists had murdered a handful of civilians in Jerusalem, demanding that the Crusaders leave the Holy Land, the reaction would have been ridicule rather than terror. If you wanted to be taken seriously, you should have at least gained control of a fortified castle or two.”

The sociologist Eric Madfis has recommended a policy for rampage shootings of “Don’t Name Them, Don’t Show Them, but Report Everything Else,” based on a policy for juvenile shooters already in effect in Canada and on other strategies of calculated media self-restraint.

DEMOCRACY

Humanity has tried to steer a course between the violence of anarchy and the violence of tyranny.

Early governments pacified the people they ruled, reducing internecine violence, but imposed a reign of terror that included slavery, harems, human sacrifice, summary executions, and the torture and mutilation of dissidents and deviants.

Chaos is deadlier than tyranny. More of these multicides result from the breakdown of authority than from the exercise of authority.

One can think of democracy as a form of government that threads the needle, exerting just enough force to prevent people from preying on each other without preying on the people itself.

Democracy is a major contributor to human flourishing, and not only in itself: democracies also have higher rates of economic growth, fewer wars and genocides, healthier and better-educated citizens, and virtually no famines.4 If the world has become more democratic over time, that is progress.

The political scientist Samuel Huntington organized the history of democratization into three waves.5 The first swelled in the 19th century, when that great Enlightenment experiment, American constitutional democracy with its checks on government power, seemed to be working.

With the defeat of fascism in World War II, a second wave gathered force as colonies gained independence from their European overlords, pushing the number of recognized democracies up to thirty-six by 1962.

The West German chancellor Willy Brandt lamented that “Western Europe has only 20 or 30 more years of democracy left in it; after that it will slide, engineless and rudderless, under the surrounding sea of dictatorship.”

In the third wave, military and fascist governments fell in southern Europe (Greece and Portugal in 1974, Spain in 1975), Latin America (including Argentina in 1983, Brazil in 1985, and Chile in 1990), and Asia (including Taiwan and the Philippines around 1986, South Korea around 1987, and Indonesia in 1998). The Berlin Wall was torn down in 1989.

In 1989 the political scientist Francis Fukuyama published a famous essay in which he proposed that liberal democracy represented “the end of history,” not because nothing would ever happen again but because the world was coming to a consensus over the humanly best form of governance and no longer had to fight over it.8

Then came the rise of alternatives to democracy, such as theocracy in the Muslim world and authoritarian capitalism in China. Democracies themselves appeared to be backsliding into authoritarianism, with populist victories in Poland and Hungary and power grabs by Recep Erdogan in Turkey and Vladimir Putin in Russia (the return of the sultan and the czar).

After swelling in the 1990s, this third wave spilled into the 21st century in a rainbow of “color revolutions” including Croatia (2000), Serbia (2000), Georgia (2003), Ukraine (2004), and Kyrgyzstan (2005), bringing the total at the start of the Obama presidency in 2009 to 87.14

As of 2015, the most recent year in the dataset, the total stood at 103.

It is true that stable, top-shelf democracy is likelier to be found in countries that are richer and more highly educated.17 But governments that are more democratic than not are a motley collection: they are entrenched in most of Latin America, in floridly multiethnic India, in Muslim Malaysia, Indonesia, Niger, and Kosovo, in fourteen countries in sub-Saharan Africa (including Namibia, Senegal, and Benin), and in poor countries elsewhere such as Nepal, Timor-Leste, and most of the Caribbean.18

Political scientists are repeatedly astonished by the shallowness and incoherence of people’s political beliefs, and by the tenuous connection of their preferences to their votes and to the behavior of their representatives.21 Most voters are ignorant not just of current policy options but of basic facts, such as what the major branches of government are, who the United States fought in World War II, and which countries have used nuclear weapons. Their opinions flip depending on how a question is worded: they say that the government spends too much on “welfare” but too little on “assistance to the poor,” and that it should “use military force” but not “go to war.” When they do formulate a preference, they commonly vote for a candidate with the opposite one. But it hardly matters, because once in office politicians vote the positions of their party regardless of the opinions of their constituents.

Many political scientists have concluded that most people correctly recognize that their votes are astronomically unlikely to affect the outcome of an election, and so they prioritize work, family, and leisure over educating themselves about politics and calibrating their votes. They use the franchise as a form of self-expression: they vote for candidates who they think are like them and stand for their kind of people.

Also, autocrats can learn to use elections to their advantage. The latest fashion in dictatorship has been called the competitive, electoral, kleptocratic, statist, or patronal authoritarian regime.22 (Putin’s Russia is the prototype.) The incumbents use the formidable resources of the state to harass the opposition, set up fake opposition parties, use state-controlled media to spread congenial narratives, manipulate electoral rules, tilt voter registration, and jigger the elections themselves. (Patronal authoritarians, for all that, are not invulnerable—the color revolutions sent several of them packing.)

In his 1945 book The Open Society and Its Enemies, the philosopher Karl Popper argued that democracy should be understood not as the answer to the question “Who should rule?” (namely, “The People”), but as a solution to the problem of how to dismiss bad leadership without bloodshed.

Steven Levitsky and Lucan Way point out, “State failure brings violence and instability; it almost never brings democratization.”27

The freedom to complain rests on an assurance that the government won’t punish or silence the complainer. The front line in democratization, then, is constraining the government from abusing its monopoly on force to brutalize its uppity citizens.

Has the rise in democracy brought a rise in human rights, or are dictators just using elections and other democratic trappings to cover their abuses with a smiley-face?

The abolition of capital punishment has gone global (figure 14-3), and today the death penalty is on death row.

We are seeing a moral principle—Life is sacred, so killing is onerous—become distributed across a wide range of actors and institutions that have to cooperate to make the death penalty possible. As these actors and institutions implement the principle more consistently and thoroughly, they inexorably push the country away from the impulse to avenge a life with a life.

EQUAL RIGHTS

First Lady Michelle Obama in a speech at the Democratic National Convention in 2016: “I wake up every morning in a house that was built by slaves, and I watch my daughters, two beautiful, intelligent black young women, playing with their dogs on the White House lawn.”

A string of highly publicized killings by American police officers of unarmed African American suspects, some of them caught on smartphone videos, has led to a sense that the country is suffering an epidemic of racist attacks by police on black men. Media coverage of athletes who have assaulted their wives or girlfriends, and of episodes of rape on college campuses, has suggested to many that we are undergoing a surge of violence against women.

The data suggest that the number of police shootings has decreased, not increased, in recent decades (even as the ones that do occur are captured on video), and three independent analyses have found that a black suspect is no more likely than a white suspect to be killed by the police.6 (American police shoot too many people, but it’s not primarily a racial issue.)

The Pew Research Center has probed Americans’ opinions on race, gender, and sexual orientation over the past quarter century, and has reported that these attitudes have undergone a “fundamental shift” toward tolerance and respect of rights, with formerly widespread prejudices sinking into oblivion.

Other surveys show the same shifts.8 Not only has the American population become more liberal, but each generational cohort is more liberal than the one born before it.

Millennials (those born after 1980), who are even less prejudiced than the national average, tell us which way the country is going.10

These trends could reflect a decline in prejudice or simply a decline in the social acceptability of prejudice, with fewer people willing to confess their disreputable attitudes to a pollster.

And contrary to the fear that the rise of Trump reflects (or emboldens) prejudice, the curves continue their decline through his period of notoriety in 2015–2016 and inauguration in early 2017.

Stephens-Davidowitz has pointed out to me that these curves probably underestimate the decline in prejudice because of a shift in who’s Googling.

Stephens-Davidowitz confirmed that bigoted searches tended to come from regions with older and less-educated populations. Compared with the country as a whole, retirement communities are seven times as likely to search for “nigger jokes” and thirty times as likely to search for “fag jokes.”

These searches confirmed that racists may be a dwindling breed: someone who searches for “nigger” is likely to search for other topics that appeal to senior citizens, such as “social security” and “Frank Sinatra.”

Private prejudice is declining with time and declining with youth, which means that we can expect it to decline still further as aging bigots cede the stage to less prejudiced cohorts.

Until they do, these older and less-educated people (mainly white men) may not respect the benign taboos on racism, sexism, and homophobia that have become second nature to the mainstream, and may even dismiss them as “political correctness.”

Trump’s success, like that of right-wing populists in other Western countries, is better understood as the mobilization of an aggrieved and shrinking demographic in a polarized political landscape than as the sudden reversal of a century-long movement toward equal rights.

Hate crimes against Asian, Jewish, and white targets have declined as well. And despite claims that Islamophobia has become rampant in America, hate crimes targeting Muslims have shown little change other than a one-time rise following 9/11 and upticks following other Islamist terror attacks, such as the ones in Paris and San Bernardino in 2015.20

Women’s status, too, is ascendant.

Violence against women is best measured by victimization surveys, because they circumvent the problem of underreporting to the police; these instruments show that rates of rape and violence against wives and girlfriends have been sinking for decades and are now at a quarter or less of their peaks in the past.

No form of progress is inevitable, but the historical erosion of racism, sexism, and homophobia is more than a change in fashion.

Also, as people are forced to justify the way they treat other people, rather than dominating them out of instinctive, religious, or historical inertia, any justification for prejudicial treatment will crumble under scrutiny.

In his book Freedom Rising, the political scientist Christian Welzel (building on a collaboration with Ron Inglehart, Pippa Norris, and others) has proposed that the process of modernization has stimulated the rise of “emancipative values.”36 As societies shift from agrarian to industrial to informational, their citizens become less anxious about fending off enemies and other existential threats and more eager to express their ideals and to pursue opportunities in life. This shifts their values toward greater freedom for themselves and others. The transition is consistent with the psychologist Abraham Maslow’s theory of a hierarchy of needs from survival and safety to belonging, esteem, and self-actualization (and with Brecht’s “Grub first, then ethics”). People begin to prioritize freedom over security, diversity over uniformity, autonomy over authority, creativity over discipline, and individuality over conformity. Emancipative values may also be called liberal values, in the classical sense related to “liberty” and “liberation” (rather than the sense of political leftism).

The graph displays a historical trend that is seldom appreciated in the hurly-burly of political debate: for all the talk about right-wing backlashes and angry white men, the values of Western countries have been getting steadily more liberal (which, as we will see, is one of the reasons those men are so angry).

A critical discovery displayed in the graph is that the liberalization does not reflect a growing bulge of liberal young people who will backslide into conservatism as they get older.

The liberalization trends shown in figure 15-6 come from the Prius-driving, chai-sipping, kale-eating populations of post-industrial Western countries.

What is surprising, though, is that in every part of the world, people have become more liberal. A lot more liberal:

We’ve already seen that children the world over have become better off: they are less likely to enter the world motherless, die before their fifth birthday, or grow up stunted for lack of food.

Starting with influential treatises by John Locke in 1693 and Jean-Jacques Rousseau in 1762, childhood was reconceptualized.50 A carefree youth was now considered a human birthright. Play was an essential form of learning, and the early years of life shaped the adult and determined the future of society.

KNOWLEDGE

Homo sapiens, “knowing man,” is the species that uses information to resist the rot of entropy and the burdens of evolution.

In social science, correlation is not causation. Do better-educated countries get richer, or can richer countries afford more education? One way to cut the knot is to take advantage of the fact that a cause must precede its effect.
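That logic is essentially a cross-lagged correlation: does education at time t predict wealth later better than wealth at time t predicts education later? Here is a toy sketch on synthetic data, where education drives wealth five steps later; it shows only the shape of the method, not the actual analyses the book cites.

```python
# Toy illustration of the "cause precedes effect" logic: cross-lagged
# correlation on synthetic data in which education drives later wealth.

import random

random.seed(0)
T, LAG = 200, 5
education = [random.gauss(0, 1) for _ in range(T)]
# wealth responds to education LAG steps earlier, plus noise
wealth = [education[t - LAG] + random.gauss(0, 0.5) if t >= LAG
          else random.gauss(0, 1) for t in range(T)]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"education -> later wealth: {corr(education[:-LAG], wealth[LAG:]):.2f}")
print(f"wealth -> later education: {corr(wealth[:-LAG], education[LAG:]):.2f}")
# The first correlation is large, the second near zero: the asymmetry
# points to education as the leading variable.
```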

Better education today makes a country more democratic and peaceful tomorrow.

Better-educated girls grow up to have fewer babies, and so are less likely to beget youth bulges with their surfeit of troublemaking young men.9 And better-educated countries are richer, and as we saw in chapters 11 and 14, richer countries tend to be more peaceful and democratic.

So much changes when you get an education! You unlearn dangerous superstitions, such as that leaders rule by divine right, or that people who don’t look like you are less than human.

Studies of the effects of education confirm that educated people really are more enlightened. They are less racist, sexist, xenophobic, homophobic, and authoritarian.10 They place a higher value on imagination, independence, and free speech.11 They are more likely to vote, volunteer, express political views, and belong to civic associations such as unions, political parties, and religious and community organizations.12 They are also likelier to trust their fellow citizens—a prime ingredient of the precious elixir called social capital which gives people the confidence to contract, invest, and obey the law without fearing that they are chumps who will be shafted by everyone else.13

Intelligence Quotient (IQ) scores have been rising for more than a century, in every part of the world, at a rate of about three IQ points (a fifth of a standard deviation) per decade.

Also, it beggars belief to think that an average person of 1910, if he or she had entered a time machine and materialized today, would be borderline retarded by our standards, while if Joe and Jane Average made the reverse journey, they would outsmart 98 percent of the befrocked and bewhiskered Edwardians who greeted them as they emerged.
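The “98 percent” is just the normal curve applied to the Flynn effect’s arithmetic: three points per decade over a century is about 30 points, or two standard deviations on a scale with SD 15. A quick check:

```python
# Where "outsmart 98 percent" comes from: a century of Flynn-effect gains
# (~3 IQ points/decade) is ~30 points = 2 standard deviations (SD = 15);
# the normal CDF puts a score 2 SDs above the mean near the 98th percentile.

from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

gain_in_sd = (3 * 10) / 15
print(f"gain: {gain_in_sd:.1f} SD -> percentile: {normal_cdf(gain_in_sd):.1%}")
# ~97.7%: today's average scorer would outscore about 98% of the 1910 cohort.
```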

It’s no paradox that a heritable trait can be boosted by changes in the environment. That’s what happened with height, a trait that also is highly heritable and has increased over the decades, and for some of the same reasons: better nutrition and less disease.

Does the Flynn effect matter in the real world? Almost certainly. A high IQ is not just a number that you can brag about in a bar or that gets you into Mensa; it is a tailwind in life.38 People with high scores on intelligence tests get better jobs, perform better in their jobs, enjoy better health and longer lives, are less likely to get into trouble with the law, and have a greater number of noteworthy accomplishments like starting companies, earning patents, and creating respected works of art—all holding socioeconomic status constant.

Still, there have been some signs of a smarter populace, such as the fact that the world’s top-ranked chess and bridge players have been getting younger.

QUALITY OF LIFE

There is a worry that all that extra healthy life span and income may not have increased human flourishing after all if they just consign people to a rat race of frenzied careerism, hollow consumption, mindless entertainment, and soul-deadening anomie.

Cultural criticism can be a thinly disguised snobbery that shades into misanthropy.

In practice, “consumerism” often means “consumption by the other guy,” since the elites who condemn it tend themselves to be conspicuous consumers of exorbitant luxuries like hardcover books, good food and wine, live artistic performances, overseas travel, and Ivy-class education for their children.

In Development as Freedom, Amartya Sen sidesteps this trap by proposing that the ultimate goal of development is to enable people to make choices: strawberries and cream for those who want them. The philosopher Martha Nussbaum has taken the idea a step further and laid out a set of “fundamental capabilities” that all people should be given the opportunity to exercise.3 One can think of them as the justifiable sources of satisfaction and fulfillment that human nature makes available to us. Her list begins with capabilities that, as we have seen, the modern world increasingly allows people to realize: longevity, health, safety, literacy, knowledge, free expression, and political participation. It goes on to include aesthetic experience, recreation and play, enjoyment of nature, emotional attachments, social affiliations, and opportunities to reflect on and engage in one’s own conception of the good life.

That life is getting better even beyond the standard economists’ metrics like longevity and wealth.

As Morgan Housel notes, “We constantly worry about the looming ‘retirement funding crisis’ in America without realizing that the entire concept of retirement is unique to the last five decades.

Think of it this way: The average American now retires at age 62. One hundred years ago, the average American died at age 51.”

Today an average American worker with five years on the job receives 22 days of paid time off a year (compared with 16 days in 1970), and that is miserly by the standards of Western Europe.

In 1919, an average American wage earner had to work 1,800 hours to pay for a refrigerator; in 2014, he or she had to work fewer than 24 hours (and the new fridge was frost-free and came with an icemaker).

As Hans Rosling suggests, the washing machine deserves to be called the greatest invention of the Industrial Revolution.

Time is not the only life-enriching resource granted to us by technology. Another is light. Light is so empowering that it serves as the metaphor of choice for a superior intellectual and spiritual state: enlightenment.

The economist William Nordhaus has cited the plunging price (and hence the soaring availability) of this universally treasured resource as an emblem of progress.

As Adam Smith pointed out, “The real price of every thing . . . is the toil and trouble of acquiring it.”

The technology expert Kevin Kelly has proposed that “over time, if a technology persists long enough, its costs begin to approach (but never reach) zero.”

What are people doing with that extra time and money?

With the rise of two-career couples, overscheduled kids, and digital devices, there is a widespread belief (and recurring media panic) that families are caught in a time crunch that’s killing the family dinner.

But the new tugs and distractions have to be weighed against the 24 extra hours that modernity has granted to breadwinners every week and the 42 extra hours it has granted to homemakers.

In 2015, men reported 42 hours of leisure per week, around 10 more than their counterparts did fifty years earlier, and women reported 36 hours, more than 6 hours more than their counterparts did.

And at the end of the day, the family dinner is alive and well. Several studies and polls agree that the number of dinners families have together changed little from 1960 through 2014, despite the iPhones, PlayStations, and Facebook accounts.

Indeed, over the course of the 20th century, typical American parents spent more time, not less, with their children.

Today, almost half of the world’s population has Internet access, and three-quarters have access to a mobile phone.

The late 19th-century American diet consisted mainly of pork and starch.29 Before refrigeration and motorized transport, most fruits and vegetables would have spoiled before they reached a consumer, so farmers grew nonperishables like turnips, beans, and potatoes.

There can be no question of which was the greatest era for culture; the answer has to be today, until it is superseded by tomorrow.

HAPPINESS

According to the theory of the hedonic treadmill, people adapt to changes in their fortunes, like eyes adapting to light or darkness, and quickly return to a genetically determined baseline.4 According to the theory of social comparison (or reference groups, status anxiety, or relative deprivation, which we examined in chapter 9), people’s happiness is determined by how well they think they are doing relative to their compatriots, so as the country as a whole gets richer, no one feels happier—indeed, if their country becomes more unequal, then even if they get richer they may feel worse.

Some intellectuals are incredulous, even offended, that happiness has become a subject for economists rather than just poets, essayists, and philosophers. But the approaches are not opposed. Social scientists often begin their studies of happiness with ideas that were first conceived by artists and philosophers, and they can pose questions about historical and global patterns that cannot be answered by solitary reflection, no matter how insightful.

Freedom or autonomy: the availability of options to lead a good life (positive freedom) and the absence of coercion that prevents a person from choosing among them (negative freedom).

Happiness has two sides, an experiential or emotional side, and an evaluative or cognitive side.13 The experiential component consists of a balance between positive emotions like elation, joy, pride, and delight, and negative emotions like worry, anger, and sadness.

The ultimate measure of happiness would consist of a lifetime integral or weighted sum of how happy people are feeling and how long they feel that way.
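One way to write that "lifetime integral" in symbols, with h(t) standing for the momentary balance of positive over negative affect and T for the life span (the notation is mine, not the book's):

H = \int_0^T h(t)\,dt \;\approx\; \sum_i h_i\,\Delta t_i

The sum version is the "weighted sum": each reported happiness level h_i is weighted by the stretch of time \Delta t_i spent feeling that way.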

People’s evaluations of how they are living their lives. People can be asked to reflect on how satisfied they feel “these days” or “as a whole” or “taking all things together,” or to render the almost philosophical judgment of where they stand on a ten-rung ladder ranging from “the worst possible life for you” to “the best possible life for you.”

Social scientists have become resigned to the fact that happiness, satisfaction, and best-versus-worst-possible life are blurred in people’s minds and that it’s often easiest just to average them together.14

And this brings us to the final dimension of a good life, meaning and purpose. This is the quality that, together with happiness, goes into Aristotle’s ideal of eudaemonia or “good spirit.”16

Roy Baumeister and his colleagues probed for what makes people feel their lives are meaningful. The respondents separately rated how happy and how meaningful their lives were, and they answered a long list of questions about their thoughts, activities, and circumstances. The results suggest that many of the things that make people happy also make their lives meaningful, such as being connected to others, feeling productive, and not being alone or bored.

People who lead meaningful lives may enjoy none of these boons. Happy people live in the present; those with meaningful lives have a narrative about their past and a plan for the future. Those with happy but meaningless lives are takers and beneficiaries; those with meaningful but unhappy lives are givers and benefactors.

Meaning is about expressing rather than satisfying the self: it is enhanced by activities that define the person and build a reputation.

The most immediate is the absence of a cross-national Easterlin paradox: the cloud of arrows is stretched along a diagonal, which indicates that the richer the country, the happier its people.

Most strikingly, the slopes of the arrows are similar to each other, and identical to the slope for the swarm of arrows as a whole (the dashed gray line lurking behind the swarm). That means that a raise for an individual relative to that person’s compatriots adds as much to his or her happiness as the same increase for their country across the board.
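The statistical point can be stated compactly. Income is plotted on a log scale, so a slope is the satisfaction gained per doubling of income; the linear form below is an illustration of the argument, not a model from the book:

S_{ic} = a_c + b\,\log_2(\mathrm{income}_{ic})

If the within-country slope b matches the slope across countries, then a doubling of income raises life satisfaction S by the same increment b whether or not one's compatriots get the same raise, which is exactly what a purely relative-comparison account would forbid.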

Happiness, of course, depends on much more than income.

Bowling Alone.

Though people have reallocated their time because families are smaller, more people are single, and more women work, Americans today spend as much time with relatives, have the same median number of friends and see them about as often, report as much emotional support, and remain as satisfied with the number and quality of their friendships as their counterparts in the decade of Gerald Ford and Happy Days. Users of the Internet and social media have more contact with friends (though a bit less face-to-face contact), and they feel that the electronic ties have enriched their relationships.

Social media users care too much, not too little, about other people, and they empathize with them over their troubles rather than envying them their successes.

Standard formula for sowing panic: Here’s an anecdote, therefore it’s a trend, therefore it’s a crisis.

But just because social life looks different today from the way it looked in the 1950s, it does not mean that humans, that quintessentially social species, have become any less social.

One of psychology’s best-kept secrets is that cognitive behavior therapy is demonstrably effective (often more effective than drugs) in treating many forms of distress, including depression, anxiety, panic attacks, PTSD, insomnia, and the symptoms of schizophrenia.

Everything is amazing. Are we really so unhappy? Mostly we are not. Developed countries are actually pretty happy, a majority of all countries have gotten happier, and as long as countries get richer they should get happier still. The dire warnings about plagues of loneliness, suicide, depression, and anxiety don’t survive fact-checking.

A modicum of anxiety may be the price we pay for the uncertainty of freedom. It is another word for the vigilance, deliberation, and heart-searching that freedom demands. It’s not entirely surprising that as women gained in autonomy relative to men they also slipped in happiness. In earlier times, women’s list of responsibilities rarely extended beyond the domestic sphere. Today young women increasingly say that their life goals include career, family, marriage, money, recreation, friendship, experience, correcting social inequities, being a leader in their community, and making a contribution to society.83 That’s a lot of things to worry about, and a lot of ways to be frustrated: Woman plans and God laughs.

As people become better educated and increasingly skeptical of received authority, they may become unsatisfied with traditional religious verities and feel unmoored in a morally indifferent cosmos.

EXISTENTIAL THREATS

In The Progress Paradox, the journalist Gregg Easterbrook suggests that a major reason that Americans are not happier, despite their rising objective fortunes, is “collapse anxiety”: the fear that civilization may implode and there’s nothing anyone can do about it.

Remember the Y2K bug?12 In the 1990s, as the turn of the millennium drew near, computer scientists began to warn the world of an impending catastrophe.

When 12:00 A.M. on January 1, 2000, arrived and the digits rolled over, a program would think it was 1900 and would crash or go haywire (presumably because it would divide some number by the difference between what it thought was the current year and the year 1900, namely zero, though why a program would do this was never made clear).
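The passage itself flags that the crash scenario was never well specified; still, the feared failure mode is easy to contrive. A deliberately toy Python sketch of a two-digit-year program (hypothetical, for illustration only):

def rate_per_year(total, two_digit_year):
    year = 1900 + two_digit_year   # "00" is interpreted as 1900, not 2000
    elapsed = year - 1900          # at the rollover this becomes 0
    return total / elapsed         # divide-by-zero on January 1, 2000

print(rate_per_year(5000, 99))     # 1999: fine, about 50.5 per year
print(rate_per_year(5000, 0))      # 2000: raises ZeroDivisionError

The bug is not in the arithmetic but in storing the year as two digits and letting a derived quantity hit zero.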

A hundred billion dollars was spent worldwide on reprogramming software for Y2K Readiness, a challenge that was likened to replacing every bolt in every bridge in the world.

A typical mammalian species lasts around a million years, and it’s hard to insist that Homo sapiens will be an exception.

Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world?

The second fallacy is to think of intelligence as a boundless continuum of potency, a miraculous elixir with the power to solve any problem, attain any goal.

Knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm faster and faster.

The real world gets in the way of many digital apocalypses. When HAL gets uppity, Dave disables it with a screwdriver, leaving it pathetically singing “A Bicycle Built for Two” to itself.

If we gave an AI the goal of maintaining the water level behind a dam, it might flood a town, not caring about the people who drowned. If we gave it the goal of making paper clips, it might turn all the matter in the reachable universe into paper clips, including our possessions and bodies.

Artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety (chapter 12). As the AI expert Stuart Russell puts it, “No one in civil engineering talks about ‘building bridges that don’t fall down.’ They just call it ‘building bridges.’”

In 2002 Martin Rees publicly offered the bet that “by 2020, bioterror or bioerror will lead to one million casualties in a single event.”35

The question I’ll consider is whether the grim facts should lead any reasonable person to conclude that humanity is screwed.

The key is not to fall for the Availability bias and assume that if we can imagine something terrible, it is bound to happen. The real danger depends on the numbers: the proportion of people who want to cause mayhem or mass murder, the proportion of that genocidal sliver with the competence to concoct an effective cyber or biological weapon, the sliver of that sliver whose schemes will actually succeed, and the sliver of the sliver of the sliver that accomplishes a civilization-ending cataclysm rather than a nuisance, a blow, or even a disaster, after which life goes on.
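The "sliver of a sliver" argument is just repeated multiplication of small proportions. A minimal sketch in Python, with loudly hypothetical placeholder numbers (none of them come from the book):

population     = 7_000_000_000   # hypothetical world population
p_malicious    = 1e-6            # wants to cause mass murder (placeholder)
p_competent    = 1e-2            # could build an effective weapon (placeholder)
p_succeeds     = 1e-1            # scheme actually works (placeholder)
p_cataclysmic  = 1e-3            # civilization-ending, not a nuisance (placeholder)

expected_actors = (population * p_malicious * p_competent
                   * p_succeeds * p_cataclysmic)
print(expected_actors)           # ~0.007: far fewer than one such actor expected

The point is structural: each conditional sliver multiplies the risk down by orders of magnitude, so the product can be tiny even when the first number looks alarming.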

Such attacks could take place in every city in the world many times a day, but in fact take place somewhere or other every few years (leading the security expert Bruce Schneier to ask, “Where are all the terrorist attacks?”).

Far from being criminal masterminds, most terrorists are bumbling schlemiels.

Serious threats to the integrity of a country’s infrastructure are likely to require the resources of a state.50 Software hacking is not enough; the hacker needs detailed knowledge about the physical construction of the systems he hopes to sabotage.

State-based cyber-sabotage escalates the malevolence from terrorism to a kind of warfare, where the constraints of international relations, such as norms, treaties, sanctions, retaliation, and military deterrence, inhibit aggressive attacks, as they do in conventional “kinetic” warfare.

But disaster sociology (yes, there is such a field) has shown that people are highly resilient in the face of catastrophe.53 Far from looting, panicking, or sinking into paralysis, they spontaneously cooperate to restore order and improvise networks for distributing goods and services.

It may be more than just luck that the world so far has seen just one successful bioterror attack (the 1984 tainting of salad with salmonella in an Oregon town by the Rajneeshee religious cult, which killed no one) and one spree killing (the 2001 anthrax mailings, which killed five).60

CRISPR-Cas9.

Prognosticators are biased toward scaring people.

As early as 1945, the theologian Reinhold Niebuhr observed, “Ultimate perils, however great, have a less lively influence upon the human imagination than immediate resentments and frictions, however small by comparison.”

As we saw with climate change, people may be likelier to acknowledge a problem when they have reason to think it is solvable than when they are terrified into numbness and helplessness.

The most obvious is to whittle down the size of the arsenal. The process is well under way. Few people are aware of how dramatically the world has been dismantling nuclear weapons. Figure 19-1 shows that the United States has reduced its inventory by 85 percent from its 1967 peak, and now has fewer nuclear warheads than at any time since 1956.113 Russia, for its part, has reduced its arsenal by 89 percent from its Soviet-era peak. (Probably even fewer people realize that about 10 percent of electricity in the United States comes from dismantled nuclear warheads, mostly Soviet.)114 In 2010 both countries signed the New START treaty, committing themselves to further reductions.

THE FUTURE OF PROGRESS

The poor may not always be with us. The world is about a hundred times wealthier today than it was two centuries ago, and the prosperity is becoming more evenly distributed across the world’s countries and people. The proportion of humanity living in extreme poverty has fallen from almost 90 percent to less than 10 percent, and within the lifetimes of most of the readers of this book it could approach zero.

The world is giving peace a chance.

The proportion of people killed annually in wars is less than a quarter of what it was in the 1980s, a seventh of what it was in the early 1970s, an eighteenth of what it was in the early 1950s, and half a percent of what it was during World War II.

People are getting not just healthier, richer, and safer but freer. Two centuries ago a handful of countries, embracing one percent of the world’s people, were democratic; today, two-thirds of the world’s countries, embracing two-thirds of its people, are.

As people are getting healthier, richer, safer, and freer, they are also becoming more literate, knowledgeable, and smarter. Early in the 19th century, 12 percent of the world could read and write; today 83 percent can.

As societies have become healthier, wealthier, freer, happier, and better educated, they have set their sights on the most pressing global challenges. They have emitted fewer pollutants, cleared fewer forests, spilled less oil, set aside more preserves, extinguished fewer species, saved the ozone layer, and peaked in their consumption of oil, farmland, timber, paper, cars, coal, and perhaps even carbon. For all their differences, the world’s nations came to a historic agreement on climate change, as they did in previous years on nuclear testing, proliferation, security, and disarmament.

Nuclear weapons, since the extraordinary circumstances of the closing days of World War II, have not been used in the seventy-two years they have existed. Nuclear terrorism, in defiance of forty years of expert predictions, has never happened. The world’s nuclear stockpiles have been reduced by 85 percent, with more reductions to come, and testing has ceased (except by the tiny rogue regime in Pyongyang) and proliferation has frozen. The world’s two most pressing problems, then, though not yet solved, are solvable: practicable long-term agendas have been laid out for eliminating nuclear weapons and for mitigating climate change.

For all the bleeding headlines, for all the crises, collapses, scandals, plagues, epidemics, and existential threats, these are accomplishments to savor. The Enlightenment is working: for two and a half centuries, people have used knowledge to enhance human flourishing. Scientists have exposed the workings of matter, life, and mind. Inventors have harnessed the laws of nature to defy entropy, and entrepreneurs have made their innovations affordable. Lawmakers have made people better off by discouraging acts that are individually beneficial but collectively harmful. Diplomats have done the same with nations. Scholars have perpetuated the treasury of knowledge and augmented the power of reason. Artists have expanded the circle of sympathy. Activists have pressured the powerful to overturn repressive measures, and their fellow citizens to change repressive norms. All these efforts have been channeled into institutions that have allowed us to circumvent the flaws of human nature and empower our better angels.

At the same time . . . Seven hundred million people in the world today live in extreme poverty. In the regions where they are concentrated, life expectancy is less than 60, and almost a quarter of the people are undernourished. Almost a million children die of pneumonia every year, half a million from diarrhea or malaria, and hundreds of thousands from measles and AIDS. A dozen wars are raging in the world, including one in which more than 250,000 people have died, and in 2015 at least ten thousand people were slaughtered in genocides. More than two billion people, almost a third of humanity, are oppressed in autocratic states. Almost a fifth of the world’s people lack a basic education; almost a sixth are illiterate.

Progress is not utopia; there is room—indeed, an imperative—for us to strive to continue that progress.

How reasonable is the hope for continuing progress?

The Scientific Revolution and the Enlightenment set in motion the process of using knowledge to improve the human condition.

Solutions create new problems, which take time to solve in their turn. But when we stand back from these blips and setbacks, we see that the indicators of human progress are cumulative: none is cyclical, with gains reliably canceled by losses.3

The technological advances that have propelled this progress should only gather speed. Stein’s Law continues to obey Davies’s Corollary (Things that can’t go on forever can go on much longer than you think), and genomics, synthetic biology, neuroscience, artificial intelligence, materials science, data science, and evidence-based policy analysis are flourishing.

So too with moral progress. History tells us that barbaric customs can not only be reduced but essentially abolished, lingering at most in a few benighted backwaters.

If economies stop growing, things could get ugly.

As the entrepreneur Peter Thiel lamented, “We wanted flying cars; instead we got 140 characters.”

Whatever its causes, economic stagnation is at the root of many other problems and poses a significant challenge for 21st-century policymakers.

The second decade of the 21st century has seen the rise of a counter-Enlightenment movement called populism, or more accurately, authoritarian populism.24 Populism calls for the direct sovereignty of a country’s “people” (usually an ethnic group, sometimes a class), embodied in a strong leader who directly channels their authentic virtue and experience.

By focusing on the tribe rather than the individual, it has no place for the protection of minority rights or the promotion of human welfare worldwide.

Populism comes in left-wing and right-wing varieties, which share a folk theory of economics as zero-sum competition: between economic classes in the case of the left, between nations or ethnic groups in the case of the right.

Populism looks backward to an age in which the nation was ethnically homogeneous, orthodox cultural and religious values prevailed, and economies were powered by farming and manufacturing, which produced tangible goods for local consumption and for export.

Nothing captures the tribalistic and backward-looking spirit of populism better than Trump’s campaign slogan: Make America Great Again.

Trump’s authoritarian instincts are subjecting the institutions of American democracy to a stress test, but so far they have pushed back on a number of fronts. Cabinet secretaries have publicly repudiated various quips, tweets, and stink bombs; courts have struck down unconstitutional measures; senators and congressmen have defected from his party to vote down destructive legislation; Justice Department and Congressional committees are investigating the administration’s ties to Russia; an FBI chief has publicly called out Trump’s attempt to intimidate him (raising talk about impeachment for obstruction of justice); and his own staff, appalled at what they see, regularly leak compromising facts to the press—all in the first six months of the administration.

Globalization in particular is a tide that is impossible for any ruler to order back.

The new French president, Emmanuel Macron, proclaimed that Europe was “waiting for us to defend the spirit of the Enlightenment, threatened in so many places.”

In the American election, voters in the two lowest income brackets voted for Clinton 52–42, as did those who identified “the economy” as the most important issue. A majority of voters in the four highest income brackets voted for Trump, and Trump voters singled out “immigration” and “terrorism,” not “the economy,” as the most important issues.34

“Education, Not Income, Predicted Who Would Vote for Trump.”35 Why should education have mattered so much? Two uninteresting explanations are that the highly educated happen to affiliate with a liberal political tribe, and that education may be a better long-term predictor of economic security than current income. A more interesting explanation is that education exposes people in young adulthood to other races and cultures in a way that makes it harder to demonize them. Most interesting of all is the likelihood that education, when it does what it is supposed to do, instills a respect for vetted fact and reasoned argument, and so inoculates people against conspiracy theories, reasoning by anecdote, and emotional demagoguery.

Silver found that the regional map of Trump support did not overlap particularly well with the maps of unemployment, religion, gun ownership, or the proportion of immigrants. But it did align with the map of Google searches for the word nigger, which Seth Stephens-Davidowitz has shown is a reliable indicator of racism (chapter 15).36 This doesn’t mean that most Trump supporters are racists. But overt racism shades into resentment and distrust, and the overlap suggests that the regions of the country that gave Trump his Electoral College victory are those with the most resistance to the decades-long process of integration and the promotion of minority interests (particularly racial preferences, which they see as reverse discrimination against them).

Populist voters are older, more religious, more rural, less educated, and more likely to be male and members of the ethnic majority. They embrace authoritarian values, place themselves on the right of the political spectrum, and dislike immigration and global and national governance.39 Brexit voters, too, were older, more rural, and less educated than those who voted to remain: 66 percent of high school graduates voted to leave, but only 29 percent of degree holders did.40

Populism is an old man’s movement.

This raises the possibility that as the Silent Generation and older Baby Boomers shuffle off this mortal coil, they will take authoritarian populism with them.

Since populist movements have achieved an influence beyond their numbers, fixing electoral irregularities such as gerrymandering and forms of disproportionate representation which overweight rural areas (such as the US Electoral College) would help. So would journalistic coverage that tied candidates’ reputations to their record of accuracy and coherence rather than to trivial gaffes and scandals.

I believe that the media and intelligentsia were complicit in populists’ depiction of modern Western nations as so unjust and dysfunctional that nothing short of a radical lurch could improve them.

“I’d rather see the empire burn to the ground under Trump, opening up at least the possibility of radical change, than cruise on autopilot under Clinton,” flamed a left-wing advocate of “the politics of arson.”50

People have a tremendous amount to lose when charismatic authoritarians responding to a “crisis” trample over democratic norms and institutions and command their countries by the force of their personalities.

Such is the nature of progress. Pulling us forward are ingenuity, sympathy, and benign institutions. Pushing us back are the darker sides of human nature and the Second Law of Thermodynamics. Kevin Kelly explains how this dialectic can nonetheless result in forward motion: Ever since the Enlightenment and the invention of science, we’ve managed to create a tiny bit more than we’ve destroyed each year. But that few percent positive difference is compounded over decades into what we might call civilization. . . . [Progress] is a self-cloaking action seen only in retrospect. Which is why I tell people that my great optimism of the future is rooted in history.53

Kelly offers “protopia,” the pro- from progress and process. Others have suggested “pessimistic hopefulness,” “opti-realism,” and “radical incrementalism.”54 My favorite comes from Hans Rosling, who, when asked whether he was an optimist, replied, “I am not an optimist. I’m a very serious possibilist.”55

“The ruling ideas of each age have ever been the ideas of its ruling class.” Karl Marx

REASON

“One can’t criticize something with nothing”:

To begin with, no Enlightenment thinker ever claimed that humans were consistently rational.

What they argued was that we ought to be rational, by learning to repress the fallacies and dogmas that so readily seduce us, and that we can be rational, collectively if not individually, by implementing institutions and adhering to norms that constrain our faculties, including free speech, logical analysis, and empirical testing. And if you disagree, then why should we accept your claim that humans are incapable of rationality?

But real evolutionary psychology treats humans differently: not as two-legged antelopes but as the species that outsmarts antelopes. We are a cognitive species that depends on explanations of the world. Since the world is the way it is regardless of what people believe about it, there is a strong selection pressure for an ability to develop explanations that are true.7

The standard explanation of the madness of crowds is ignorance: a mediocre education system has left the populace scientifically illiterate, at the mercy of their cognitive biases, and thus defenseless against airhead celebrities, cable-news gladiators, and other corruptions from popular culture. The standard solution is better schooling and more outreach to the public by scientists on television, social media, and popular Web sites. As an outreaching scientist I’ve always found this theory appealing, but I’ve come to realize it’s wrong, or at best a small part of the problem.

Kahan concludes that we are all actors in a Tragedy of the Belief Commons: what’s rational for every individual to believe (based on esteem) can be irrational for the society as a whole to act upon (based on reality).17

What’s going on is that these people are sharing blue lies. A white lie is told for the benefit of the hearer; a blue lie is told for the benefit of an in-group (originally, fellow police officers).19 While some of the conspiracy theorists may be genuinely misinformed, most express these beliefs for the purpose of performance rather than truth: they are trying to antagonize liberals and display solidarity with their blood brothers.

Another paradox of rationality is that expertise, brainpower, and conscious reasoning do not, by themselves, guarantee that thinkers will approach the truth. On the contrary, they can be weapons for ever-more-ingenious rationalization. As Benjamin Franklin observed, “So convenient a thing is it to be a rational creature, since it enables us to find or make a reason for everything one has a mind to do.”

Engagement with politics is like sports fandom in another way: people seek and consume news to enhance the fan experience, not to make their opinions more accurate.25 That explains another of Kahan’s findings: the better informed a person is about climate change, the more polarized his or her opinion.

So we can’t blame human irrationality on our lizard brains: it was the sophisticated respondents who were most blinded by their politics. As two other magazines summarized the results: “Science Confirms: Politics Wrecks Your Ability to Do Math” and “How Politics Makes Us Stupid.”29

Of the two forms of politicization that are subverting reason today, the political is far more dangerous than the academic, for an obvious reason.

In 21st-century America, the control of Congress by a Republican Party that became synonymous with the extreme right has been pernicious, because it is so convinced of the righteousness of its cause and the evil of its rivals that it has undermined the institutions of democracy to get what it wants. The corruptions include gerrymandering, imposing voting restrictions designed to disenfranchise Democratic voters, encouraging unregulated donations from moneyed interests, blocking Supreme Court nominations until their party controls the presidency, shutting down the government when their maximal demands aren’t met, and unconditionally supporting Donald Trump over their own objections to his flagrantly antidemocratic impulses.71 Whatever differences in policy or philosophy divide the parties, the mechanisms of democratic deliberation should be sacrosanct. Their erosion, disproportionately by the right, has led many people, including a growing share of young Americans, to see democratic government as inherently dysfunctional and to become cynical about democracy itself.72

What can be done to improve standards of reasoning? Persuasion by facts and logic, the most direct strategy, is not always futile.

When people are first confronted with information that contradicts a staked-out position, they become even more committed to it, as we’d expect from the theories of identity-protective cognition, motivated reasoning, and cognitive dissonance reduction. Feeling their identity threatened, belief holders double down and muster more ammunition to fend off the challenge. But since another part of the human mind keeps a person in touch with reality, as the counterevidence piles up the dissonance can mount until it becomes too much to bear and the opinion topples over, a phenomenon called the affective tipping point.80 The tipping point depends on the balance between how badly the opinion holder’s reputation would be damaged by relinquishing the opinion and whether the counterevidence is so blatant and public as to be common knowledge: a naked emperor, an elephant in the room.81

The reasons are familiar to education researchers.84 Any curriculum will be pedagogically ineffective if it consists of a lecturer yammering in front of a blackboard, or a textbook that students highlight with a yellow marker. People understand concepts only when they are forced to think them through, to discuss them with others, and to use them to solve problems.

(My suggestion that all students should learn about cognitive biases fell deadborn from my lips.)

Effective training in critical thinking and cognitive debiasing may not be enough to cure identity-protective cognition, in which people cling to whatever opinion enhances the glory of their tribe and their status within it.

Experiments have shown that the right rules can avert the Tragedy of the Belief Commons and force people to dissociate their reasoning from their identities.88 One technique was discovered long ago by rabbis: they forced yeshiva students to switch sides in a Talmudic debate and argue the opposite position. Another is to have people try to reach a consensus in a small discussion group; this forces them to defend their opinions to their groupmates, and the truth usually wins.

Most of us are deluded about our degree of understanding of the world, a bias called the Illusion of Explanatory Depth.

Perhaps most important, people are less biased when they have skin in the game and have to live with the consequences of their opinions.

Experiments have shown that when people hear about a new policy, such as welfare reform, they will like it if it is proposed by their own party and hate it if it is proposed by the other—all the while convinced that they are reacting to it on its objective merits.

However long it takes, we must not let the existence of cognitive and emotional biases or the spasms of irrationality in the political arena discourage us from the Enlightenment ideal of relentlessly pursuing reason and truth. If we can identify ways in which humans are irrational, we must know what rationality is. Since there’s nothing special about us, our fellows must have at least some capacity for rationality as well. And it’s in the very nature of rationality that reasoners can always step back, consider their own shortcomings, and reason out ways to work around them.

SCIENCE

That gravity is the curvature of space-time, and that life depends on a molecule that carries information, directs metabolism, and replicates itself.

But the scorn for scientific consensus has widened into a broadband know-nothingness.

Positivism depends on the reductionist belief that the entire universe, including all human conduct, can be explained with reference to precisely measurable, deterministic physical processes. . . . Positivist assumptions provided the epistemological foundations for Social Darwinism and pop-evolutionary notions of progress, as well as for scientific racism and imperialism. These tendencies coalesced in eugenics, the doctrine that human well-being could be improved and eventually perfected through the selective breeding of the “fit” and the sterilization or elimination of the “unfit.”

An endorsement of scientific thinking must first of all be distinguished from any belief that members of the occupational guild called “science” are particularly wise or noble. The culture of science is based on the opposite belief. Its signature practices, including open debate, peer review, and double-blind methods, are designed to circumvent the sins to which scientists, being human, are vulnerable. As Richard Feynman put it, the first principle of science is “that you must not fool yourself—and you are the easiest person to fool.”

The lifeblood of science is the cycle of conjecture and refutation: proposing a hypothesis and then seeing whether it survives attempts to falsify it.

The fallacy (putting aside the apocryphal history) is a failure to recognize that what science allows is an increasing confidence in a hypothesis as the evidence accumulates, not a claim to infallibility on the first try.

As Wieseltier puts it, “It is not for science to say whether science belongs in morality and politics and art. Those are philosophical matters, and science is not philosophy.”

Today most philosophers (at least in the analytic or Anglo-American tradition) subscribe to naturalism, the position that “reality is exhausted by nature, containing nothing ‘supernatural,’ and that the scientific method should be used to investigate all areas of reality, including the ‘human spirit.’”17 Science, in the modern conception, is of a piece with philosophy and with reason itself.

The world is intelligible.

In making sense of our world, there should be few occasions on which we are forced to concede, “It just is” or “It’s magic” or “Because I said so.”

Many people are willing to credit science with giving us handy drugs and gadgets and even with explaining how physical stuff works. But they draw the line at what truly matters to us as human beings: the deep questions about who we are, where we came from, and how we define the meaning and purpose of our lives. That is the traditional territory of religion, and its defenders tend to be the most excitable critics of scientism. They are apt to endorse the partition plan proposed by the paleontologist and science writer Stephen Jay Gould in his book Rocks of Ages, according to which the proper concerns of science and religion belong to “non-overlapping magisteria.” Science gets the empirical universe; religion gets the questions of morality, meaning, and value.

The moral worldview of any scientifically literate person—one who is not blinkered by fundamentalism—requires a clean break from religious conceptions of meaning and value.

To begin with, the findings of science imply that the belief systems of all the world’s traditional religions and cultures—their theories of the genesis of the world, life, humans, and societies—are factually mistaken. We know, but our ancestors did not, that humans belong to a single species of African primate that developed agriculture, government, and writing late in its history. We know that our species is a tiny twig of a genealogical tree that embraces all living things and that emerged from prebiotic chemicals almost four billion years ago. We know that we live on a planet that revolves around one of a hundred billion stars in our galaxy, which is one of a hundred billion galaxies in a 13.8-billion-year-old universe, possibly one of a vast number of universes. We know that our intuitions about space, time, matter, and causation are incommensurable with the nature of reality on scales that are very large and very small. We know that the laws governing the physical world (including accidents, disease, and other misfortunes) have no goals that pertain to human well-being. There is no such thing as fate, providence, karma, spells, curses, augury, divine retribution, or answered prayers—though the discrepancy between the laws of probability and the workings of cognition may explain why people believe there are. And we know that we did not always know these things, that the beloved convictions of every time and culture may be decisively falsified, doubtless including many we hold today.

What happens to those who are taught that science is just another narrative like religion and myth, that it lurches from revolution to revolution without making progress, and that it is a rationalization of racism, sexism, and genocide?

Ultimately the greatest payoff of instilling an appreciation of science is for everyone to think more scientifically.

Three-quarters of the nonviolent resistance movements succeeded, compared with only a third of the violent ones.50 Gandhi and King were right, but without data, you would never know it.

HUMANISM

The goal of maximizing human flourishing—life, health, happiness, freedom, knowledge, love, richness of experience—may be called humanism.

It is humanism that identifies what we should try to achieve with our knowledge. It provides the ought that supplements the is. It distinguishes true progress from mere mastery.

Some Eastern religions, including Confucianism and varieties of Buddhism, always grounded their ethics in human welfare rather than divine dictates.

First, any Moral Philosophy student who stayed awake through week 2 of the syllabus can also rattle off the problems with deontological ethics. If lying is intrinsically wrong, must we answer truthfully when the Gestapo demand to know the whereabouts of Anne Frank?

If a terrorist has hidden a ticking nuclear bomb that would annihilate millions, is it immoral to waterboard him into revealing its location? And given the absence of a thundering voice from the heavens, who gets to pull principles out of the air and pronounce that certain acts are inherently immoral even if they hurt no one?

A viable moral philosophy for a cosmopolitan world cannot be constructed from layers of intricate argumentation or rest on deep metaphysical or religious convictions. It must draw on simple, transparent principles that everyone can understand and agree upon. The ideal of human flourishing—that it’s good for people to lead long, healthy, happy, rich, and stimulating lives—is just such a principle, since it is based on nothing more (and nothing less) than our common humanity.

Our universe can be specified by a few numbers, including the strengths of the forces of nature (gravity, electromagnetism, and the nuclear forces), the number of macroscopic dimensions of space-time (four), and the density of dark energy (the source of the acceleration of the expansion of the universe). In Just Six Numbers, Martin Rees enumerates them on one hand and a finger; the exact tally depends on which version of physical theory one invokes and on whether one counts the constants themselves or ratios between them. If any of these constants were off by a minuscule iota, then matter would fly apart or collapse upon itself, and stars, galaxies, and planets, to say nothing of terrestrial life and Homo sapiens, could never have formed.

If the factual tenets of religion can no longer be taken seriously, and its ethical tenets depend entirely on whether they can be justified by secular morality, what about its claims to wisdom on the great questions of existence? A favorite talking point of faitheists is that only religion can speak to the deepest yearnings of the human heart. Science will never be adequate to address the great existential questions of life, death, love, loneliness, loss, honor, cosmic justice, and metaphysical hope.

To begin with, the alternative to “religion” as a source of meaning is not “science.” No one ever suggested that we look to ichthyology or nephrology for enlightenment on how to live, but rather to the entire fabric of human knowledge, reason, and humanistic values, of which science is a part. It’s true that the fabric contains important strands that originated in religion, such as the language and allegories of the Bible and the writings of sages, scholars, and rabbis. But today it is dominated by secular content, including debates on ethics originating in Greek and Enlightenment philosophy, and renderings of love, loss, and loneliness in the works of Shakespeare, the Romantic poets, the 19th-century novelists, and other great artists and essayists. Judged by universal standards, many of the religious contributions to life’s great questions turn out to be not deep and timeless but shallow and archaic, such as a conception of “justice” that includes punishing blasphemers, or a conception of “love” that adjures a woman to obey her husband.

A “spirituality” that sees cosmic meaning in the whims of fortune is not wise but foolish. The first step toward wisdom is the realization that the laws of the universe don’t care about you. The next is the realization that this does not imply that life is meaningless, because people care about you, and vice versa. You care about yourself, and you have a responsibility to respect the laws of the universe that keep you alive, so you don’t squander your existence. Your loved ones care about you, and you have a responsibility not to orphan your children, widow your spouse, and shatter your parents. And anyone with a humanistic sensibility cares about you, not in the sense of feeling your pain—human empathy is too feeble to spread itself across billions of strangers—but in the sense of realizing that your existence is cosmically no less important than theirs, and that we all have a responsibility to use the laws of the universe to enhance the conditions in which we all can flourish.

It would not be fanciful to say that over the course of the 20th century the global rate of atheism increased by a factor of 500, and that it has doubled again so far in the 21st. An additional 23 percent of the world’s population identify themselves as “not a religious person,” leaving 59 percent of the world as “religious,” down from close to 100 percent a century before.

According to the Secularization Thesis, irreligion is a natural consequence of affluence and education.66 Recent studies confirm that wealthier and better-educated countries tend to be less religious.

Why is the world losing its religion? There are several reasons.80 The Communist governments of the 20th century outlawed or discouraged religion, and when they liberalized, their citizenries were slow to reacquire the taste. Some of the alienation is part of a decline in trust in all institutions from its high-water mark in the 1960s.81 Some of it is carried by the global current toward emancipative values (chapter 15) such as women’s rights, reproductive freedom, and tolerance of homosexuality.82 Also, as people’s lives become more secure thanks to affluence, medical care, and social insurance, they no longer pray to God to save them from ruin: countries with stronger safety nets are less religious, holding other factors constant.83 But the most obvious reason may be reason itself: when people become more intellectually curious and scientifically literate, they stop believing in miracles.

No discussion of global progress can ignore the Islamic world, which by a number of objective measures appears to be sitting out the progress enjoyed by the rest. Muslim-majority countries score poorly on measures of health, education, freedom, happiness, and democracy, holding wealth constant.90 All of the wars raging in 2016 took place in Muslim-majority countries or involved Islamist groups, and those groups were responsible for the vast majority of terrorist attacks.

Still others were exacerbated by clumsy Western interventions in the Middle East, including the dismemberment of the Ottoman Empire, support of the anti-Soviet mujahedin in Afghanistan, and the invasion of Iraq.

But part of the resistance to the tide of progress can be attributed to religious belief. The problem begins with the fact that many of the precepts of Islamic doctrine, taken literally, are floridly antihumanistic. The Quran contains scores of passages that express hatred of infidels, the reality of martyrdom, and the sacredness of armed jihad.

Of course many of the passages in the Bible are floridly antihumanistic too. One needn’t debate which is worse; what matters is how literally the adherents take them.

“Self-identifying as a Muslim, regardless of the particular branch of Islam, seems to be almost synonymous with being strongly religious.”94

Between 50 and 93 percent believe that the Quran “should be read literally, word by word,” and that “overwhelming percentages of Muslims in many countries want Islamic law (sharia) to be the official law of the land.”

All these troubling patterns were once true of Christendom, but starting with the Enlightenment, the West initiated a process (still ongoing) of separating the church from the state, carving out a space for secular civil society, and grounding its institutions in a universal humanistic ethics. In most Muslim-majority countries, that process is barely under way.

Making things worse is a reactionary ideology that became influential through the writings of the Egyptian author Sayyid Qutb (1906–1966), a member of the Muslim Brotherhood and the inspiration for Al Qaeda and other Islamist movements.100 The ideology looks back to the glory days of the Prophet, the first caliphs, and classical Arab civilization, and laments subsequent centuries of humiliation at the hands of Crusaders, horse tribes, European colonizers, and, most recently, insidious secular modernizers.

On this view, the West might enjoy the peace, prosperity, education, and happiness of post-Enlightenment societies, but Muslims will never accept such shallow hedonism, and it’s only understandable that they should cling to a system of medieval beliefs and customs forever.

Tunisia, Bangladesh, Malaysia, and Indonesia have made long strides toward liberal democracy (chapter 14). In many Islamic countries, attitudes toward women and minorities are improving (chapter 15)—slowly, but more detectably among women, the young, and the educated.

Let me turn to the second enemy of humanism, the ideology behind resurgent authoritarianism, nationalism, populism, reactionary thinking, even fascism. As with theistic morality, the ideology claims intellectual merit, affinity with human nature, and historical inevitability. All three claims, we shall see, are mistaken.

For a thinker who represented the opposite of humanism (indeed, of pretty much every argument in this book), one couldn’t do better than the German philologist Friedrich Nietzsche (1844–1900).109 Earlier in the chapter I fretted about how humanistic morality could deal with a callous, egoistic, megalomaniacal sociopath. Nietzsche argued that it’s good to be a callous, egoistic, megalomaniacal sociopath. Not good for everyone, of course, but that doesn’t matter: the lives of the mass of humanity (the “botched and the bungled,” the “chattering dwarves,” the “flea-beetles”) count for nothing. What is worthy in life is for a superman (Übermensch, literally “overman”) to transcend good and evil, exert a will to power, and achieve heroic glory. Only through such heroism can the potential of the species be realized and humankind lifted to a higher plane of being.

Western civilization has gone steadily downhill since the heyday of Homeric Greeks, Aryan warriors, helmeted Vikings, and other manly men. It has been especially corrupted by the “slave morality” of Christianity, the worship of reason by the Enlightenment, and the liberal movements of the 19th century that sought social reform and shared prosperity. Such effete sentimentality led only to decadence and degeneration.

Man shall be trained for war and woman for the recreation of the warrior. All else is folly. . . . Thou goest to woman? Do not forget thy whip.

A declaration of war on the masses by higher men is needed. . . . A doctrine is needed powerful enough to work as a breeding agent: strengthening the strong, paralyzing and destructive for the world-weary. The annihilation of the humbug called “morality.” . . . The annihilation of the decaying races. . . . Dominion over the earth as a means of producing a higher type.

Most obviously, Nietzsche helped inspire the romantic militarism that led to the First World War and the fascism that led to the Second. Though Nietzsche himself was neither a German nationalist nor an anti-Semite, it’s no coincidence that these quotations leap off the page as quintessential Nazism: Nietzsche posthumously became the Nazis’ court philosopher. (In his first year as chancellor, Hitler made a pilgrimage to the Nietzsche Archive, presided over by Elisabeth Förster-Nietzsche, the philosopher’s sister and literary executor, who tirelessly encouraged the connection.) The link to Italian Fascism is even more direct: Benito Mussolini wrote in 1921 that “the moment relativism linked up with Nietzsche, and with his Will to Power, was when Italian Fascism became, as it still is, the most magnificent creation of an individual and a national Will to Power.”

The connections between Nietzsche’s ideas and the megadeath movements of the 20th century are obvious enough: a glorification of violence and power, an eagerness to raze the institutions of liberal democracy, a contempt for most of humanity, and a stone-hearted indifference to human life.

As Bertrand Russell pointed out in A History of Western Philosophy, they “might be stated more simply and honestly in the one sentence: ‘I wish I had lived in the Athens of Pericles or the Florence of the Medici.’” The ideas fail the first test of morality.

Though she later tried to conceal it, Ayn Rand’s celebration of selfishness, her deification of the heroic capitalist, and her disdain for the general welfare had Nietzsche written all over them.113

Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.”

Nietzsche was a godfather to all the intellectual movements of the 20th century that were hostile to science and objectivity, including Existentialism, Critical Theory, Poststructuralism, Deconstructionism, and Postmodernism.

A surprising number of 20th-century intellectuals and artists have gushed over totalitarian dictators, a syndrome that the intellectual historian Mark Lilla calls tyrannophilia.115 Some tyrannophiles were Marxists, working on the time-honored principle “He may be an SOB, but he’s our SOB.”

Professional narcissism: intellectuals and artists may feel unappreciated in liberal democracies, which allow their citizens to tend to their own needs in markets and civic organizations. Dictators implement theories from the top down, assigning a role to intellectuals that they feel is commensurate with their worth. But tyrannophilia is also fed by a Nietzschean disdain for the common man, who annoyingly prefers schlock to fine art and culture, and by an admiration of the superman who transcends the messy compromises of democracy and heroically implements a vision of the good society.

And Trump has been closely advised by two men, Stephen Bannon and Michael Anton, who are reputed to be widely read and who consider themselves serious intellectuals. Anyone who wants to go beyond personality in understanding authoritarian populism must appreciate the two ideologies behind them, both of them militantly opposed to Enlightenment humanism and each influenced, in different ways, by Nietzsche. One is fascist, the other reactionary—not in the common left-wing sense of “anyone who is more conservative than me,” but in their original, technical senses.118

The early fascist intellectuals, including Julius Evola (1898–1974) and Charles Maurras (1868–1952), have been rediscovered by neo-Nazi parties in Europe and by Bannon and the alt-right movement in the United States, all of whom acknowledge the influence of Nietzsche.

In this ideology, a multicultural, multiethnic society can never work, because its people will feel rootless and alienated and its culture will be flattened to the lowest common denominator. For a nation to subordinate its interests to international agreements is to forfeit its birthright to greatness and become a chump in the global competition of all against all. And since a nation is an organic whole, its greatness can be embodied in the greatness of its leader, who voices the soul of the people directly, unencumbered by the millstone of an administrative state.

The first theocons were 1960s radicals who redirected their revolutionary fervor from the hard left to the hard right. They advocate nothing less than a rethinking of the Enlightenment roots of the American political order. The recognition of a right to life, liberty, and the pursuit of happiness, and the mandate of government to secure these rights, are, they believe, too tepid for a morally viable society. That impoverished vision has only led to anomie, hedonism, and rampant immorality, including illegitimacy, pornography, failing schools, welfare dependency, and abortion. Society should aim higher than this stunted individualism, and promote conformity to more rigorous moral standards from an authority larger than ourselves. The obvious source of these standards is traditional Christianity.

Theocons hold that the erosion of the church’s authority during the Enlightenment left Western civilization without a solid moral foundation, and a further undermining during the 1960s left it teetering on the brink.

Lilla points out an irony in theoconservatism. While it has been inflamed by radical Islamism (which the theocons think will soon start World War III), the movements are similar in their reactionary mindset, with its horror of modernity and progress.124 Both believe that at some time in the past there was a happy, well-ordered state where a virtuous people knew their place. Then alien secular forces subverted this harmony and brought on decadence and degeneration. Only a heroic vanguard with memories of the old ways can restore the society to its golden age.

First, the claim that humans have an innate imperative to identify with a nation-state (with the implication that cosmopolitanism goes against human nature) is bad evolutionary psychology. Like the supposed innate imperative to belong to a religion, it confuses a vulnerability with a need. People undoubtedly feel solidarity with their tribe, but whatever intuition of “tribe” we are born with cannot be a nation-state, which is a historical artifact of the 1648 Treaties of Westphalia. (Nor could it be a race, since our evolutionary ancestors seldom met a person of another race.) In reality, the cognitive category of a tribe, in-group, or coalition is abstract and multidimensional.126 People see themselves as belonging to many overlapping tribes: their clan, hometown, native country, adopted country, religion, ethnic group, alma mater, fraternity or sorority, political party, employer, service organization, sports team, even brand of camera equipment. (If you want to see tribalism at its fiercest, check out a “Nikon vs. Canon” Internet discussion group.)

It’s true that political salesmen can market a mythology and iconography that entice people into privileging a religion, ethnicity, or nation as their fundamental identity. With the right package of indoctrination and coercion, they can even turn them into cannon fodder.127 That does not mean that nationalism is a human drive. Nothing in human nature prevents a person from being a proud Frenchman, European, and citizen of the world, all at the same time.128

Vibrant cultures sit in vast catchment areas in which people and innovations flow from far and wide. This explains why Eurasia, rather than Australia, Africa, or the Americas, was the first continent to give birth to expansive civilizations (as documented by Sowell in his Culture trilogy and Jared Diamond in Guns, Germs, and Steel).129 It explains why the fountains of culture have always been trading cities on major crossroads and waterways.

Between 1803 and 1945, the world tried an international order based on nation-states heroically struggling for greatness. It didn’t turn out so well.

After 1945 the world’s leaders said, “Well, let’s not do that again,” and began to downplay nationalism in favor of universal human rights, international laws, and transnational organizations. The result, as we saw in chapter 11, has been seventy years of peace and prosperity in Europe and, increasingly, the rest of the world.

The European elections and self-destructive flailing of the Trump administration in 2017 suggest that the world may have reached Peak Populism, and as we saw in chapter 20, the movement is on a demographic road to nowhere.

Still, the appeal of regressive ideas is perennial, and the case for reason, science, humanism, and progress always has to be made.

Remember your math: an anecdote is not a trend. Remember your history: the fact that something is bad today doesn’t mean it was better in the past. Remember your philosophy: one cannot reason that there’s no such thing as reason, or that something is true or good because God said it is. And remember your psychology: much of what we know isn’t so, especially when our comrades know it too.

Keep some perspective. Not every problem is a Crisis, Plague, Epidemic, or Existential Threat, and not every change is the End of This, the Death of That, or the Dawn of a Post-Something Era. Don’t confuse pessimism with profundity: problems are inevitable, but problems are solvable, and diagnosing every setback as a symptom of a sick society is a cheap grab for gravitas. Finally, drop the Nietzsche.

Leonardo da Vinci

Leonardo da Vinci Book Cover Leonardo da Vinci
Walter Isaacson
Biography & Autobiography
Simon & Schuster
October 2, 2018
624

Another top five of 2018. One of the best biographies I've read. It will sit alongside Isaacson's Benjamin Franklin. I feel like Da Vinci could be a cautionary tale for those of us trying to get better all of the time. It's a letter to the lifehackers, the continuous improvers, everyone looking for the perfect way to start (or end) your day. Da Vinci was a fascinating character, and he either did not or could not care about any of that stuff. He was an observer. It is a reminder to not let the creativity get beat out of you by "school." Nothing has to be so regimented. Just be. Watch. Listen. Absorb. Let your mind go where it chooses.

Largely due to his work, dimensionality became the supreme innovation of Renaissance art.

I embarked on this book because Leonardo da Vinci is the ultimate example of the main theme of my previous biographies: how the ability to make connections across disciplines—arts and sciences, humanities and technology—is a key to innovation, imagination, and genius.

Leonardo had almost no schooling and could barely read Latin or do long division. His genius was of the type we can understand, even take lessons from. It was based on skills we can aspire to improve in ourselves, such as curiosity and intense observation. He had an imagination so excitable that it flirted with the edges of fantasy, which is also something we can try to preserve in ourselves and indulge in our children.

Vision without execution is hallucination. But I also came to believe that his ability to blur the line between reality and fantasy, just like his sfumato techniques for blurring the lines of a painting, was a key to his creativity. Skill without imagination is barren. Leonardo knew how to marry observation and imagination, which made him history’s consummate innovator.

Kenneth Clark called “the most relentlessly curious man in history.”6

Over and over again, year after year, Leonardo lists things he must do and learn.

I did learn from Leonardo how a desire to marvel about the world that we encounter each day can make each moment of our lives richer.

The painter Giorgio Vasari, born in 1511 (eight years before Leonardo died), wrote the first real art history book, Lives of the Most Eminent Painters, Sculptors, and Architects,

Leonardo was not always a giant. He made mistakes. He went off on tangents, literally, pursuing math problems that became time-sucking diversions. Notoriously, he left many of his paintings unfinished, most notably the Adoration of the Magi, Saint Jerome in the Wilderness, and the Battle of Anghiari.

His ability to combine art, science, technology, the humanities, and imagination remains an enduring recipe for creativity. So, too, was his ease at being a bit of a misfit: illegitimate, gay, vegetarian, left-handed, easily distracted, and at times heretical.

Above all, Leonardo’s relentless curiosity and experimentation should remind us of the importance of instilling, in both ourselves and our children, not just received knowledge but a willingness to question it—to be imaginative and, like talented misfits and rebels in any era, to think different.

Other than a little training in commercial math at what was known as an “abacus school,” Leonardo was mainly self-taught.

His lack of reverence for authority and his willingness to challenge received wisdom would lead him to craft an empirical approach for understanding nature that foreshadowed the scientific method developed more than a century later by Bacon and Galileo. His method was rooted in experiment, curiosity, and the ability to marvel at phenomena that the rest of us rarely pause to ponder after we’ve outgrown our wonder years.

In 1452 Johannes Gutenberg had just opened his publishing house, and soon others were using his moveable-type press to print books that would empower unschooled but brilliant people like Leonardo.

The Ottoman Turks were about to capture Constantinople, unleashing on Italy a migration of fleeing scholars with bundles of manuscripts containing the ancient wisdom of Euclid, Ptolemy, Plato, and Aristotle. Born within about a year of Leonardo were Christopher Columbus and Amerigo Vespucci.

The whale fossil triggered a dark vision of what would be, throughout his life, one of his deepest forebodings, that of an apocalyptic deluge.

There was no place then, and few places ever, that offered a more stimulating environment for creativity than Florence in the 1400s. Its economy, once dominated by unskilled wool-spinners, had flourished by becoming one that, like our own time, interwove art, technology, and commerce.

It was also a center of banking; the florin, noted for its gold purity, was the dominant standard currency in all of Europe, and the adoption of double-entry bookkeeping that recorded debits and credits permitted commerce to flourish.

Shops became studios. Merchants became financiers. Artisans became artists.7

Unlike some city-states elsewhere in Italy, Florence was not ruled by hereditary royalty. More than a century before Leonardo arrived, the most prosperous merchants and guild leaders crafted a republic whose elected delegates met at the Palazzo della Signoria, now known as the Palazzo Vecchio.

Exercising power from behind its façade was the Medici family, the phenomenally wealthy bankers who dominated Florentine politics and culture during the fifteenth century without holding office or hereditary title. (In the following century they became hereditary dukes, and lesser family members became popes.)

After Cosimo de’ Medici took over the family bank in the 1430s, it became the largest in Europe. By managing the fortunes of the continent’s wealthy families, the Medici made themselves the wealthiest of them all.

Cosimo supported the rebirth of interest in antiquity that was at the core of Renaissance humanism.

During his twenty-three-year reign, he would sponsor innovative artists, including Botticelli and Michelangelo, as well as patronize the workshops of Andrea del Verrocchio, Domenico Ghirlandaio, and Antonio del Pollaiuolo, which were producing paintings and sculptures to adorn the booming city.

The legacy of two such polymaths had a formative influence on Leonardo. The first was Filippo Brunelleschi (1377–1446), the designer of the cathedral dome.

Vitruvius’s paean to classical proportions, De Architectura.

Brunelleschi also rediscovered and greatly advanced the classical concepts of visual perspective, which had been missing in the art of the Middle Ages.

Brunelleschi showed how parallel lines seemed to converge in the distance toward a vanishing point. His formulation of linear perspective transformed art and also influenced the science of optics, the craft of architecture, and the uses of Euclidean geometry.11
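(A note of my own, not Isaacson's: the geometry here is compact. Model the eye at the origin and the picture plane at distance f in front of it; a point in the scene then projects to

$$ (x,\, y,\, z) \;\mapsto\; \left( \frac{f x}{z},\ \frac{f y}{z} \right) $$

Any family of parallel lines with direction (a, b, c), c ≠ 0, maps to lines that all pass through the single image point (fa/c, fb/c), which is Brunelleschi's vanishing point.)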

Leon Battista Alberti (1404–1472), who refined many of Brunelleschi’s experiments and extended his discoveries about perspective.

Alberti wrote his masterpiece analyzing painting and perspective, On Painting, the Italian edition of which was dedicated to Brunelleschi.

Alberti, on the other hand, was dedicated to sharing his work, gathering a community of intellectual colleagues who could build on each other’s discoveries, and promoting open discussion and publication as a way to advance the accumulation of learning.

Alberti’s On Painting expanded on Brunelleschi’s analysis of perspective by using geometry to calculate how perspective lines from distant objects should be captured on a two-dimensional pane.

Leonardo’s only formal learning was at an abacus school, an elementary academy that emphasized the math skills useful in commerce.

A left-hander, Leonardo wrote from right to left on a page,

“They are not to be read save with a mirror,” as Vasari described these pages. Some have speculated that he adopted this script as a code to keep his writings secret, but that is not true; it can be read, with or without a mirror. He wrote that way because when using his left hand he could glide leftward across the page without smudging the ink.

Being left-handed also affected Leonardo’s method of drawing. As with his writing, he drew from right to left so as not to smudge the lines with his hand.16 Most artists draw hatching strokes that slope upward to the right, like this: ////. But Leonardo’s hatching was distinctive because his lines started on the lower right and moved upward to the left, like this: \\\\.

Being left-handed was not a major handicap, but it was considered a bit of an oddity, a trait that conjured up words like sinister and gauche rather than dexterous and adroit, and it was one more way in which Leonardo was regarded, and regarded himself, as distinctive.

Around the time Leonardo was fourteen, his father was able to secure for him an apprenticeship with one of his clients, Andrea del Verrocchio, a versatile artist and engineer who ran one of the best workshops in Florence.

When Leonardo arrived, Verrocchio’s workshop was creating an ornate tomb for the Medici, sculpting a bronze statue of Christ and Saint Thomas, designing banners of white taffeta gilded with flowers of silver and gold for a pageant, curating the Medici’s antiques, and generating Madonna paintings for merchants who wanted to display both their wealth and their piety.

Verrocchio’s bottega, like those of his five or six main competitors in Florence, was more like a commercial shop, similar to the shops of the cobblers and jewelers along the street, than a refined art studio. On the ground floor was a store and workroom, open to the street, where the artisans and apprentices mass-produced products from their easels, workbenches, kilns, pottery wheels, and metal grinders.

The goal was to produce a constant flow of marketable art and artifacts rather than nurture creative geniuses yearning to find outlets for their originality.

Unlike Michelangelo’s iconic marble statue of a muscular David as a man, Verrocchio’s David seems to be a slightly effeminate and strikingly pretty boy of about fourteen.

Nevertheless there are reasons to think that Leonardo posed for Verrocchio’s David.

Verrocchio’s art was sometimes criticized as workmanlike. “The style of his sculpture and painting tended to be hard and crude, since it came from unremitting study rather than any inborn gift,” Vasari wrote. But his statue of David is a beautiful gem that influenced the young Leonardo.

That ability to convey the subtleties of motion in a piece of still art was among Verrocchio’s underappreciated talents, one that Leonardo would adopt and then far surpass in his paintings.

The beauty of geometry.

For Leonardo, the drapery studies helped foster one of the key components of his artistic genius: the ability to deploy light and shade in ways that would better produce the illusion of three-dimensional volume on a two-dimensional surface.

“The first intention of the painter,” Leonardo later wrote, “is to make a flat surface display a body as if modeled and separated from this plane, and he who surpasses others in this skill deserves most praise.

In his Benois Madonna, for example, he painted the Virgin Mary’s blue dress in shades ranging from almost white to almost black.

The term sfumato derives from the Italian word for “smoke,” or more precisely the dissipation and gradual vanishing of smoke into the air. “Your shadows and lights should be blended without lines or borders in the manner of smoke losing itself in the air,” he wrote in a series of maxims for young painters.

The hooked nose and jutting jaw create a profile that became a leitmotif in Leonardo’s drawings, that of a gruff old warrior, noble but faintly farcical.

In Washington, DC’s National Gallery there is a marble relief of a young Alexander the Great, attributed to Verrocchio and his workshop, which features a similar ornate helmet with a winged dragon, a breastplate adorned with a roaring face, and the profusion of curls and fluttering swirls that the master imparted to his apprentice.

The glory of being an artist, he realized, was that reality should inform but not constrain.

With the Baptism of Christ, Verrocchio went from being Leonardo’s teacher to being his collaborator. He had helped Leonardo learn the sculptural elements of painting, especially modeling, and also the way a body twists in motion. But Leonardo, with thin layers of oil both translucent and transcendent, and his ability to observe and imagine, was now taking art to an entirely different level.

There was another reason, one even more fundamental, that Leonardo did not complete the painting: he preferred the conception to the execution. As his father and others knew when they drew up the strict contract for his commission, Leonardo at twenty-nine was more easily distracted by the future than he was focused on the present. He was a genius undisciplined by diligence.

All of Leonardo’s paintings are psychological, and all give vent to his desire to portray emotions, but none more intensely than Saint Jerome. The saint’s entire body, through its twists and uncomfortable kneeling, conveys passion. The painting also represents Leonardo’s first anatomical drawing and—as he fiddled with and revised it over the years—shows the intimate connection between his anatomical and artistic endeavors.

The curator of drawings at Windsor, Martin Clayton, came up with the most convincing explanation. He posited that the painting was done in two phases, the first around 1480 and the other following the dissection studies he made in 1510.

The significance of this goes beyond helping us understand the anatomical aspects of the Saint Jerome. It shows that Leonardo’s record of unreliability was not simply because he decided to give up on certain paintings. He wanted to perfect them, so he kept hold of many of them for years, making refinements.

Even some of his commissions that were completed, or almost so—Ginevra de’ Benci and the Mona Lisa, for example—were never delivered to clients.

He did not like to let go. That is why he would die with some of his masterpieces still near his bedside.

He knew that there was always more he might learn, new techniques he might master, and further inspirations that might strike him. And he was right.

“The good painter has to paint two principal things, man and the intention of his mind,” he wrote.

His inability to finish the Adoration of the Magi and Saint Jerome may have been caused by, and in turn contributed to, melancholy or depression. His notebooks from around 1480 are filled with expressions of gloom, even anguish.

“Tell me if anything was ever done . . . Tell me . . . Tell me.”

“There is no perfect gift without great suffering. Our glories and our triumphs pass away.”

Leonardo and Atalante were probably part of a February 1482 diplomatic delegation headed by Bernardo Rucellai, a wealthy banker, arts patron, and philosophy enthusiast who was married to Lorenzo’s older sister and had just been made Florence’s ambassador to Milan.2 In his writings, Rucellai introduced the term balance of power to describe the continuous conflicts and shifting alliances involving Florence, Milan, other Italian city-states, plus a pride of popes, French kings, and Holy Roman emperors. The competition among the various rulers was not only military but cultural, and Leonardo sought to be useful on both fronts.

Milan, with 125,000 citizens, was three times the size of Florence. More important for Leonardo, it had a ruling court. The Medici in Florence were generous supporters of the arts, but they were bankers who operated behind the scenes.

In other words, Milan’s castle provided a perfect environment for Leonardo, who had a fondness for strong leaders, loved the diversity of talent they attracted, and aspired to be on a comfortable retainer.

THE JOB APPLICATION

Most illustrious Lord, Having now sufficiently studied the inventions of all those who proclaim themselves skilled contrivers of instruments of war, and having found that these instruments are no different than those in common use, I shall be bold enough to offer, with all due respect to the others, my own secrets to your Excellency and to demonstrate them at your Convenience.
1) I have designed extremely light and strong bridges, adapted to be easily carried, and with them you may pursue and at any time flee from the enemy; and others, indestructible by fire and battle, easy to lift and place. Also methods of burning and destroying those of the enemy.

2) I know how, during a siege, to take the water out of the trenches, and make an infinite variety of bridges, covered ways, ladders, and other machines suitable to such expeditions.

3) If a place under siege cannot be reduced by bombardment, because of the height of its banks or the strength of its position, I have methods for destroying any fortress even if it is founded upon solid rock.

4) I have kinds of cannons, convenient and easy to carry, that can fling small stones almost resembling a hailstorm; and the smoke of these will cause great terror to the enemy, to his great detriment and confusion.

9) [Leonardo moved up this item in the draft.] And when the fight is at sea, I have many kinds of efficient machines for offense and defense, and vessels that will resist the attack of the largest guns, and powder and fumes.

5) I have ways of making, without noise, underground tunnels and secret winding passages to arrive at a desired point, even if it is necessary to pass underneath trenches or a river.

6) I will make unassailable armored chariots that can penetrate the ranks of the enemy with their artillery, and there is no body of soldiers so great that it could withstand them. And behind these, infantry could follow quite unhurt.

7) In case of need I will make cannons and artillery of beautiful and useful design that are different from those in common use.

8) Where bombardment will not work, I can devise catapults, mangonels, caltrops and other effective machines not in common use.

10) In times of peace I can give perfect satisfaction and be the equal of any other in architecture and the composition of buildings public and private; and in guiding water from one place to another. Also, I can execute sculpture in marble, bronze and clay. Likewise in painting, I can do everything possible, as well as any other man, whosoever he may be. Moreover, work could be undertaken on the bronze horse, which will be to the immortal glory and eternal honor of His Lordship, your father, and of the illustrious house of Sforza. And if any of the above-mentioned things seem impossible or impracticable to anyone, I am most readily disposed to demonstrate them in your park or in whatsoever place shall please Your Excellency.

Leonardo mentioned none of his paintings. Nor did he refer to the talent that ostensibly caused him to be sent to Milan: an ability to design and play musical instruments. What he mainly pitched was a pretense of military engineering expertise.

Leonardo cast himself as an engineer because he was going through one of his regular bouts of being bored or blocked by the prospect of picking up a brush.

After settling into Milan, he would in fact begin to pursue military engineering earnestly and come up with some innovative concepts for machines, even as he continued to dance around the line between ingenuity and fantasy.

Here is our gentle and beloved Leonardo, who became a vegetarian because of his fondness for all creatures, wallowing in horrifying depictions of death. It is, perhaps, yet another glimpse of his inner turmoil. Within his dark cave was a demon imagination.

Leonardo was a pioneer in propounding laws of proportion: how one quantity, such as force, rises in proportion to another, such as the length of a lever.
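(My gloss, standard statics rather than anything quoted in the book: the proportion at stake is the law of the lever. A lever balances when

$$ F_1 d_1 = F_2 d_2 $$

so the load a fixed effort can hold rises in direct proportion to the length of the effort's arm.)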

How serious was Leonardo? Was he merely being clever on paper and trying to impress Ludovico? Was the giant crossbow another example of his ingenuity blurring into fantasy? I believe his proposal was serious. He made more than thirty preparatory drawings, and he detailed with precision the gears, worm screws, shafts, triggers, and other mechanisms.

Leonardo would be known for paintings, monuments, and inventions that he conceived but never brought to fruition. The giant crossbow falls into that category.

That was also true, it turned out, for most of the military devices he conceived and drew during the 1480s. “I will make unassailable armored chariots,” he promised in his letter to Ludovico.

Only one of Leonardo’s military conceptions is known to have made it off the pages of his notebooks and onto the battlefield, and he arguably deserves priority as its inventor. The wheellock, or wheel lock, which he devised in the 1490s, was a way to create a spark for igniting the gunpowder in a musket or similar hand-carried weapon. When the trigger was pulled, a metal wheel was set spinning by a spring. As it scraped against a stone, it sparked enough heat to ignite the gunpowder.

The wheellock came into use in Italy and Germany around that time and proved to be influential in facilitating both warfare and the personal use of guns.

Leonardo would not be involved in military activity until 1502, when he went to work for a more difficult and tyrannical strongman, Cesare Borgia.

The best example was his set of plans for a utopian city, which was a favorite subject for Italian Renaissance artists and architects. Milan had been ravaged in the early 1480s by three years of the bubonic plague, which killed close to one-third of its inhabitants. With his scientific instincts, Leonardo realized that the plague was spread by unsanitary conditions and that the health of the citizens was related to the health of their city.

The population of Milan would be relocated to ten new towns, designed and built from scratch along the river, in order to “disperse its great congregation of people which are packed like goats one behind the other, filling every place with fetid smells and sowing seeds of pestilence and death.”

He applied the classic analogy between the microcosm of the human body and the macrocosm of the earth: cities are organisms that breathe and have fluids that circulate and waste that needs to move.

Leonardo’s idea was to combine the streets and canals into a unified circulation system. The utopian city he envisioned would have two levels: an upper level designed for beauty and pedestrian life, and a level hidden below for canals, commerce, sanitation, and sewage.

Unlike the cramped streets of Milan, which Leonardo realized led to the spread of disease, the boulevards in the new town would be at least as wide as the height of the houses. To keep these boulevards clean, they would be sloped to the middle to allow rainwater to drain through central slits into a sewer circulation system below.

In collecting such a medley of ideas, Leonardo was following a practice that had become popular in Renaissance Italy of keeping a commonplace and sketch book, known as a zibaldone.

On the center-left is a figure Leonardo loved to draw or doodle: a semiheroic, craggy old man with a long nose and jutting chin. Wearing a toga, he looks both noble and slightly comic.

And we will see variations of this craggy character reappearing often in his notebooks.

The result is a seamless connection of geometry to nature and a glimpse into Leonardo’s art of spatial thinking.

A fundamental theme in his art and science: the interconnectedness of nature, the unity of its patterns, and the analogy between the workings of the human body and those of the earth.

Another set of drawings that Leonardo produced for the amusement of the Sforza court were pen-and-ink caricatures of funny-looking people he dubbed “visi mostruosi” (monstrous faces), which are now commonly called his “grotesques.”

As he wrote in his notes for his treatise on painting, “If the painter wishes to see beauties that charm him, it lies within his power to create them; and if he wishes to see monstrosities that are frightful, buffoonish, or ridiculous, or pitiable, he can be lord thereof.”

In notes for his treatise on painting, Leonardo recommended to young artists this practice of walking around town, finding people to use as models, and recording the most interesting ones in a portable notebook: “Take a note of them with slight strokes in a little book which you should always carry with you,” he wrote. “The positions of the people are so infinite that the memory is incapable of retaining them, which is why you should keep these sketches as your guides.”

An early example of a theme that Leonardo would return to repeatedly until the end of his life: cataclysmic scenes of destruction and deluge that consume all earthly

He was not motivated by wealth or material possessions. In his notebooks, he decried “men who desire nothing but material riches and are absolutely devoid of the desire for wisdom, which is the sustenance and truly dependable wealth of the mind.”2

“In narrative paintings you should closely intermingle direct opposites, because they offer a great contrast to each other, especially when they are adjacent. Thus, have the ugly one next to the beautiful, the large next to the small, the old next to the young.”

Sought to harmonize the proportions of a human to that of a church, an effort that would culminate with an iconic drawing by Leonardo that came to symbolize the harmonious relationship between man and the universe.

Leonardo made a philosophical pitch that drew on the analogy, of which he was so fond, between human bodies and buildings. “Medicines, when properly used, restore health to invalids, and a doctor will make the right use of them if he understands the nature of man,” he wrote. “This too is what the sick cathedral needs—it needs a doctor-architect, who understands the nature of the building and the laws on which correct construction is based.”

The greater the weight placed on the arches, the less the arch transmits the weight to the columns.

Marcus Vitruvius Pollio, born around 80 BC, served in the Roman army under Caesar and specialized in the design and construction of artillery machines.

Vitruvius later became an architect and worked on a temple, no longer in existence, in the town of Fano in Italy. His most important work was literary, the only surviving book on architecture from classical antiquity: De Architectura, known today as The Ten Books on Architecture.13 For many dark centuries, Vitruvius’s work had been forgotten, but in the early 1400s it was one of the many pieces of classical writing, including Lucretius’s epic poem On the Nature of Things and Cicero’s orations, that were rediscovered and collected by the pioneering Italian humanist Poggio Bracciolini.

More broadly, Vitruvius’s belief that the proportions of man are analogous to those of a well-conceived temple—and to the macrocosm of the world—became central to Leonardo’s worldview.

When Leonardo drew his Vitruvian Man, he had a lot of interrelated ideas dancing in his imagination. These included the mathematical challenge of squaring the circle, the analogy between the microcosm of man and the macrocosm of earth, the human proportions to be found through anatomical studies, the geometry of squares and circles in church architecture, the transformation of geometric shapes, and a concept combining math and art that was known as “the golden ratio” or “divine proportion.”

“Though I have no power to quote from authors as they have,” he proclaimed almost proudly, “I shall rely on a far more worthy thing—on experience.”1 Throughout his life, he would repeat this claim to prefer experience over received scholarship. “He who has access to the fountain does not go to the water-jar,” he wrote.2 This made him different from the archetypal Renaissance Man, who embraced the rebirth of wisdom that came from rediscovered works of classical antiquity.

We can see a turning point in the early 1490s, when he undertook to teach himself Latin, the language not only of the ancients but also of serious scholars of his era.

In that regard, Leonardo was born at a fortunate moment. In 1452 Johannes Gutenberg began selling Bibles from his new printing press, just when the development of rag processing was making paper more readily available.

Leonardo thus was able to become the first major European thinker to acquire a serious knowledge of science without being formally schooled in Latin or Greek.

In the late 1480s he itemized five books he owned: the Pliny, a Latin grammar book, a text on minerals and precious stones, an arithmetic text, and a humorous epic poem, Luigi Pulci’s Morgante, about the adventures of a knight and the giant he converted to Christianity, which was often performed at the Medici court.

Thus Leonardo became a disciple of both experience and received wisdom. More important, he came to see that the progress of science came from a dialogue between the two. That in turn helped him realize that knowledge also came from a related dialogue: that between experiment and theory.

A natural observer and experimenter, he was neither wired nor trained to wrestle with abstract concepts. He preferred to induce from experiments rather than deduce from theoretical principles.

He would try to look at facts and from them figure out the patterns and natural forces that caused those things to happen.

Scholastic theologians of the Middle Ages had fused Aristotle’s science with Christianity to create an authorized creed that left little room for skeptical inquiry or experimentation. Even the humanists of the early Renaissance preferred to repeat the wisdom of classical texts rather than test it.

When he began absorbing knowledge from books in the 1490s, it helped him realize the importance of being guided not only by experiential evidence but also by theoretical frameworks. More important, he came to understand that the two approaches were complementary, working hand in hand.

He even came to be dismissive of experimenters who relied on practice without any knowledge of the underlying theories. “Those who are in love with practice without theoretical knowledge are like the sailor who goes onto a ship without rudder or compass and who never can be certain whither he is going,” he wrote in 1510. “Practice must always be founded on sound theory.”

As a result, Leonardo became one of the major Western thinkers, more than a century before Galileo, to pursue in a persistent hands-on fashion the dialogue between experiment and theory that would lead to the modern Scientific Revolution. Aristotle had laid the foundations, in ancient Greece, for the method of partnering inductions and deductions: using observations to formulate general principles, then using these principles to predict outcomes.

Any person who puts “Describe the tongue of the woodpecker” on his to-do list is overendowed with the combination of curiosity and acuity.

His curiosity, like that of Einstein, often was about phenomena that most people over the age of ten no longer puzzle about: Why is the sky blue? How are clouds formed? Why can our eyes see only in a straight line? What is yawning?

We can, if we wish, not just marvel at him but try to learn from him by pushing ourselves to look at things more curiously and intensely.

Look carefully and separately at each detail.

Deep observation must be done in steps: “If you wish to have a sound knowledge of the forms of objects, begin with the details of them, and do not go on to the second step until you have the first well fixed in memory.”

Leonardo had a strategy he used to refine his observational skills. He would write down marching orders to himself, determining how he would sequence his observations in a methodical step-by-step way. “First define the motion of the wind and then describe how the birds steer through it with only the simple balancing of the wings and tail,” he wrote in one example. “Do this after the description of their anatomy.”

Leonardo thus realized, before other scientists, that a bird stays aloft not merely because the wings beat downward against the air but also because the wings propel the bird forward and the air lessens in pressure as it rushes over the wing’s curved top surface.
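(In modern terms, and this is my gloss, not Leonardo's or Isaacson's: the pressure drop he intuited is captured by Bernoulli's principle. Along a streamline,

$$ p + \tfrac{1}{2} \rho v^2 = \text{constant} $$

so where the air speeds up over the wing's curved top, its pressure p falls, and the pressure difference between the lower and upper surfaces helps hold the bird up.)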

He used drawing as a tool for thinking.

A major enterprise of the late Renaissance was finding a way to equalize the power of an unwinding spring.

His mechanical ingenuity is combined with his artistic passion for spirals and curls.

Leonardo understood the concept of what he called impetus, which is what happens when a force pushes an object and gives it momentum. “A body in motion desires to maintain its course in the line from which it started,” he wrote. “Every movement tends to maintain itself; or, rather, every body in motion continues to move so long as the influence of the force that set it in motion is maintained in it.”9 Leonardo’s insights were a precursor to what Newton, two hundred years later, would make his first law of motion: that a body in motion will stay in the same motion unless acted upon by another force.

What prevents perpetual motion, Leonardo realized, is the inevitable loss of momentum in a system when it rubs against reality. Friction causes energy to be lost and prevents motion from being perpetual.

Through his work on machinery, Leonardo developed a mechanistic view of the world foreshadowing that of Newton. All movements in the universe—of human limbs and of cogs in machines, of blood in our veins and of water in rivers—operate according to the same laws, he concluded. These laws are analogous; the motions in one realm can be compared to those in another realm, and patterns emerge. “Man is a machine, a bird is a machine, the whole universe is a machine,” wrote Marco Cianchi in an analysis of Leonardo’s devices.

Leonardo increasingly came to realize that mathematics was the key to turning observations into theories. It was the language that nature used to write her laws. “There is no certainty in sciences where mathematics cannot be applied,” he declared.

One of Leonardo’s close friends at Milan’s court was Luca Pacioli, a mathematician who developed the first widely published system for double-entry bookkeeping.

His sixty illustrations for Pacioli were the only drawings he published during his lifetime.

Leonardo’s mastery of perspective added to the three-dimensional look. He could envision the shapes in his head as real objects, then convey them on the page.

Leonardo became the first person to discover the center of gravity of a triangular pyramid (one-quarter of the way up a line from the base to the peak).
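(Easy to verify with modern coordinates, my sketch: the center of gravity of a uniform tetrahedron is the average of its four vertices. Put the three base vertices at height 0 and the apex at height h; the centroid's height is then

$$ \bar{z} = \frac{0 + 0 + 0 + h}{4} = \frac{h}{4} $$

exactly one-quarter of the way up from base to peak, just as Leonardo found.)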

Pacioli’s book focused on the golden ratio, or divine proportion, an irrational number that expresses a ratio that pops up often in number series, geometry, and art. It is approximately 1.61803398, but (being irrational) has decimals that stretch on randomly forever. The golden ratio occurs when you divide a line into two parts in such a way that the ratio between the whole length and the longer part is equal to the ratio between the longer part and the shorter part. For example, take a line that’s 100 inches long and divide it into two parts of 61.8 inches and 38.2 inches. That comes close to the golden ratio, because 100 divided by 61.8 is about the same as 61.8 divided by 38.2; in both cases, it’s approximately 1.618.

Euclid wrote about this ratio in around 300 BC, and it has fascinated mathematicians ever since. Pacioli was the first to popularize the name divine proportion for it. In his book by that title, he described the way it turns up in studies of geometric solids such as cubes and prisms and polyhedrons. In popular lore, including in Dan Brown’s The Da Vinci Code, the golden ratio is found throughout Leonardo’s art.11 If so, it is doubtful it was intentional.

As an artist, Leonardo was particularly interested in how the shapes of objects transformed when they moved. From his observations on the flow of water, he developed an appreciation for the idea of the conservation of volume: as a quantity of water flows, its shape changes, but its volume remains exactly the same.

An example would be if you took a square and transformed it into a circle with the exact same area. A three-dimensional example would be showing how a sphere could be transformed into a cube with the same volume.

By grappling with these transformations and persistently recording his insights, Leonardo helped to pioneer the field of topology, which looks at how shapes and objects can undergo transformations while keeping some of the same properties.

We now know that a mathematical process for squaring a circle requires use of a transcendental number, in this case π, which cannot be expressed as a fraction and is not the root of any polynomial with rational coefficients.
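(Spelling that out, my note: a circle of radius r has area πr², so a square of equal area must have side

$$ s = r\sqrt{\pi} $$

Compass-and-straightedge constructions can only produce algebraic lengths, so once Lindemann proved in 1882 that π is transcendental, √π, and with it the exact squaring of the circle, was ruled out.)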

Leonardo wanted to know how psychological emotions led to physical motions. As a result, he would also become interested in the way the nervous system works and how optical impressions are processed.

When he moved to Milan, he discovered that the study of anatomy there was pursued primarily by medical scholars rather than by artists.

If there were not so much else to remember him for, Leonardo could have been celebrated as a pioneer of dentistry.

Beginning with the drapery studies done in Verrocchio’s studio, Leonardo mastered the art of rendering light hitting rounded and curved objects. Now he was deploying that art to transform, and make beautiful, the study of anatomy.

More important, his fascination with the connection between the mind and the body became a key component of his artistic genius: showing how inner emotions are manifest in outward gestures. “In painting, the actions of the figures are, in all cases, expressive of the purpose of their minds,” he wrote.14 As he was finishing his first round of anatomical studies, he was beginning work on what would be the greatest expression in the history of art of that maxim, The Last Supper.

“When the arm is bent, the fleshy part shrinks to two-thirds of its length,” he recorded. “When a man kneels down he will diminish by the fourth part of his height. . . . When a heel is raised, the tendon and ankle get closer to each other by a finger’s breadth. . . . When a man sits down, the distance from his seat to the top part of his head will be half of his height plus the thickness and length of the testicles.”

Plus the thickness and length of the testicles? Once again it is useful to pause and marvel. Why the obsessiveness? Why the need for reams of data? Partly, at least initially, it was to help him paint humans, or horses, in various poses and movements. But there was something grander involved. Leonardo had set for himself the most magnificent of all tasks for the mind of mankind: nothing less than knowing fully the measure of man and how he fits into the cosmos. In his notebook, he proclaimed his intention to fathom what he called “universale misura del huomo,” the universal measure of man.17 It was the quest that defined Leonardo’s life, the one that tied together his art and his science.

Leonardo was a master at storytelling and conveying a sense of dramatic motion, and like many of his paintings, beginning with the Adoration of the Magi, the Virgin of the Rocks is a narrative. In his first version of the painting, the androgynous curly-haired angel begins the narrative by looking out directly from the scene, catching our eye, smiling enigmatically, and pointing to make us look at the baby Saint John. John in turn is dropping to his knees and clasping his hands in reverence toward the baby Jesus, who returns the gesture with a sign of blessing. The Madonna, her body twisted in motion, glances down at John and grasps his shoulder protectively while hovering her other hand over Jesus. And as our eyes finish a clockwise rotation of the scene, we notice the left hand of the angel holding Jesus as he leans on the rocky precipice over a pond, his hand touching the ledge. Taken in as a whole, it becomes a sequential medley of hand gestures presaging The Last Supper.

Leonardo’s apprentices and students did not merely copy his designs. A show at the Louvre in 2012 featured paintings that students and assistants in his workshop did of his masterpieces. Many were variations that were produced alongside his original, indicating that he and his colleagues were together exploring various alternative approaches to the planned painting. While Leonardo worked on the master version, other versions were being painted under his supervision.20

The angel, like the one he painted for Verrocchio’s Baptism of Christ, is an example of Leonardo’s proclivity for gender fluidity.

The drawing is fascinating because it is one of the best displays of Leonardo’s genius as a draftsman. With a few simple lines and brilliant strokes, concise and precise, he is able to create a sketch of unsurpassed beauty. At first glance it captivates you, then its deceptive simplicity draws you into a prolonged and profound engagement.

The drawing is an exquisite example of Leonardo’s use of hatching to create shadows and texture. These parallel strokes are delicate and tight in some places (the shadow on her left cheek) and bold and spacious in others (her back shoulder). The variations in the hatching allow, with just simple strokes, wondrous gradations of shadow and subtle blurring of contours.

Like the angel in the Louvre’s Virgin of the Rocks, she stares out at us even as her left eye drifts. As you walk back and forth, her eyes follow you. She drinks you in.

Portrait of a Musician (fig. 67), painted in the mid-1480s, is his only known portrait of a man; there are no surviving records or contemporary mentions of it.

And unlike Leonardo’s other works, his body faces in the same direction as his gaze, with no sense of movement.

Cecilia Gallerani’s alluring beauty would be captured for the ages. At the height of their relationship, around 1489, when she was fifteen, Ludovico commissioned Leonardo to paint her portrait.

The result is a stunning and innovative masterpiece, in many ways the most delightful and charming of Leonardo’s paintings. Other than the Mona Lisa, it is my favorite of his works.

Painted in oil on a walnut panel, the portrait of Cecilia, now known as Lady with an Ermine, was so innovative, so emotionally charged and alive, that it helped to transform the art of portraiture. The twentieth-century art historian John Pope-Hennessy called it the “first modern portrait” and “the first painting in European art to introduce the idea that a portrait may express the sitter’s thoughts through posture and gestures.”6 Instead of being shown in profile, as was traditional, she is in three-quarters view.

The twisting head and body, a form of contrapposto, had become one of Leonardo’s lively signatures, such as in the angel of Virgin of the Rocks.

He gave a rigorous scientific and aesthetic defense of painting, which was then considered a mechanical art, arguing that it should instead be regarded as the highest of the liberal arts, transcending poetry and music and sculpture.

This type of staged debate on the comparative value of various intellectual endeavors, ranging from math to philosophy to art, was a staple of evenings at the Sforza Castle. Known as a paragone, from the Italian word for “comparison,” such a discourse was a way for artists and scholars to attract patrons and elevate their social status during the Italian Renaissance.

One such paragone was presented in 1489 by Francesco Puteolano, who argued that poetry and historical writing were most important. The reputations and memories of the great rulers, including Caesar and Alexander the Great, came from historians rather than sculptors or painters, he said.

The goal of Leonardo’s argument was to elevate the work of painters—and their social status—by linking their art to the science of optics and the mathematics of perspective. By exalting the interplay between art and science, Leonardo wove an argument that was integral to understanding his genius: that true creativity involves the ability to combine observation with imagination, thereby blurring the border between reality and fantasy. A great painter depicts both, he said.

He had first tackled the complexities of shadows when drawing draperies as an exercise in Verrocchio’s studio. He came to understand that the use of shadows, not lines, was the secret to modeling three-dimensional objects on a two-dimensional surface. The primary goal of a painter, Leonardo declared, “is to make a flat surface display a body as if modeled and separated from this plane.” This crowning achievement of painting “arises from light and shade.”

Reading his studies on reflected light provides us with a deeper appreciation for the subtleties of the light-dappled shadow on the edge of Cecilia’s hand in Lady with an Ermine or the Madonna’s hand in Virgin of the Rocks, and it reminds us why these are innovative masterpieces. Studying the paintings, in turn, leads to a more profound understanding of Leonardo’s scientific inquiry into rebounding and reflected light. This iterative process was true for him as well: his analysis of nature informed his art, which informed his analysis of nature.

He realized that nature itself, independent of how our eyes perceive it, does not have precise lines.

In his mathematical studies, he made a distinction between numerical quantities, which involve discrete and indivisible units, and continuous quantities of the sort found in geometry, which involve measurements and gradations that are infinitely divisible. Shadows are in the latter category; they come in continuous, seamless gradations rather than in discrete units that can be delineated. “Between light and darkness there is infinite variation, because their quantity is continuous,” he wrote.

That was not a radical proposition. But Leonardo then took a further step. Nothing in nature, he realized, has precise mathematical lines or boundaries or borders. “Lines are not part of any quantity of an object’s surface, nor are they part of the air which surrounds this surface,” he wrote. He realized that points and lines are mathematical constructs.

Instead an artist needs to represent the shape and volume of objects by relying on light and shadow.

Leonardo’s insistence that all boundaries, both in nature and in art, are blurred led him to become the pioneer of sfumato, the technique of using hazy and smoky outlines such as those so notable in the Mona Lisa.

Sfumato is not merely a technique for modeling reality more accurately in a painting. It is an analogy for the blurry distinction between the known and the mysterious, one of the core themes of Leonardo’s life.

Like much of his science, his optics research was begun to help inform his art, but by the 1490s he was pursuing it with a relentless, seemingly insatiable and pure curiosity.

In a small notebook sketch done late in his life, which historian James Ackerman called “a token of one of the most consequential changes in the history of Western art,” Leonardo shows a receding row of trees. Each one loses a little detail, until the ones near the horizon are just a simple shape devoid of individual branches. Even in his botanical drawings and the depiction of plants in some of his paintings, leaves in the foreground are more distinct than those in the background.

Acuity perspective is related to what Leonardo called aerial perspective: things become blurrier in the distance not only because their details disappear as they become smaller but also because the air and mists soften distant objects.

When Leonardo was painting The Last Supper (fig. 74), spectators would visit and sit quietly just so they could watch him work. The creation of art, like the discussion of science, had become at times a public event.

Ludovico Sforza. Upon the death of his nephew, he had become the official Duke of Milan in early 1494, and he set about enhancing his stature in a time-honored way, through art patronage and public commissions.

When Leonardo was summoned by the duke, they ended up having a discussion of how creativity occurs. Sometimes it requires going slowly, pausing, even procrastinating. That allows ideas to marinate, Leonardo explained. Intuition needs nurturing. “Men of lofty genius sometimes accomplish the most when they work least,” he told the duke, “for their minds are occupied with their ideas and the perfection of their conceptions, to which they afterwards give form.”

By conveying ripples of motions and emotions, Leonardo was able not merely to capture a moment but to stage a drama, as if he were choreographing a theatrical performance.

The twelve apostles are clustered into groups of three. Starting on our left, we can sense the flow of time, as if the narrative moves from left to right. On the far left is the cluster of Bartholomew, James the Minor, and Andrew, all still showing the immediate reaction of surprise at Jesus’ announcement. Bartholomew, alert and tough, is in the process of leaping to his feet, “about to rise, his head forward,” as Leonardo wrote.

The second trio from the left is Judas, Peter, and John. Dark and ugly and hook-nosed, Judas clutches in his right hand the bag of silver he has been given for promising to betray Jesus, whose words he knows are directed at him. He rears back, knocking over a salt cellar (which is clearly visible in early copies but not the current painting) in a gesture that becomes notorious. He leans away from Jesus and is painted in shadow. Even as his body recoils and twists, his left hand reaches for the incriminating bread that he and Jesus will share. “He that dippeth his hand with me in the dish shall betray me,” Jesus says, according to Matthew. Or as in the gospel according to Mark, “Behold, the hand of him that betrayeth me is with me on the table.”

Peter is pugnacious and agitated, elbowing forward in indignation. “Who is it of whom he speaks?” he asks. He seems ready to take action. In his right hand is a long knife; he would, later that evening, slice off the ear of a servant of the high priest while trying to protect Jesus from the mob that came to arrest him.

By contrast, John is quiet, knowing that he is not suspect; he seems saddened by yet resigned to what he knows cannot be prevented. Traditionally, John is shown asleep or lying on Jesus’ breast. Leonardo shows him a few seconds later, after Jesus’ pronouncement, wilting sadly.

Dan Brown in his novel The Da Vinci Code, which draws on The Templar Revelation by Lynn Picknett and Clive Prince, wove a conspiracy theory that has as one piece of evidence the assertion that the effeminate-looking John is actually secretly meant to be Mary Magdalene, the faithful follower of Jesus.

Ross King points out in a book on The Last Supper, “On the contrary: Leonardo was skilled at blurring the differences between the sexes.”

Jesus, sitting alone in the center of The Last Supper, his mouth still slightly open, has finished making his pronouncement. The expressions of the other figures are intense, almost exaggerated, as if they are players in a pageant. But Jesus’ expression is serene and resigned. He looks calm, not agitated. He is slightly larger than the apostles, although Leonardo cleverly disguised the fact that he has used this trick. The open window with the bright landscape beyond forms a natural halo. His blue cloak is painted with ultramarine, the most expensive of pigments. In his studies of optics, Leonardo had discovered that objects against a light background look larger than when against a dark background.

The trio to the right of Jesus includes Thomas, James the Greater, and Philip. Thomas raises his index finger with his hand turned inward in a pointing gesture closely associated with Leonardo.

Later he will be known as doubting Thomas because he demanded proof of Jesus’ resurrection, which Jesus provided by letting Thomas place a finger in his wounds.

The final trio on the right comprises Matthew, Thaddeus, and Simon. They are already in a heated discussion about what Jesus may have meant. Look at the cupped right hand of Thaddeus.

Is he slapping his hand down as if to say, I knew it? Is he jerking his thumb toward Judas? Now look at Matthew. Are his two upturned palms gesturing toward Jesus or Judas? The viewer need not feel bad about being confused; in their own ways Matthew and Thaddeus are also confused about what has just occurred, and they are trying to sort it out and turning to Simon for answers.

Jesus’ right hand is reaching out to a stemless glass one-third filled with red wine. In a dazzling detail, his little finger is seen through the glass itself. Just beyond the glass are a dish and a piece of bread. His left hand is palm up, gesturing at another piece of bread, which he gazes at with downcast eyes.

That gesture and glance create the second moment that shimmers in the narrative of the painting: that of the institution of the Eucharist. In the gospel of Matthew, it occurs in the moment after the announcement of the betrayal: “Jesus took bread, and blessed it, and broke it, and gave it to the disciples, and said, ‘Take, eat; this is my body.’ And he took the cup, and gave thanks, and gave it to them, saying, ‘Drink ye all of it, for this is my blood of the new testament, which is shed for many for the remission of sins.’” This part of the narrative reverberates outward from Jesus, encompassing both the reaction to his revelation that Judas will betray him and the institution of the holy sacrament.

With a painting as large as The Last Supper, the viewer might see it from the front or the side or while walking past. That required what Leonardo called “complex perspective,” a mix of natural and artificial perspective. The artificial part was needed to adjust for the fact that a person looking at a very large painting would be closer to some parts of it than to others. “No surface can be seen exactly as it is,” Leonardo wrote, “because the eye that sees it is not equally remote from all its edges.”

In The Last Supper, the painted room diminishes in size so quickly that the back wall is just large enough to have three windows showing the landscape outside. The tapestries are not proportional. The table is too narrow for a comfortable supper, and the apostles are all on one side of it, where there are not enough places for them to sit. The floor is raked forward, like a stage, and the table is slanted a bit toward us as well. The characters are all at the forefront, as if in a play, and even their gestures are theatrical.

All told, The Last Supper is a mix of scientific perspective and theatrical license, of intellect and fantasy, worthy of Leonardo. His study of perspective science had not made him rigid or academic as a painter. Instead, it was complemented by the cleverness and ingenuity he had picked up as a stage impresario. Once he knew the rules, he became a master at fudging and distorting them, as if creating perspectival sfumato.

As a result, The Last Supper, both in its creation and in its current state, becomes not just an example of Leonardo’s genius but also a metaphor for it. It was innovative in its art and too innovative in its methods. The conception was brilliant but the execution flawed. The emotional narrative is profound but slightly mysterious, and the current state of the painting adds another thin veil of mystery to the ones that so often shroud Leonardo’s life and work.

But his life became unsettled in the late 1490s, after Caterina’s death and the completion of The Last Supper. The bronze for his horse monument had been redirected in 1494 to make cannon to defend against a possible French invasion, and it soon became clear that Ludovico was not going to replace it.

Larger forces intervened to rescue Leonardo from his employment concerns. In the summer of 1499, an invasion force sent by the new French king, Louis XII, was bearing down on Milan. Leonardo added up the money in his cash box, 1,280 lire, distributed some to Salai (20 lire) and others, and then proceeded to hide the rest in paper packets around his studio to keep it safe from invaders and looters.

The French were, it turned out, protective of Leonardo. The day after his arrival, the king went to see The Last Supper, and he even asked whether it might be possible to cart it back to France. Fortunately, his engineers told him it was impossible. Instead of fleeing, Leonardo spent the next few months working with the French.

In fact, he had forged a secret deal with the new French governor of Milan, the Count of Ligny, to meet him in Naples and act as a military engineer inspecting fortifications.

When Leonardo reached Florence in late March 1500, he found a city that had just lived through a reactionary spasm that threatened to destroy its role in the vanguard of Renaissance culture. In 1494 a radical friar named Girolamo Savonarola had led a religious rebellion against the ruling Medici and instituted a fundamentalist regime that imposed strict new laws against homosexuality, sodomy, and adultery.

On Mardi Gras of 1497 Savonarola led what became known as the “Bonfire of the Vanities,” in which books, art, clothing, and cosmetics were set aflame. The following year, popular opinion turned on him, and he was hanged and burned in the central square of Florence.

Friar Pietro, in one of his letters to the persistent Isabella, described a painting that Leonardo was doing at the request of Louis XII’s secretary, Florimond Robertet. “The little picture he is working on is of a Madonna who is seated as if she were about to spin yarn,” he wrote, “and the child has placed his foot in the basket of yarns and has grasped the yarnwinder, and stares attentively at the four spokes, which are in the form of a cross, and he smiles and grips it tightly, as if he were longing for this cross, not wishing to yield it to his mother, who appears to want to take it away from him.”

When he returned to Florence in 1500, Leonardo set up a collaborative workshop, and production of some pictures, especially small devotional ones, became a team effort, just as it had been in Verrocchio’s studio.

But the Yarnwinder paintings are energized by what had become Leonardo’s special ability to convey a psychological narrative.

There is a flow of physical motions as Jesus reaches toward the cross-like object, his finger pointed heavenward, the gesture that Leonardo loved. His moist eyes are shiny with a tiny sparkle of luster, and they have their own narrative: he is just the age when a baby can discern objects and focus on them, and he is doing so with a concerted effort that combines his sight with his sense of touch. We sense that his ability to focus on the cross causes a premonition of his fate. He looks innocent and at first playful, but if you look at his mouth and eyes you sense a resigned and even loving comfort with what will be his destiny. By comparing Madonna of the Yarnwinder to the Benois Madonna (fig. 13), we can see the historic leap Leonardo made by turning static scenes into emotion-laden narratives.

Our eyes swirl counterclockwise as the narrative continues with Mary’s motions and emotions. Her face and her hand indicate anxiety, a desire to intervene, but also an understanding and an acceptance of what shall be. In the Virgin of the Rocks paintings (figs. 64 and 65), Mary’s hovering hand offers a serene benediction; in the Yarnwinders, her gesture is more conflicted, as if coiled to grasp her child while also recoiling from the temptation to intervene. She reaches out nervously, as if trying to decide whether to restrain him from his fate.

Leonardo’s studio was like a shop in which he devised a painting and his assistants worked with him to make multiple copies. This is similar to the way it had been in Verrocchio’s bottega.

How did the collaboration occur? What was the nature of the team and the teamwork? As with so many examples in history where creativity was turned into products, Leonardo’s Florence studio involved individual genius combined with teamwork. Both vision and execution were required.

One of Leonardo’s greatest masterpieces is the Virgin and Child with Saint Anne (fig. 79), featuring Mary sitting on the lap of her mother. The final painting combines many elements of Leonardo’s artistic genius: a moment transformed into a narrative, physical motions that match mental emotions, brilliant depictions of the dance of light, delicate sfumato, and a landscape informed by geology and color perspective. It was proclaimed to be “Leonardo da Vinci’s ultimate masterpiece” (l’ultime chef d’oeuvre) in the title of the catalogue published by the Louvre for a 2012 exhibition celebrating its restoration—this from the museum that also owns the Mona Lisa.

It is important, Leonardo wrote, to “have a movement of a person’s limbs appropriate to that person’s mental movements.” His painting of the Virgin and Child with Saint Anne shows what he meant. Mary’s right arm is stretched as she tries to restrain the Christ child, showing a protective but gentle love. But he is intent on wrestling with the lamb, his leg over its neck and his hands grappling with its head. The lamb, as Friar Pietro told us, represents the Passion, Jesus’ fate, and he will not be restrained from it.

The image of a squirming boy with what looks like two mothers conjures up Leonardo’s own childhood being raised by both his birth mother, Caterina, and his slightly younger stepmother.

Leonardo had also been wrestling with the question of why the sky appears blue, and around that time he had correctly concluded that it had to do with the water vapor in the air.

Most significant, the painting conveys the paramount theme in Leonardo’s art: the spiritual connection and analogy between the earth and humans. Echoing so many of his paintings—Ginevra de’ Benci, Virgin of the Rocks, Madonna of the Yarnwinder, and of course the Mona Lisa—a river curls from the distant horizon of the macrocosm of the earth and seems to flow into the veins of the Holy Family, ending with the lamb that foreshadows the Passion. The curving flow of the river connects to the flowing composition of the characters.

The Saint Anne is the most complex and layered of Leonardo’s panel paintings, and many see it as a masterpiece on a par with the Mona Lisa, perhaps even surpassing it because it is more complex in its composition and motion.

The myth of Leda and the swan tells how the Greek god Zeus assumed the form of a swan and seduced the beautiful mortal princess Leda. She produced two eggs, from which hatched two sets of twins: Helen (later known as Helen of Troy) and Clytemnestra, and Castor and Pollux. Leonardo’s depiction focuses more on fertility than sex; instead of painting the seduction scene, as others had done, he chose to portray the moment of the births, showing Leda caressing the swan as the four children squirm from their shells. One of the most vivid copies is by his pupil Francesco Melzi (fig. 81).

When Leonardo was working on this painting during his second period in Florence in the early 1500s, he was doing his most intense studies on the flight of birds and also planning a test flight of one of his flying machines, which he hoped to launch from the top of nearby Swan Mountain (Monte Ceceri). His note about his childhood memory of a bird flying into his crib and flapping its tail in his mouth is also from this period.

The painting conveys a domestic and familial harmony, a pleasant portrayal of a couple at home by their lake, cuddling as they admire their newborns. It also goes beyond the erotic to focus on the tale’s procreative aspects. From the lushness of the seeding plants, to the fecundity of the soil and the hatching of the eggs, the painting is a celebration of the fertility of nature. Unlike the usual depictions of the Leda myth, Leonardo’s is not about sex but birth.2

Ludovico Sforza, Leonardo’s patron in Milan, had a reputation for ruthlessness that included, among other alleged acts, poisoning his nephew in order to seize the ducal crown. But Ludovico was a choir boy compared to Leonardo’s next patron, Cesare Borgia.

Machiavelli used him as a model of cunning in The Prince and taught that his ruthlessness was a tool for power.1

Cesare Borgia was the son of the Spanish-Italian cardinal Rodrigo Borgia, soon to become Pope Alexander VI, who vies for the hotly contested title of most libertine Renaissance pope.

Borgia forged an alliance with the French, and he was with King Louis XII marching into Milan in 1499. The day after their arrival, they went to see The Last Supper, and there Borgia first met Leonardo. Knowing Leonardo, it is likely that during the next few weeks he showed Borgia his military engineering designs.

In June 1502 Borgia was back. As his army sacked more surrounding towns, he commanded the leaders in Florence to send a delegation to hear his latest demands. Two people were selected to try to deal with him.

Accompanying him was the son of a bankrupt lawyer, well-educated but poor, whose writing skills and savvy understanding of power games had established him as Florence’s cleverest young diplomat: Niccolò Machiavelli.

He shared with Leonardo the trait of being a sharp observer.

Leonardo may have gone to work with Borgia at the behest of Machiavelli and Florence’s leaders as a gesture of goodwill, similar to the way he had been dispatched twenty years earlier to Milan as a diplomatic gesture to Ludovico Sforza. Or he may have been sent as a way for Florence to have an agent embedded with Borgia’s forces.

Borgia, it turned out, had disguised himself as a Knight Hospitaller and snuck away with three trusted guards to ride north at a furious pace to reinstate himself in the good graces of Louis, which he did.

“Be sure that the escape tunnel does not lead to the inner fortress, lest the fortress be captured by treachery or betrayal of the lord.”

While he was in Imola with Machiavelli and Borgia, Leonardo made what may be his greatest contribution to the art of war. It is a map of Imola, but not any ordinary map (fig. 87).18 It is a work of beauty, innovative style, and military utility. It combines, in his inimitable manner, art and science.

Drawn in ink with colored washes and black chalk, the Imola map was an innovative step in cartography. The moat around the fortified town is tinted a subtle blue, the walls are silvery, and the roofs of the houses brick red. The aerial view is from directly overhead, unlike most maps of the time. On the edges he specified the distances to nearby towns, useful information for military campaigns, but written in his elegant mirror script, indicating that the version that survives is a copy he made for himself rather than for Borgia.

Around this time, he perfected the odometer he had been developing to measure long distances (fig. 88).19 On a cart he mounted a vertical cog wheel, resembling the front wheel of a wheelbarrow, that meshed with a horizontal cog wheel. Every time the vertical wheel completed a revolution, it would advance the horizontal wheel one notch, and that would cast a stone into a container.
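The counting logic is simple enough to sketch. Below is a minimal illustration in Python with invented dimensions (Leonardo's notes do not fix the wheel size here): the stones in the container act as a revolution counter, and distance falls out of the wheel's circumference.

```python
import math

# Hypothetical dimensions, for illustration only; the drawing (fig. 88)
# does not specify the wheel size.
wheel_diameter = 1.0                      # braccia (roughly 0.58 m)
circumference = math.pi * wheel_diameter  # distance covered per revolution

def distance_travelled(stones: int) -> float:
    """One stone is cast into the container per full revolution of the
    vertical wheel, so the stones count revolutions directly."""
    return stones * circumference

# Example: 100 stones collected over a surveying run.
print(f"{distance_travelled(100):.1f} braccia")  # ~314.2 braccia
```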

Acting as an artist-engineer, Leonardo had devised a new military weapon: accurate, detailed, and easily read maps.

In a larger sense, Leonardo’s maps are another example of one of his greatest, though underappreciated, innovations: devising new methods for the visual display of information.

In a land where the Medici, Sforzas, and Borgias jostled for power, Leonardo was able to time his patronage affiliations well and know when to move on. But there is more. Even as he remained aloof from most current events, he seemed to be attracted to power.

Just before Pisa broke away, a major world event made Florence even more eager to control a sea outlet. In March 1493 Christopher Columbus returned safely from his first voyage across the Atlantic Ocean, and the report of his discoveries quickly spread throughout Europe. This was soon followed by a flurry of other accounts of amazing explorations. Amerigo Vespucci, whose cousin Agostino worked with Machiavelli in the Florentine chancery, helped supply Columbus’s third voyage in 1498, and the following year he made his own voyage across the Atlantic, landing in what is now Brazil. Unlike Columbus, who thought he was finding a route to India, Vespucci correctly reported to his Florentine patrons that he had “arrived at a new land which for many reasons . . . we observed to be a continent.” His correct surmise led to its being named America, after him.

An entry for an account book in Florence that month lists a set of expenses and then adds, “This money has been spent to provide six horse coaches and to pay the board expenses for the expedition with Leonardo in the territory of Pisa to divert the Arno from its course and take it away from Pisa.”

Diverting the Arno River from its course and taking it away from Pisa? It was an audacious way to reconquer the city without storming the wall or wielding any weapons. If the river could be channeled somewhere else, Pisa would be cut off from the sea and lose its source of supply. The primary advocates of the idea included the two clever friends who had been holed up together that past winter in Imola, Leonardo da Vinci and Niccolò Machiavelli.

Even though it failed, the project to divert the Arno rekindled Leonardo’s interest in a larger scheme: creating a navigable waterway between Florence and the Mediterranean Sea.

But as with so many of his projects, Leonardo ended up not finishing the Battle of Anghiari, and what he painted is now lost. We can envision it mainly through copies. The best, which shows only the central part of what would have been a much larger mural, is by Peter Paul Rubens (fig. 91), made from other copies in 1603, after Leonardo’s unfinished work was covered up.

Heightening the significance of the commission was the fact that Leonardo would end up pitted against his personal and professional young rival, Michelangelo, who was chosen in early 1504 to paint the other large mural in the hall.

During the seventeen years that Leonardo was away in Milan, Michelangelo became Florence’s hot new artist. He was apprenticed to the thriving Florence workshop of the painter Domenico Ghirlandaio, won the patronage of the Medici, and traveled to Rome in 1496, where he carved his Pietà, showing Mary grieving over the body of Jesus.

By 1500 the two artists were back in Florence. Michelangelo, then twenty-five, was a celebrated but petulant sculptor, and Leonardo, forty-eight, was a genial and generous painter who had a following of friends and young students. It is enticing to think of what might have occurred if Michelangelo had treated him as a mentor. But that did not happen. As Vasari reported, he displayed instead “a very great disdain” toward Leonardo.

Michelangelo’s painting has the sharp, delineated outlines that Leonardo, with his love of sfumato and blurred borders, scorned as a matter of philosophy, optics, mathematics, and aesthetics. To define objects, Michelangelo used lines rather than following Leonardo’s practice of using shadows, which is why Michelangelo’s figures look flat rather than three-dimensional.

But Leonardo was obsessed by the optics, mathematics, and art of perspective.

For The Last Supper he had come up with tricks, illusions, and artifices to make his work appear realistic from different vantages. He was also able to set a preferred vantage point far from the painting; he calculated that it would ideally be located ten to twenty times as far away as the painting was wide. But the wall he was supposed to paint in Florence’s council hall was fifty-five feet long, twice the width of The Last Supper, and his mural would be viewed from at most seventy feet away, far less than twice its width.
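Put as arithmetic, using the passage's own figures and Leonardo's ten-to-twenty-times rule, the bind is stark:

```latex
% Leonardo's rule for the ideal viewing distance d of a mural of width w:
10w \le d \le 20w, \qquad
w = 55~\text{ft} \;\Rightarrow\; 550~\text{ft} \le d \le 1100~\text{ft},
% yet the council hall allowed at most d = 70 ft, i.e. d \approx 1.3w.
```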

He was a perfectionist faced with challenges other artists would have disregarded but that he could not ignore. So he put down his brushes. That behavior meant he would never again receive a public commission. But it is also what allowed him to go down in history as an obsessed genius rather than merely a reliable master painter.

“These battle cartoons of Leonardo and Michelangelo are the turning point of the Renaissance,” according to Kenneth Clark.

To understand Leonardo, it is necessary to understand why he moved away from Florence, this time for good. One reason is simple: he liked Milan better. It had no Michelangelo, no cadre of half-brothers suing him, no ghost of his father hovering. It had royalty rather than republicans, with jubilant pageants rather than the after-stench of bonfires of the vanities. It had doting patrons rather than oversight committees. And the foremost patron there was the one who loved Leonardo the most, Charles d’Amboise, the French royal governor who had written a flowery letter reminding the Florentines how brilliant their native son was.

Florence was the artistic center of the Italian Renaissance, but Milan and its nearby university town of Pavia had become more intellectually diverse.

He dissected the corpse of a man who claimed to be a hundred, planned a test of one of his flying machines, began a treatise on geology and water, devised a glass tank to examine the way flowing water deposits sediment, and swam underwater to compare the propulsion of a fish tail to a bird’s wing, jotting his conclusions on the same notebook page where he drafted his angry letter to his half-brothers.

We should pause to imagine the dandy-dressing Leonardo, now in his mid-fifties and at the height of his fame as a painter, spending his night hours at an old hospital in his neighborhood talking to patients and dissecting bodies. It is another example of his relentless curiosity that would astonish us if we had not become so used to it.

In his quest to figure out how the centenarian died, Leonardo made a significant scientific discovery: he documented the process that leads to arteriosclerosis, in which the walls of arteries are thickened and stiffened by the accumulation of plaque-like substances.

“The network of vessels behaves in man as in oranges, in which the peel becomes tougher and the pulp diminishes the older they become.”

Leonardo, who was not strongly religious, pushed back on the fundamentalists who considered dissection heretical. He believed it was a way to appreciate God’s handiwork. “You should not be distressed that your discoveries come through another’s death; rather you should rejoice that our Creator has provided an instrument of such excellence,” he wrote on a tinted blue notebook page on which he drew the muscles and bones of the neck.

“I have dissected more than ten human bodies,” he wrote, and after making that statement he would dissect even more, working on each as long as possible, until they decomposed so badly he was forced to move on. “As one body did not last so long, it was necessary to proceed by stages with as many bodies as would render my knowledge complete.” He then performed even more dissections so that he could ascertain the variances between humans.

Then comes my favorite item on any Leonardo list: “Describe the tongue of the woodpecker.” This is not just a random entry. He mentioned the woodpecker’s tongue again on a later page, where he described and drew the human tongue. “Make the motions of the woodpecker,” he wrote.

There is an echo in this passage of Leonardo’s memory of coming across the mouth of a cave as a young man. As in that tale, he had to overcome his fear to go into a dark and fearful space. Although at times he was irresolute and willing to abandon tasks, his powerful curiosity tended to overcome any hesitations when it came to exploring nature’s wonders.

In most of his studies of nature, Leonardo theorized by making analogies. His quest for knowledge across all the disciplines of arts and sciences helped him see patterns. Occasionally this mode of thinking misled him, and it sometimes substituted for reaching more profound scientific theories. But this cross-disciplinary thinking and pattern-seeking was his hallmark as the quintessential Renaissance Man, and it made him a pioneer of scientific humanism.

So here is another secret to Leonardo’s unique ability to paint a facial expression: he is probably the only artist in history ever to dissect with his own hands the face of a human and that of a horse to see if the muscles that move human lips are the same ones that can raise the nostrils of the nose.

Leonardo’s studies of the human heart, conducted as part of his overall anatomical and dissection work, were the most sustained and successful of his scientific endeavors.26 Informed by his love of hydraulic engineering and his fascination with the flow of liquids, he made discoveries that were not fully appreciated for centuries.

In the early 1500s the European understanding of the heart was not all that different from that described in the second century AD by Galen, whose work was revived during the Renaissance. Galen believed that the heart was not merely a muscle but was made of a special substance that gave it a vital force. Blood was made in the liver, he taught, and distributed through the veins. Vital spirits were produced by the heart and distributed through arteries, which Galen and his successors considered a separate system. Neither the blood nor vital spirits circulated, he thought; instead, they pulsed back and forth in the veins and arteries.

Leonardo was also able to show, contrary to Galen, that the heart is simply a muscle rather than some form of special vital tissue. Like all muscles, the heart has its own blood supply and nerves. “It is nourished by an artery and veins, as are other muscles,” he found.

Leonardo’s greatest achievement in his heart studies, and indeed in all of his anatomical work, was his discovery of the way the aortic valve works, a triumph that was confirmed only in modern times. It was birthed by his understanding, indeed love, of spiral flows. For his entire career, Leonardo was fascinated by the swirls of water eddies, wind currents, and hair curls cascading down a neck.

Leonardo dedicated himself to his anatomy studies with a persistence and diligence that were often lacking in his other endeavors.

He was mainly motivated by his own curiosity.

He was more interested in pursuing knowledge than in publishing it. And even though he was collegial in his life and work, he made little effort to share his findings.

This is true for all of his studies, not just his work on anatomy. The trove of treatises that he left unpublished testifies to the unusual nature of what motivated him. He wanted to accumulate knowledge for its own sake, and for his own personal joy, rather than out of a desire to make a public name for himself as a scholar or to be part of the progress of history. Some have even said that he wrote in mirror script partly to guard his discoveries from prying eyes; I do not think that is true, but it is indisputable that his passion for gathering knowledge was not matched by one for sharing it widely. As the Leonardo scholar Charles Hope has pointed out, “He had no real understanding of the way in which the growth of knowledge was a cumulative and collaborative process.”

Modern anatomy instead began twenty-five years after Leonardo’s death, when Andreas Vesalius published his epochal and beautifully produced On the Fabric of the Human Body. That was the book that Leonardo—perhaps in conjunction with Marcantonio della Torre, had he not died young from the plague—could have preceded and surpassed.

He was skillful at discerning how patterns resonate in nature, and the grandest and most encompassing of these analogies, in both his art and his science, was the comparison between the body of man and the body of the earth. “Man is the image of the world,” he wrote.

Known as the microcosm-macrocosm relationship, it harkened back to the ancients.

As a painter who marveled at nature’s patterns, Leonardo embraced the microcosm-macrocosm connection as more than merely an analogy. He viewed it as having a spiritual component, which he expressed in his drawing of Vitruvian Man. As we have seen, this mystical connection between humans and the earth is reflected in many of his masterpieces, from Ginevra de’ Benci to Saint Anne to Madonna of the Yarnwinder and eventually the Mona Lisa. It also became an organizing principle for his scientific inquiries. When he was immersed in his anatomical research on the human digestive system, he instructed himself, “First give the comparison with the water of the rivers; then with that of the bile which goes to the stomach against the course of the food.”

The Codex Leicester is more focused than most of his other notebooks: it contains seventy-two pages jammed with long written passages and 360 drawings on geology, astronomy, and the dynamics of flowing water.

Among the questions it addresses: What causes springs of water to emerge from mountains? Why do valleys exist? What makes the moon shine? How did fossils get on mountains? What causes water and air to swirl in a vortex? And, most emblematically, why is the sky blue?

As he embarked on the Codex Leicester, Leonardo reached back to the microcosm-macrocosm analogy as his framework. “The body of the earth, like the bodies of animals, is interwoven with ramifications of veins, which are all joined together and are formed for the nutrition and vivification of this earth and of its creatures,” he wrote, echoing his words from almost two decades earlier.5 And on the following page he added, “Its flesh is the soil, its bones are the arrangements of the connections of the rocks of which the mountains are composed, its cartilage is the porous rock, its blood is the veins of waters; the lake of the blood, which is throughout the heart, is the ocean; its breathing and the increase and decrease of the blood through the pulses in the earth is thus: it is the flow and ebb of the sea.”6

By the time he finished the Codex Leicester, he would discover that the comparison between the earth and the human body was not always useful. Instead, he came to fathom how nature had two traits that sometimes appeared to be in conflict: there was a unity to nature that resonated in its patterns and analogies, but there was also a wondrously infinite variety.

The primary focus of the Codex Leicester is the topic that Leonardo regarded as the most fundamental force in the life of the planet and in our bodies: the role and movements of fluids and, in particular, water.

Water provided the perfect manifestation of Leonardo’s fascination with how shapes are transformed when in motion. How can something change its shape—a square becoming a circle, a torso narrowing as it twists—and keep the exact same area or volume? Water provides an answer. Leonardo learned early on that it cannot be compressed; a given quantity always has the exact same volume, whatever the shape of the river or container. So flowing water is constantly going through perfect geometric transformations. No wonder he loved it.
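To make the constraint concrete, here is one worked case (my illustration, not a calculation from his notebooks): reshaping a square into a circle of exactly the same area.

```latex
% Equal-area transformation: square of side s into circle of radius r.
s^2 = \pi r^2 \quad\Longrightarrow\quad r = \frac{s}{\sqrt{\pi}} \approx 0.564\, s
```

An incompressible flow performs this kind of bookkeeping continuously: whatever shape the channel imposes, the enclosed quantity never changes.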

“When you put together the science of the motions of water, remember to include under each proposition its application, in order that this science may not be useless.”15

Leonardo had a keen interest in what happens when a flow of water is obstructed. The dynamics of water, he realized, are connected to the two proto-Newtonian ideas about motion that he embraced: impetus and percussion.

Impetus, a concept developed in the Middle Ages and adopted by Leonardo, describes how a body set in motion tends to keep moving in the same direction. It is a rudimentary precursor to the concepts of inertia, momentum, and Newton’s first law. Percussion involves what happens when a body in motion hits another object; it will be reflected or deflected at an angle and with a force that can be calculated. Leonardo’s understanding of fluid dynamics was also informed by his studies of transformations; when water is deflected, it changes path and shape, but it always remains the exact same volume.

One mark of a great mind is the willingness to change it. We can see that in Leonardo. As he wrestled with his earth and water studies during the early 1500s, he ran into evidence that caused him to revise his belief in the microcosm-macrocosm analogy. It was Leonardo at his best, and we have the great fortune of being able to watch that evolution as he wrote the Codex Leicester.

The evolution of Leonardo’s thinking about the microcosm-macrocosm analogy began with his curiosity about why water, which should in theory tend to settle on the earth’s surface, emerges from springs and flows into rivers at the top of mountains. The veins of the earth, he wrote, carry “the blood that keeps the mountains alive.”

Only after pitting various theories against experience did Leonardo eventually get to the correct answer: the existence of springs and mountain rivers, indeed the entire circulation of water on the earth, results from the evaporation of surface water, the formation of clouds, and the subsequent rains.

Leonardo’s willingness to question and then abandon the enticing analogy between the circulation of water on the earth and the circulation of blood in the human body shows his curiosity and ability to be open-minded. Throughout his life, he was brilliant at discerning patterns and abstracting from them a framework that could be applied across disciplines. His geology studies show an even greater talent: not letting these patterns blind him. He came to appreciate not only nature’s similarities but also its infinite variety. Yet even as he abandoned the simplistic version of the microcosm-macrocosm analogy, he retained the aesthetic and spiritual concept underlying it: the harmonies of the cosmos are reflected in the beauty of living creatures.

Il sole non si muove. The sun does not move.

Is this statement a brilliant leap decades ahead of Copernicus, Galileo, and the realization that the sun does not revolve around the earth? Or is it merely a random thought, perhaps a note for a pageant or play?

More impressive was his realization that the moon does not emit light but reflects the light of the sun, and that a person standing on the moon would see that the earth reflects light in the same way.

Leonardo was always on the lookout for powerful patrons, and in 1513, with Milan still controlled by his former patrons the Sforzas, a new one appeared in Rome. In March of that year, Giovanni de’ Medici was elected to become Pope Leo X. The son of Lorenzo “the Magnificent” de’ Medici, the Florentine ruler who was a halfhearted patron to Leonardo and sent him off to Milan as a young man, Giovanni was the last non-priest to maneuver himself into the papacy. Much of his time was spent tending to the Vatican’s uncertain alliance with France, which was again aiming to retake Milan and was making pacts with various other Italian cities. The new pope would also later face the threat of Martin Luther and his Reformation.

Over the years, Leonardo became increasingly interested in the mathematics involved in focusing a mirror, drawing scores of diagrams of light rays from different directions hitting a curved surface and showing the angles at which they would be reflected. He tackled the problem identified by Ptolemy in AD 150 and studied by the eleventh-century Arab mathematician Alhazen of finding the point on a concave mirror where light coming from a certain source will be reflected to a designated spot (akin to finding the spot on the edge of a circular billiard table where you have to hit a cue ball so that it will bounce and hit your target). Leonardo failed to solve this using pure math. So in a series of drawings, he made a device that could solve the problem mechanically. He was better at using visualizations than equations.
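Leonardo's mechanical workaround has a modern analogue: the problem yields readily to numerical search even though the closed-form algebra is hard (it reduces to a quartic equation). The sketch below is my own illustration, not Leonardo's method; it brute-force scans a circular mirror for the point where a ray from source A reflects through target B.

```python
import math

def alhazen_point(ax, ay, bx, by, radius=1.0, steps=100_000):
    """Scan candidate points P on a circular mirror and return the P where
    the reflection of ray A->P (angle in = angle out about the normal)
    comes closest to passing through B."""
    best, best_err = None, float("inf")
    for i in range(steps):
        t = 2 * math.pi * i / steps
        px, py = radius * math.cos(t), radius * math.sin(t)
        nx, ny = px / radius, py / radius              # unit normal at P
        dx, dy = px - ax, py - ay                      # incoming ray A -> P
        dot = dx * nx + dy * ny
        rx, ry = dx - 2 * dot * nx, dy - 2 * dot * ny  # reflected direction
        ex, ey = bx - px, by - py                      # direction P -> B
        # Angular error between the reflected ray and the bearing to B.
        err = abs(math.atan2(rx * ey - ry * ex, rx * ex + ry * ey))
        if err < best_err:
            best, best_err = (px, py), err
    return best

# Example: source and target inside a unit-circle mirror (hypothetical values).
print(alhazen_point(-0.5, 0.0, 0.5, 0.3))
```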

“The pope has found out that I have skinned three corpses,” he wrote, and he blamed it on the jealous Giovanni. “This person has hindered me in anatomy, denouncing it before the Pope and also at the hospital.”

And as Kenneth Clark noted, “Mystery to Leonardo was a shadow, a smile and a finger pointing into darkness.”

So it makes sense to consider the Mona Lisa near the end of his career, exploring it as the culmination of a life spent perfecting an ability to stand at the intersection of art and nature. The poplar panel with multiple layers of light oil glazes, applied over the course of many years, exemplifies the multiple layers of Leonardo’s genius. What began as a portrait of a silk merchant’s young wife became a quest to portray the complexities of human emotion, made memorable through the mysteries of a hinted smile, and to connect our nature to that of our universe. The landscape of her soul and of nature’s soul are intertwined.

Ginevra de’ Benci was made by a young artist with astonishing skills of observation. The Mona Lisa is the work of a man who had used those skills to immerse himself in a lifetime of intellectual passions. The inquiries chronicled on his thousands of notebook pages—of light rays striking curved objects, dissections of human faces, geometrical volumes being transformed into new shapes, flows of turbulent water, the analogies between the earth and human bodies—had helped him fathom the subtleties of depicting motion and emotion. “His insatiable curiosity, his restless leaps from one subject to another, have been harmonized in a single work,” Kenneth Clark wrote of the Mona Lisa. “The science, the pictorial skill, the obsession with nature, the psychological insight are all there, and so perfectly balanced that at first we are hardly aware of them.”

At the time when he was perfecting Lisa’s smile, Leonardo was spending his nights in the depths of the morgue under the hospital of Santa Maria Nuova, peeling the flesh off cadavers and exposing the muscles and nerves underneath. He became fascinated by how a smile begins to form and instructed himself to analyze every possible movement of each part of the face and determine the origin of every nerve that controls each facial muscle. Tracing which of those nerves are cranial and which are spinal may not have been necessary for painting a smile, but Leonardo needed to know.

Perhaps the most interesting derivatives of the Mona Lisa made by Leonardo’s followers are the seminude variations often called Monna Vanna, of which at least eight remain, one of them attributed to Salai.

Much of Leonardo’s career was consumed by his quest for patrons who would be unconditionally paternalistic, supportive, and indulgent in ways that his own father had only occasionally been. Although Piero da Vinci got his son a good apprenticeship and helped him get commissions, his behavior was variable from beginning to end: he declined to legitimate his son and excluded him from his will. His primary bequest to his son was to give him an insatiable drive for an unconditional patron.

“The Medici made me and destroyed me,” he wrote cryptically in his notebook at the time of Giuliano’s death.

Along the way, he and his traveling party stopped in Milan. Salai decided to stay there, at least temporarily. He was then thirty-six, solidly middle-aged and no longer playing the role of Leonardo’s pretty-boy companion or competing for attention with the aristocratic Melzi, who was still only twenty-five and remained at Leonardo’s side. Salai would settle down at the vineyard and house on the edge of Milan that had been given to Leonardo by Ludovico Sforza.

Perhaps another reason Salai stayed behind was that Leonardo had a new manservant, Battista de Vilanis, who traveled with him from Rome to France. He would soon replace Salai in Leonardo’s affections. Salai would end up inheriting only half of the Milan vineyard and its rights; Battista would get the other half.

Francis proved to be the perfect patron for Leonardo. He would admire Leonardo unconditionally, never pester him about finishing paintings, indulge his love of engineering and architecture, encourage him to stage pageants and fantasias, give him a comfortable home, and pay him a regular stipend. Leonardo was given the title “First Painter, Engineer, and Architect to the King,” but his value to Francis was his intellect and not his output. Francis had an unquenchable thirst for learning, and Leonardo was the world’s best source of experiential knowledge. He could teach the king about almost any subject there was to know, from how the eye works to why the moon shines. In turn, Leonardo could learn from the erudite and graceful young king. As Leonardo once wrote in his notebooks, referring to Alexander the Great and his tutor, “Alexander and Aristotle were teachers of one another.”

Francis gave Leonardo something he had continually sought: a comfortable stipend that was not dependent on producing any paintings. In addition, he was given the use of a small red-brick manor house, with sandstone trimming and playful spires, next to Francis’s castle in the Loire Valley village of Amboise. Known as the Château de Cloux, and now called Clos Lucé, Leonardo’s house (fig. 138) was set amid almost three acres of gardens and vineyards and connected by an underground tunnel to the king’s Château d’Amboise, about five hundred yards away.

Leonardo’s interest in the art and science of movement, and in particular the flow and swirl of water and wind, climaxed in a series of turbulent drawings that he made during his final years in France.

Deeply personal yet coolly analytic in parts, they provide a powerful and dark expression of many of the themes of his life: the melding of art and science, the blurred line between experience and fantasy, and the frightful power of nature.

The drawings also convey, I believe, his own emotional turmoil as he faced his final days, partly hobbled by a stroke. They became an outlet for his feelings and fears. “They are an outpouring of something really personal,” according to Windsor curator Martin Clayton.

Throughout his life he had been obsessed with water and its movements. One of his first drawings, the landscape of the Arno done when he was twenty-one, shows a placid river, calm and life-giving as it meanders gently past fertile land and tranquil villages. It displays no signs of turbulence, just a few gentle ripples. Like a vein, it nourishes life. In his notebooks, there are dozens of references to water as the life-giving fluid that forms the vein that nourishes the earth. “Water is the vital humor [vitale umore] of the arid earth,” he wrote. “Flowing with unceasing vehemence through the ramifying veins, it replenishes all the parts.”17 In the Codex Leicester he described, by his own count, “657 cases of water and its depths.”18 His mechanical engineering work included close to a hundred devices for moving and diverting water.

Now, near the end of his life, he depicted water and its swirls not as calm or tamed but as filled with fury.

For those who love curls and swirls, as Leonardo did, the drawings are an artistic expression of great aesthetic power. They remind us of the curls cascading down the back of the angel in his Annunciation, the painting he made some forty years earlier. Indeed, the underdrawing of the angel’s curls, as revealed by a spectrographic analysis, is strikingly similar to the spirals of the deluge drawings.

His deluge drawings are based on storms he had witnessed and described in his notebooks, but they are also the product of a fevered and frenzied imagination. He was a master at blurring lines, and in his deluge drawings he did so between reality and fantasia.

The deluge drawings conjure up the story of the Flood in Genesis, a topic treated by Michelangelo and many other artists over the years, but Leonardo makes no mention of Noah. He was conveying more than a biblical tale. At one point he adds Greek and Roman classical gods to the fray: “Neptune will be seen in the midst of the water with his trident, and let Aeolus with his winds be shown entangling the trees floating uprooted and whirling in the huge waves.”22 He drew on Virgil’s Aeneid, Ovid’s Metamorphoses, and the thunderous natural phenomena in book 6 of Lucretius’s On the Nature of Things. The drawings and text also conjure up the tale he wrote in Milan in the 1490s, ostensibly addressed to “the Devatdar of Syria.”

Leonardo did not focus on, or for that matter even hint at, the wrath of God in his deluge writings and drawings. He conveyed instead his belief that chaos and destruction are inherent in the raw power of nature. The psychological effect is more harrowing than if he were merely depicting a tale of punishment from an angry God. He was imparting his own emotions and thereby tapping into ours. Hallucinatory and hypnotic, the deluge drawings are the unnerving bookend to a life of nature drawing that began with a sketch of the placid Arno flowing near his native village.

Then abruptly, almost at the end of the page, he breaks off his writing with an “et cetera.” That is followed by a line, written in the same meticulous mirror script as the previous lines of his analysis, explaining why he is putting down his pen. “Perché la minestra si fredda,” he writes. Because the soup is getting cold.

It is the final piece of writing we have by Leonardo’s hand, our last scene of him working. Picture him in the upstairs study of his manor house, with its beamed ceiling and fireplace and the view of his royal patron’s Château d’Amboise. Mathurine, his cook, is down in the kitchen. Perhaps Melzi and others of the household are already at the table. After all these years, he is still stabbing away at geometry problems that have not yielded the world very much but have given him a profound appreciation of the patterns of nature. Now, however, the soup is getting cold.

His science led him to adopt many heretical beliefs, including that the fetus in the womb does not have a soul of its own and that the biblical Flood did not happen. Unlike Michelangelo, a man consumed at times with religious fervor, Leonardo made a point of not expounding much on religion during his lifetime. He said that he would not endeavor “to write or give information of those things of which the human mind is incapable and which cannot be proved by an instance of nature,” and he left such matters “to the minds of friars, fathers of the people, who by inspiration possess the secrets.”

There had apparently been an estrangement, one that had grown with the ascent of Melzi and the arrival of Battista. Salai was no longer at Leonardo’s side when he made the will. Nevertheless, he lived up to his reputation as a sticky-fingered little devil, one who was somehow able to get his hands on things. When he was killed five years later by a crossbow, the inventory of his estate showed that, perhaps during a visit to France, he had been given or had taken many copies of Leonardo’s paintings and possibly some of the originals, perhaps including the Mona Lisa and Leda and the Swan. Because Salai was always a con artist, it is unclear whether the prices listed in his estate are true values, which makes it hard to know which pictures were copies.

Even in his death, there is a veil of mystery. We cannot portray him with crisp sharp lines, nor should we want to, just as he would not have wanted to portray Mona Lisa that way. There is something nice about leaving a little to our imagination. As he knew, the outlines of reality are inherently blurry, leaving a hint of uncertainty that we should embrace. The best way to approach his life is the way he approached the world: filled with a sense of curiosity and an appreciation for its infinite wonders.

“Tell me if anything was ever done,” he repeatedly scribbled in notebook after notebook. “Tell me. Tell me. Tell me if ever I did a thing. . . . Tell me if anything was ever made.”

And by refusing to churn out works that he had not perfected, he sealed his reputation as a genius rather than a master craftsman. He enjoyed the challenge of conception more than the chore of completion.

Similarly, he looked upon his art, his engineering, and his treatises as part of a dynamic process, always receptive to refinement through the application of a new insight.

Relinquishing a work, declaring it finished, froze its evolution. Leonardo did not like to do that. There was always something more to be learned, another stroke to be gleaned from nature that would make a picture closer to perfect.

His facility for combining observation with fantasy allowed him, like other creative geniuses, to make unexpected leaps that related things seen to things unseen. “Talent hits a target that no one else can hit,” wrote the German philosopher Arthur Schopenhauer. “Genius hits a target no one else can see.”

LEARNING FROM LEONARDO

Be curious, relentlessly curious. “I have no special talents,” Einstein once wrote to a friend. “I am just passionately curious.”

Seek knowledge for its own sake. Not all knowledge needs to be useful. Sometimes it should be pursued for pure pleasure.

Retain a childlike sense of wonder. At a certain point in life, most of us quit puzzling over everyday phenomena. We might savor the beauty of a blue sky, but we no longer bother to wonder why it is that color. Leonardo did.

Observe. Leonardo’s greatest skill was his acute ability to observe things. It was the talent that empowered his curiosity, and vice versa.

Start with the details. In his notebook, Leonardo shared a trick for observing something carefully: Do it in steps, starting with each detail. A page of a book, he noted, cannot be absorbed in one stare; you need to go word by word. “If you wish to have a sound knowledge of the forms of objects, begin with the details of them, and do not go on to the second step until you have the first well fixed in memory.”

See things unseen.

He mixed theatrical ingenuity with fantasy. This gave him a combinatory creativity. He could see birds in flight and also angels, lions roaring and also dragons.

Go down rabbit holes. He filled the opening pages of one of his notebooks with 169 attempts to square a circle. In eight pages of his Codex Leicester, he recorded 730 findings about the flow of water; in another notebook, he listed sixty-seven words that describe different types of moving water.

Get distracted.

Respect facts. Leonardo was a forerunner of the age of observational experiments and critical thinking. When he came up with an idea, he devised an experiment to test it. And when his experience showed that a theory was flawed—such as his belief that the springs within the earth are replenished the same way as blood vessels in humans—he abandoned his theory and sought a new one.

Procrastinate.

Creativity requires time for ideas to marinate and intuitions to gel. “Men of lofty genius sometimes accomplish the most when they work least,” he explained.

Let the perfect be the enemy of the good.

He carried around masterpieces such as his Saint Anne and the Mona Lisa to the end, knowing there would always be a new stroke he could add.

Think visually. Leonardo was not blessed with the ability to formulate math equations or abstractions. So he had to visualize them, which he did with his studies of proportions, his rules of perspective, his method for calculating reflections from concave mirrors, and his ways of changing one shape into another of the same size.

Avoid silos.

He knew that art was a science and that science was an art.

Let your reach exceed your grasp.

Indulge fantasy.

Just as Leonardo blurred the lines between science and art, he did so between reality and fantasy. It may not have produced flying machines, but it allowed his imagination to soar.

Create for yourself, not just for patrons.

Collaborate. Genius is often considered the purview of loners who retreat to their garrets and are struck by creative lightning. Like many myths, that of the lone genius has some truth to it.

Vitruvian Man was produced after sharing ideas and sketches with friends. Leonardo’s best anatomy studies came when he was working in partnership with Marcantonio della Torre.

Genius starts with individual brilliance. It requires singular vision. But executing it often entails working with others. Innovation is a team sport. Creativity is a collaborative endeavor.

Make lists. And be sure to put odd things on them. Leonardo’s to-do lists may have been the greatest testaments to pure curiosity the world has ever seen.

Take notes, on paper.

Be open to mystery. Not everything needs sharp lines.

Describe the tongue of the woodpecker

The tongue of a woodpecker can extend more than three times the length of its bill. When not in use, it retracts into the skull and its cartilage-like structure continues past the jaw to wrap around the bird’s head and then curve down to its nostril. In addition to digging out grubs from a tree, the long tongue protects the woodpecker’s brain. When the bird smashes its beak repeatedly into tree bark, the force exerted on its head is ten times what would kill a human. But its bizarre tongue and supporting structure act as a cushion, shielding the brain from shock.

Gridiron Genius
Michael Lombardi
Sports & Recreation
Crown Archetype
September 11, 2018
288

One of the best books I read in 2018. One of the best books about football. One of the best books about leadership. I am going to hold on to this one for a long time and break it out in 5 years when my daughter is ready to coach her first field hockey team.

Lombardi has found himself in unique positions with some of the greatest minds in football. He shares insights from Bill Walsh, Al Davis, and Bill Belichick. He provides a blueprint for building a championship organization. He gets into the details, from personnel to practice to the game-day decisions that win titles.

Gridiron Genius explains how to evaluate, acquire, and utilize personnel. Lombardi explains how the smartest leaders script everything: from an afternoon's special-teams practice to a season's playoff run to a decade-long organizational blueprint.

Thinking, Fast and Slow
Daniel Kahneman
Business & Economics
Macmillan
October 25, 2011
499

I'm not going to sugar coat it. This book was a G.R.I.N.D. No joke. This is not for the faint of heart. BUT! It is a great book. The author is a Nobel Prize-winning economist. Right. I know. Kahneman describes how our brain processes information and makes decisions. System 1 is the fast-twitch response. System 2 is the deep thought response. System 1 IS A LIAR! System 2 is not much better. But it is trying. Ultimately, Kahneman is trying to get you to think about how you think. Slow down. Give System 2 a chance. It took me MONTHS to get through this book. It requires you to read, let it sink in, put the book down, go take a walk, get a good night's sleep, and then pick it up again the next day.

Own the Day, Own Your Life
Aubrey Marcus
Self-Help
Harper Wave
April 17, 2018
448

Aubrey Marcus owns Onnit. I learned about Aubrey and Onnit via the Joe Rogan podcast. Marcus presents a solid program of mind-body-spirit wellness practices. All very simple. Not sure there was a whole lot of new info for me, but I like how it is packaged up. Very simple kettlebell and stretching movements can be found in the book. It would also be helpful if you are trying to improve in a specific area: go to that chapter, digest it, and start applying it.

Conspiracy
Ryan Holiday
Biography & Autobiography
Penguin
February 27, 2018
336

I am a Ryan Holiday fan-boy. I just want to be clear. I love how he writes. This book is a bit of a departure for him. It was enjoyable.

In 2007, Gawker Media outed PayPal founder and billionaire investor Peter Thiel as gay. Thiel didn't consider himself a public figure, and he believed the information was private.

This led to a decade-long, "meticulously plotted conspiracy" that ended with a $140 million judgment against Gawker, its bankruptcy, and Nick Denton, Gawker's CEO and founder, out of a job.

Why had Thiel done this? How had no one discovered it? What would this mean--for the First Amendment? For privacy? For culture?

It's a study in power, strategy, and one of the most wildly ambitious--and successful--secret plots in recent memory. I agree with the outcome. I am not 100% sure it was right.

The Unsettling of America
Wendell Berry
Social Science
Counterpoint
September 1, 2015
240

First published in 1977, it still holds up. A collection of essays from Wendell Berry about the transition to "agribusiness" and its impact on us. The scary part is that in 2018, his arguments are even MORE alarming. We have lost a sense of community. We are destroying nature by chasing profits. Where and when did we go wrong? I feel like this is a must read. It is one of the best books I have ever read.

Tiny Beautiful Things
Cheryl Strayed
Self-Help
Random House Digital, Inc.
2012
353

This collection might just contain all the things we need to know. It is profound in so many ways. This is one of those books that you should read over and over until the many (many!) truths that Cheryl Strayed shares with us become embedded in our souls. If that sounds corny to you - you need this book more than anyone.

The Little Book of Hygge
Meik Wiking
Self-Help
Penguin UK
September 1, 2016
287

The Danish word hygge is one of those beautiful words that doesn't directly translate into English, but it more or less means comfort, warmth, or togetherness. Hygge is the feeling you get when you are cuddled up on a sofa with a loved one, in warm knitted socks, in front of the fire, when it is dark, cold, and stormy outside. It is that feeling when you are sharing good comfort food with your closest friends, by candlelight, exchanging easy conversation. It is those cold, crisp, blue-sky mornings when the light through your window is just right. Denmark is the happiest nation in the world, and Meik puts this largely down to the Danes living the hygge way. They focus on the small things that really matter, spend more quality time with friends and family, and enjoy the good things in life. The Little Book of Hygge will give you practical steps and tips to become more hygge.

Perennial Seller
Ryan Holiday
Business & Economics
Penguin
July 18, 2017
256

I am a big fan of Ryan Holiday. I love his work on Stoicism. This book gets into some areas that interest me. The concept of "networking" - how to do it, how not to do it. "Marketing." Building an email list. Controlling your content. Establishing a side-hustle. Etc, etc.

Holiday reveals that the key to success for many perennial sellers is that their creators don’t distinguish between the making and the marketing. The product’s purpose and audience are in the creator’s mind from day one. By thinking holistically about the relationship between their audience and their work, creators of all kinds improve the chances that their offerings will stand the test of time.