Call Sign Chaos: Learning to Lead


In any organization, it’s all about selecting the right team. The two qualities I was taught to value most in selecting others for promotion or critical roles were initiative and aggressiveness.

The Marines reward initiative aggressively implemented.

Slowly but surely, we learned there was nothing new under the sun: properly informed, we weren’t victims—we could always create options.

Part I: Direct Leadership

Lee’s Lieutenants, by Douglas Freeman, and Liddell Hart’s Strategy

My early years with my Marines taught me leadership fundamentals, summed up in the three Cs. The first is competence.

Be brilliant in the basics. Don’t dabble in your job; you must master it. That applies at every level as you advance. Analyze yourself. Identify weaknesses and improve yourself. If you’re not running three miles in eighteen minutes, work out more; if you’re not a good listener, discipline yourself; if you’re not swift at calling in artillery fire, rehearse. Your troops are counting on you. Of course you’ll screw up sometimes; don’t dwell on that.

Fire and maneuver—block and tackle—decide battle.

Second, caring. To quote Teddy Roosevelt, “Nobody cares how much you know, until they know how much you care.”

Third, conviction.

Competence, caring, and conviction combine to form a fundamental element—shaping the fighting spirit of your troops. Leadership means reaching the souls of your troops, instilling a sense of commitment and purpose in the face of challenges so severe that they cannot be put into words.

Marines believe that attitude is a weapon system.

I said to each recruiter, “Have a clear goal: four recruits a month who can graduate from boot camp. Anything you need from me, I’ll get you. We will succeed as a team, with all hands pulling their weight.”

I had learned in the fleet that in harmonious, effective units, everyone owns the unit mission. If you as the commander define the mission as your responsibility, you have already failed. It was our mission, never my mission. The thirty-eight recruiters were my subordinate commanders. “Command and control,” the phrase so commonly used to describe leadership inside and outside the military, is inaccurate. In the Corps, I was taught to use the concept of “command and feedback.” You don’t control your subordinate commanders’ every move; you clearly state your intent and unleash their initiative. Then, when the inevitable obstacles or challenges arise, with good feedback loops and relevant data displays, you hear about it and move to deal with the obstacle. Based on feedback, you fix the problem. George Washington, leading a revolutionary army, followed a “listen, learn, and help, then lead” sequence.

It’s all about clear goals and effective coaching.

You can’t have an elite organization if you look the other way when someone craps out on you.

Finally, I understood what President Eisenhower had passed on. “I’ll tell you what leadership is,” he said. “It’s persuasion and conciliation and education and patience. It’s long, slow, tough work. That’s the only kind of leadership I know.”

When tasked with supporting other units, select those you most hate to give up. Never advantage yourself at the expense of your comrades.

I adapted a technique used by Roman legions, which built rectangular camps. I organized our camp (or laager) in a triangular shape so that every man knew where he fit. The triangle always pointed north toward the enemy. Day or night, regardless of where we made camp, everyone knew the exact locations of the mortar pits, the communications tent, the fuel compound, and his command element. We were oriented toward the enemy, so all hands could roll out in battle formation at a moment’s notice.

“Men who are familiarized to danger meet it without shrinking; whereas troops unused to service often apprehend danger where no danger is.”

Every commander and chief executive officer needs tools to scan the horizon for danger or opportunities. Juliets proved invaluable to me by providing a steady stream of dispassionate information. I chose men who I was confident would maintain trust. What kept the Juliets from being seen as a spy ring by my subordinate commanders was their ability to keep confidences when those commanders shared concerns. They knew that information would be conveyed to me alone.

“Read the ancient Greeks and how they turned out their warriors,” he said.

If you haven’t read hundreds of books, you are functionally illiterate, and you will be incompetent, because your personal experiences alone aren’t broad enough to sustain you. Any commander who claims he is “too busy to read” is going to fill body bags with his troops as he learns the hard way. The consequences of incompetence in battle are final. History teaches that we face nothing new under the sun. The Commandant of the Marine Corps maintains a list of required reading for every rank. All Marines read a common set; in addition, sergeants read some books, and colonels read others. Even generals are assigned a new set of books that they must consume. At no rank is a Marine excused from studying. When I talked to any group of Marines, I knew from their ranks what books they had read. During planning and before going into battle, I could cite specific examples of how others had solved similar challenges. This provided my lads with a mental model as we adapted to our specific mission.

At the executive level, your job is to reward initiative in your junior officers and NCOs and facilitate their success. When they make mistakes while doing their best to carry out your intent, stand by them. Examine your coaching and how well you articulate your intent. Remember the bottom line: imbue in them a strong bias for action.

By decentralizing authority to take full advantage of opportunities on the broader front, we maneuvered faster than the enemy, getting inside his decision-making loop.

“There is a gift,” Napoleon wrote in his memoirs, “of being able to see at a glance the possibilities offered by the terrain…. One can call it coup d’oeil [to see in the blink of an eye], and it is inborn in great generals.” “It really is the commander’s coup d’oeil,” Clausewitz agreed, “his ability to see things simply, to identify the whole business of war completely with himself, that is the essence of good generalship. Only if the mind works in this comprehensive fashion can it achieve the freedom it needs to dominate events and not be dominated by them.”

As Churchill noted, “To each there comes in their lifetime a special moment when they are figuratively tapped on the shoulder and offered the chance to do a very special thing, unique to them and fitted to their talents. What a tragedy if that moment finds them unprepared or unqualified for that which could have been their finest hour.”

We drastically cut down staff size by employing “skip-echelon,” a technique I learned in discussions with a voluble English-speaking Iraqi major my battalion had captured in the 1991 Gulf War. In most military organizations, each level of command—or echelon—has staff sections with the same functions, like personnel management, intelligence gathering, operational planning, and logistics support. As the Iraqi major explained, such duplication wasted time and manpower and added no value.

Throughout my career, I’ve preferred to work with whoever was in place. When a new boss brings in a large team of favorites, it invites discord and the concentration of authority at higher levels. Using skip-echelon meant trusting subordinate commanders and staffs. I chose to build on cohesive teams, support them fully, and remove those who didn’t measure up.

Business management books often stress “centralized planning and decentralized execution.” That is too top-down for my taste. I believe in a centralized vision, coupled with decentralized planning and execution.

“The amphibious landing,” MacArthur explained, “is the most powerful tool we have to employ. We must strike hard and deep into enemy territory. The deep envelopment, based upon surprise, which severs the enemy supply lines, is and always has been the most decisive maneuver of war.”

Part II: Executive Leadership

I pulled books off the shelves and began studying campaigns in Mesopotamia, starting with Xenophon’s Anabasis and books on Alexander the Great—working my way forward.

What Hagee saw was what Xenophon faced when he marched deep into Mesopotamia 2,400 years ago. Xenophon’s ten thousand soldiers were a tiny minority among the people. He recognized that they must quickly gain control or the countryside would rise up against them.

“Ripping out an authoritarian regime leaves you responsible for security, water, power, and everything else. Removing Saddam will unleash the majority Shiites, defanging the minority Sunnis, who won’t take lightly their loss of domination.”

Marcus Aurelius’s Meditations was my constant companion.

I don’t care how operationally brilliant you are; if you can’t create harmony—vicious harmony—on the battlefield, based on trust across different military services, foreign allied militaries, and diplomatic lines, you need to go home, because your leadership is obsolete.

I knew I needed an organizing principle, and to the commanders I made it clear that the success of the mission depended on speed: speed of operations and movement would be prefaced by speed of information-passing and decision-making. Armed with this intent, my troops would keep punching through before the enemy could react.

Our campaign’s success was based on not giving the enemy time to react. We would turn inside the enemy’s “OODA” loop, an acronym coined by the legendary maverick Air Force Colonel John Boyd. To win a dogfight, Boyd wrote, you have to observe what is going on, orient yourself, decide what to do, and act before your opponent has completed his version of that same process, repeating and repeating this loop faster than your foe. According to Boyd, a fighter pilot didn’t win because he had faster reflexes; he won because his reflexes were connected to a brain that thought faster than his opponent’s. Success in war requires seizing and maintaining the initiative—and the Marines had adopted Boyd’s OODA loop as the intellectual framework for maneuver warfare. Used with decentralized decision-making, accelerating our OODA loops results in a cascading series of disasters confronting the enemy.

Field Marshal Slim wrote in World War II: “As officers, you will neither eat, nor drink, nor sleep, nor smoke, nor even sit down until you have personally seen that your men have done those things. If you will do this for them, they will follow you to the end of the world. And, if you do not, I will break you.”

Operational tempo is a state of mind. I’ve always tried to be hard on issues but not on spirits. Yet I needed unity of commitment, from every commander down through the youngest sailor and Marine. Once across the Tigris, my spread-out division could face two Republican Guard divisions. I needed the entire division on the same tempo. We had to be all in, all the time.

“No better friend, no worse enemy” was in play, and I sent a smile of thanks to General Lucius Cornelius Sulla, the Roman soldier who, two thousand years ago, had those words inscribed on his tombstone and passed it on to me.

“It is not the young man who misses the days he does not know,” Marcus Aurelius wrote. “It is the living who bear the pain of those missed days.”

The central question was how to kill or capture the insurgents while persuading the population to turn against the insurgent cause. If we needed “new ideas” to help us construct our plan, old books were full of them. I reminded my men that Alexander the Great would not be perplexed by the enemy we faced. In 330 B.C., he first conquered the country, then instituted fair laws and orderly practices. It wasn’t a bad model to consider.

My command challenge was to convey to my troops a seemingly contradictory message: “Be polite, be professional—but have a plan to kill everyone you meet.”

Anyone who has studied history knows that an enemy always moves against your perceived weakness, and this enemy had chosen irregular warfare. Now we had to adapt faster than they could, getting inside their OODA loop. Having watched how swiftly Islamist terrorism was spreading, I believed we would be fighting for years. Accordingly, irregular warfare had to be a core competency, but without the Marine Corps’s developing tunnel vision and ignoring other kinds of threats. My approach in adapting our warfighting to this enemy was to insist on the pervasive implementation of decentralized decision-making.

Situational recognition isn’t unique to battle. Notice how often a college quarterback calls out the wrong signal, resulting in a broken play. To cut down on those mental mistakes, former Ohio State coach Urban Meyer devoted team meetings to hands-on simulation exercises, demanding that his players respond to confused situations. The goal was the assimilation of knowledge to take with them into the next game so that they would recognize the same situation when it occurred.

For me, “player-coach” aptly describes the role of a combat leader, or any real leader.

Peter Drucker, the business guru, criticized business executives for devoting too much time to planning, rather than understanding the nature of the corporation itself. As he put it, “Culture eats strategy for lunch.” The output of any organization, driven by its culture, must reflect the leadership’s values in order to be effective.

thinking, using deception and turning faster inside his decision loop, always assuming that he would adapt

“The trinity of chance, uncertainty, and friction [will] continue to characterize war,” Clausewitz wrote, “and will make anticipation of even the first-order consequences of military action highly conjectural.”

Part III: Strategic Leadership

I had never gone in front of a hearing without a “murder board,” where I rehearsed succinct answers to complex questions. As long as you are candid and have done your homework, such hearings are not an intellectual challenge.

One of my predecessors at CENTCOM, General Zinni, had taught me to break information into three categories. The first was housekeeping, which allowed me to be anticipatory—for example, munitions stockage levels and ship locations. The second was decision-making, to maintain the rhythm of operations designed to ensure that our OODA loops were functioning at the speed of relevance. The third was alarms, called “night orders.” These addressed critical events—for instance, a U.S. embassy in distress or a new outbreak of hostilities. “Alarm” information had to be immediately brought to my attention, day or night.

My staff kept me informed, following my mantra: “What do I know? Who needs to know? Have I told them?” I repeated it so often that it appeared on index cards next to the phones in some offices.

“Commander’s intent” has a special meaning in the military that requires time and thought. A commander must state his relevant aim. Intent is a formal statement in which the commander puts himself or herself on the line. Intent must accomplish the mission; it has to be achievable, it must be clearly understood, and, at the end of the day, it has to deliver what the unit was tasked with achieving. Your moral authority as a commander is heavily dependent on the quality of this guidance and your troops’ sense of confidence in it: the expectation that they will use their initiative, aligning subordinate actions. You must unleash initiative rather than suffocate it.

By conveying my intent in writing and in person, I was out to win their coequal “ownership” of the mission: it wasn’t my mission; rather, from private through general, it was our mission. I stressed to my staff that we had to win only one battle: for the hearts and minds of our subordinates. They will win all the rest—at the risk and cost of their lives. Once the intent was clearly conveyed, the mission was left in the hands of our junior officers and NCOs, and their animating spirits coached our troops to achieve my aim.

Trust is the coin of the realm for creating the harmony, speed, and teamwork to achieve success at the lowest cost. Trusted personal relationships are the foundation for effective fighting teams, whether on the playing field, in the boardroom, or on the battlefield. When the spirit of your team is on the line and the stakes are high, confidence in the integrity and commitment of those around you will enable boldness and resolution; a lack of trust will see brittle, often tentative execution of even the best-laid plans. Nothing compensates for a lack of trust. Lacking trust, your unit will pay a steep price in combat.

Yet it’s not enough to trust your people; you must be able to convey that trust in a manner that subordinates can sense. Only then can you fully garner the benefits. From mission-type orders that left subordinates with freedom of action to declining to take detailed briefs if I thought it would remove subordinate commanders’ sense of ownership over their own operations, my coaching style exhibited confidence in juniors I knew were ready to take charge. I had also found, in Tora Bora’s missed opportunity to prevent Osama bin Laden’s escape, that I had to build awareness and trust above me.

While processes are boring to examine, leaders must know their own well enough that they can master them and not be mastered, even derailed, by them. In competitive situations, a faster operating tempo than your adversary’s is a distinct asset. A smoothly operating team can more swiftly move through the observe/orient/decide/act loop, multiplying the effectiveness of its numbers. Left untouched, processes imposed by unneeded echelons will marginalize subordinate audacity.

All hands had to be thinking all the time: What do I know? Who needs to know? Have I told them? Additionally, by reducing the size of headquarters staffs, we reduced demands for information flow from subordinate units, which could then principally focus on the enemy rather than answering higher headquarters’ queries.

How to Change Your Mind

Michael Pollan
Body, Mind & Spirit
Penguin Books
May 14, 2019

This is an incredible book. There are a lot of notes. (And I skipped the first chapters covering the history.)

Michael Pollan set out to research how LSD and psilocybin (the active ingredient in magic mushrooms) are being used to provide relief to people suffering from difficult-to-treat conditions such as depression, addiction and anxiety.

These remarkable substances are improving the lives not only of the mentally ill but also of healthy people coming to grips with the challenges of everyday life.

Pollan sifts the historical record to separate the truth about these mysterious drugs from the myths that have surrounded them since the 1960s, when a handful of psychedelic evangelists inadvertently catalyzed a powerful backlash against what was then a promising field of research. 

The true subject of Pollan's "mental travelogue" is not just psychedelic drugs but also the eternal puzzle of human consciousness and how, in a world that offers us both suffering and joy, we can do our best to be fully present and find meaning in our lives.

The soul should always stand ajar. —EMILY DICKINSON


As the literary theorists would say, the psychedelic experience is highly “constructed.” If you are told you will have a spiritual experience, chances are pretty good that you will, and, likewise, if you are told the drug may drive you temporarily insane, or acquaint you with the collective unconscious, or help you access “cosmic consciousness,” or revisit the trauma of your birth, you stand a good chance of having exactly that kind of experience.

Psychologists call these self-fulfilling prophecies “expectancy effects,” and they turn out to be especially powerful in the case of psychedelics.

The molecular structure of mescaline closely resembled that of adrenaline. Could schizophrenia result from some kind of dysfunction in the metabolism of adrenaline, transforming it into a compound that produced the schizophrenic rupture with reality?

The fact that such a vanishingly small number of LSD molecules could exert such a profound effect on the mind was an important clue that a system of neurotransmitters with dedicated receptors might play a role in organizing our mental experience. This insight eventually led to the discovery of serotonin and the class of antidepressants known as SSRIs.

In 1953, Osmond and Hoffer noted that the LSD experience appeared to share many features with the descriptions of delirium tremens reported by alcoholics—the hellish, days-long bout of madness alcoholics often suffer while in the throes of withdrawal. Many recovering alcoholics look back on the hallucinatory horrors of the DTs as a conversion experience and the basis of the spiritual awakening that allows them to remain sober.

“Could a controlled LSD-produced delirium help alcoholics stay sober?”

Osmond and Hoffer tested this hypothesis on more than seven hundred alcoholics, and in roughly half the cases, they reported, the treatment worked: the volunteers got sober and remained so for at least several months.

“From the first,” Hoffer wrote, “we considered not the chemical, but the experience as a key factor in therapy.”

The emphasis on what subjects felt represented a major break with the prevailing ideas of behaviorism in psychology, in which only observable and measurable outcomes counted and subjective experience was deemed irrelevant.

When the therapists began to analyze the reports of volunteers, their subjective experiences while on LSD bore little if any resemblance to the horrors of the DTs, or to madness of any kind. To the contrary, their experiences were, for the most part, incredibly—and bafflingly—positive.

Volunteers offered descriptions of, say, “a transcendental feeling of being united with the world,” one of the most common feelings reported. Rather than madness, most volunteers described sensations such as a new ability “to see oneself objectively”; “enhancement in the sensory fields”; profound new understandings “in the field of philosophy or religion”; and “increased sensitivity to the feelings of others.”

What a psychiatrist might diagnose as depersonalization, hallucinations, or mania might better be thought of as instances of mystical union, visionary experience, or ecstasy. Could it be that the doctors were mistaking transcendence for insanity?

One of the best ways to avoid a bad session, they found, was the presence of an engaged and empathetic therapist, ideally someone who had had his or her own LSD experience. They came to suspect that the few psychotic reactions they did observe might actually be an artifact of the metaphorical white room and white-coated clinician. Though the terms “set” and “setting” would not be used in this context for several more years (and became closely identified with Timothy Leary’s work at Harvard a decade later), Osmond and Hoffer were already coming to appreciate the supreme importance of those factors in the success of their treatment.

Few members of AA realize that the whole idea of a spiritual awakening leading one to surrender to a “higher power”—a cornerstone of Alcoholics Anonymous—can be traced to a psychedelic drug trip.

Bill W. found that LSD could reliably occasion the kind of spiritual awakening he believed one needed in order to get sober; however, he did not believe the LSD experience was anything like the DTs, thus driving another nail into the coffin of that idea. He thought there might be a place for LSD therapy in AA, but his colleagues on the board of the fellowship strongly disagreed, believing that to condone the use of any mind-altering substance risked muddying the organization’s brand and message.

Therapists who administered doses of LSD as low as 25 micrograms (and seldom higher than 150 micrograms) reported that their patients’ ego defenses relaxed, allowing them to bring up and discuss difficult or repressed material with relative ease. This suggested that the drugs could be used as an aid to talking therapy, because at these doses the patients’ egos remained sufficiently intact to allow them to converse with a therapist and later recall what was discussed.

Freud called dreams “the royal road” to the subconscious, bypassing the gates of both the ego and the superego, yet the road has plenty of ruts and potholes: patients don’t always remember their dreams, and when they do recall them, it is often imperfectly. Drugs like LSD and psilocybin promised a better route into the subconscious.

Stanislav Grof, who trained as a psychoanalyst, found that under moderate doses of LSD his patients would quickly establish a strong transference with the therapist, recover childhood traumas, give voice to buried emotions, and, in some cases, actually relive the experience of their birth—our first trauma and, Grof believed (following Otto Rank), a key determinant of personality. (Grof did extensive research trying to correlate his patients’ recollections of their birth experience on LSD with contemporaneous reports from medical personnel and parents. He concluded that with the help of LSD many people can indeed recall the circumstances of their birth, especially when it was a difficult one.)

He came to believe that “under LSD the fondest theories of the therapist are confirmed by his patient.” The expectancy effect was such that patients working with Freudian therapists returned with Freudian insights (framed in terms of childhood trauma, sexual drives, and oedipal emotions), while patients working with Jungian therapists returned with vivid archetypes from the attic of the collective unconscious, and Rankians with recovered memories of their birth traumas.

Cohen wrote that “any explanation of the patient’s problems, if firmly believed by both the therapist and the patient, constitutes insight or is useful as insight.” Yet he qualified this perspective by acknowledging it was “nihilistic,” which, scientifically speaking, it surely was. For it takes psychotherapy perilously close to the world of shamanism and faith healing, a distinctly uncomfortable place for a scientist to be. And yet as long as it works, as long as it heals people, why should anyone care?

Andrew Weil in his 1972 book, The Natural Mind.

They do something, surely, but most of what that is may be self-generated. Or, as Stanislav Grof put it, psychedelics are “nonspecific amplifiers” of mental processes.

an obscure former bootlegger and gunrunner, spy, inventor, boat captain, ex-con, and Catholic mystic named Al Hubbard

Hubbard was the first researcher to grasp the critical importance of set and setting in shaping the psychedelic experience. He instinctively understood that the white walls and fluorescent lighting of the sanitized hospital room were all wrong. So he brought pictures and music, flowers and diamonds, into the treatment room, where he would use them to prime patients for a mystical revelation or divert a journey when it took a terrifying turn.

What Hubbard was bringing into the treatment room was something well known to any traditional healer. Shamans have understood for millennia that a person in the depths of a trance or under the influence of a powerful plant medicine can be readily manipulated with the help of certain words, special objects, or the right kind of music.

In Vancouver, he persuaded Hollywood Hospital to dedicate an entire wing to treating alcoholics with LSD. Hubbard would often fly his plane down to Los Angeles to discreetly ferry Hollywood celebrities up to Vancouver for treatment. It was this sideline that earned him the nickname Captain Trips.

Al Hubbard moved between these far-flung centers of research like a kind of psychedelic honeybee, disseminating information, chemicals, and clinical expertise while building what became an extensive network across North America.


Seventy-eight percent of clients said the experience had increased their ability to love, 71 percent registered an increase in self-esteem, and 83 percent said that during their sessions they had glimpsed “a higher power, or ultimate reality.” Those who had such an experience were the ones who reported the most lasting benefits from their session. Don Allen told me that most clients emerged with “notable and fairly sustainable changes in beliefs, attitudes, and behavior, way above statistical probability.” Specifically, they became “much less judgmental, much less rigid, more open, and less defended.”

The foundation also conducted studies to determine if LSD could in fact enhance creativity and problem solving.

The Whole Earth Network that Brand would subsequently gather together included Peter Schwartz, Esther Dyson, Kevin Kelly, Howard Rheingold, and John Perry Barlow.

Herbert Kelman, a colleague in the department who later became Leary’s chief adversary, recalls the new professor as “personable” (Kelman helped him find his first house) but says, “I had misgivings about him from the beginning. He would often talk out of the top of his head about things he knew nothing about, like existentialism, and he was telling our students psychology was all a game. It seemed to me a bit cavalier and irresponsible.”

“In four hours by the swimming pool in Cuernavaca I learned more about the mind, the brain, and its structures than I did in the preceding fifteen years as a diligent psychologist,” he wrote later in Flashbacks, his 1983 memoir. “I learned that the brain is an underutilized biocomputer . . . I learned that normal consciousness is one drop in an ocean of intelligence. That consciousness and intelligence can be systematically expanded. That the brain can be reprogrammed.”

Drawing on their extensive fieldwork, however, Leary did do some original work theorizing the idea of “set” and “setting,” deploying the words in this context for the first time in the literature.

The Concord Prison Experiment sought to discover if the potential of psilocybin to change personality could be used to reduce recidivism in a population of hardened criminals.

But when Rick Doblin at MAPS meticulously reconstructed the Concord experiment decades later, reviewing the outcomes subject by subject, he concluded that Leary had exaggerated the data; in fact, there was no statistically significant difference in the rates of recidivism between the two groups.

Their unspoken model was the Eleusinian mysteries, in which the Greek elite gathered in secret to ingest the sacred kykeon and share a night of revelation.

It’s often said that in the 1960s psychedelics “escaped from the laboratory,” but it would probably be more accurate to say they were thrown over the laboratory wall, and never with as much loft or velocity as by Timothy Leary and Richard Alpert at the end of 1962.

another explosive article in the Crimson got them both fired. This one was written by an undergraduate named Andrew Weil. Weil had arrived at Harvard with a keen interest in psychedelic drugs—he had devoured Huxley’s Doors of Perception in high school—and when he learned about the Psilocybin Project, he beat a path to Professor Leary’s office door to ask if he could participate.

But he wanted badly to be part of Leary and Alpert’s more exclusive club, so when in the fall of 1962 Weil began to hear about other undergraduates who had received drugs from Richard Alpert, he was indignant. He went to his editor at the Crimson and proposed an investigation.

This was not, suffice it to say, Andrew Weil’s proudest moment, and when I spoke to him about it recently, he confessed that he’s felt badly about the episode ever since and had sought to make amends to both Leary and Ram Dass. (Two years after his departure from Harvard, Alpert embarked on a spiritual journey to India and returned as Ram Dass.) Leary readily accepted Weil’s apology—the man was apparently incapable of holding a grudge—but Ram Dass refused to talk to Weil for years, which pained him. After Ram Dass suffered a stroke in 1997, Weil traveled to Hawaii to seek his forgiveness. Ram Dass finally relented, telling Weil that he had come to regard being fired from Harvard as a blessing. “If you hadn’t done what you did,” he told Weil, “I would never have become Ram Dass.”

As Ram Dass, and the author of the 1971 classic Be Here Now, he would put his own lasting mark on American culture, having blazed one of the main trails by which Eastern religion found its way into the counterculture and then the so-called New Age.

Andrew Weil, who as a young doctor volunteered in the Haight-Ashbury Free Clinic in 1968, saw a lot of bad trips and eventually developed an effective way to “treat” them. “I would examine the patient, determine it was a panic reaction, and then tell him or her, ‘Will you excuse me for a moment? There’s someone in the next room who has a serious problem.’ They would immediately begin to feel much better.”


I have in mind a specific subset of that world, populated by perhaps

a couple hundred “guides,” or therapists, working with a variety of psychedelic substances in a carefully prescribed manner, with the intention of healing the ill or bettering the well by helping them fulfill their spiritual, creative, or emotional potential

Many of these guides are credentialed therapists, so by doing this work they are risking not only their freedom but also their professional licenses

Some are religious professionals—rabbis and ministers of various denominations; a few call themselves shamans; one described himself as a druid

a shaggy reunion of that whole 1970s class of alternative “modalities” that usually get lumped together under the rubric of the “human potential movement” and that has as its world headquarters Esalen

Zeff also left a posthumous (and anonymous) account of his work, in the form of a 1997 book called The Secret Chief, a series of interviews with a therapist called Jacob conducted by his close friend Myron Stolaroff. (In 2004, Zeff’s family gave Stolaroff permission to disclose his identity and republish the book as The Secret Chief Revealed.)

The guide had asked him to bring along an object of personal significance, so Zeff brought his Torah

helped his patients break through their defenses, bringing buried layers of unconscious material to the surface, and achieve spiritual insights, often in a single session

During his long career, Zeff helped codify many of the protocols of underground therapy, setting forth the “agreements” guides typically make with their clients—regarding confidentiality (strict), sexual contact (forbidden), obedience to the therapist’s instructions during the session (absolute), and so on—and developing many of the ceremonial touches, such as having participants take the medicine from a cup: “a very important symbol of the transformation experience.”

For example, some prominent underground therapists have been recruited to help train a new cohort of psychedelic guides to work in university trials of psychedelic drugs. When the Hopkins team wanted to study the role of music in the guided psilocybin session, it reached out to several underground guides, surveying their musical practices.

James Fadiman came to the MAPS conference “on the science track,” to give a talk about the value of the guided entheogenic journey

Soon after the meeting in San Jose, a “wiki” appeared on the Internet—a collaborative website where individuals can share documents and together create new content. (Fadiman included the URL in his 2011 book, The Psychedelic Explorer’s Guide.)

There’s also a link to a thoughtful “Code of Ethics for Spiritual Guides,” which acknowledges the psychological and physical risks of journeying and emphasizes the guide’s ultimate responsibility for the well-being of the client

Perhaps the most useful document on the website is the “Guidelines for Voyagers and Guides”

I found a small shrine populated with spiritual artifacts from a bewildering variety of traditions: a Buddha, a crystal, a crow’s wing, a brass bowl for burning incense, a branch of sage. At the back of the shrine stood two framed photographs, one of a Hindu guru I didn’t recognize and the other of a Mexican curandera I did: María Sabina.

clients were often asked to contribute an item of personal significance before embarking on their journeys. What I was tempted to dismiss as a smorgasbord of equal-opportunity New Age tchotchkes, I would eventually come to regard more sympathetically, as the material expression of the syncretism prevalent in the psychedelic community.

“the first New Age graduate school”—the California Institute of Integral Studies. Founded in 1968, the institute specializes in “transpersonal psychology,” a school of therapy with a strong spiritual orientation rooted in the work of Carl Jung and Abraham Maslow as well as the “wisdom traditions” of the East and the West, including Native American healing and South American shamanism. Stanislav Grof, a pioneer of both transpersonal and psychedelic therapy, has been on the faculty for many years

In 2016, the institute began offering the nation’s first certificate program in psychedelic therapy.

vocation. “I help people find out who they are so they can live their lives fully.”

You need a strong ego in order to let go of it and then be able to spring back to your boundaries.

The biggest fears that come up are the fear of death and the fear of madness. But the only thing to do is surrender. So surrender!

He was not overly concerned about the psychedelics—most of them concentrate their effects in the mind with remarkably little impact on the cardiovascular system—but one of the drugs I mentioned he advised I avoid. This was MDMA, also known as Ecstasy or Molly, which has been on Schedule I since the mid-1980s, when it emerged as a popular rave drug.

MDMA lowers psychological defenses and helps to swiftly build a bond between patient and therapist

Guides told me MDMA was a good way to “break the ice” and establish trust before the psychedelic journey.

Wilhelm Reich, “my hero.” Along the way, he discovered that LSD was a powerful tool for exploring the depths of his own psyche, allowing him to reexperience and then let go of the anger and depression that hobbled him as a young man. “There was more light in my life after that. Something shifted.”


“These medicines have shown me that something quote-unquote impossible exists. But I don’t think it’s magic or supernatural. It’s a technology of consciousness we don’t understand yet.”

For some people, the privilege of having had a mystical experience tends to massively inflate the ego, convincing them they’ve been granted sole possession of a key to the universe. This is an excellent recipe for creating a guru. The certitude and condescension for mere mortals that usually come with that key can render these people insufferable. But that wasn’t Fritz. To the contrary. His otherworldly experiences had humbled him, opening him up to possibilities and mysteries without closing him to skepticism—or to the pleasures of everyday life on this earth

he met Stan Grof at a breathwork course at Esalen

Grof was ostensibly teaching holotropic breathwork, the non-pharmacological modality he had developed after psychedelics were made illegal

he put on some music, something generically tribal and rhythmic, dominated by the pounding of a drum

to go with a modest dose—a hundred micrograms, with “a booster” after an hour or two if I wanted one

“It’s like when you see a mountain lion,” he suggested. “If you run, it will chase you. So you must stand your ground.” I was reminded of the “flight instructions” that the guides employed at Johns Hopkins: instead of turning away from any monster that appears, move toward it, stand your ground, and demand to know, “What are you doing in my mind? What do you have to teach me?”

I was now joining, the lineage of all the tribes and peoples down through time and around the world who used such medicines in their rites of initiation

Yet I still had agency: I could change at will the contents of my thoughts, but in this dreamy state, so wide open to suggestion, I was happy to let the terrain, and the music, dictate my path.

I got absorbed watching a white tracery of mycelium threading among the roots and linking the trees in a network intricate beyond comprehension. I knew all about this mycelial network, how it forms a kind of arboreal Internet allowing the trees in a forest to exchange information, but now what had been merely an intellectual conceit was a vivid, felt reality of which I had become a part.

Love is everything.

No—you must not have heard me: it’s everything!

For what, after all, is the sense of banality, or the ironic perspective, if not two of the sturdier defenses the adult ego deploys to keep from being overwhelmed—by our emotions, certainly, but perhaps also by our senses, which are liable at any time to astonish us with news of the sheer wonder of the world. If we are ever to get through the day, we need to put most of what we perceive into boxes neatly labeled “Known,” to be quickly shelved with little thought to the marvels therein, and “Novel,” to which, understandably, we pay more attention, at least until it isn’t that anymore. A psychedelic is liable to take all the boxes off the shelf, open and remove even the most familiar items, turning them over and imaginatively scrubbing them until they shine once again with the light of first sight.

I never achieved a transcendent, “non-dual” or “mystical-like” experience, and as I recapped the journey with Fritz the following morning, I registered a certain disappointment. But the novel plane of consciousness I’d spent a few hours wandering on had been interesting and pleasurable and, I think, useful to me. I would have to see if its effects endured, but it felt as though the experience had opened me up in unexpected ways.

It reminded me of the pleasantly bizarre mental space that sometimes opens up at night in bed when we’re poised between the states of being awake and falling asleep—so-called hypnagogic consciousness.

“For the moment that interfering neurotic who, in waking hours, tries to run the show, was blessedly out of the way,” as Aldous Huxley put it in The Doors of Perception

The notion of a few years of psychotherapy condensed into several hours seemed about right

she recited a long and elaborate Native American prayer. She invoked in turn the power of each of the cardinal directions, the four elements, and the animal, plant, and mineral realms, the spirits of which she implored to help guide me on my journey.

an amethyst in the shape of a heart, a purple crystal holding a candle, little cups

filled with water, a bowl holding a few rectangles of dark chocolate, the two “sacred items” she had asked me to bring (a bronze Buddha a close friend had brought back from a trip to the East; the psilocybin coin Roland Griffiths had given me at our first meeting)

a fragrant South American wood that Indians burn ceremonially, and the jet-black wing of a crow

I was God and God was me

The reawakening of her spiritual life led her onto the path of Tibetan Buddhism and eventually to take the vow of an initiate: “‘To assist all sentient beings in their awakening and their enlightenment.’ Which is still my vocation.”

two grams. Mary planned to offer me another two grams along the way, for a total of four. This would roughly approximate the dose being given to volunteers in the NYU and Hopkins trials and was equivalent to roughly three hundred micrograms of LSD—twice as much as I had taken with Fritz.

Called the Mindfold Relaxation Mask, Mary told me, it had been expressly designed for this purpose by Alex Grey, the psychedelic artist.

sound begat space

Relax and float downstream

I saw she had turned into María Sabina, the Mexican curandera who had given psilocybin to R. Gordon Wasson in that dirt basement in Huautla de Jiménez sixty years ago

Later, when I did, she was flattered: María Sabina was her hero.

By adulthood, the mind has gotten very good at observing and testing reality and developing confident predictions about it that optimize our investments of energy (mental and otherwise) and therefore our survival. So rather than starting from scratch to build a new perception from every batch of raw data delivered by the senses, the mind jumps to the most sensible conclusion based on past experience combined with a tiny sample of that data. Our brains are prediction machines optimized by experience, and when it comes to faces, they have boatloads of experience: faces are always convex, so this hollow mask must be a prediction error to be corrected.

There was life after the death of the ego. This was big news.

When I think back on this part of the experience, I’ve occasionally wondered if this enduring awareness might have been the “Mind at Large” that Aldous Huxley described during his mescaline trip in 1953. Huxley never quite defined what he meant by the term—except to speak of “the totality of the awareness belonging to Mind at Large”—but he seems to be describing a universal, shareable form of consciousness unbounded by any single brain. Others have called it cosmic consciousness, the Oversoul, or Universal Mind

Could it be there is another ground on which to plant our feet? For the first time since embarking on this project, I began to understand what the volunteers in the cancer-anxiety trials had been trying to tell me: how it was that a psychedelic journey had granted them a perspective from which the very worst life can throw at us, up to and including death, could be regarded objectively and accepted with equanimity.

Bleached skulls and bones and the faces of the familiar dead passed before me, aunts and uncles and grandparents, friends and teachers and my father-in-law—with a voice telling me I had failed to properly mourn all of them. It was true. I had never really reckoned the death of anyone in my life; something had always gotten in the way. I could do it here and now and did.

We settled on the second of Bach’s unaccompanied cello suites, performed by Yo-Yo Ma

The suite in D minor is a spare and mournful piece that I’d heard many times before, often at funerals, but until this moment I had never truly listened to it.

Never before has a piece of music pierced me as deeply as this one did now. Though even to call it “music” is to diminish what now began to flow, which was nothing less than the stream of human consciousness, something in which one might glean the very meaning of life and, if you could bear it, read life’s last chapter. (A question formed: Why don’t we play music like this at births as well as funerals? And the answer came immediately: there is too much life-already-lived in this piece, and such poignancy for the passing of time, that no birth, no beginning, could possibly withstand it.)

Four hours and four grams of magic mushroom into the journey, this is where I lost whatever ability I still had to distinguish subject from object, tell apart what remained of me and what was Bach’s music. Instead of Emerson’s transparent eyeball, egoless and one with all it beheld, I became a transparent ear, indistinguishable from the stream of sound that flooded my consciousness until there was nothing else in it, not even a dry tiny corner in which to plant an I and observe

minutes it took for that piece to, well, change everything. Or so it seemed; now, its vibrations subsiding, I’m less certain. But for the duration of those exquisite moments, Bach’s cello suite had had the unmistakable effect of reconciling me to death—to the deaths of the people now present to me, Bob’s and Ruthellen’s and Roy’s, Judith’s father’s, and so many others, but also to the deaths to come and to my own, no longer so far off. Losing myself in this music was a kind of practice for that—for losing myself, period. Having let go of the rope of self and slipped into the warm waters of this worldly beauty—Bach’s sublime music, I mean, and Yo-Yo Ma’s bow caressing those four strings suspended over that envelope of air—I felt as though I’d passed beyond the reach of suffering and regret.

what had I learned? That I had had no reason to be afraid: no sleeping monsters had awakened in my unconscious and turned on me. This was a deep fear that went back several decades, to a

terrifying moment in a hotel room in Seattle when, alone and having smoked too much cannabis, I had had to marshal every last ounce of will to keep myself from doing something deeply crazy and irrevocable

Temporarily freed from the tyranny of the ego, with its maddeningly reflexive reactions and its pinched conception of one’s self-interest, we get to experience an extreme version of Keats’s “negative capability”—the ability to exist amid doubts and mysteries without reflexively reaching for certainty. To cultivate this mode of consciousness, with its exceptional degree of selflessness (literally!), requires us to transcend our subjectivity or—it comes to the same thing—widen its circle so far that it takes in, besides ourselves, other people and, beyond that, all of nature. Now I understood how a psychedelic could help us to make precisely that move, from the first-person singular to the plural and beyond. Under its influence, a sense of our interconnectedness—that platitude—is felt, becomes flesh. Though this perspective is not something a chemical can sustain for more than a few hours, those hours can give us an opportunity to see how it might go. And perhaps to practice being there.

That’s when Andrew Weil and Wade Davis published a paper called “Identity of a New World Psychoactive Toad.”

I had the feeling—no, the knowledge—that every single thing there is is made of love.

There are children to raise. And there is an infinite amount of time to be dead.’”

“How can you be sure this was a genuine spiritual event and not just a drug experience?” “It’s an irrelevant question,” she replied coolly. “This was something being revealed to me.”

I felt for the first time gratitude for the very fact of being, that there is anything whatsoever

Not sure exactly where to begin, I realized it might be useful to measure my experiences against those of the volunteers in the Hopkins and NYU studies. I decided to fill out one of the Mystical Experience Questionnaires (MEQs)* that the scientists had their subjects complete, hoping to learn if mine qualified.

The MEQ asked me to rank a list of thirty mental phenomena—thoughts, images, and sensations that psychologists and philosophers regard as typical of a mystical experience. (The questionnaire draws on the work of William James, W. T. Stace, and Walter Pahnke.)

I concluded that the MEQ was a poor net for capturing my encounter with the toad. The result was psychological bycatch, I decided, and should probably be tossed out.

Reflecting just on the cello interlude, for example, I could easily confirm the “fusion of [my] personal self into a larger whole,” as well as the “feeling that [I] experienced something profoundly sacred and holy” and “of being at a spiritual height” and even the “experience of unity with ultimate reality

My psilocybin journey with Mary yielded a sixty-six on the Mystical Experience Questionnaire. For some reason, I felt stupidly proud of my score. (There I was again, doing being.)

Yet I think it would be wrong to discard the mystical, if only because so much work has been done by so many great minds—over literally thousands of years—to find the words for this extraordinary human experience and make sense of it. When we read the testimony of these minds, we find a striking commonality in their descriptions, even if we civilians can’t quite understand what in the world (or out of it) they’re talking about.

According to scholars of mysticism, these shared traits generally include a vision of unity in which all things, including the self, are subsumed (expressed in the phrase “All is one”); a sense of certainty about what one has perceived (“Knowledge has been revealed to me”); feelings of joy, blessedness, and satisfaction; a transcendence of the categories we rely on to organize the world, such as time and space or self and other; a sense that whatever has been apprehended is somehow sacred (Wordsworth: “Something far more deeply interfused” with meaning) and often paradoxical (so while the self may vanish, awareness abides). Last is the conviction that the experience is ineffable, even as thousands of words are expended in the attempt to communicate its power. (Guilty.)

Likewise, certain mystical passages from literature that once seemed so overstated and abstract that I read them indulgently (if at all) I can now read as a subspecies of journalism. Here are three nineteenth-century examples, but you can find them in any century.

Ralph Waldo Emerson, crossing a wintry New England commons in “Nature”: “Standing on the bare ground,—my head bathed by the blithe air, and uplifted into infinite space,—all mean egotism vanishes. I become a transparent eye-ball. I am nothing. I see all. The currents of the Universal Being circulate through me; I am part or particle of God.”

Or Walt Whitman, in the early lines of the first (much briefer and more mystical) edition of Leaves of Grass: “Swiftly arose and spread around me the peace and joy and knowledge that pass all the art and argument of the earth; And I know that the hand of God is the elderhand of my own, And I know that the spirit of God is the eldest brother of my own, And that all the men ever born are also my brothers . . . and the women my sisters and lovers, And that a kelson* of the creation is love.”

And here is Alfred, Lord Tennyson, describing in a letter the “waking trance” that descended upon him from time to time since his boyhood: “All at once, as it were out of the intensity of the consciousness of individuality, the individuality itself seemed to dissolve and fade into boundless being; and this was not a confused state, but the clearest of the clearest, the surest of the surest; utterly beyond words, where death was an almost laughable impossibility; the loss of personality (if so it were) seeming no extinction, but the only true life.”

But I have no problem using the word “spiritual” to describe elements of what I saw and felt, as long as it is not taken in a supernatural sense. For me, “spiritual” is a good name for some of the powerful mental phenomena that arise when the voice of the ego is muted or silenced.

The journeys have shown me what the Buddhists try to tell us but I have never really understood: that there is much more to consciousness than the ego, as we would see if it would just shut up.

When Huxley speaks of the mind’s “reducing valve”—the faculty that eliminates as much of the world from our conscious awareness as it lets in—he is talking about the ego. That stingy, vigilant security guard admits only the narrowest bandwidth of reality, “a measly trickle of the kind of consciousness which will help us to stay alive.”

In the words of R. M. Bucke, a nineteenth-century Canadian psychiatrist and mystic, “I saw that the universe is not composed of dead matter, but is, on the contrary, a living Presence.” “Ecology” and “coevolution” are scientific names for the same phenomena: every species a subject acting on other subjects

So perhaps spiritual experience is simply what happens in the space that opens up in the mind when “all mean egotism vanishes.” Wonders (and terrors) we’re ordinarily defended against flow into our awareness; the far ends of the sensory spectrum, which are normally invisible to us, our senses can suddenly admit. While the ego sleeps, the mind plays, proposing unexpected patterns of thought and new rays of relation. The gulf between self and world, that no-man’s-land which in ordinary hours the ego so vigilantly patrols, closes down, allowing us to feel less separate and more connected, “part and particle” of some larger entity. Whether we call that entity Nature, the Mind at Large, or God hardly matters. But it seems to be in the crucible of that merging that death loses some of its sting.


All three molecules are tryptamines. A tryptamine is a type of organic compound (an indole, to be exact) distinguished by the presence of two linked rings, one of them with six atoms and the other with five

The group of tryptamines we call “the classical psychedelics” have a strong affinity with one particular type of serotonin receptor, called the 5-HT2A. These receptors are found in large numbers in the human cortex, the outermost, and evolutionarily most recent, layer of the brain. Basically, the psychedelics resemble serotonin closely enough that they can attach themselves to this receptor site in such a way as to activate it to do various things.

This has led some scientists to speculate that the human body must produce some other, more bespoke chemical for the express purpose of activating the 5-HT2A receptor—perhaps an endogenous psychedelic that is released under certain circumstances, perhaps when dreaming

One candidate for that chemical is the psychedelic molecule DMT,

SSRI antidepressants

He did this by giving subjects a drug called ketanserin that blocks the receptor; when he then administered psilocybin, nothing happened.

To the dissolution of my ego, for example, and the collapse of any distinction between subject and object? Or to the morphing in my mind’s eye of Mary into María Sabina?

All these questions concern the contents of consciousness

What neuroscientists and philosophers and psychologists mean by consciousness is the unmistakable sense we have that we are, or possess, a self that has experiences.

How do you explain mind—the subjective quality of experience—in terms of meat, that is, in terms of the physical structures or chemistry of the brain?

Some scientists have raised the possibility that consciousness may pervade the universe, suggesting we think of it the same way we do electromagnetism or gravity, as one of the fundamental building blocks of reality.

A psychedelic drug is powerful enough to disrupt the system we call normal waking consciousness in ways that may force some of its fundamental properties into view.

In contrast, someone on a psychedelic remains awake and able to report on what he or she is experiencing in real time.

links between our brains and our minds.

neuroscientist named Robin Carhart-Harris has been working since 2009 to identify the “neural correlates,” or physical counterparts, of the psychedelic experience. By injecting volunteers with LSD and psilocybin and then using a variety of scanning technologies—including functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG)

His professor sent him to read a book called Realms of the Human Unconscious by Stanislav Grof.

he would use psychedelic drugs and modern brain-imaging technologies to build a foundation of hard science beneath the edifice of psychoanalysis. “Freud said dreams were the royal road to the unconscious,” he reminded me. “Psychedelics could turn out to be the superhighway.”

LSD, Feilding believes, enhances cognitive function and facilitates higher states of consciousness by increasing cerebral circulation. A second way to achieve a similar result is by means of the ancient practice of trepanation. This deserves a brief digression.

Trepanation involves drilling a shallow hole in the skull supposedly to improve cerebral blood circulation; in effect, it reverses the fusing of the cranial bones that happens in childhood

she trepanned herself in 1970, boring a small hole in the middle of her forehead with an electric drill. (She documented the procedure in a short but horrifying film called Heartbeat in the Brain.)

potential of psychedelics to improve brain function

(But because frequent use of LSD can lead to tolerance, it’s entirely possible that for some people 150 micrograms merely “adds a certain sparkle to consciousness.”)

He had concluded from his research, and would tell anyone who asked, that alcohol was more dangerous than cannabis and that using Ecstasy was safer than riding a horse.

injection of psilocybin and then slide into an fMRI scanner to have his tripping brain imaged.

Carhart-Harris’s working hypothesis was that their brains would exhibit increases in activity, particularly in the emotion centers. “I thought it would look like the dreaming brain.”

Carhart-Harris got a surprise: “We were seeing decreases in blood flow”—blood flow being one of the proxies for brain activity that fMRI measures.

Carhart-Harris and his colleagues had discovered that psilocybin reduces brain activity, with the falloff concentrated in one particular brain network that at the time he knew little about: the default mode network.

Carhart-Harris began reading up on it. The default mode network, or DMN, was not known to brain science until 2001. That was when Marcus Raichle, a neurologist at Washington University, described it in a landmark paper published in the Proceedings of the National Academy of Sciences, or PNAS. The network forms a critical and centrally located hub of brain activity that links parts of the cerebral cortex to deeper (and older) structures involved in memory and emotion.*

Raichle had discovered the place where our minds go to wander—to daydream, ruminate, travel in time, reflect on ourselves, and worry. It may be through these very structures that the stream of our consciousness flows.

The default network stands in a kind of seesaw relationship with the attentional networks that wake up whenever the outside world demands our attention; when one is active, the other goes quiet, and vice versa

the default mode is most active when we are engaged in higher-level “metacognitive” processes such as self-reflection, mental time travel, mental constructions (such as the self or ego), moral reasoning, and “theory of mind”—the ability to attribute mental states to others, as when we try to imagine “what it is like” to be someone else

As a whole, the default mode network exerts a top-down influence on other parts of the brain, many of which communicate with one another through its centrally located hub. Robin has described the DMN variously as the brain’s “orchestra conductor,” “corporate executive,” or “capital city,” charged with managing and “holding the whole system together.” And with keeping the brain’s unrulier tendencies in check.

of competing signals from one system do not interfere with those from another.

As mentioned, the default mode network appears to play a role in the creation of mental constructs or projections, the most important of which is the construct we call the self, or ego.

Self-reflection can lead to great intellectual and artistic achievement but also to destructive forms of self-regard and many types of unhappiness. (In an often-cited paper titled “A Wandering Mind Is an Unhappy Mind,” psychologists identified a strong correlation between unhappiness and time spent in mind wandering, a principal activity of the default mode network.)

“ego dissolution.”

Shortly after Carhart-Harris published his results in a 2012 paper in PNAS (“Neural Correlates of the Psychedelic State as Determined by fMRI Studies with Psilocybin”), Judson Brewer, a researcher at Yale who was using fMRI to study the brains of experienced meditators, noticed that his scans and Robin’s looked remarkably alike. The transcendence of self reported by expert meditators showed up on fMRIs as a quieting of the default mode network. It appears that when activity in the default mode network falls off precipitously, the ego temporarily vanishes, and the usual boundaries we experience between self and world, subject and object, all melt away.

This sense of merging into some larger totality is of course one of the hallmarks of the mystical experience; our sense of individuality and separateness hinges on a bounded self and a clear demarcation between subject and object. But all that may be a mental construction, a kind of illusion—just as the Buddhists have been trying to tell us. The psychedelic experience of “non-duality” suggests that consciousness survives the disappearance of the self, that it is not so indispensable as we—and it—like to think. Carhart-Harris suspects that the loss of a clear distinction between subject and object might help explain another feature of the mystical experience: the fact that the insights it sponsors are felt to be objectively true—revealed truths rather than plain old insights.

The mystical experience may just be what it feels like when you deactivate the brain’s default mode network. This can be achieved any number of ways: through psychedelics and meditation, as Robin Carhart-Harris and Judson Brewer have demonstrated, but perhaps also by means of certain breathing exercises (like holotropic breathwork), sensory deprivation, fasting, prayer, overwhelming experiences of awe, extreme sports, near-death experiences, and so on.

IF THE DEFAULT MODE network is the conductor of the symphony of brain activity, you would expect its temporary absence from the stage to lead to an increase in dissonance and mental disorder—as indeed appears to happen during the psychedelic journey

Taken as a whole, the default mode network exerts an inhibitory influence on other parts of the brain, notably including the limbic regions involved in emotion and memory, in much the same way Freud conceived of the ego keeping the anarchic forces of the unconscious id in check.

Carhart-Harris hypothesizes that these and other centers of mental activity are “let off the leash” when the default mode leaves the stage, and in fact brain scans show an increase in activity (as reflected by increases in blood flow and oxygen consumption) in several other brain regions, including the limbic regions, under the influence of psychedelics. This disinhibition might explain why material that is unavailable to us during normal waking consciousness now floats to the surface of our awareness, including emotions and memories and, sometimes, long-buried childhood traumas. It is for this reason that some scientists and psychotherapists believe psychedelics can be profitably used to surface and explore the contents of the unconscious mind.

But the default mode network doesn’t only exert top-down control over material arising from within; it also helps regulate what is let into consciousness from the world outside. It operates as a kind of filter (or “reducing valve”) charged with admitting only that “measly trickle” of information required for us to get through the day. If not for the brain’s filtering mechanisms, the torrent of information the senses make available to our brains at any given moment might prove difficult to process—as indeed is sometimes the case during the psychedelic experience. “The question,” as David Nutt puts it, “is why the brain is ordinarily so constrained rather than so open?” The answer may be as simple as “efficiency.” Today most neuroscientists work under a paradigm of the brain as a prediction-making machine. To form a perception of something out in the world, the brain takes in as little sensory information as it needs to make an educated guess. We are forever cutting to the chase, basically, and leaping to conclusions, relying on prior experience to inform current perception.

At least when it is working normally, the brain, presented with a few visual clues suggesting it is looking at a face, insists on seeing the face as a convex structure even when it is not because that’s the way faces usually are.

“predictive coding”

The model suggests that our perceptions of the world offer us not a literal transcription of reality but rather a seamless illusion woven from both the data of our senses and the models in our memories.

a kind of controlled hallucination. This raises a question: How is normal waking consciousness any different from other, seemingly less faithful productions of our imagination—such as dreams or psychotic delusions or psychedelic trips? In fact, all these states of consciousness are “imagined”: they’re mental constructs that weave together some news of the world with priors of various kinds. But in the case of normal waking consciousness, the handshake between the data of our senses and our preconceptions is especially firm. That’s because it is subject to a continual process of reality testing, as when you reach out to confirm the existence of the object in your visual field or, upon waking from a nightmare, consult your memory to see if you really did show up to teach a class without any clothes on. Unlike these other states of consciousness, ordinary waking consciousness has been optimized by natural selection to best facilitate our everyday survival.
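The "handshake" between sensory data and prior expectation can be made concrete as a precision-weighted average, the standard toy formulation of predictive coding. The numbers below are invented for illustration, not drawn from any study:

```python
# Toy sketch of precision-weighted predictive coding (illustrative only):
# a percept is a compromise between a prior expectation and sensory evidence,
# weighted by how precise (reliable) each is. All values are made up.

def percept(prior_mean, prior_precision, sensory_mean, sensory_precision):
    """Combine a prior and a sensory observation into a posterior estimate."""
    total = prior_precision + sensory_precision
    return (prior_precision * prior_mean + sensory_precision * sensory_mean) / total

# Ordinary waking consciousness: strong priors dominate ambiguous input.
ordinary = percept(prior_mean=1.0, prior_precision=9.0,
                   sensory_mean=0.0, sensory_precision=1.0)

# Relaxed priors (one proposed way of modeling the psychedelic state):
# the same sensory input now pulls the percept much further from expectation.
relaxed = percept(prior_mean=1.0, prior_precision=1.0,
                  sensory_mean=0.0, sensory_precision=1.0)

print(ordinary)  # 0.9 -> percept stays close to the prior
print(relaxed)   # 0.5 -> percept moves toward the raw data
```

On this picture, "reality testing" amounts to keeping the sensory term well weighted; dreams and delusions are what happens when the prior term runs nearly unopposed.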

“If it were possible to temporarily experience another person’s mental state, my guess is that it would feel more like a psychedelic state than a ‘normal’ state, because of its massive disparity with whatever mental state is habitual with you.”

You quickly realize there is no single reality out there waiting to be faithfully and comprehensively transcribed. Our senses have evolved for a much narrower purpose and take in only what serves our needs as animals of a particular kind. The bee perceives a substantially different spectrum of light than we do; to look at the world through its eyes is to perceive ultraviolet markings on the petals of flowers (evolved to guide their landings like runway lights) that don’t exist for us. That example is at least a kind of seeing—a sense we happen to share with bees. But how do we even begin to conceive of the sense that allows bees to register (through the hairs on their legs) the electromagnetic fields that plants produce? (A weak charge indicates another bee has recently visited the flower; depleted of nectar, it’s probably not worth a stop.) Then there is the world according to an octopus! Imagine how differently reality presents itself to a brain that has been so radically decentralized, its intelligence distributed across eight arms so that each of them can taste, touch, and even make its own “decisions” without consulting headquarters.

By adulthood, the brain has gotten very good at observing and testing reality and developing reliable predictions about it that optimize our investments of energy (mental and otherwise) and therefore our chances of survival. Uncertainty is a complex brain’s biggest challenge, and predictive coding evolved to help us reduce it.

“The Entropic Brain: A Theory of Conscious States Informed by Neuroimaging Research with Psychedelic Drugs,” published in Frontiers in Human Neuroscience in 2014. Here, Carhart-Harris attempts to lay out his grand synthesis of psychoanalysis and cognitive brain science. The question at its heart is, do we pay a price for the achievement of order and selfhood in the adult human mind? The paper concludes that we do. While suppressing entropy (in this context, a synonym for uncertainty) in the brain “serves to promote realism, foresight, careful reflection and an ability to recognize and overcome wishful and paranoid fantasies,” at the same time this achievement tends to “constrain cognition” and exert “a limiting or narrowing influence on consciousness.”

Magical thinking is one way for human minds to reduce their uncertainty about the world, but it is less than optimal for the success of the species.

A better way to suppress uncertainty and entropy in the human brain emerged with the evolution of the default mode network, Carhart-Harris contends, a brain-regulating system that is absent or undeveloped in lower animals and young children. Along with the default mode network, “a coherent sense of self or ‘ego’ emerges” and, with that, the human capacity for self-reflection and reason. Magical thinking gives way to “a more reality-bound style of thinking, governed by the ego.” Borrowing from Freud, he calls this more highly evolved mode of cognition “secondary consciousness.” Secondary consciousness “pays deference to reality and diligently seeks to represent the world as precisely as possible” in order to minimize “surprise and uncertainty (i.e. entropy).”

The article offers an intriguing graphic depicting a “spectrum of cognitive states,” ranging from high-entropy mental states to low ones. At the high-entropy end of the spectrum, he lists psychedelic states; infant consciousness; early psychosis; magical thinking; and divergent or creative thinking. At the low-entropy end of the spectrum, he lists narrow or rigid thinking; addiction; obsessive-compulsive disorder; depression; anesthesia; and, finally, coma.
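“Entropy” here can be read in the information-theoretic sense: how spread out a system’s activity is over its possible states. A toy calculation makes the two ends of the spectrum concrete; the state probabilities below are invented for illustration:

```python
# Illustrative only: Shannon entropy as a measure of how "spread out" activity
# is across possible states. The probability distributions are made up.
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Rigid regime (depression, obsession): activity locked into one
# dominant pattern -> low entropy.
rigid = [0.97, 0.01, 0.01, 0.01]

# Flexible regime (divergent or creative thinking): activity spread
# across many patterns -> high entropy.
flexible = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(rigid))     # ~0.24 bits
print(shannon_entropy(flexible))  # 2.0 bits, the maximum for four states
```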

Carhart-Harris suggests that the psychological “disorders” at the low-entropy end of the spectrum are not the result of a lack of order in the brain but rather stem from an excess of order. When the grooves of self-reflective thinking deepen and harden, the ego becomes overbearing. This is perhaps most clearly evident in depression, when the ego turns on itself and uncontrollable introspection gradually shades out reality. Carhart-Harris cites research indicating that this debilitating state of mind (sometimes called heavy self-consciousness or depressive realism) may be the result of a hyperactive default mode network, which can trap us in repetitive and destructive loops of rumination that eventually close us off from the world outside. Huxley’s reducing valve contracts to zero. Carhart-Harris believes that people suffering from a whole range of disorders characterized by excessively rigid patterns of thought—including addiction, obsessions, and eating disorders as well as depression—stand to benefit from “the ability of psychedelics to disrupt stereotyped patterns of thought and behavior by disintegrating the patterns of [neural] activity upon which they rest.”

So it may be that some brains could stand to have a little more entropy, not less. This is where psychedelics come in. By quieting the default mode network, these compounds can loosen the ego’s grip on the machinery of the mind, “lubricating” cognition where before it had been rusted stuck. “Psychedelics alter consciousness by disorganizing brain activity,” Carhart-Harris writes. They increase the amount of entropy in the brain, with the result that the system reverts to a less constrained mode of cognition.*

“It’s not just that one system drops away,” he says, “but that an older system reemerges.” That older system is primary consciousness, a mode of thinking in which the ego temporarily loses its dominion and the unconscious, now unregulated, “is brought into an observable space.” This, for Carhart-Harris, is the heuristic value of psychedelics to the study of the mind, though he sees therapeutic value as well.

It’s worth noting that Carhart-Harris does not romanticize psychedelics and has little patience for the sort of “magical thinking” and “metaphysics” that they nourish in their acolytes—such as the idea that consciousness is “transpersonal,” a property of the universe rather than the human brain. In his view, the forms of consciousness that psychedelics unleash are regressions to a “more primitive” mode of cognition. With Freud, he believes that the loss of self, and the sense of oneness, characteristic of the mystical experience—whether occasioned by chemistry or religion—return us to the psychological condition of the infant on its mother’s breast, a stage when it has yet to develop a sense of itself as a separate and bounded individual. For Carhart-Harris, the pinnacle of human development is the achievement of this differentiated self, or ego, and its imposition of order on the anarchy of a primitive mind buffeted by fears and wishes and given to various forms of magical thinking.

Too much entropy in the human brain may lead to atavistic thinking and, at the far end, madness, yet too little can cripple us as well. The grip of an overbearing ego can enforce a rigidity in our thinking that is psychologically destructive.

Was it that hippies gravitated to psychedelics, or did psychedelics create hippies?

When the influence of the DMN declines, so does our sense of separateness from our environment.

“I am not separate from nature, but a part of nature”)

The various scanning technologies that the Imperial College lab has used to map the tripping brain show that the specialized neural networks of the brain—such as the default mode network and the visual processing system—each become disintegrated, while the brain as a whole becomes more integrated as new connections spring up among regions that ordinarily keep mainly to themselves or are linked only via the central hub of the DMN. Put another way, the various networks of the brain become less specialized.

“Distinct networks became less distinct under the drug,” Carhart-Harris and his colleagues wrote, “implying that they communicate more openly” with other brain networks. “The brain operates with greater flexibility and interconnectedness under hallucinogens.”

the usual lines of communications within the brain are radically reorganized when the default mode network goes off-line and the tide of entropy is allowed to rise.

But when the brain operates under the influence of psilocybin, as shown on the right, thousands of new connections form, linking far-flung brain regions that during normal waking consciousness don’t exchange much information. In effect, traffic is rerouted from a relatively small number of interstate highways onto myriad smaller roads linking a great many more destinations. The brain appears to become less specialized and more globally interconnected, with considerably more intercourse, or “cross talk,” among its various neighborhoods.
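The highway-rerouting metaphor can be sketched as a toy graph calculation: count what fraction of connections cross network boundaries. The networks and edges below are invented for illustration, not real connectivity data:

```python
# Illustrative only: within-network vs. between-network connections in a toy
# graph. Node assignments and edges are made up, not measured.

def between_fraction(edges, network_of):
    """Fraction of edges linking nodes assigned to different networks."""
    between = sum(1 for a, b in edges if network_of[a] != network_of[b])
    return between / len(edges)

# Six regions in two hypothetical networks: DMN = {0,1,2}, visual = {3,4,5}.
network_of = {0: "DMN", 1: "DMN", 2: "DMN", 3: "VIS", 4: "VIS", 5: "VIS"}

# Normal waking consciousness: traffic mostly stays within each network.
baseline = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (2, 3)]

# Psilocybin condition: new long-range, between-network edges appear.
psilocybin = baseline + [(0, 4), (1, 5), (0, 3), (2, 5)]

print(between_fraction(baseline, network_of))    # ~0.17
print(between_fraction(psilocybin, network_of))  # 0.5
```

A rising between-network fraction is one simple way to quantify the brain becoming "less specialized and more globally interconnected."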

Likewise, the establishment of new linkages across brain systems can give rise to synesthesia, as when sense information gets cross-wired so that colors become sounds or sounds become tactile. Or the new links give rise to hallucination, as when the contents of my memory transformed my visual perception of Mary into María Sabina, or the image of my face in the mirror into a vision of my grandfather.

The increase in entropy allows a thousand mental states to bloom, many of them bizarre and senseless, but some number of them revelatory, imaginative, and, at least potentially, transformative.

If problem solving is anything like evolutionary adaptation, the more possibilities the mind has at its disposal, the more creative its solutions will be.

If, as so many artists and scientists have testified, the psychedelic experience is an aid to creativity—to thinking “outside the box”—this model might help explain why that is the case. Maybe the problem with “the box” is that it is singular.

Franz Vollenweider has suggested that the psychedelic experience may facilitate “neuroplasticity”: it opens a window in which patterns of thought and behavior become more plastic and so easier to change. His model sounds like a chemically mediated form of cognitive behavioral therapy. But so far this is all highly speculative; as yet there has been little mapping of the brain before and after psychedelics to determine what, if anything, the experience changes in a lasting way.

Carhart-Harris argues in the entropy paper that even a temporary rewiring of the brain is potentially valuable, especially for people suffering from disorders characterized by mental rigidity. A high-dose psychedelic experience has the power to “shake the snow globe,” he says, disrupting unhealthy patterns of thought and creating a space of flexibility—entropy—in which more salubrious patterns and narratives have an opportunity to coalesce as the snow slowly resettles.

entropy suggests the gradual deterioration of a hard-won order, the disintegration of a system over time. Certainly getting older feels like an entropic process—a gradual running down and disordering of the mind and body. But maybe that’s the wrong way to think about it. Robin Carhart-Harris’s paper got me wondering if, at least for the mind, aging is really a process of declining entropy, the fading over time of what we should regard as a positive attribute of mental life.

With experience and time, it gets easier to cut to the chase and leap to conclusions—clichés that imply a kind of agility but that in fact may signify precisely the opposite: a petrifaction of thought. Think of it as predictive coding on the scale of life; the priors—and by now I’ve got millions of them—usually have my back, can be relied on to give me a decent enough answer, even if it isn’t a particularly fresh or imaginative one. A flattering term for this regime of good enough predictions is “wisdom.”

Reading Robin’s paper helped me better understand what I was looking for when I decided to explore psychedelics: to give my own snow globe a vigorous shaking, see if I could renovate my everyday mental life by introducing a greater measure of entropy, and uncertainty, into it.

Judson Brewer, the neuroscientist who studies meditation, has found that a felt sense of expansion in consciousness correlates with a drop in activity in one particular node of the default mode network—the posterior cingulate cortex (PCC), which is associated with self-referential processing

Baby consciousness is so different from adult consciousness as to constitute a mental country of its own, one from which we are expelled sometime early in adolescence. Is there a way back in?

Gopnik proposes we regard the mind of the young child as another kind of “altered state,” and in a number of respects it is a strikingly similar one.

“professor consciousness,”

Gopnik asks us to think about child consciousness in terms of not what’s missing from it or undeveloped but rather what is uniquely and wonderfully present—qualities that she believes psychedelics can help us to better appreciate and, possibly, reexperience.

In The Philosophical Baby, Gopnik draws a useful distinction between the “spotlight consciousness” of adults and the “lantern consciousness” of young children. The first mode gives adults the ability to narrowly focus attention on a goal. (In his own remarks, Carhart-Harris called this “ego consciousness” or “consciousness with a point.”) In the second mode—lantern consciousness—attention is more widely diffused, allowing the child to take in information from virtually anywhere in her field of awareness, which is quite wide, wider than that of most adults.

To borrow Judson Brewer’s terms, lantern consciousness is expansive, spotlight consciousness narrow, or contracted. The adult brain directs the spotlight of its attention where it will and then relies on predictive coding to make sense of what it perceives. This is not at all the child’s approach, Gopnik has discovered. Being inexperienced in the way of the world, the mind of the young child has comparatively few priors, or preconceptions, to guide her perceptions down the predictable tracks. Instead, the child approaches reality with the astonishment of an adult on psychedelics.

In teaching computers how to learn and solve problems, AI designers speak in terms of “high temperature” and “low temperature” searches for the answers to questions. A low-temperature search (so-called because it requires less energy) involves reaching for the most probable or nearest-to-hand answer, like the one that worked for a similar problem in the past. Low-temperature searches succeed more often than not. A high-temperature search requires more energy because it involves reaching for less likely but possibly more ingenious and creative answers—those found outside the box of preconception. Drawing on its wealth of experience, the adult mind performs low-temperature searches most of the time.
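The temperature idea corresponds to softmax sampling in machine learning: divide each candidate answer’s score by a temperature before converting scores to probabilities. A minimal sketch, with made-up candidate scores:

```python
# Illustrative only: softmax sampling with a temperature parameter, the sense
# in which AI designers speak of "high-" and "low-temperature" searches.
# The candidate answers and their scores are invented.
import math
import random

def sample_answer(scores, temperature, rng):
    """Pick an index: low temperature ~ greedy, high temperature ~ exploratory."""
    weights = [math.exp(s / temperature) for s in scores]
    total = sum(weights)
    return rng.choices(range(len(scores)), weights=[w / total for w in weights])[0]

scores = [3.0, 1.0, 0.5, 0.1]  # answer 0 is the "obvious," nearest-to-hand one
rng = random.Random(0)

cold = [sample_answer(scores, temperature=0.1, rng=rng) for _ in range(1000)]
hot = [sample_answer(scores, temperature=10.0, rng=rng) for _ in range(1000)]

# Low temperature almost always returns the most probable answer;
# high temperature spreads its guesses across the whole space of possibilities.
print(cold.count(0) / 1000)  # close to 1.0
print(hot.count(0) / 1000)   # roughly 0.3
```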

Gopnik believes that both the young child (five and under) and the adult on a psychedelic have a stronger predilection for the high-temperature search; in their quest to make sense of things, their minds explore not just the nearby and most likely but “the entire space of possibilities.”

High-temperature searches can yield answers that are more magical than realistic

Gopnik has tested this hypothesis on children in her lab and has found that there are learning problems that four-year-olds are better at solving than adults.

they’ll conduct lots of high-temperature searches, testing the most far-out hypotheses. “Children are better learners than adults in many cases when the solutions are nonobvious”

“I think of childhood as the R&D stage of the species, concerned exclusively with learning and exploring. We adults are production and marketing.”

“Children don’t invent these new tools, they don’t create the new environment, but in every generation they build the kind of brain that can best thrive in it. Childhood is the species’ way of injecting noise into the system of cultural evolution.” “Noise,” of course, is in this context another word for “entropy.”

“The child’s brain is extremely plastic, good for learning, not accomplishing”—better for “exploring rather than exploiting.” It also has a great many more neural connections than the adult brain.

But as we reach adolescence, most of those connections get pruned, so that the “human brain becomes a lean, mean acting machine.” A key element of that developmental process is the suppression of entropy, with all of its implications, both good and bad. The system cools, and hot searches become the exception rather than the rule. The default mode network comes online.

“Consciousness narrows as we get older,” Gopnik says. “Adults have congealed in their beliefs and are hard to shift,” she has written, whereas “children are more fluid and consequently more willing to entertain new ideas.”

“The short summary is, babies and children are basically tripping all the time.”

“There are a range of difficulties and pathologies in adults, like depression, that are connected with the phenomenology of rumination and an excessively narrow, ego-based focus,” Gopnik says. “You get stuck on the same thing, you can’t escape, you become obsessive, perhaps addicted. It seems plausible to me that the psychedelic experience could help us get out of those states, create an opportunity in which the old stories of who we are might be rewritten.”


spiritual knickknacks—a large glazed ceramic mushroom, a Buddha, a crystal

psychedelics (usually psilocybin rather than LSD, because, as Ross explained, it “carries none of the political baggage of those three letters”) could be used to lift depression and break addictions—to alcohol, cocaine, and tobacco.

Charles Grob, the UCLA psychiatrist whose 2011 pilot study of psilocybin for cancer anxiety cleared the path for the NYU and Hopkins trials, acknowledges that “in a lot of ways we are simply picking up the torch from earlier generations of researchers who had to put it down because of cultural pressures.”

To cite one obvious example, conventional drug trials of psychedelics are difficult if not impossible to blind: most participants can tell whether they’ve received psilocybin or a placebo, and so can their guides. Also, in testing these drugs, how can researchers hope to tease out the chemical’s effect from the critical influence of set and setting? Western science and modern drug testing depend on the ability to isolate a single variable, but it isn’t clear that the effects of a psychedelic drug can ever be isolated, whether from the context in which it is administered, the presence of the therapists involved, or the volunteer’s expectations. Any of these factors can muddy the waters of causality.

Charles Grob well appreciates the challenge but is also refreshingly unapologetic about it: he describes psychedelic therapy as a form of “applied mysticism”

“We must also pay heed to the examples provided us by such successful applications of the shamanic paradigm.” Under that paradigm, the shaman/therapist carefully orchestrates “extrapharmacological variables” such as set and setting in order to put the “hyper-suggestible properties” of these medicines to best use. This is precisely where psychedelic therapy seems to be operating: on a frontier between spirituality and science that is as provocative as it is uncomfortable.

Pharmaceutical companies are no longer investing in the development of so-called CNS drugs—medicines targeted at the central nervous system

yet only about half of the people who take their lives have ever received mental health treatment.

People were journeying to early parts of their lives and coming back with a profound new sense of things, new priorities

“existential distress.” Existential distress is what psychologists call the complex of depression, anxiety, and fear common in people confronting a terminal diagnosis

“flight instructions” written by the Hopkins researcher Bill Richards.

Bossis suggested that Patrick use the phrase “Trust and let go” as a kind of mantra for his journey. Go wherever it takes you, he advised: “Climb staircases, open doors, explore paths, fly over

offered is always to move toward, rather than try to flee, anything truly threatening or monstrous you encounter—look it straight in the eyes. “Dig in your heels and ask, ‘What are you doing in my mind?’ Or, ‘What can I learn from you?’”

In 1965, Sidney Cohen wrote an essay for Harper’s (“LSD and the Anguish of Dying”) exploring the potential of psychedelics to “alter[] the experience of dying.”

Cohen wrote, “but we live and die imprisoned within ourselves.”

The idea was to use psychedelics to escape the prison of self. “We wanted to provide a brief, lucid interval of complete egolessness to demonstrate that personal intactness was not absolutely necessary, and that perhaps there was something ‘out there’”—something greater than our individual selves that might survive our demise

In 1972, Stanislav Grof and Bill Richards, who were working together at Spring Grove, wrote that LSD gave patients an experience “of cosmic unity” such that death, “instead of being seen as the absolute end of everything and a step into nothingness, appears suddenly as a transition into another type of existence . . . The idea of possible continuity of consciousness beyond physical death becomes much more plausible than the opposite.”

Patrick was asked to state his intention, which he said was to learn to cope better with the anxiety and depression he felt about his cancer and to work on what he called his “regret in life.” He placed a few photographs around the room, of himself and Lisa on their wedding day and of their dog, Arlo.

Their promise is that if you surrender to whatever happens (“trust, let go, and be open” or “relax and float downstream”), whatever at first might seem terrifying will soon morph into something else, and likely something pleasant, even blissful.

“Birth and death is a lot of work,”

I mentioned that everyone deserved to have this experience . . . that if everyone did, no one could ever do harm to another again . . . wars would be impossible to wage.

From here on, love was the only consideration . . . It was and is the only purpose. Love seemed to emanate from a single point of light . . . and it vibrated . . . I could feel my physical body trying to vibrate in unity with the cosmos . . . and, frustratingly, I felt like a guy who couldn’t dance . . . but the universe accepted it. The sheer joy . . . the bliss . . . the nirvana . . . was indescribable.

Aloud, he said, “Never had an orgasm of the soul before.” The music loomed large in the experience: “I was learning a song and the song was simple . . . it was one note . . . C . . . it was the vibration of the universe . . . a collection of everything that ever existed . . . all together equaling God.”

Patrick then described an epiphany having to do with simplicity. He was thinking about politics and food, music and architecture, and—his field—television news, which he realized was, like so much else, “over-produced. We put too many notes in a song . . . too many ingredients in our recipes . . . too many flourishes in the clothes we wear, the houses we live in . . . it all seemed so pointless when really all we needed to do was focus on the love.”

“I was being told (without words) not to worry about the cancer . . . it’s minor in the scheme of things . . . simply an imperfection of your humanity and that the more important matter . . . the real work to be done is before you. Again, love.”

He told her he “had touched the face of God.”

EVERY PSYCHEDELIC JOURNEY is different, yet a few common themes seem to recur in the journeys of those struggling with cancer. Many of the cancer patients I interviewed described an experience of either giving birth or being reborn, though none quite as intense as Patrick’s. Many also described an encounter with their cancer (or their fear of it) that had the effect of shrinking its power over them.

“Now I am aware that there is a whole other ‘reality,’” one NYU volunteer told a researcher a few months after her journey. “Compared to other people, it is like I know another language.”

Bossis’s notes indicate that Patrick interpreted his journey as “pretty clearly a window . . . [on] a kind of afterlife, something beyond this physical body.” He spoke of “the plane of existence of love” as “infinite.”

He was meditating regularly, felt he had become better able to live in the present, and “described loving [his] wife even more.”

Bill Richards cited William James, who suggested we judge the mystical experience not by its veracity, which is unknowable, but by “its fruits”: Does it turn someone’s life in a positive direction?

David Nichols said, “If it gives them peace, if it helps people to die peacefully with their friends and their family at their side, I don’t care if it’s real or an illusion.”

In both the NYU and the Hopkins trials, some 80 percent of cancer patients showed clinically significant reductions in standard measures of anxiety and depression, an effect that endured for at least six months after their psilocybin session.

The dissolution of the sense of self, for example, can be understood in either psychological or neurobiological terms (as possibly the disintegration of the default mode network) and may explain many of the benefits people experienced during their journeys without resort to any spiritual conception of “oneness.” Likewise, the sense of “sacredness” that classically accompanies the mystical experience can be understood in more secular terms as simply a heightened sense of meaning or purpose.

A few key themes emerged. All of the patients interviewed described powerful feelings of connection to loved ones (“relational embeddedness” is the term the authors used) and, more generally, a shift “from feelings of separateness to interconnectedness.” In most cases, this shift was accompanied by a repertoire of powerful emotions, including “exalted feelings of joy, bliss, and love.” Difficult passages during the journey were typically followed by positive feelings of surrender and acceptance (even of their cancers) as people’s fears fell away.

Jeffrey Guss, a coauthor on the paper and a psychiatrist, interprets what happens during the session in terms of the psilocybin’s “egolytic” effects—the drug’s ability to either silence or at least muffle the voice of the ego. In his view, which is informed by his psychoanalytic training, the ego is a mental construct that performs certain functions on behalf of the self. Chief among these are maintaining the boundary between the conscious and the unconscious realms of the mind and the

Existential distress at the end of life bears many of the hallmarks of a hyperactive default network, including obsessive self-reflection and an inability to jump the deepening grooves of negative thinking. The ego, faced with the prospect of its own extinction, turns inward and becomes hypervigilant, withdrawing its investment in the world and other people. The cancer patients I interviewed spoke of feeling closed off from loved ones, from the world, and from the full range of emotions; they felt, as one put it, “existentially alone.”

By temporarily disabling the ego, psilocybin seems to open a new field of psychological possibility, symbolized by the death and rebirth reported by many of the patients I interviewed. At first, the falling away of the self feels threatening, but if one can let go and surrender, powerful and usually positive emotions flow in—along with formerly inaccessible memories and sense impressions and meanings. No longer defended by the ego, the gate between self and other—Huxley’s reducing valve—is thrown wide open. And what comes through that opening for many people, in a great flood, is love. Love for specific individuals, yes, but also, as Patrick Mettes came to feel (to know!), love for everyone and everything—love as the meaning and purpose of life, the key to the universe, and the ultimate truth.

So it may be that the loss of self leads to a gain in meaning

In preparing volunteers for their journeys, Jeffrey Guss speaks explicitly about the acquisition of meaning, telling his patients “that the medicine will show you hidden or unknown shadow parts of yourself; that you will gain insight into yourself, and come to learn about the meaning of life and existence.” (He also tells them they may have a mystical or transcendent experience but carefully refrains from defining it.) “As a result of this molecule being in your body, you’ll understand more about yourself and life and the universe.” And more often than not this happens. Replace the science-y word “molecule” with “sacred mushroom” or “plant teacher,” and you have the incantations of a shaman at the start of a ceremonial healing.

But however it works, and whatever vocabulary we use to explain it, this seems to me the great gift of the psychedelic journey, especially to the dying: its power to imbue everything in our field of experience with a heightened sense of purpose and consequence.

He had a very conscious death.”

a mystical experience, specifically a savikalpa samadhi, in which the ego vanishes when confronted with the immensity of the universe during the course of a meditation on an object—in this case, planet Earth.

Smoking became irrelevant, so I stopped.”

Six months after their psychedelic sessions, 80 percent of the volunteers were confirmed as abstinent; at the one-year mark, that figure had fallen to 67 percent, which is still a better rate of success than the best treatment now available. (A much larger randomized study, comparing the effectiveness of psilocybin therapy with the nicotine patch, is currently under way.) As in the cancer-anxiety studies, the volunteers who had the most complete mystical experiences had the best outcomes; they were, like Charles Bessant, able to quit smoking.

“Right now, I’m standing here in my garden, and the light is coming through the canopy of leaves. For me to be able to stand here in the beauty of this light, talking to you, it’s only because my eyes are open to see it. If you don’t stop to look, you’ll never see it. It’s the statement of an obvious thing, I know, but to feel it, to look and be amazed by this light” is a gift he attributes to his session, which gave him “a feeling of connectedness to everything.”

Johnson believes the value of psilocybin for the addict is in the new perspective—at once obvious and profound—that it opens onto one’s life and its habits. “Addiction is a story we get stuck in, a story that gets reinforced every time we try and fail to quit: ‘I’m a smoker and I’m powerless to stop.’ The journey allows them to get some distance and see the bigger picture and to see the short-term pleasures of smoking in the larger, longer-term context of their lives.”

Perhaps this is one of the things psychedelics do: relax the brain’s inhibition on visualizing our thoughts, thereby rendering them more authoritative, memorable, and sticky.

Matt Johnson believes that psychedelics can be used to change all sorts of behaviors, not just addiction. The key, in his view, is their power to occasion a sufficiently dramatic experience to “dope-slap people out of their story. It’s literally a reboot of the system—a biological control-alt-delete. Psychedelics open a window of mental flexibility in which people can let go of the mental models we use to organize reality.”

In his view, the most important such model is the self, or ego, which a high-dose psychedelic experience temporarily dissolves. He speaks of “our addiction to a pattern of thinking with the self at the center of it.” This underlying addiction to a pattern of thinking, or cognitive style, links the addict to the depressive and to the cancer patient obsessed with death or recurrence.

We’re trapped in a story that sees ourselves as independent, isolated agents acting in the world. But that self is an illusion. It can be a useful illusion, when you’re swinging through the trees or escaping from a cheetah or trying to do your taxes. But at the systems level, there is no truth to it. You can take any number of more accurate perspectives: that we’re a swarm of genes, vehicles for passing on DNA; that we’re social creatures through and through, unable to survive alone; that we’re organisms in an ecosystem, linked together on this planet floating in the middle of nowhere. Wherever you look, you see that the level of interconnectedness is truly amazing, and yet we insist on thinking of ourselves as individual agents.” Albert Einstein called the modern human’s sense of separateness “a kind of optical delusion of his consciousness.”*

Dying, depression, obsession, eating disorders—all are exacerbated by the tyranny of an ego and the fixed narratives it constructs about our relationship to the world. By temporarily overturning that tyranny and throwing our minds into an unusually plastic state (Robin Carhart-Harris would call it a state of heightened entropy), psychedelics, with the help of a good therapist, give us an opportunity to propose some new, more constructive stories about the self and its relationship to the world, stories that just might stick.

“Alcoholism can be understood as a spiritual disorder,” Ross told me the first time we met, in the treatment room at NYU. “Over time you lose your connection to everything but this compound.

“I saw Jesus on the cross,” she recalled. “It was just his head and shoulders, and it was like I was a little kid in a tiny helicopter circling around his head. But he was on the cross. And he just sort of gathered me up in his hands, you know, the way you would comfort a small child. I felt such a great weight lift from my shoulders, felt very much at peace. It was a beautiful experience.”

The teaching of the experience, she felt, was self-acceptance. “I spend less time thinking about people who have a better life than me. I realize I’m not a bad person; I’m a person who’s had a lot of bad things happen. Jesus might have been trying to tell me it was okay, that these things happen. He was trying to comfort me.”

the FDA staff surprised the researchers by asking them to expand their focus and ambition: to test whether psilocybin could be used to treat the much larger and more pressing problem of depression in the general population. As the regulators saw it, the data contained a strong enough “signal” that psilocybin could relieve depression; it would be a shame not to test the proposition, given the enormity of the need and the limitations of the therapies now available

Rosalind Watts was a young clinical psychologist working for the National Health Service when she read an article about psychedelic therapy in the New Yorker.* The idea that you might actually be able to cure mental illness rather than just manage its symptoms inspired her to write to Robin Carhart-Harris, who hired her to help out with the depression study, the lab’s first foray into clinical research

Watts’s interviews uncovered two “master” themes. The first was that the volunteers depicted their depression foremost as a state of “disconnection,” whether from other people, their earlier selves, their senses and feelings, their core beliefs and spiritual values, or nature. Several referred to living in “a mental prison,” others to being “stuck” in endless circles of rumination they likened to mental “gridlock.” I was reminded of Carhart-Harris’s hypothesis that depression might be the result of an overactive default mode network—the site in the brain where rumination appears to take place.

“It was like when you defrag the hard drive on your computer . . . I thought, ‘My brain is being defragged, how brilliant is that!’”

The second master theme was a new access to difficult emotions, emotions that depression often blunts or closes down completely. Watts hypothesizes that the depressed patient’s incessant rumination constricts his or her emotional repertoire. In other cases, the depressive keeps emotions at bay because it is too painful to experience them.

More than half of the Imperial volunteers saw the clouds of their depression eventually return, so it seems likely that psychedelic therapy for depression, should it prove useful and be approved, will not be a onetime intervention. But the volunteers regarded even the temporary respite as precious, because it reminded them there was another way to be that was worth working to recapture. Like electroconvulsive therapy for depression, which it in some ways resembles, psychedelic therapy is a shock to the system—a “reboot” or “defragging”—that may need to be repeated every so often. (Assuming the treatment works as well when repeated.) But the potential of the therapy has regulators, researchers, and much of the mental health community feeling hopeful.

None of these psychedelic therapies have yet proven themselves to work in large populations; what successes have been reported should be taken as promising signals standing out from the noise of data, rather than as definitive proofs of cure. Yet the fact that psychedelics have produced such a signal across a range of indications can be interpreted in a more positive light. When a single remedy is prescribed for a great many illnesses, to paraphrase Chekhov, it could mean those illnesses are more alike than we’re accustomed to think.

It could be as straightforward as the notion of a “mental reboot”—Matt Johnson’s biological control-alt-delete key—that jolts the brain out of destructive patterns (such as Kessler’s “capture”), affording an opportunity for new patterns to take root. It could be that, as Franz Vollenweider has hypothesized, psychedelics enhance neuroplasticity. The myriad new connections that spring up in the brain during the psychedelic experience, as mapped by the neuroimaging done at Imperial College, and the disintegration of well-traveled old connections, may serve simply to “shake the snow globe,” in Robin Carhart-Harris’s phrase, a predicate for establishing new pathways.

Mendel Kaelen, a Dutch postdoc in the Imperial lab, proposes a more extended snow metaphor: “Think of the brain as a hill covered in snow, and thoughts as sleds gliding down that hill. As one sled after another goes down the hill, a small number of main trails will appear in the snow. And every time a new sled goes down, it will be drawn into the preexisting trails, almost like a magnet.” Those main trails represent the most well-traveled neural connections in your brain, many of them passing through the default mode network. “In time, it becomes more and more difficult to glide down the hill on any other path or in a different direction.

“Think of psychedelics as temporarily flattening the snow. The deeply worn trails disappear, and suddenly the sled can go in other directions, exploring new landscapes and, literally, creating new pathways.” When the snow is freshest, the mind is most impressionable, and the slightest nudge—whether from a song or an intention or a therapist’s suggestion—can powerfully influence its future course.

Robin Carhart-Harris’s theory of the entropic brain represents a promising elaboration on this general idea, and a first stab at a unified theory of mental illness that helps explain all three of the disorders we’ve examined in these pages. A happy brain is a supple and flexible brain, he believes; depression, anxiety, obsession, and the cravings of addiction are how it feels to have a brain that has become excessively rigid or fixed in its pathways and linkages—a brain with more order than is good for it. On the spectrum he lays out (in his entropic brain article) ranging from excessive order to excessive entropy, depression, addiction, and disorders of obsession all fall on the too-much-order end. (Psychosis is on the entropy end of the spectrum, which is why it probably doesn’t respond to psychedelic therapy.)

The therapeutic value of psychedelics, in Carhart-Harris’s view, lies in their ability to temporarily elevate entropy in the inflexible brain, jolting the system out of its default patterns

So many of the volunteers I spoke to, whether among the dying, the addicted, or the depressed, described feeling mentally “stuck,” captured in ruminative loops they felt powerless to break. They talked about “prisons of the self,” spirals of obsessive introspection that wall them off from other people, nature, their earlier selves, and the present moment. All these thoughts and feelings may be the products of an overactive default mode network, that tightly linked set of brain structures implicated in rumination, self-referential thought, and metacognition—thinking about thinking. It stands to reason that by quieting the brain network responsible for thinking about ourselves, and thinking about thinking about ourselves, we might be able to jump that track, or erase it from the snow.

The default mode network appears to be the seat not only of the ego, or self, but of the mental faculty of time travel as well. The two are of course closely related: without the ability to remember our past and imagine a future, the notion of a coherent self could hardly be said to exist; we define ourselves with reference to our personal history and future objectives. (As meditators eventually discover, if we can manage to stop thinking about the past or future and sink into the present, the self seems to disappear.) Mental time travel is constantly taking us off the frontier of the present moment. This can be highly adaptive; it allows us to learn from the past and plan for the future. But when time travel turns obsessive, it fosters the backward-looking gaze of depression and the forward pitch of anxiety. Addiction, too, seems to involve uncontrollable time travel. The addict uses his habit to organize time: When was the last hit, and when can I get the next?

Another type of mental activity that neuroimaging has located in the DMN (and specifically in the posterior cingulate cortex) is the work performed by the so-called autobiographical or experiential self: the mental operation responsible for the narratives that link our first person to the world, and so help define us. “This is who I am.” “I don’t deserve to be loved.” “I’m the kind of person without the willpower to break this addiction.” Getting overly attached to these narratives, taking them as fixed truths about ourselves rather than as stories subject to revision, contributes mightily to addiction, depression, and anxiety. Psychedelic therapy seems to weaken the grip of these narratives, perhaps by temporarily disintegrating the parts of the default mode network where they operate.

“The ego keeps us in our grooves,” as Matt Johnson puts it. For better and, sometimes, for worse. For occasionally the ego can become tyrannical and turn its formidable powers on the rest of us.* Perhaps this is the link between the various forms of mental illness that psychedelic therapy seems to help most: all involve a disordered ego—overbearing, punishing, or misdirected

David Foster Wallace asked his audience to “think of the old cliché about ‘the mind being an excellent servant but a terrible master.’ This, like many clichés, so lame and unexciting on the surface, actually expresses a great and terrible truth,” he said. “It is not the least bit coincidental that adults who commit suicide with firearms almost always shoot themselves in the head. They shoot the terrible master.”

Consider the case of the mystical experience: the sense of transcendence, sacredness, unitive consciousness, infinitude, and blissfulness people report can all be explained as what it can feel like to a mind when its sense of being, or having, a separate self is suddenly no more.

Now I’m inclined to think a much better and certainly more useful antonym for “spiritual” might be “egotistical.” Self and Spirit define the opposite ends of a spectrum, but that spectrum needn’t reach clear to the heavens to have meaning for us. It can stay right here on earth. When the ego dissolves, so does a bounded conception not only of our self but of our self-interest. What emerges in its place is invariably a broader, more openhearted and altruistic—that is, more spiritual—idea of what matters in life. One in which a new sense of connection, or love, however defined, seems to figure prominently. “The psychedelic journey may not give you what you want,” as more than one guide memorably warned me, “but it will give you what you need.”

Brewer invited me to visit his lab at the Center for Mindfulness at the University of Massachusetts medical school in Worcester to run some experiments on my own default mode network.

The posterior cingulate cortex is a centrally located node within the default mode network involved in self-referential mental processes

The PCC is believed to be the locus of the experiential or narrative self; it appears to generate the narratives that link what happens to us to our abiding sense of who we are. Brewer believes that this particular operation, when it goes awry, is at the root of several forms of mental suffering, including addiction.

As Brewer explains it, activity in the PCC is correlated not so much with our thoughts and feelings as with “how we relate to our thoughts and feelings.” It is where we get “caught up in the push and pull of our experience.” (This has particular relevance for the addict: “It’s one thing to have cravings,” as Brewer points out, “but quite another to get caught up in your cravings.”)

A brief daily meditation had become a way for me to stay in touch with the kind of thinking I’d done on psychedelics. I discovered my trips had made it easier for me to drop into a mentally quiet place, something that in the past had always eluded me. So I closed my eyes and began to follow my breath. I had never tried to meditate in front of other people, and it felt awkward, but when Brewer put the graph up on the screen, I could see that I had succeeded in quieting my PCC—not by a lot, but most of the bars dipped below baseline. Yet the graph was somewhat jagged, with several bars leaping above baseline

He sounded excited by the idea that the mere recollection of a psychedelic experience might somehow replicate what happens in the brain during the real thing.

EPILOGUE: In Praise of Neural Diversity

Here was Paul Summergrad, MD, the former head of the American Psychiatric Association, seated next to Tom Insel, MD, the former head of the National Institute of Mental Health. The panel was organized and moderated by George Goldsmith, an American entrepreneur and health industry consultant based in London

It was clear to everyone in the standing-room crowd exactly what the three men on the panel represented: the recognition of psychedelic therapy by the mental health establishment

He suggested that psychedelics would probably need to be rebranded in the public mind and that it would be essential to steer clear of anything that smacked of “recreational use.”

It’s not at all clear what the business model might be. Yet.

George Goldsmith envisions a network of psychedelic treatment centers, facilities in attractive natural settings where patients will go for their guided sessions. He has formed a company called Compass Pathways to build these centers in the belief they can offer a treatment for a range of mental illnesses sufficiently effective and economical that Europe’s national health services will reimburse for them

Katherine MacLean, the former Hopkins researcher who wrote the landmark paper on openness, hopes someday to establish a “psychedelic hospice,” a retreat center somewhere out in nature where not only the dying but their loved ones can use psychedelics to help them let go—the patient and the loved ones both.

in 2016, the California Institute of Integral Studies graduated its first class of forty-two psychedelic therapists.

“That was a very different time. People wouldn’t even talk about cancer or death then. Women were tranquilized to give birth; men weren’t allowed in the delivery room! Yoga and meditation were totally weird. Now mindfulness is mainstream and everyone does yoga, and there are birthing centers and hospices all over. We’ve integrated all these things into our culture. And now I think we’re ready to integrate psychedelics.”

For me, working one-on-one with an experienced guide in a safe place removed from my everyday life turned out to be the ideal way to explore psychedelics. Yet there are other ways to structure the psychedelic journey—to provide a safe container for its potentially overwhelming energies. Ayahuasca and peyote are typically used in a group, with the leader, often but not necessarily a shaman, acting in a supervisory role and helping people to navigate and interpret their experiences.

Not only did my guides create a setting in which I felt safe enough to surrender to the psychedelic experience, but they also helped me to make sense of it afterward.

I don’t mean to suggest I have achieved this state of ego-transcending awareness, only tasted it. These experiences don’t last, or at least they didn’t for me. After each of my psychedelic sessions came a period of several weeks in which I felt noticeably different—more present to the moment, much less inclined to dwell on what’s next. I was also notably more emotional and surprised myself on several occasions by how little it took to make me tear up or smile. I found myself thinking about things like death and time and infinity, but less in angst than in wonder.

This was a way of being I treasured, but, alas, every time it eventually faded. It’s difficult not to slip back into the familiar grooves of mental habit; they are so well worn; the tidal pull of what the Buddhists call our “habit energies” is difficult to withstand. Add to this the expectations of other people, which subtly enforce a certain way of being yourself, no matter how much you might want to attempt another. After a month or so, it was pretty much back to baseline.

“the folds of my gray flannel trousers”: Huxley, Doors of Perception, 33.
“We were amazed”: Fadiman, Psychedelic Explorer’s Guide, 185.
“If we learned one thing”: Lattin, Harvard Psychedelic Club, 74.
“using hallucinogens for seductions”: Weil, “Strange Case of the Harvard Drug Scandal.”

“Standing on the bare ground”: Emerson, Nature, 13.
“Swiftly arose and spread around me”: Whitman, Leaves of Grass, 29.
“All at once, as it were out of the intensity”: Tennyson, “Luminous Sleep.”

The bee perceives a substantially different spectrum: Srinivasan, “Honey Bees as a Model for Vision, Perception, and Cognition”; Dyer et al., “Seeing in Colour.”


Emerson, Ralph Waldo. Nature. Boston: James Munroe, 1836.

Fadiman, James. The Psychedelic Explorer’s Guide: Safe, Therapeutic and Sacred Journeys. Rochester, Vt.: Park Street Press, 2011.

Grof, Stanislav. LSD: Doorway to the Numinous: The Groundbreaking Psychedelic Research into Realms of the Human Unconscious. Rochester, Vt.: Park Street Press, 2009.

James, William. The Varieties of Religious Experience. EBook. Project Gutenberg, 2014.

Lattin, Don. The Harvard Psychedelic Club: How Timothy Leary, Ram Dass, Huston Smith, and Andrew Weil Killed the Fifties and Ushered in a New Age for America. New York: HarperCollins, 2010.

Tennyson, Alfred. “Luminous Sleep.” The Spectator, Aug. 1, 1903.

Weil, Andrew T. “The Strange Case of the Harvard Drug Scandal.” Look, Nov. 1963.
Whitman, Walt. Leaves of Grass: The First (1855) Edition. New York: Penguin, 1986.

The Captain Class

The Captain Class
Sam Walker
Business & Economics
Random House
May 16, 2017

Top 10 book for me. I've already read it twice. 🙂 Make sure you get the updated version. There is so much to like in Walker's theory and supporting analysis. It's not the coach. It's not the highly paid perennial all-star. It's the person who goes out and GRINDS.

A bold new theory of leadership drawn from elite captains throughout sports. The seventeen most dominant teams in sports history had one thing in common: Each employed the same type of captain—a singular leader with an unconventional set of skills and tendencies. Drawing on original interviews with athletes, general managers, coaches, and team-building experts, Sam Walker identifies the seven core qualities of the Captain Class—from extreme doggedness and emotional control to tactical aggression and the courage to stand apart.

The Bombers gave me a taste of what it was like to play on an excellent team, and this had rewired my brain to believe it was my God-given right to experience the same sensation many times over.

body language, and observed their pregame rituals. When they offered theories about what made their collaborations successful, I jotted them down in my notebook. No matter the sport, I always heard the same handful of explanations—we practice hard, we play for each other, we never quit, we have a great coach, we always come through in the clutch. More than anything, I was struck by the businesslike sameness of these groups and by how nonchalantly their members spoke about winning. It was as if they were part of a machine in which every cog and sprocket was functioning precisely as intended. “You do your job so everyone around you can do their job,” Tom Brady once said. “There’s no big secret to it.”


In the end, I was shocked to discover that the world’s most extraordinary sports teams didn’t have many propulsive traits in common; they had exactly one. And it was something I hadn’t anticipated.


It’s the notion that the most crucial ingredient in a team that achieves and sustains historic greatness is the character of the player who leads it.


Before Hungary, soccer teams were thought to be collections of individuals with specific orders to do distinct things. A left-winger was supposed to patrol the left-hand touchline, for instance, while a striker’s job was to play forward at all times with an eye on the goal—no more, no less. The Hungarian Golden Team destroyed this notion. It didn’t respect rigidity. It was fluid. Players switched positions and dispositions all the time, depending on the circumstances.

What distinguished them was a style of play that erased specialization, forced players to subordinate their egos, and coaxed superior performances out of unlikely characters.

Alpha Lions: Identifying the World’s Greatest Teams

consider every team from every major sport anywhere in the world through the fullness of history.

yielded a spreadsheet of candidates that ran into the thousands.

Question 1: What qualifies as a team?
To settle the matter, I decided that a group of athletes can only be considered a team in the fullest sense of the word if it meets the following three criteria: A. It has five or more members.


I decided to eliminate all teams that involve dyads: doubles tennis, doubles luge, Olympic beach volleyball, pairs skating, and ice dancing. I also eliminated curling, which involves teams of three.

In the end, the smallest units I included were basketball teams, which field five members, and where the average contributions of the players at each position should theoretically account for about 20 percent of the team total.

B. Its members interact with the opponent.
C. Its members work together.

Because the athletes on these teams never physically interact with their teammates, I eliminated them. This rule put two major sports on the bubble: baseball and cricket.

There is one aspect of both baseball and cricket that distinguishes these games from other low-interaction sports, however—the amount of teammate coordination

Question 2: How do you separate the wheat from the chaff?
A. The team played a “major” sport.

To decide which ones to include, I resorted to looking at television ratings. Unless a sport’s premier matches attracted many millions of viewers, it was axed. The only sport that passed this test was Australian rules football.


B. It played against the world’s top competition.

I eliminated Canadian football, professional ice hockey in Russia and Sweden, and all European men’s and women’s domestic professional basketball associations, among others. This rule also disqualified intercollegiate team sports in the United States, where the player pool is limited to currently enrolled students and the quality of play is inferior to that seen in professional leagues or at the Olympic level.

C. Its dominance stretched over many years.

The first assumption we can make about luck is that some teams probably owe their accomplishments to an extraordinary abundance of it. At the same time, we can assume that a handful of teams out there managed to win multiple titles despite having suffered more bad luck than good. It’s also possible that some teams control their own destiny by putting themselves in enterprising positions where a little luck goes a long way (have fun trying to measure that!). The principle of regression to the mean tells us that if you wait long enough, any overheated level of performance, good or bad, is likely to fade.

No team would be included in my sample unless it played at an elite level for a period of at least four seasons.

Question 3: What qualifies as freakish?

After applying Questions 1 and 2 to the field, only 122 teams survived the purge, a group I will call my “finalists.”

The first metric I considered was winning percentage. Many famous teams, including the 1950s Hungarians, have fared well by this yardstick. But winning percentage has several liabilities. It doesn’t account for the strength of a team’s opponents, for one. It also favors teams that play fewer games.

A fairer way to judge a team’s win rate is by its standard deviation from the mean, which measures the magnitude of how superior its record is in relation to those of its competitors. This number is more meaningful than raw winning percentage, but it also fails to factor in the quality of the opposition. By this measure, a team that fattens up on cupcakes while losing all of its marquee matches might still come out ahead.
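The measure described here is essentially a z-score: how far a team's winning percentage sits from the league mean, in units of standard deviation. A minimal sketch (illustrative only — not Walker's actual method or data; the league percentages below are made up):

```python
# Judging a win rate by standard deviations above the league mean,
# rather than by the raw percentage alone.
from statistics import mean, stdev

def win_rate_z_score(team_pct: float, league_pcts: list[float]) -> float:
    """Standard deviations above (or below) the league's mean winning percentage."""
    return (team_pct - mean(league_pcts)) / stdev(league_pcts)

# Hypothetical league: a .750 team stands roughly five standard
# deviations above this pack, far more telling than ".750" alone.
league = [0.40, 0.45, 0.50, 0.55, 0.52, 0.48]
z = win_rate_z_score(0.75, league)
```

As the passage notes, even this normalization ignores strength of schedule: a team padding its record against weak opponents can still post a high z-score.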

Some statisticians gauge a team’s success by underlying measures of its performance, such as how many more points, goals, or runs it scored relative to opponents. Some will wrap several of these metrics into a “power rating” that rewards teams for their overall efficiency, regardless of their records. There are two problems with this concept: First, it can fail to account for the difference between playing well in crucial games and running up the score on patsies; second, if a team fails to win a championship, does anyone really care about its power rating?

The thing that ultimately distinguishes a freak team isn’t how impressively it won – only that it did.

The best statistic available for isolating a team’s ability to win, especially in consequential matches, is the Elo rating system, which was first adapted to sports in 1997 by a California software engineer named Bob Runyan.

As a chess enthusiast, Runyan was familiar with an evaluation system designed in 1960 by a Marquette University physics professor named Arpad Elo. The formula ranked elite chess masters by giving them running point tallies based on the outcome of every match they played, plus the weighted quality of the opponent and the weighted significance of the event. A win against a highly rated master in a major tournament, for example, would add more points to the tally, while a low-stakes victory against a weak opponent in an exhibition wouldn’t matter as much. “I remember looking at the FIFA rankings and seeing they were really bad and thinking the ones in chess were really good,” Runyan told me.
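The mechanics Runyan borrowed can be sketched in a few lines. This is a generic Elo update with an assumed k-factor, not Runyan’s exact implementation:

```python
def elo_update(rating_a, rating_b, score_a, k=20):
    """One generic Elo update: `score_a` is 1 for a win by A, 0.5 for a
    draw, 0 for a loss. The k-factor of 20 is an assumed constant here;
    scaling k up for bigger events mirrors Elo's weighted-significance
    idea, and the 400-point scale is the standard chess convention."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# An underdog (1400) upsetting a favorite (1600) gains about 15 points,
# far more than a favorite gains for beating a weak opponent (about 5).
new_a, new_b = elo_update(1400, 1600, 1.0)
```

The zero-sum transfer of points is what makes the system reward beating strong opponents when it matters.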

In the end, however, I decided to keep the statistics at the periphery. While I knew that Elo ratings and other available measures might be useful from time to time, I wouldn’t be able to rely on any one metric exclusively.

Claim 1: It had sufficient opportunity to prove itself. All of these finalist teams, no matter the sport they played, were exceptional dynasties.

some unanswered questions about their true ability.

The Homestead Grays of baseball’s Negro National League won eight titles in nine seasons, along with 68 percent of their games from 1937 to 1945, but because of the strict segregation of the time they were not allowed to take on the leading all-white teams of the major leagues.

Claim 2: Its record stands alone.

To make a case to be one of the greatest in history, a team must have put together some exceptionally long or concentrated burst of success that can be defined by cumulative wins or titles, and that goes beyond the accomplishments of every other team that has played the same genre of sport. In other words, its achievements have to have been unique.

The World’s Most Elite Teams

After I had evaluated every team in sports history, only seventeen stood up to all eight of these various questions, tests, subtests, rules, and claims.

My only goal was to create the purest possible sample of laboratory specimens—a group of empirical freaks that had so few blemishes of any kind that I could feel comfortable using them to explore the question I was really after: What do the most dominant teams in history have in common?

The Collingwood Magpies, Australian rules football (1927–30):
The New York Yankees, Major League Baseball (1949–53): this group is the only one in baseball history to win five consecutive World Series titles.

Hungary, International men’s soccer (1950–55):
The Montreal Canadiens, National Hockey League (1955–60):

The Boston Celtics, National Basketball Association (1956–69): The Celtics won an unparalleled eleven NBA championships in thirteen seasons, including one stretch of eight in a row, dwarfing the achievements of every other NBA dynasty.

Brazil, International men’s soccer (1958–62):

The Pittsburgh Steelers, National Football League (1974–80): This team made the playoffs six times in a row and won an unrivaled four Super Bowls in six seasons. It compiled an 80–22–1 record through the 1980 Super Bowl and notched the second-highest Elo rating in NFL history.

The Soviet Union, International men’s ice hockey (1980–84):
The New Zealand All Blacks, International rugby union (1986–90):
Cuba, International women’s volleyball (1991–2000):

Australia, International women’s field hockey (1993–2000): The Hockeyroos won two Olympic gold medals, plus four consecutive Champions Trophy competitions, and back-to-back World Cups. They lost only 11 percent of their matches during this span, scoring 785 goals while allowing just 220.

The United States, International women’s soccer (1996–99):
The San Antonio Spurs, National Basketball Association (1997–2016):
The New England Patriots, National Football League (2001–2018):
Barcelona, Professional soccer (2008–13):
France, International men’s handball (2008–15):
The New Zealand All Blacks, International rugby union (2011–15):

TWO   Captain Theory: The Importance of “Glue Guys”

I could tell that the Celtics were quantitatively remarkable, but not in the way I’d expected. According to regular-season Elo ratings compiled by FiveThirtyEight, only one of their eleven championship squads managed to crack the top fifty in NBA history.

Even more curiously, the advanced metrics that statisticians use to measure the contributions of individual players showed that the Celtics never had any individual member whose isolated performance ranked among the best in history. No Celtics player led the NBA in scoring during their string of titles. In seven of their eleven championship seasons, they didn’t place a single scorer in the top ten. I quickly set the statistics aside to look for other explanations.

Under Red Auerbach, the Tier One Celtics ran a basic offense, and he gave the players the freedom to improvise on the court.

There is zero chance that the Celtics got lucky. Their freakish run was too long for that. The only explanation that made sense to me was that this team, like the Hungarians of the 1950s, was somehow better than the sum of its parts. As spongy as this might sound, there must have been a rare bond between the players that coaxed superior performances out of people who wouldn’t have achieved them somewhere else.

“team chemistry”

Was it a function of how long a group of athletes had been playing together, and how well they could anticipate their teammates’ next moves? Was it a measure of how well their strengths offset their weaknesses? Or was it a reflection of how much everybody on the team liked one another and how splendidly they got along?

The basic idea behind chemistry is that a team’s interpersonal dynamics will have an impact on its performance.

“Individual commitment to a group effort,” he once said, “that is what makes a team work, a company work, a society work, a civilization work.”

for every team in the top tier of my study that seemed to be tightly knit, with players who came from similar backgrounds and formed lifelong friendships, there was another that had been riven at times by internal feuds and divisions. I didn’t see a pattern there.

Boston’s dominance continued for so many years that from the beginning to the end, the roster turned over almost completely.

There were, however, two Celtics players whose careers overlapped the streak precisely. And one of them was Bill Russell.

I wondered if Russell himself had been the catalyst.

it was a supreme expression of desire. Russell hadn’t flown into action because anyone expected him to but because he could not bear to see his team lose.

On a whim, I decided to make a list of the names of the primary player-leaders of these seventeen teams to see if any of their careers also served as bookends for their teams’ Tier One performances. Here are the names:

  • Syd Coventry, Collingwood Magpies
  • Yogi Berra, New York Yankees
  • Ferenc Puskás, Hungary
  • Maurice Richard, Montreal Canadiens
  • Bill Russell, Boston Celtics
  • Hilderaldo Bellini, Brazil
  • Jack Lambert, Pittsburgh Steelers
  • Valeri Vasiliev, Soviet Union
  • Wayne Shelford, New Zealand All Blacks
  • Mireya Luis, Cuba
  • Rechelle Hawkes, Australia
  • Carla Overbeck, United States
  • Tim Duncan, San Antonio Spurs
  • Tom Brady, New England Patriots
  • Carles Puyol, Barcelona
  • Jérôme Fernandez, France
  • Richie McCaw, New Zealand All Blacks

The results of this little exercise stopped me cold. The Celtics weren’t the only team whose Tier One performance corresponded in some way to the arrival and departure of one particular player. In fact, they all did.

The crucial component of the job is interpersonal. The captain is the figure who holds sway over the dressing room by speaking to teammates as a peer, counseling them on and off the field, motivating them, challenging them, protecting them, resolving disputes, enforcing standards, inspiring fear when necessary, and above all setting a tone with words and deeds.

Baseball managers, when asked about the secrets of team cohesion, like to use the word “glue.”

It’s glue that supposedly prevents teams from splintering into cliques or being torn asunder by egos. It was another usage of the term that came to mind, however. When individual players devote themselves to unifying the team, baseball managers call them “glue guys.”

One influential player can unify an entire team. Once a match begins, the manager no longer influences the outcome. “On the field, the person responsible for making sure the eleven players acted as a team was the club captain,”

“The single most important ingredient after you get the talent is internal leadership. It’s not the coaches as much as one single person or people on the team who set higher standards than that team would normally set for itself.”

Could it be that the one thing that lifts a team into the top .001 percent of teams in history is the leader of the players?

What distinguished Russell on the court was his dedication to playing without the ball. In the 1950s, basketball defenders were taught never to leave their feet. Russell not only took to the air to block shots, he went after shots most people considered unblockable. He focused his efforts on anticipating rebounds, clogging the lane, intercepting passes, and setting and evading picks. According to modern defensive metrics, Russell’s career mark in “defensive win shares” is the best in NBA history—and by a 23 percent margin.

If you knew you were heading into the toughest fight of your life, whom would you choose to lead you?

Shelford was a member of New Zealand’s indigenous Maori tribe. Even in a passive state, his face conveyed strength, purpose, and command—or mana, as it’s known in Maori.

The story of Shelford’s mauled scrotum is only one example of alarmingly reckless behavior by these Tier One captains.

Mireya Luis, the future captain of the Cuban women’s volleyball team, once reported to practice four days after giving birth to her daughter, then played in a match at the World Championships fourteen days later.

  1. They lacked superstar talent. Most of the Tier One captains were not the best players on their teams, or even major stars.
  2. They weren’t fond of the spotlight.
  3. They didn’t “lead” in the traditional sense. Most Tier One captains played subservient roles on their teams, deferred to star players, and relied heavily on the talent around them to carry the scoring burden.
  4. They were not angels. Time and again, these captains played to the edge of the rules, did unsportsmanlike things, or generally behaved in a way that seemed to threaten their teams’ chances of winning.
  5. They did potentially divisive things. On various occasions they had disregarded the orders of coaches, defied team rules and strategies, and given candid interviews in which they’d spoken out against everyone from fans, teammates, and coaches to the overlords of the sport.
  6. They weren’t the usual suspects.
  7. Nobody had ever mentioned this theory. None of them had ever singled out the captain as a team’s driving force.
  8. The captain isn’t the primary leader. On most teams, the highest position in the pecking order belongs to the coach or manager.

I settled on five frequently cited qualities of superior teams that seemed at once plausible and researchable. They are: the presence of an otherworldly superstar, a high level of overall talent, deep financial resources, a winning culture maintained by effective management, and, finally, the most widely accepted explanation of all—superior coaching. I set out to kick the tires on each of them.

Theory 1: It takes a GOAT.

In all, twelve of the seventeen teams in Tier One enjoyed the services of a GOAT candidate.

They are: Collingwood’s Gordon Coventry (Syd’s kid brother); the Yankees’ Joe DiMaggio; Hungary’s Ferenc Puskás; Maurice Richard of the Montreal Canadiens; Brazil’s Pelé; Soviet hockey’s Viacheslav Fetisov, Sergei Makarov, and Vladislav Tretiak; Cuba’s Regla Torres; Australian field hockey’s Alyson Annan; Michelle Akers of the U.S. women’s soccer team; Tom Brady of the New England Patriots; Barcelona’s Lionel Messi; French handball’s Nikola Karabatić; and Dan Carter of the 2011–15 New Zealand All Blacks.

It’s also clear that the presence of a GOAT doesn’t guarantee success at the team level.

Player Efficiency Rating, a statistic developed by the sports columnist John Hollinger. PER takes into account both offense and defense. It gives individual players a score based on a tally of the positive contributions they make on the court—not just scoring but also blocking shots and grabbing rebounds—minus the negative things they do, such as missing shots or turning the ball over. After adjusting for the number of minutes played, PER is expressed as a rate.
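A deliberately simplified, PER-inspired rate can be sketched as follows. The stat categories and equal weights here are illustrative assumptions, not Hollinger’s actual formula, which also adjusts for pace and league averages:

```python
def simple_efficiency_rate(stats, minutes):
    """Toy per-minute efficiency rate in the spirit of PER: credit the
    positive contributions, debit the negatives, then divide by minutes
    played so high-usage and low-usage players can be compared."""
    positives = (stats["points"] + stats["rebounds"]
                 + stats["blocks"] + stats["steals"])
    negatives = stats["missed_shots"] + stats["turnovers"]
    return (positives - negatives) / minutes

# A hypothetical stat line over 36 minutes of play.
line = {"points": 20, "rebounds": 10, "blocks": 2, "steals": 1,
        "missed_shots": 8, "turnovers": 3}
rate = simple_efficiency_rate(line, 36)  # (33 - 11) / 36, roughly 0.61
```

Even this toy version shows why a defense-first player like Russell fares poorly on offense-heavy box-score metrics: most of his value never enters the tally.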

Astonishingly, only three of the GOAT candidates on the seventeen Tier One teams also captained them.

In every other case, the most dominant teams in history had hierarchies in which the leader of the players was not the go-to superstar. So even though these teams had GOATs, they hadn’t tapped them to lead. This suggested that a team is more likely to become elite if it has a captain who leads from the shadows.

Theory 2: It’s a matter of overall talent. The best teams had “clusters” of above-average performers.

On teams where there was a large ability gap, they wrote, “the superstar, or highest-performing team member, dominated the discourse.” As this person took charge, the other students showed a tendency to back off, even when they believed—correctly—that the high achiever was wrong. Because of this, their group scores suffered.

The marginal athletes would defer to the star, who insisted on taking a vast majority of the shots, even when a player of lesser skill was wide open.

On the cluster teams, however, the researchers found that discussions about the quiz responses were more democratic. Many members of the group chimed in, and the debates tended to be longer and more thorough, with everyone having their say. More often than not, the researchers wrote, these kinds of groups “were able to come to a consensus on a correct answer choice.”

This study showed that for units roughly the size of basketball teams, the collective talent level, and the ability to work democratically, turned out to be far more valuable than the isolated skill of one supreme achiever. “Having a superstar on your team is only beneficial if the rest of the team also scores relatively high,” the researchers wrote.

These groups weren’t driven by a single visionary but by an extraordinary concentration of brainpower.

Baseball. In this sport, as I noted earlier, teammate interaction plays a limited role while the performance of individual players has a larger impact. Studies have shown that baseball teams that continue to add more talent do not reach a point of diminishing returns. The more stars a baseball team has, the better it should be. If there is a correlation between talent clusters and superior performance, baseball was the one sport where it would absolutely have to exist.

WAR – wins above replacement, a formula that uses game statistics to measure how much more (or less) each player contributes to their team’s victories than a statistically average player would.
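A toy version of that idea looks like this. The ten-runs-per-win conversion is a commonly cited sabermetric rule of thumb, used here as an assumption; real WAR implementations are far more involved:

```python
def toy_war(player_runs, replacement_runs, runs_per_win=10):
    """Toy wins-above-replacement: runs contributed beyond a hypothetical
    replacement-level player, converted to wins via an assumed
    ten-runs-per-win rule of thumb."""
    return (player_runs - replacement_runs) / runs_per_win

# A player worth 90 runs against a 50-run replacement adds about 4 wins.
print(toy_war(90, 50))  # prints 4.0
```

Summing a roster’s WAR is how one would test whether “talent clusters” actually predict titles.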

Any elite team needs a passel of skillful players, and it’s probably better if their abilities are balanced. Nevertheless, my analysis of baseball in general, the Yankees in particular, and the experience of Real Madrid didn’t support the idea that a talent cluster is something teams have to have in order to achieve and sustain freakish success.

Theory 3: It’s the money, stupid. Spending the most money on players doesn’t guarantee titles,

Theory 4: It’s a question of management.

The fourth theory on my list is the notion that freak teams are products of a long tradition of institutional excellence, or a culture of winning.

The idea that the ghosts of the past are primarily responsible for a team’s ascent to greatness is, after all, basically a vote for the paranormal.

In reality, a team’s ability to uphold a tradition of excellence comes down to something rather mundane—the quality of its upper management.

If enlightened management is the key to sustaining a team’s culture, and culture is the secret to outsize success, one team in particular would have to be its standard-bearer—the All Blacks. Not only did this team’s name appear twice in Tier One, another All Blacks team from 1961 to 1969 also ascended to Tier Two.

The national rugby team of New Zealand is, by any reasonable accounting, the world’s preeminent sports dynasty.

Theory 5: It’s the coach.

FOUR   Do Coaches Matter? The Vince Lombardi Effect

the kind of hunger that comes from being counted out.

He looked like a fire hydrant dressed for a job interview.

“Perfection is not attainable. But if we chase perfection we can catch excellence.”

“You survived, okay? Now I want you to play thirty minutes of Green Bay football, and let’s see if they can adjust to you.”

As I knew from reading his autobiography, Davis believed that his team owed its success, almost entirely, to Vincent Thomas Lombardi’s motivational powers.

“I tell ya, Coach Lombardi probably could have been a great minister, because he said things with the voice. Sometimes the voice had a chilling effect on you.”

“It was like he could make you rise to play at a level you didn’t even know about.”

“It is essential to understand that battles are primarily won in the hearts of men,” he once said.

Many of Stengel’s Yankees considered their manager to be an annoying buffoon, and they sometimes disregarded his instructions entirely.

The next aspect of coaching I looked at was tactics—the idea that these Tier One coaches might have devised sophisticated strategies that put their teams a step ahead.

The Australian field hockey coach Ric Charlesworth was widely acclaimed for his innovations, too, which included a system of ice-hockey-style shift changes to keep his players fresh.

A nearly equal number of Tier One coaches were not prizewinning strategists.

Sebes organized the Hungarians to play a fluid style of football that was a precursor of the 4-2-4 formation Brazil perfected during its Tier One dynasty.

“He was a street footballer from small childhood,”

He was not only a great player and captain but also a ‘playing coach.’ He saw everything, exerted great discipline over the whole team, and could analyze footballing situations on the run. A few brief instructions on the field from him and all our problems were solved.”

The coach can try to set the mood, talk through the game, encourage and explain, but in the end it’s the players who have to solve the real problems on the pitch.”

Researchers have tried to measure the relative importance of coaches at the elite levels of sports. These studies support three basic conclusions:

  1. Coaches don’t win many games. In fact, the performances of a handful of top stars had more influence on the season’s final standings than the decisions of all of the league’s managers combined.
  2. Coaches don’t have a big impact on player performance. “Our most surprising finding,” the authors wrote, “was that most of the coaches in our data set did not have a statistically significant impact on player performance relative to a generic coach.”
  3. Changing coaches is not a cure-all. Researchers discovered that distressed teams that changed managers, and distressed teams that stayed the course, achieved almost precisely the same results. In other words, sacking the coach was no more effective than simply riding it out.

“Give me a fit bunch of players with a good general level of ability.” McHale enforced this ideology, just as Lombardi did, by exercising a level of control that would be impossible today.

Many of the coaches and managers from teams in Tier One, including Blake, Guardiola, McHale, and Wyllie, and also from Tier Two—soccer’s Franz Beckenbauer and Johan Cruyff in particular—had been highly decorated captains before becoming managers. This suggests the lessons these men learned on the field about the power of captaincy might have informed the way they constructed the units they coached.

The only way to become a Tier One coach is to identify the perfect person to lead the players.

PART II   THE CAPTAINS: The Seven Methods of Elite Leaders

One thing I noticed about Russell and the other Tier One captains was that when their careers ended, people always said some version of the same thing: There would never be anyone else like them.

his refusal to participate in his 1975 Hall of Fame induction ceremony. The Hall of Fame, Russell explained, is an institution that honors individuals. Russell had declined, he said, because he believed his basketball career should be remembered as a symbol of team play.

He didn’t score many points because his team didn’t need him to. He didn’t care about statistics or personal accolades and didn’t mind letting teammates take the credit.

Only how many titles we won.” Russell devoted himself instead to defense, and to doing whatever grunt work fell through the cracks.

His resistance to basketball awards was a rejection of the universal instinct to separate individuals from the collective. His brand of leadership had nothing to do with the outside world or how he was perceived. It was entirely focused on the internal dynamics of his team.

his style of captaincy was just so unusual that nobody recognized it. The public never connected his atypical leadership to the atypical success of the Celtics.

It’s true that these Tier One captains, in the contexts of their various sports, looked like one-offs. They were certainly nothing like the flawless leaders of our imaginations. As I compiled their biographies, however, I noticed something else: how closely they resembled one another. To a spooky degree, their behaviors and beliefs, and the way they approached their work, lined up. The impulsive, reckless, and putatively self-defeating behavior they exhibited was, in fact, calculated to fortify the team. Their strange and seemingly disqualifying personal traits were not damaging but actually made their teammates more effective on the field. These men and women were not aberrations after all. They were members of a forgotten tribe.


  1. Extreme doggedness and focus in competition.
  2. Aggressive play that tests the limits of the rules.
  3. A willingness to do thankless jobs in the shadows.
  4. A low-key, practical, and democratic communication style.
  5. The ability to motivate others with passionate nonverbal displays.
  6. Strong convictions and the courage to stand apart.
  7. Ironclad emotional control.

FIVE   They Just Keep Coming: Doggedness and Its Ancillary Benefits

The era of the mercenary international soccer superstar, or Galáctico, had not yet begun.

After playing mostly on the left side of the defense, he’d been slotted in at center back, mainly because nobody thought he had the speed to play wide.

Puyol had taken Figo’s betrayal personally. At the very least, the job of defending Barcelona’s honor would fall to a patriot.

“I had only one purpose and that was to stop him,” Puyol said.

Looking back, Puyol acknowledged that the day he marked Figo was the day he became known in Barcelona. But that wasn’t what mattered to him. “We won,” he said, “which is most important.”

One of the highest compliments coaches can pay athletes is to describe them as relentless, to say that they just keep coming.

If any Tier One team leader best embodies the virtue of doggedness, it is Lawrence Peter Berra, the catcher for the New York Yankees.

Though people mocked his swing-at-everything approach, Berra hit .280 in his first full season in New York, posting a near-elite .464 slugging percentage and striking out only twelve times.

He was not a very good catcher.

During spring training in 1949, the Yankees’ manager, Casey Stengel, decided to send Berra to school. He brought in Bill Dickey—a legendary former catcher and Hall of Famer—to teach Berra how to play the position. The two spent hours together as Dickey tweaked everything from Berra’s positioning and signal calling to his throwing mechanics.

At the same time, three of the Yankees’ veteran pitchers—Eddie Lopat, Vic Raschi, and Allie Reynolds—decided that if they wanted to win, they, too, would have to help make Berra a better catcher. The pitchers, who had become close friends off the field, even gave this mentorship a nickname: the Project.

From his shaky beginnings, Berra went on to win fourteen league titles with the Yankees in nineteen seasons and ten World Series titles overall, the most for any player. He set a record for home runs by a catcher and won three MVP awards. He was elected to the Hall of Fame in 1972.

The main point of difference is that their natural ability seemed to bear no relation to the size of their accomplishments. Something enabled them to set aside their limitations and tune out the skepticism from their critics. But what was it?

Carol Dweck has become one of the world’s preeminent experts on the subject of how people, especially children, cope with challenge and difficulty.

While solving the easy problems, most of the children spoke positively about the test and their performances. They were uniformly happy and confident. But when faced with the harder “failure” problems, most of the children’s moods turned dark. They said they didn’t like the test, or felt bored or anxious. When asked why they thought they weren’t doing well, they didn’t attribute their struggles to the difficulty of the problems—they blamed their own lack of ability. Faced with adversity, their problem-solving skills deteriorated, too. They simply stopped trying.

A smaller group of kids had a different reaction, however. Faced with the failure problems, they kept working. They didn’t think they were dumb; they believed they just hadn’t found the right strategy yet. A few reacted in a shockingly positive way. One boy pulled up his chair, rubbed his hands together, and said, “I love a challenge.” These persistent kids, as a group, hadn’t been any better at solving the easy problems. In fact, their strategies suggested that they were, on average, slightly less skillful. But when the going got tough, they didn’t get down on themselves. They viewed the unsolved problems as puzzles to be mastered through effort.

The helpless kids were preoccupied with their performance. They wanted to look smart even if it meant avoiding the difficult problems. The mastery-oriented children were motivated by the desire to learn. They saw failure as a chance to improve their skills.

What Dweck ultimately discovered is that these children had different ideas about the nature of ability. The helpless kids viewed their skills as fixed from birth. They believed they were either smart enough to do something, or they weren’t, and it was up to others to render a verdict. The mastery kids had a more malleable sense of their intelligence: They believed it could be grown through effort. “They don’t necessarily think everyone’s the same or anyone can be Einstein,” Dweck said, “but they believe everyone can get smarter if they work at it.”

While common sense suggests that a person’s natural ability should inspire self-confidence, Dweck’s research showed that in most cases, ability has very little to do with it. A person’s reaction to failure is everything.

Can a captain’s doggedness make an entire team play better?

The same unceasing drive was something displayed by Russell, Puyol, Berra, Richard, and every other captain in Tier One. Early struggles culminated in a defining moment, a breakthrough that left no doubt about their desire to win at any cost. And in each case, after they had established this fact, their teams began to turn the corner. The pattern was so consistent that it suggested their doggedness might, in fact, have been contagious.

While the force applied did grow with every new person added, the average force applied by each person fell. Rather than amplifying the power of individuals, the act of pulling as a team caused each person to pull less hard than they had when pulling alone. Later researchers coined a name for this phenomenon. They called it social loafing.

The less identifiable one person’s effort is, the less effort they put in.

They wanted to see whether one person giving a maximum effort could incite others to improve their performances. The scientists grouped their shouters in pairs and, before they began shouting, told them that their partner was a high-effort performer. In these situations, something interesting happened. The pairs screamed just as hard together as they had alone. The knowledge that a teammate was giving it their all was enough to prompt people to give more themselves.

The Fordham study seemed to confirm my suspicions about Tier One captains: Their displays of tenacity could have positively influenced the way their teams performed.

Puyol dashed over to the trainers with an expression of cartoonish urgency. Unless Barcelona wanted to substitute Puyol (which it didn’t), the only option was to staple the wound right there on the sidelines. Puyol was fine with that. His only concern was that the process went quickly. As the trainer examined the cut, Puyol impatiently grabbed at the staple gun as if he wanted to employ it himself. When the trainer snapped the staple in place, Puyol didn’t flinch. He ran to the touchline, manically waving his hands at the referee. Minutes later, with Puyol back in place, Barcelona’s Lionel Messi scored the winning goal. In an interview, Puyol described the incident as “nothing.”

I have always felt I had to give everything. That’s how I’ve always been. It’s my way of respecting football and respecting my teammates.”

“Winning is difficult,” he said, “but to win again is much more difficult—because egos appear. Most people who win once have already achieved what they wanted and don’t have any more ambition.”

I asked him whether he thought his effort was contagious. “I think that when you see a teammate go to the maximum and give everything—I don’t mean myself, but anyone—what you cannot do is to just stand there and let another team’s player pass right by you,” he said. “If everybody is giving one hundred percent and you are only giving eighty percent, it shows. So I think it makes everyone go to one hundred percent.”

• One of the most confounding laws of human nature is that when faced with a task, people will work harder alone than they will when joined in the effort—a phenomenon known as social loafing. There is, however, an antidote. It’s the presence of one person who leaves no doubt that they are giving it everything they’ve got.
• The captains of the greatest teams in sports history had an unflagging commitment to playing at their maximum capability. Although they were rarely superior athletes, they demonstrated an extreme level of doggedness in competition, and in their conditioning and preparation. They also put pressure on their teammates to continue competing even when victory was all but assured.

SIX   Intelligent Fouls: Playing to the Edge of the Rules

In training, the Cubans would raise the nets by eight inches to match the height of the men’s game. They strengthened their legs by leaping one hundred times onto a tall box while holding weights. “They hit harder than some men’s teams,” noted Mike Hebert, a retired American volleyball coach who saw them practice. “Every attempt appeared to have the spiker’s reputation riding on it.”

“Listen here,” Catalina said. “I didn’t give birth to a daughter so she could go and cry in front of her adversary. And don’t go to the hairdresser anymore, because I saw you changed your hair. You went to Atlanta to play volleyball, not to get your hair groomed!”

Luis was part of something larger than she was and had a responsibility to control her emotions. There was no choice but to find a way to pull her team through this.

“In Atlanta we were past strategies,” Luis told me. “It was fundamental. We were out for victory at any cost.”

Shouting expletives at your opponent wasn’t explicitly barred by volleyball’s code of conduct—but it certainly violated the spirit of sportsmanship.

There are two activities in polite society in which it’s okay to do harmful things to other people in the pursuit of victory. The first is war. The second is sports. Part of the deal, however, is that there are some lines not to be crossed.

The guiding principle is that it’s not whether a team plays hard to win but that it plays with honor.

Sport was supposed to be the province of upright ladies and gentlemen. You wouldn’t try to psych out your opponents by calling them names.

The one captain I’ve met who epitomized what people expect a modern leader to be is Derek Jeter of the New York Yankees.

One of the things I noticed about the Tier One captains was how often they had pushed the frontiers of the rules in pressure situations, sometimes with ugly results. What I had not understood was that these flare-ups were not always impulsive acts performed in the heat of battle. In some cases, they were premeditated.

With Brazil ahead 10–3, Luis leaned over and shouted the first insult across the net. Bitches.

Luis called the players together at the end of the break, without the coaches present. If things continued like this, she told them, they would lose. She and Carvajal had been prodding and insulting the Brazilians; now it was time for everyone else to join in.

At this point, the strategy entered its most dangerous phase. Scheffer, the referee, summoned the two captains to his chair and asked Luis why her team was insulting the Brazilians. “I told him, ‘Don’t worry, it won’t happen again.’ ” Then she walked back to her teammates and made a gesture that appeared to say “Cool it.” But rather than hedging her bet, Luis decided to double down. Once safely out of earshot, she told them, “Girls, we have to keep insulting them!”

“Terminó,” the dejected Brazilian commentator said, emphasizing each syllable. “Ter-mi-nó.”

The match would be remembered as one of the greatest shootouts in volleyball history but also one of the sport’s biggest embarrassments. Its legacy is confusing. What Luis had done wasn’t some impulsive act like McCaw’s extended foot—it had been a calculated offense that violated every definition of fair play. It had also worked. The slurs had woken up the Cubans while discombobulating the Brazilians to the point that they contributed to their own defeat. “They got what they wanted,” Brazil’s Virna Dias later said.

How are we supposed to view Luis’s “leadership” during this match? Was it the mark of a true champion or a brute?

In 1961, Arnold Buss, a psychologist at the University of Pittsburgh, published one of the first comprehensive books about human aggression. He concluded, based in part on laboratory experiments, that people exhibit two distinct flavors of aggression: The first is a “hostile” one driven by anger or frustration and motivated by the reward of seeing someone hurt or punished; the second is an “instrumental” one that isn’t motivated by a desire to injure but by the determination to achieve a worthwhile goal.

“You have to distinguish between assertiveness and aggression,” Buss said. “There is a low correlation between them.”

In a 2007 book, Aggression and Adaptation: The Bright Side to Bad Behavior, a team of American psychologists noted that nearly all of the most highly ambitious, powerful, and successful people in business display at least some level of hostility and aggressive self-expression. The authors didn’t go so far as to argue that these behaviors constitute “moral goodness,” but they didn’t dismiss them as the mark of evil, either. “Aggressive behavior offers avenues for personal growth, goal attainment and positive peer regard,” they wrote.

They didn’t test the boundaries of the rules in order to hurt people, although injuries to bodies or feelings were possible. Their goal was to win.

These were aggressive acts that pushed the limits of what’s acceptable, but they were also instrumental.

This idea, that aggression is a skill, is something many elite captains instinctively endorsed.

‘intelligent’ or ‘useful’ fouls, but they remain fouls, and I took yellow cards so that we didn’t face worse consequences.” The key, Deschamps said, is to maintain self-control and to know when it’s okay to foul and when you are “too far up the referee’s nose” to get away with it. “It’s something you feel. It’s a feeling. It’s a form of intelligence.”

Trying to hurt opponents for the sake of inflicting pain wasn’t right, but roughing them up for the purpose of rattling and distracting them was.

While competing, they wrote, athletes exist in a “game frame” where they engage in “game reasoning” that allows them to adopt a code of behavior different from the one that applies in the outside world.

They called this phenomenon bracketed morality. This suggests that when athletes take the field they enter a parallel universe—one with different boundaries in which doing what’s broadly considered to be moral isn’t always the correct move. In other words, once somebody enters the game frame, they judge their own behavior differently, even if the outside world does not.

Aggression, she said, “is part of the game, too. And how you do it is important. I don’t think we did this in a cruel way. It wasn’t meant…I don’t know how to tell you. It wasn’t nice, but it was a show that derived from the pursuit of a medal.”

“I always tried to transmit joy, or energy, with my smile,” she said. “It motivated my team.”

People who are aggressive all the time, she said, “are just rude.”

There is a difference between leaders who worry about how they’re perceived and leaders who drag their teams through challenges by any means necessary. The world puts a lot of pressure on athletes, especially captains, to be champions and paragons of virtue. But these two things do not always correlate. It’s sometimes one or the other. The most decorated captains in history understood this.


When it comes to behaving aggressively, there is a persistent view that a person who does so must be suffering from some kind of psychological or spiritual deficiency. What people fail to understand is that not all aggression is the same. There is a “hostile” variety that is intended to do harm and an “instrumental” form that is employed in pursuit of a worthwhile goal. While the captains in Tier One often did ugly things, they did so while operating within the fuzzy confines of the rules of sports. The difference between a captain who upholds the principles of sportsmanship at all times and a captain who bends them to their edges is that the latter captain is more concerned with winning than with how the public perceives them.

SEVEN   Carrying Water: The Invisible Art of Leading from the Back

Deschamps was unassuming but proud. Unlike Cantona, he’d already earned a pair of European club titles, one with Juve and another as captain of Marseille. Before he retired in 2001, he would become one of only three captains in my study to lead two different teams into Tier Two. How he would respond was anybody’s guess.

In the seventh century B.C., Chionis of Sparta swept the sprinting events at the Olympics. The Greeks decided to honor him by carving his name on a stone memorial at Olympia.

In the dressing room, former Manchester United captain Roy Keane once wrote, “the gap between what we do—and feel—and other people’s reality is alarming. The media hero is not necessarily the Man in here….Ditto the crowd pleaser. We live in a make-believe world created by the media, which is largely though not entirely, fiction. The fictional hero is often an arsehole.”

Beyond this, most of the Tier One captains had zero interest in the trappings of fame. They didn’t pursue the captaincy for the prestige it conveyed—if they pursued it at all. In 2004, when Carles Puyol’s teammates unanimously elected him captain, his was the only dissenting vote. “I thought it was more ethical to vote for others,” he told me.

All of my research showed that contrary to the public view, it is possible for a water carrier who prefers toiling in the service of others to become a strong captain. In fact, superior leadership is just as likely (if not more so) to come from the team’s rear quarters as to emanate from its frontline superstar. Carrying water, especially on defense, is clearly vital to a team’s success, even if it’s not something that inspires people to compose epic poems or chisel their names in stone.

But after Duncan got his hands on the trophy, I watched him carry it calmly across the room and open the bathroom door. He pulled his teammate and closest friend on the team, David Robinson, inside with him, and slammed it shut. Whatever emotions needed to pour out of Duncan in that moment, they were none of the public’s business.

Duncan’s selfless approach to basketball did earn him one prominent fan, however. Bill Russell, the other basketball captain in Tier One, raved that Duncan was the league’s most efficient player, the one who wasted the least motion—and emotion—on the court. Russell especially admired the way Duncan played without the ball. “He sets picks to make the offense operate,” Russell said, “not necessarily to get himself a shot.”

It’s not sexy. But it’s efficient.”

When Duncan retired in 2016, his teams had won five NBA championships and had made the playoffs in all nineteen of his seasons. Individually, he managed to set the most impressive mark of all—winning more games with one team than any player in NBA history.

One of the great paradoxes of management is that the people who pursue leadership positions most ardently are often the wrong people for the job. They’re motivated by the prestige the role conveys rather than a desire to promote the goals and values of the organization.

One of the central beliefs of J. Richard Hackman, the Harvard psychologist who studied team effectiveness, was that people were far too quick to assume that the success or failure of a team was directly attributable to the person running it. “We mistakenly assume that the best leaders are those who stand on whatever podium they can command and, through their personal efforts in real time, extract greatness from their teams.” In reality, only 10 percent of a team’s performance depended on what the leader did once the performance was under way. But when it came to that 10 percent, Hackman found no evidence that a leader’s charisma, or even their specific methods, made any difference. It didn’t even matter if the leader performed all of the key leadership functions on the team—all that mattered was that these jobs got done. When good leaders saw these conditions eroding, they would tinker with new strategies to get things back on track. Leaders, Hackman believed, were more effective when they worked like jazz musicians, freely improvising with the flow of things, than like orchestra players, who follow a written score under the direction of a conductor.

“From a functional perspective,” he wrote, “effective team leaders are those who do, or who arrange to get done, whatever is critical for the team to accomplish its purpose.”

The Tier One captains had varying levels of talent. Some were superstars in their own right—most were not. Duncan’s basketball skills put him at the high end of the scale. When his team found itself in a precarious situation, his teammates knew that if he wanted to, he had the ability to swoop in to save the day—to take the big shot. Most of the other captains didn’t have that power. They had unspectacular skills or played rear-facing positions.

She was a defender whose skills, according to one former coach, were “average at best.” She did not project the kind of confidence, or game-changing ability, leaders are supposed to display. But Overbeck’s humility had an upside for the team. By getting rid of the ball as soon as she had the opportunity, she increased the amount of time it was at the feet of superior athletes—and because she rarely left the pitch, this selfless instinct helped the team generate more scoring chances. The same functional mentality touched everything she did, even off the field. When the U.S. team arrived at a hotel after some grueling international flight, Overbeck would carry everyone’s bags to their hotel rooms. “I’m the captain,” she explained, “but I’m no better than anybody else. I’m certainly not a better soccer player.”

After some brutal conditioning drill, “they’d be dying, and I’d be like, ‘F-ing Norway is doing shit like this.’ I’m sure they hated me.”

The Fordham study of shouters (see Chapter Five) showed that hard work is contagious and that one player’s exertion can elevate the performances of others. But Overbeck’s brand of doggedness had another component. Her work ethic in training, combined with her bag-schlepping humility on and off the field, allowed her to amass a form of currency she could spend however she saw fit. She didn’t use it to dominate play on the field. She used it to ride her teammates when they needed to be woken up, knowing that it wouldn’t create resentment. Anson Dorrance, who coached the team from 1986 to 1994, said he believed Overbeck carried the team’s luggage so that when she got on the field, “she could say anything she wanted.”

“She had a genuineness about her,” her teammate Briana Scurry said. “You knew she was on your side, even if she was laying into you. Carla was the heartbeat of that team and the engine. Everything about the essence of the team—that was Carla.”

If the chief responsibility of a team leader is to direct the other players on the field, then by all rights these captains must have found ways to influence, if not control, the team’s tactics.

For some Tier One captains, this “quarterbacking” function was plain to see.

said of Cuba’s Mireya Luis. “She wouldn’t get mad, but if you did something wrong she would immediately correct it. She would correct any of the mistakes the players made because she had great vision for volleyball.”

Deschamps’s approach to leadership was as functional as it gets. On a team, he said, “you can’t only have architects. You also need bricklayers.”

As he talked about his time playing with Zidane, Deschamps made an interesting point—the relationship, he said, went both ways. Yes, he served Zidane by making sure he got the ball, but Zidane relied on him to make those passes. Zidane, he said, “also needed me.”

The idea that a player who serves the team can also create dependency was something I had never considered. Deschamps, as his team’s primary midfield setup man, was able to dictate the action ahead of him by deciding which players got the ball. His superstar teammates not only looked to him for passes, they coveted his approval.

His job was to hold the middle of the field and mark the other team’s best striker—a role that required him to stand his ground while the world’s biggest, fastest players plowed into him like a tackling dummy.

We assume that the team is the star and the star is the team. On the seventeen teams in Tier One, however, the captains were rarely stars, nor did they act like it. They shunned attention. They gravitated to functional roles. They carried water.

The great captains lowered themselves in relation to the group whenever possible in order to earn the moral authority to drive them forward in tough moments. The person at the back, feeding the ball to others, may look like a servant—but that person is actually creating dependency. The easiest way to lead, it turns out, is to serve.

EIGHT   Boxing Ears and Wiping Noses: Practical Communication

As much as Carla Overbeck hated the public eye, her cloak of reserve dropped the moment a match began. “I was very vocal,” she told me. If a teammate stuck a tackle, she said she would be the first to praise her, “but if they weren’t working hard, I would let them know that, too. If I got on someone for not working hard, as soon as they would shred somebody I would be all over them, telling them how great they were.” Didier Deschamps said that within the confines of his team, he was rarely quiet. “I talked during the warm-up, I talked in the locker room, I talked on the field, I talked at halftime. And I kept talking afterward. You have to talk. That’s how you can correct something.”

During Jack Lambert’s captaincy of the Steelers, the team had a long-standing tradition of gathering in the sauna after games—away from the coaches and the press—to both decompress and have unvarnished conversations about how they had played. It was a no-bullshit zone where candor reigned, accountability was demanded, and no one was above criticism.

Viktor Tikhonov, the taskmaster coach of the Soviet Red Army hockey team in the 1980s, was not beloved by his players. But by requiring them to train and compete under intense pressure, apart from their families, for as many as eleven months a year, he forced them to bond so tightly that their identities were no longer distinct.

One of the most convivial teams in Tier One was the 1949–53 New York Yankees. On this team, veterans didn’t haze the rookies the way Yogi Berra had been hazed—they took them under their wings. They eliminated cliques by hosting team barbecues to which everyone was invited. It was this team’s collection of veteran pitchers who, in 1949, took it upon themselves to help turn Berra into a formidable catcher.

Berra did some of his finest work when his pitchers were struggling. Sometimes he’d tell them to take it easy, or crack jokes to cut the tension. Other times he lit a fire. “Yogi made you bear down,” said the pitcher Whitey Ford.

“Berra proved not only to be a good listener,” the author Sol Gittleman wrote, “but what every catcher must be: the subtle psychologist and manipulator of his pitchers.”

One of the oldest puzzles of human interaction is why some groups of people, but not all of them, learn to operate on the same wavelength—to think, and act, as one. Scientists who study group dynamics have found some evidence that over time, when a group of individuals become accustomed to performing a task together, they can develop something called shared cognition.

Other researchers have shown that when a team begins to master “unconscious” communication, its overall performance improves significantly—even if the skill level of each individual member stays the same.

Over seven years, starting in 2005, a group of researchers from the Human Dynamics Laboratory at the Massachusetts Institute of Technology studied teams from twenty-one organizations, ranging from banks to hospitals to call centers, to see how they communicated and how those communication patterns influenced their performance.

Right away, the MIT study confirmed what we all suspect: that communication matters. No matter how talented, intelligent, and highly motivated a team’s individual members were, or how solid its past results had been, the team’s communication style on any given day was still the best predictor of its performance.

The MIT researchers found that a key factor was the level of “energy and engagement” the members displayed in social settings outside formal meetings. In other words, teams that talked intently among themselves in the break room were more likely to achieve superior results at work. How much time every member of the group spent talking also proved to be crucial. On the best teams, speaking time was doled out equitably—no single person ever hogged the floor, while nobody shrank from the conversation, either. In an ideal situation, wrote Alex Pentland, the lab’s director, “everyone on the team talks and listens in roughly equal measure, keeping contributions short and sweet.”
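The book doesn’t give a formula for “equitable speaking time,” but one simple way to make the idea concrete is to score how evenly talk time is spread across a team. The sketch below (an illustration of my own, not from the MIT study) uses normalized Shannon entropy: a score near 1.0 means everyone talks roughly equally, while a score near 0.0 means one person hogs the floor. The team data is invented.

```python
# Illustrative sketch, not from the book or the MIT study: scoring how
# equitably speaking time is distributed, using normalized Shannon entropy.
import math

def speaking_balance(seconds_per_member):
    """Return a 0..1 score: 1.0 = perfectly equal talk time, 0.0 = one voice."""
    total = sum(seconds_per_member)
    shares = [s / total for s in seconds_per_member if s > 0]
    if len(shares) <= 1:
        return 0.0  # a single voice (or total silence) is maximally unbalanced
    entropy = -sum(p * math.log(p) for p in shares)
    # Normalize by the maximum possible entropy for a group of this size.
    return entropy / math.log(len(seconds_per_member))

balanced_team = [110, 95, 120, 100, 105]   # everyone contributes
dominated_team = [540, 20, 15, 10, 15]     # one person hogs the floor

print(speaking_balance(balanced_team))     # close to 1.0
print(speaking_balance(dominated_team))    # well below 0.5
```

A real analysis like Pentland’s used wearable badge data rather than raw seconds, but the intuition is the same: the flatter the distribution of contributions, the higher the score.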

The researchers were also able to isolate the data signatures of the “natural leaders” of these productive units, whom the scientists called charismatic connectors. “Badge data show that these people circulate actively, engaging people in short, high-energy conversations,” Pentland wrote. “They are democratic with their time—communicating with everyone equally and making sure all team members get a chance to contribute. They’re not necessarily extroverts, although they feel comfortable approaching other people. They listen as much as or more than they talk and are usually very engaged with whomever they’re listening to. We call it ‘energized but focused listening.’

“We have to keep playing,” he said. “We don’t sit back.”

In addition to the words he used, he felt it was also important to touch people while talking to them and to synchronize his words with his body language. “You have to match up what you want to say with your facial expression,” he said. “The players know when I’m happy or not. They can hear it and they can also see it.”

Words are an important part of the equation—but there’s a lot more to it.

Body language was by far the most significant factor. Their words barely mattered.

The results, the authors wrote, “suggest, first, that our consensual intuitive judgments might be unexpectedly accurate and, second, that we communicate—unwittingly—a great deal of information about ourselves.”

In his 1995 book, Emotional Intelligence, the psychologist Daniel Goleman outlined a theory based on an idea that scientists had been kicking around since the 1960s. Goleman believed that a person’s ability to recognize, regulate, conjure, and project emotions is a distinct form of brainpower—one that can’t be revealed by a standard IQ test. People who have high emotional fluency understand how to use “emotional information” to change their thinking and behavior, which can help them perform better in settings where they have to interact with others. Goleman also believed that emotional intelligence was closely correlated to the skills required to be an effective leader, and that it can be more significant in this regard than IQ or even a person’s technical expertise.

Spurs players kept up a never-ending dialogue: “Come on, do the work…get to the middle…step back, step back…can’t stop moving…pace, pace…not too much, Patty…red, red, red…look behind, beeeehiiind!”

During their unrivaled nineteen-season streak of consistency, the Spurs won five NBA titles by playing grinding defense, running disciplined plays with picks and screens, and excelling in the low post. The Spurs were never the stars of the NBA’s offensive or defensive statistical tables. But they were outliers in one category: communication. Like other Tier One teams, the Spurs spent a lot of time talking among themselves, mostly as a means of tightening their choreography.

A guy that wants to put the pressure on himself.” Compared to most of the captains in Tier One, Duncan seemed to have a profound lack of affect.

The Onion once poked fun at Duncan for this in an article with the headline “Tim Duncan Hams It Up for Crowd by Arching Left Eyebrow Slightly.”

There was one thing about Duncan that caught my attention, however—his eyes.

His face might have been inscrutable, but his eyes never left any mystery about what he was thinking.

It was during timeouts, when he wasn’t playing, that Duncan’s eyes came fully alive. They were always moving—darting around to scan the faces of teammates and coaches, the referees, the video board, even the fans. Duncan had several timeout rituals. The moment the whistle blew, he’d pop up from the bench before everyone else and walk out to slap hands with the players as they came off the floor. Then he would vector over to the assistant coaches’ huddle to have a look at their notes (something few NBA players do). When San Antonio’s coach, Gregg Popovich, knelt down to address the team, Duncan would stake out a spot just behind his left shoulder. From this vantage point, he could see what “Pop” was scribbling on his dry-erase board and add his input when necessary. This vantage point also allowed him to monitor the body language of his teammates sitting in front of him.

After every timeout, when Popovich finished talking, Duncan would seek out one or two teammates, speaking with them softly but intently, sometimes wagging his finger as he explained a strategic point. He also touched them often, slapping hands or butts, tossing an arm around their shoulders, or, in lighter moments, playfully bumping them. As I watched him run this circuit, I realized that all of Duncan’s movements were calculated. Like those charismatic connectors in the MIT study, he circulated widely among the team and was democratic with his time. He felt comfortable approaching everyone. He listened as much as he talked and never broke eye contact.

“Tim always finds ways to get the message across, even if it’s little, quick, and short. If something needs to be said, he’ll say it. If not, he’ll leave it alone. So when he does speak, everyone listens.”

The great irony of Duncan’s leadership was that even though he didn’t like to talk, he worked hard to create an environment in which talking was encouraged.

“He doesn’t judge people,” Popovich said of Duncan. “He tries to figure out who they are, what they do, and what their strengths are. He just has a very good sense about people. When we learned that about him…we knew we were going to be able to bring almost anybody here, unless they were a serial killer, and he was going to be able to figure out what to do with them.”

In addition to exploring the power of body language, the Harvard study of teaching fellows examined another idea—whether there was an ideal combination of gestures and expressions a person can make.

Ambady and Rosenthal noticed that the lowest-rated teachers had a tendency to sit, shake their heads, frown, and fidget with their hands. Those gestures seemed to be ones worth avoiding. The highest-rated teachers were generally more active than the others, but beyond that their gestures were all over the map.

Charisma was not some universal, repeatable, or even easily recognizable quality in a person. There was no “correct” set of mannerisms that increased one’s odds of making a favorable impression.

In other words, whatever notions we have of what traits make somebody charismatic are probably wrong. It doesn’t matter what kind of body language or speech pattern people use when communicating with others. What matters is that they develop a formula that works for them.

Other Tier One captains used a different approach. They engaged with their teammates constantly—listening, observing, and inserting themselves into every meaningful moment. They didn’t think of communication as a form of theater. They saw it as an unbroken flow of interactions, a never-ending parade of boxing ears, delivering hugs, and wiping noses.


They led without fanfare.

One of the great scientific discoveries about effective teams is that their members talk to one another. They do it democratically, with each person taking a turn. The leaders of these kinds of teams circulate widely, talking to everyone with enthusiasm and energy. The teams in Tier One had talkative cultures like this, too—and the person who fostered and sustained that culture was the captain. Despite their lack of enthusiasm for talking publicly, most of these captains, inside the private confines of their teams, talked all the time and strengthened their messages with gestures, stares, touches, and other forms of body language. The secret to effective team communication isn’t grandiosity. It’s a stream of chatter that is practical, physical, and consistent.

NINE   Calculated Acts: The Power of Nonverbal Displays

Lambert’s most powerful weapon on the field, however, was something intangible. He scared the living shit out of people.

“I’m really not that wild, either. I’m emotional, but I know what I’m doing. It’s a series of calculated acts.”

“He was probably the biggest intimidator on the team,” Milie said. “He liked having blood on his uniform.”

On the field, he went out of his way to project extreme passion and emotion. This seemed to me like an altogether different impulse—a more primal form of communication that belonged in a separate category.

In his 1960 book, Crowds and Power, Canetti described the way an emotion could sweep rapidly and wordlessly through a group of people, creating an irresistible impulse to join in. “Most of them do not know what has happened and, if questioned, have no answer; but they hurry to be there where most other people are,” he wrote. “There is a determination in their movement which is quite different from the expression of ordinary curiosity. It seems as though the movement of some of them transmits itself to the others.” In the crowd, “the individual feels that he is transcending the limits of his own person.”

Canetti believed that people didn’t decide to join mobs; they were moved to do so by an emotional contagion that seeped into them unconsciously, creating a simultaneous alignment of their biology. That contagion would drive them to pursue some unified course of action, even at the risk of injury or death. A crowd, Canetti wrote, “wants to experience for itself the strongest possible feeling of its animal force.”

The discovery of these reactive cells, or mirror neurons, as the scientists called them, offered the first physical evidence that the phenomenon of brain interconnectedness that researchers had observed in groups might be the result of a complex, hardwired neurochemical system in our bodies that operates below consciousness.

Yet dozens of experiments done under the umbrella of “emotional intelligence” have made one thing clear: Many effective leaders can—and do—use this subconscious system to manipulate the emotions of their followers. Daniel Goleman and another psychologist, Richard Boyatzis, writing on this subject in 2008, said they believe that great leaders are the ones “whose behavior powerfully leverages this system of brain interconnectedness.”

Surface acting occurs when a person puts on an expression, or takes some subtle action, to try to influence the people around them.

What all of this research shows is that anyone who wants to change the emotional composition of a group—whether it’s a Viennese mob or a football team—can do so by tapping into an invisible network that connects all people together. Strong leaders, if they are so inclined, can bypass the conscious minds of their followers and communicate directly with their brains.

New Zealand’s indigenous Maori were renowned warriors, famous the world over for their intimidating facial tattoos, their skill at wielding giant staffs made of wood or whalebone, and their practice of celebrating victories in battle by eating the roasted hearts of their enemies. The haka, which is basically a group dance, was an ancient component of Maori warcraft, a tightly choreographed spectacle of ignition performed in a variety of circumstances but mainly before battle. The haka was meant to paralyze the enemy with dread by conveying the idea that the warriors had come under the influence of the gods. It was also used to create a collective frenzy among the warriors that put their bodies into perfect sync. The message it sent, as the haka expert Inia Maxwell put it, was that “we’re going to battle and we’re not really expecting to come back alive or injury-free, so let’s throw everything at it.”

Ka Mate. To perform it, the All Blacks lined up at midfield before kickoff in a wedge formation facing the other team. The ritual began when the haka leader, standing in the center, shouted, “Kia rite!” (“Be ready!”)

“Ringa pakia!” (“Slap the hands against the thighs!”)
“Uma tiraha!” (“Puff out the chest!”)
“Turi whatia!” (“Bend the knees!”)
“Hope whai ake!” (“Let the hip follow!”)
“Waewae takahia kia kino!” (“Stamp the feet as hard as you can!”)

“Ka mate, ka mate?” (“Am I going to die, am I going to die?”)
“Ka ora, ka ora?” (“Or will I live, or will I live?”)

Shelford forced the All Blacks to practice the ritual, and then practice it some more. As the weeks rolled on, his team became increasingly engrossed in the performance. “It started to mean something to them,” he said.

Shelford’s reinvigorated haka clearly became a source of energy for the team and a problem for its opponents.

Another master of this kind of emotional transmission was the Montreal Canadiens’ Maurice “Rocket” Richard. The legendary center Jean Béliveau once wrote that Richard “embodied a force, an energy, something that rubbed off on many of his teammates and carried us to five straight championships.”

“The Rocket was more than a hockey player,” his former coach Dick Irvin said. “It was his fury, his desire, and his intensity that motivated the Canadiens.”

Inside the dressing room in the final minutes before the game, Richard would swivel his head methodically from one side of the room to the other, stopping to stare at each of his teammates until they met his eyes. When he was done, he would make some clipped statement, like “Let’s go out and win it.” Given what we know about emotional contagion, deep and surface acting, mirror neurons, and the speed at which the brain registers strong emotions, this tactic suddenly takes on a different cast. It’s as if Richard knew that by locking his beams on people and making them see his face, he could download his own intensity right into them.

If there is a pathway into the minds of human beings that bypasses consciousness and absorbs the emotions of others; and if this pathway can be activated by the sight of a bloody uniform, a hair-raising tribal dance, or just a deep stare; and if these displays can propel a team to run faster, jump higher, hit harder, and push through pain and exhaustion, then these captains must have been masters of the art.

Philipp Lahm, the German soccer captain who, like Deschamps, led two different teams into Tier Two (more on him in a moment), summed it up well. Lahm believed that without passion, even the best teams won’t win, and that the passion of one player could elevate the performance of an entire unit. When a leader does something dramatic on the field, he said, “it releases energies you didn’t even know you had.”


Our brains are capable of making deep, powerful, fast-acting, and emotional connections with the brains of people around us. This kind of synergy doesn’t require our participation. It happens automatically, whether we’re aware of it or not.

TEN   Uncomfortable Truths: The Courage to Stand Apart

Vasiliev hadn’t just played through some routine malady. He’d suffered a heart attack.

Yet Vasiliev’s extreme act of rebellion hadn’t injected any of these toxins. Rather, it set in motion a series of events that brought his teammates closer, cemented his leadership, and paved the way for the team to reel off one of the seventeen most dominant streaks in sports history. There was a strong circumstantial case to be made that the moment Vasiliev attacked his coach was the moment his team made its turn toward greatness.

But all of the Tier One captains, to varying degrees, stood up to management during their careers.

One such practice is “Red Teaming,” in which a team working on a project will designate one person, or a small group of people, to make the most forceful argument they can muster for why the idea that’s currently on the table is a bad one.

Some dissent is a good thing—a strong leader should stand up for the team. Vince Lombardi once said that a captain’s leadership should be based on “truth” and that superior captains identify with the group and support it at all times, “even at the risk of displeasing superiors.” Nevertheless, there’s a line between a level of dissent that’s effective for a team and a level of dissent that destroys its cohesion.

A few captains in my study, however, had engaged in a different, more explosive kind of dissent. They hadn’t just spoken out against their coaches or managers, they had also publicly criticized their teammates.

Their aim was to prod the others into improving by calling them out.

Even more remarkably, he didn’t play a set position. Depending on the team’s tactical needs at any given moment, he would switch between defense and midfield, from the left side to the right.

Richard Hackman, the Harvard organizational psychologist who studied performance teams and extolled the virtues of functional leadership, had also observed the role leaders play in helping groups navigate conflict. All of his research supported one strong conclusion—all great leaders will find themselves right in the middle of it. In order to be effective, Hackman wrote, a team leader “must operate at the margins of what members presently like and want rather than at the center of the collective consensus.”

Hackman believed that dissenting wasn’t just a crucial function of a leader but a form of courage.

Hackman’s research left little doubt that teams need some internal push and pull in order to achieve great things.

Jehn had conducted studies on teams that showed that certain kinds of disagreements didn’t have a negative effect—in fact, teams that had high levels of conflict were often more likely to engage in open discussions that helped them arrive at novel solutions to problems. The worst outcomes came when groups engaged in thoughtless agreements.

In 2012, Jehn and two colleagues published a meta-analysis of sixteen different experiments based on 8,880 teams. The paper’s goal was to test a theory Jehn had developed about the nature of group conflict. Jehn believed that “conflict” needed to be better defined. She believed that dissent inside teams took several different forms. One was something she called personal or relationship conflict, which is defined as the manifestation of some personality clash—an interpersonal ego-driven showdown between a team’s members. This kind of dispute was distinct from another form, task conflict, which is defined as any disagreement that isn’t personal but arises from, and is focused on, the actual execution of the work at hand. There was a difference, she believed, between teams that squabbled because the members didn’t like one another and teams that fought over their different views of how to solve a problem they were working on.

Teams that had engaged in personal conflict had shown significant decreases in trust, cohesion, satisfaction, and commitment—all of which had a negative impact on their performance. For teams that had undergone task conflict, however, the effect on their performance was basically neutral. Arguing about the job at hand hadn’t helped them, but it hadn’t hurt.

“We have found that task conflicts are not necessarily disruptive for group outcomes,” the authors wrote. “Instead, conditions exist under which task conflict is positively related to group performance.”

To lead effectively, Lahm believed, a captain has to speak truth not only to power but to teammates as well. “It’s a totally romantic idea that you have to be eleven friends,” he said.

As much as we might be conditioned to fear it, dissent inside a team can be a powerful force for good. It’s also clear that great captains have to be willing to stand apart when they believe it’s necessary—to endure that “pain of independence” the researchers describe. There are limits, of course. No team can sustain itself for long if the captain, or anyone else, stokes the kind of conflict that’s based on petty hatreds or personal beefs. The principled stands they take must be aimed at defending their fellow players, the way Valeri Vasiliev did, or at keeping the camera pointed squarely at tactics, as Philipp Lahm did by dissecting Bayern’s personnel decisions.

All of this suggests that in any high-pressure team environment, even beyond sports, dissent is a priceless commodity. A leader who isn’t afraid to take on the boss, or the boss’s boss, or just stand up in the middle of a team meeting and say, “Here’s what we’re doing wrong,” is an essential component of excellence.

ELEVEN   The Kill Switch: Regulating Emotion

Hawkes had all the classic traits of a Tier One captain. She didn’t score much, wasn’t exceptionally fast, and did not display dazzling stickwork. She focused on her conditioning and on perfecting the sport’s quieter, more team-oriented skills—trapping the ball, passing, tackling, changing directions.

Charlesworth had come to believe that by eliminating the fixed captaincy, the other players would feel more responsible for the outcome, which would empower them to work harder on the field. He believed that a revolving captaincy would end any politics or jostling among the players for the role.

His goal was to counteract the “social loafing” that the French scientist Maximilien Ringelmann first observed.

Charlesworth also worked to push everyone into taking a leadership role. He required the players to change their uniform numbers constantly and forced everyone, even the stars, to sit out games occasionally in order to keep them hungry and motivated. In 1996 he had named four players, including Hawkes, as permanent members of a “leadership group,” which was later expanded to six. The members of the group would take turns filling the captain’s role on the field.

Once the season began, the open captaincy became a source of tension. The players suspected one another of lobbying for the honor, and when match captains were announced there were sour faces in the dressing room. Cliques hardened.

“The wheels fell off a little bit,” Hawkes recounted. “I don’t know if I can put it down to leadership. Subliminally, maybe I took a step back. Maybe the loss of the captaincy did have a psychological effect on me that I wasn’t aware of.”

When I asked Ric Charlesworth about his decision to name Renita Garard captain for the final, he said he hadn’t given it much thought; he hadn’t considered, or known, the effect it would have on Hawkes.

Even though her coach questioned her fitness to lead, Hawkes had the strength of character to block out her humiliation, remove her own concerns from the equation entirely, and continue leading the players in the face of enormous pressure.

After eighteen months of humiliation, just a few hours before it was set to end, she had to cope with the biggest setback of her career. She may have had the right sort of brain to handle these things—it might be as simple as that. But when I asked her about this ability, she didn’t see it as a sign of her exceptional biology. Emotional control, she told me, was just another form of discipline.

“You have to regulate emotion,” she said. “You can bring it back at some later stage, but when you know you’ve got something to do, you can remove it from your thoughts, put it in a vault, and get on with what you need to get on with.”

After meeting the Dalai Lama in India in 1992, Davidson decided to turn his attention to a more practical question. He wanted to know whether people could train themselves to be more resilient. Over the years, Davidson had become a strong believer in the concept of neuroplasticity, the idea that people’s brains will physically change over time and that those changes can depend on their life experiences.

What Davidson wanted to know was whether people could make positive changes intentionally.

He set out to explore a theory he’d long suspected to be true—that meditation, especially the long, grueling kind that Buddhist monks engage in, might cause this kind of brain rewiring to occur. Are people who meditate better at recovering from adversity?

The meditation experts, Davidson said, “exhibited something that we have identified as one very important constituent of well-being, which is the ability to rapidly recover from adversity.”

There is one thing we can say with certainty, however: At times when they were flooded with negativity, these captains engaged some kind of regulatory mechanism that shut those emotions off before they could have deleterious effects. In other words, they came equipped with a kill switch.

There’s no doubt that great captains use emotion to drive their teams. But like aggression and conflict, emotion comes in more than one flavor. It can enable, but it can also disable. During their careers, the Tier One captains all faced some issue that stirred up powerful negative emotions—an injury, a rebuke, a personal tragedy, even a climate of political injustice. These captains not only continued playing through setbacks—they excelled. They walled off these destructive emotions in order to serve the interests of the team.

A person’s ability to regulate emotion is largely governed by the kind of brain wiring they’re born with. Nevertheless, our genes provide us with a little wiggle room, and our brains do possess the ability to change over time. Scientists also believe it’s possible that we can force them to change through patience and practice. The Tier One captains suggest that this might be true. They displayed and, in one case, developed a kill switch for negative emotions.

PART III   THE OPPOSITE DIRECTION: Leadership Mistakes and Misperceptions

The fact is that in the history of human events, nothing draws a larger and more diverse audience than two elite groups of athletes competing.

Part of our desire to join a great collective stems from the desire to be nobly led. We want to be inspired. We are programmed to respond to brave, steadfast, and fiercely committed leadership—the kind we see on great sports teams.

TWELVE   False Idols: Flawed Captains and Why We Love Them

A master of aggressive displays, Keane once said that when he sensed his team getting too comfortable, he would make a reckless challenge or a bruising tackle just to “inject some angry urgency into the contest.”

“Aggression must be met with aggression.”

The rampant aggression that made Roy Keane such an icon was the same quality that made him different from the Tier One captains.

“Anger can be an emotion of action as the physiological surge of the sympathetic nervous system can lend itself to an increase in strength, stamina, speed and a decrease in perception of pain,” the sports psychologist Mitch Abrams wrote.

Abrams found that the studies presented more evidence that playing angry can produce negative returns. It wasn’t just that anger could draw sanctions from the referees. Intense anger, he wrote, could also harm a player’s performance “due to impairment in fine motor coordination, problem-solving, decision-making and other cognitive processes.”

After controlling for variables like position and minutes played, the researchers found that “aggressive” players—those with the highest technical foul rates—were, in fact, different from their colleagues. Some of their qualities were positive: They were more likely to excel at tasks that required power and explosive energy, such as rebounding and shot blocking. They also tended to take, and make, more field goals. The “energy” that a technical foul creates, or the angry disposition behind it, “may facilitate successful performance in some aspects of the game,” the researchers said.

While they took more foul shots, they were no better at converting them. When it came to taking, and making, three-point shots, the players who competed in a “high-arousal state” struggled mightily. The aggressive players also showed a greater propensity to commit turnovers. “Aggressive players may be prone to recklessness, which is consistent with research showing that angry people tend to engage in risky decision-making,” they said.

Researchers have spent a lot of time looking at the question of why some people are more aggressive than others. They have suggested that these people have different kinds of brains, suffer from cognitive impairment or immaturity, or possess a “warrior gene” that predisposes them to risky behavior. One psychologist, Michael Apter of Georgetown University, theorized that aggression is driven by the pursuit of a pleasure sensation that comes from seeing a rival’s fortunes reversed.

Another idea, backed by laboratory experiments, is that some people have chronically hostile and irritable personalities—they possess a “hostility bias” that makes neutral actions seem threatening and prompts them to react angrily to challenges.

But the Case Western scientists believed that restraint wasn’t some machinelike force; it was a resource—a form of energy people kept in reserve. The levels of these reserves varied not only between people but within them. In other words, our restraint tanks sit somewhere between empty and full at any given moment, depending on how often we’ve been forced to draw from them.

The key argument this study made was that restraint is finite. The more we’re forced to employ our self-control, the less of it we have; and the less we have, the less able we are to inhibit our worst impulses.

The bigger problem, when it comes to Roy Keane, is that the least effective parts of his character are what he’s most admired for—the fighting, the lack of contrition, and the unyielding barrage of hostility he directed at everyone around him. From the outside, these things made him so vividly different from other captains that they seemed to be the hallmarks of his success as a leader. They overshadowed the things he did that actually helped his team: his dogged play, his water carrying, and his unrivaled talent for making displays of powerful emotion to shore up his teammates. When soccer fans say their team needs a captain like Roy Keane, what they’re really saying is that it lacks an enforcer on the pitch who intimidates the opposition, or that the players are too soft and comfortable. These things sound good in online forums, but the evidence suggests they’re not the kinds of qualities that turn teams into long-standing Tier One dynasties.

Jordan didn’t have the kinds of violent episodes Keane did, but he was still highly aggressive, constantly probing the limits of what the referees would allow, especially in the area of shit-talking opponents.

There were two problems with this view of Jordan. The first is that his teams never made it to Tier One. The second is that Jordan did not match the Captain Class blueprint.

As captain, Jordan led mostly by needling and belittling his teammates, who lived in perpetual fear of his famously sharp tongue. When Jordan lost confidence in a player, he would lobby management to get rid of him.

As Cartwright once put it: “You just play until there’s no game left in your uniform.”

The second difference was the way he played basketball. Jordan rarely labored in the service of his team. He ran the Bulls’ offense as he wished, to the exclusion of the supporting cast, and judged everything the organization did by how much it helped him.

But the fact remains that the Bulls hadn’t been able to make their “turn” until Bill Cartwright joined Jordan in the captaincy. It was Bill Cartwright who carried the water, put in the work, and provided the practical communication. He was, in short, the kind of Captain Class presence the team hadn’t had.

In a 1993 interview with Oprah Winfrey, Jordan conceded that he might be a “compulsive competitor.”

Jordan’s obsession with winning never shut off. It was a permanent condition that seemed to be driven by deep emotional forces. Basketball had proved to be a good conduit for a while, but it hadn’t been enough. After retiring, he barely took a breath before setting off on a new challenge: trying to make the roster of Major League Baseball’s Chicago White Sox. Jordan played 127 games in 1994 for the minor-league Birmingham Barons, hitting a measly .202 with 114 strikeouts.

At the six-minute mark, the speech took a strange turn. Jordan told a story about his high school coach, who hadn’t promoted him to the varsity basketball team as a sophomore. “I wanted to make sure you understood,” Jordan said. “You made a mistake, dude.” The crowd laughed and applauded. Jordan poked out the famous tongue, as if he’d slipped back into game mode.

His speech devolved into a long catalog of ancient beefs as he took shots at former NBA players, coaches, and executives who’d disrespected him. It wasn’t the speech of a legend. It was the speech given by an underdog who succeeded despite everyone else’s best efforts.

The reviews of Jordan’s address were resoundingly negative. The NBA writer Adrian Wojnarowski likened it to “a bully tripping nerds with lunch trays in the school cafeteria.” Jordan, he wrote, “revealed himself to be strangely bitter.”

Like Roy Keane, Jordan played angry, but his anger wasn’t the kind that pushed him to violence—he rarely lost his temper on the court. Jordan’s anger was an elaborate fabrication. To play his best, he needed to feel slighted, which, in turn, fired him up to go out and try to prove the doubters wrong. “That’s how I got myself motivated,” he once said. “I had to trick myself, to find a focus to go out and play at a certain level.”

The captains in Tier One seemed to have a kill switch to block negative emotions. Jordan had rigged his control box to supply them with fertilizer. The problem with Jordan’s approach is that when the games ended and the arena lights shut off, his emotional appetite did not. He set off to find another game, another kind of challenge—preferably one in which he would be underestimated.

The reason Jordan quit basketball in his prime after winning three NBA championships is that nobody dared to question him anymore. He wasn’t bored; he’d simply run out of fuel. In the end, he wasn’t so much a star as a meteor. When his anger finally burned out, so did the Bulls.

Jordan’s constant criticism so rankled the veteran guard Steve Kerr that the two men got into a fistfight during preseason training camp.

The notion that he was also an elite leader is not only wrong, it does a disservice to the institution of captaincy.

Jordan and Roy Keane were false idols. As leaders, they were not purebred members of the Captain Class. For teammates, coaches, and executives, their captaincies were the stuff of a thousand migraines.

They didn’t always make for great television. That’s what we’ve come to expect, however, so that’s what we continue to get. The chief reason teams choose the wrong people to lead them is that the public judges every captain against this distorted picture.

THIRTEEN   The Captaincy in Winter: Leadership’s Decline, and How to Revive It

The NFL’s New York Jets were one team that tried going without captains. Matt Slauson, one of the team’s veteran linemen, said the absence of captains “kind of forces guys to step up and take ownership.” After posting an 8–8 record the season before, the Jets dropped to 6–10.

“Today’s game is led by core groups of players,” explained Brooks Laich, a veteran center who’d played for the suddenly leaderless Toronto Maple Leafs. “It’s not done by one individual.”

During this period I noticed another troubling development. Many teams began naming captains for reasons that had nothing to do with their leadership ability.

Building up a player’s loyalty, or giving him a vote of confidence, was one thing. But in many cases, teams made a more fundamental mistake. They convinced themselves that the captaincy was the natural right of the player with the highest market value.

To Arsenal, Brazil, the Mets, and a host of other teams, the captaincy had come down to which superstar’s ego needed stroking, or which player cost the team the most money, or which promising youngster they hoped to build around. It had ceased to be a matter of which player was the most fit to lead.

In 2016, the sports industry took in an estimated ninety billion dollars, a sum not too far behind the global market for cancer treatments.

The amount of cash pouring in was so substantial that it changed the underlying motives of the business. From the earliest days of organized team sports, the surest path to financial success was to win. In the new economy, the chief goal was to turn your games into appointment television.

The primary beneficiaries of this new order were the rarest commodity in sports—the kinds of bankable superstar players and coaches that people will tune in to watch.

As they became richer, more sought after, and more essential to putting on a good show, these celebrity coaches and athletes started throwing their weight around.

Unless the captain was the superstar, the captain was a bystander.

One fashionable theory, especially in Silicon Valley, is that organizations should adopt “flat” structures, in which management layers are thin or even nonexistent. Star employees are more productive, the theory goes, and more likely to stay, when they are given autonomy and offered a voice in decision-making.

Proponents of flatness say it increases the speed of the feedback loop between the people at the top of the pyramid and the people who do the frontline work, allowing for a faster, more agile culture of continuous improvement.

I started to wonder if I was really writing a eulogy.

After all this time, and all the energy we’ve spent studying team leadership, why haven’t we figured it out? Why are we still tinkering with the formula?

Burns concluded that there were two distinct types of leadership—one that was “transactional” and another that was “transformational.” Transactional leadership occurred when the person in charge cared most about making sure their underlings followed orders and that the hierarchical lines of an organization were strictly maintained. There were no appeals to higher ideals, just a series of orders given and carried out. The more desirable model, transformational leadership, only came to pass when leaders focused on the values, beliefs, and needs of their followers, and engaged them in a charismatic way that inspired them to reach higher levels of motivation, morality, and achievement. The secret of transformational leadership, Burns wrote, is that “people can be lifted into their better selves.”

Great leaders, the canon says, show a talent for navigating complexities, promoting freedom of choice, practicing what they preach, appealing to reason, nurturing followers through coaching and mentorship, inspiring cooperation and harmony by showing genuine concern for others, and using “authentic, consistent means” to rally people to their point of view.

The captains in Tier One displayed many of these traits. They were conscientious, principled, and inspirational, and connected with their teammates in ways that elevated their performances. Yet there were things about the way they led their teams that didn’t square with the definition Burns put forward. These men and women were often lacking in talent and charisma. Rather than leading from the front, they avoided speeches, shunned the spotlight, and performed difficult and thankless jobs in the shadows. They weren’t always steadfast examples of virtue, either.

Truth be told, transformational leadership seemed like a grab bag into which every imaginable positive trait had been thrown. It presented an idealized view of leadership, one that was less attainable than aspirational. Of course, maybe that’s the whole point: Leaders cut from the same cloth as Moses, Gandhi, and Napoleon come along so infrequently that no rational person should expect to meet one. The best we can do is to try to understand them, and to help the inferior leaders we settle for make incremental improvements.

After a while, people get tired of waiting for a unicorn to wander into the building, so they start looking for new ways to construct teams that don’t require unicorns at all.

I started to suspect that the real reason we can’t agree on the formula for elite team leadership is that we’ve overcomplicated things. We’ve been so busy scanning the horizon for transformational knights in shining armor that we’ve ignored the likelier truth: there are thousands of potentially transformative leaders right in our midst. We just lack the ability to recognize them.

“They are certainly not a group of ‘supermen.’…They are not born heroes, either; they become heroes.”

Gal summed up his model in an equation: Leadership = P × M × D. He told me that the first variable, P, stood for potential, which he defined as a person’s God-given leadership ability. This was a natural gift that couldn’t be taught, he said, and would start to become evident in a person’s behavior as early as kindergarten. But it also wasn’t excessively rare; many members of an army unit might have these skills.

To become a leader, however, a person with potential also needed to possess the next variable: M. “The prerequisite to be effective is motivation,” he said. These two variables were something of a twin set. People who had leadership potential often had the motivation to fulfill the role. But it was the third variable in Gal’s equation that caught my attention: D for development.

Here, Gal believed, biology played no role. Any leadership candidate, no matter how gifted, had to make an effort to learn the ropes and to prove that they had the right qualities. “You have to earn your leadership over time, to prove that your charisma is used the right way and that it flows in a positive group-oriented direction.”

People who build sports teams have started conflating talent, or market value, with leadership. They have eliminated hierarchies that allow team leaders to exist in a robust middle layer of management. They are afraid to choose leaders who defy conventional wisdom, or whose penchant for creating friction inside the team works against its economic priorities.

The best set of instructions I have come across—the one that most closely matches my own observations about Tier One captains—was compiled by Richard Hackman, the late Harvard social and organizational psychologist, who spent decades observing teams of all kinds as they worked. While their goals were as different as landing a plane is from performing a piece of classical music, Hackman focused his attention on comparing how their preparations and processes affected their outcomes. By doing so, he pieced together the outlines of a theory on the nature of effective team stewardship, or as he put it, the “personal qualities that appear to distinguish excellent team leaders from those for whom leadership is a struggle.” Hackman’s theory consisted of four principles:

  1. Effective leaders know some things. The best team leaders seemed to have a solid understanding of the conditions that needed to be present inside a team in order for its members to thrive. In other words,they developed a vision for the way things ought to be.
  2. Effective leaders know how to do some things. In “performance” situations, Hackman noticed that the most skillful leaders seemed to always sound the right notes. They understood the “themes” that were most important in whatever situation the team was in, and knew how to close the gap between the team’s current state of being and the one it needed to reach in order to succeed.
  3. Effective leaders should be emotionally mature. Hackman understood that leading a team could be “an emotionally challenging undertaking.” Great captains have to manage their own anxieties while coping with the feelings of others. The most mature leaders didn’t run away from anxiety or try to paper it over. Rather, they would pour into it with an eye towards learning about it – and by doing so find the right way to defuse it.
  4. Effective leaders need a measure of personal courage. The basic work of a leader, Hackman believed, was to move a group away from its entrenched system and into a better, more prosperous one. In other words, a leader’s job is to help a team make the turn toward greatness. To do this, he believed, a leader—by definition—had to “operate at the margins of what members presently like and want rather than at the center of the collective consensus.” To push a team forward, a leader must disrupt its routines and challenge its definition of what is normal. Because this kind of thing produces resistance, even anger, leaders have to have the courage to stand apart – even if they end up paying a substantial personal toll for doing so.
The “strange” thing about Hackman’s four rules, as he put it, was what they didn’t include. There was nothing in there about a person’s personality, or values, or charisma. There was no mention whatsoever of their talent. Leading a team effectively wasn’t a matter of skill and magnetism; it was all tied up in the quotidian business of leadership. To Hackman, the chief trait of superior leaders wasn’t what they were like but what they did on a daily basis.

The second challenge in choosing a leader—one that is no less vital—is knowing what kind of people to avoid.

Deborah Gruenfeld, a social psychologist at Stanford’s business school, has spent most of her career studying the roles of individuals inside organizations. She is one of the world’s leading experts on the psychology of power.

As a result, many people wrongly believe they can claim status inside an organization by “tricking” others into thinking they’re entitled to it even if they might not be. It’s an outgrowth of the old adage “fake it till you make it.”

According to Gruenfeld, the research suggests that the opposite is true. In real life, she says, people often attain and hold power within an organization by downplaying their qualifications. “We gain status more readily, and more reliably, by acting just a little less deserving than we actually are.”

They won status by doing everything in their power to suggest they didn’t deserve it.

In 2016, Bret Stephens wrote a column in the opinion pages of The Wall Street Journal in which he described a conversation he’d had with his eleven-year-old son. The subject was the difference between fame and heroism. His son’s point of view on the subject was that famous people depend on what other people think of them to be who they are. Heroes just care about whether they do everything right.

Stephens went on to describe a modern phenomenon, fed by all forms of traditional and social media, in which people devote considerable energy to boasting about their talents and pretending to be great, even when they’re not. He called this “posture culture.”

When I read this, I realized that this is exactly the kind of mindset that has become tangled up with our views about captains. All too often, the people who propose themselves for positions of power are quick to trumpet their abilities. And those of us who make these decisions are often swayed by the force of their personality.

The truth is that leadership is a ceaseless burden. It’s not something people should do for the self-reflected glory, or even because they have oodles of charisma or surpassing talent. It’s something they should do because they have the humility and fortitude to set aside the credit, and their own gratification and well-being, for the team – not just in pressure-packed moments but in every minute of every day.

This instinct shouldn’t be confused with the desire to make others happy. Scientists have shown that a team’s perceptions of its work and of the efficacy of its leader often have no bearing on how well it performs. A great leader is dedicated to doing whatever it takes to make success more likely, even if it’s unpopular, or controversial, or outrageous, or completely invisible to others. A leader has to be committed, above all else, to getting it right.

“A leader is best when people barely know he exists, not so good when people obey and acclaim him, worst when they despise him,” he wrote. “Fail to honor others and they will fail to honor you. But of a good leader, who talks little, when his work is done, his aims fulfilled, they will say, ‘We did this ourselves.’”


BOSTON, 2004
As Rodriguez glared at Arroyo, Jason Varitek, the Red Sox catcher, entered the frame. One of a catcher’s jobs is to protect his pitchers from large angry men carrying bats, so Varitek walked right up to the Yankees star, who towered over him, and delivered a message. “I told him, in choice words, to get to first base,” Varitek said. Rodriguez took a couple steps forward, his eyes narrowing to slits. “Fuck you!” he shouted. This sort of behavior was unusual for Rodriguez, who wasn’t known as a hothead. Varitek stood his ground, so Rodriguez pointed a finger at him. “Come on!”

In 98 percent of these situations, the hitter settles down. The home-plate umpire might head over to have a word with the pitcher and his manager, but that’s essentially it. This instance would belong to the other 2 percent. With a single furious motion, Varitek shoved his hands, one of them still attached to his catcher’s mitt, straight into Rodriguez’s face. The force of this lunging punch, combined with Rodriguez’s forward motion, was strong enough to jar his head violently backward and to lift his feet off the ground.

I decided to circle back—just out of curiosity—to see if there had been any one event that sparked their metamorphosis. The search didn’t take long. It was the afternoon of July 24. After the brawl ended, the energy in the ballpark was completely different. The brawl had brought Boston’s fans roaring to life, and the Red Sox players seemed energized. “Huge adrenaline surge on our end,” said the Boston pitcher Curt Schilling.

After “The Punch,” as it became known, the wandering, undisciplined vibe I’d seen in the Boston clubhouse melted away, replaced by a palpable sense of purpose.

Empiricists don’t believe in the concept of “momentum” in sports. They find it ridiculous to think that a single display of emotion by a respected member of a team could produce a contagion powerful enough to upend the laws of probability.

At thirty-two years old, Jason Varitek was entering the downslope of his career. During the off-season, the Red Sox, pessimistic about his age, his numbers, and his prospects, had lowballed him on a contract extension.

“I was just trying to protect Bronson,” he said afterward. “For protecting a teammate, I’ll take whatever comes.”

Appendix Tier One: The Elite
They had at least five members; they competed in sports where the athletes must interact or coordinate their efforts during competition while also engaging directly with their opponents; they competed in a major spectator sport with millions of fans; their dominance lasted for at least four years; they had ample opportunities to prove themselves against the world’s top competition; and, finally, their achievements stood apart in some way from all other teams in the history of their sport.

Collingwood Magpies (Australian rules football), 1927–30
New York Yankees (Major League Baseball), 1949–53
Hungary (men’s soccer), 1950–55
Montreal Canadiens (National Hockey League), 1955–60
Boston Celtics (National Basketball Association), 1956–69
Brazil (men’s soccer), 1958–62
Pittsburgh Steelers (National Football League), 1974–80
Soviet Union (men’s ice hockey), 1980–84
New Zealand All Blacks (rugby union), 1986–90
Cuba (women’s volleyball), 1991–2000
Australia (women’s field hockey), 1993–2000
United States (women’s soccer), 1996–99
San Antonio Spurs (NBA), 1997–2016
New England Patriots (NFL), 2001–18
Barcelona (professional soccer), 2008–13
France (men’s handball), 2008–15
New Zealand All Blacks (rugby union), 2011–15

The “Double” Captains
Three outstanding soccer captains led more than one team into Tier Two. Because of this rare achievement, they were given special consideration in the book.

Franz Beckenbauer: Germany (1970–74) and Bayern Munich (1971–76)
Didier Deschamps: France (1998–2001) and Olympique de Marseille (1988–93)
Philipp Lahm: Germany (2010–14) and Bayern Munich (2012–16)

On an elite team, the captain will accept the occasional rebuke, but it comes at a price. The coach has to extend the same courtesy. Belichick knew Brady wasn’t afraid to rip up the playbook. His greatest innovation was learning not to feel threatened by it.

Brady and Belichick understood that you can’t smother your partner with love—and you shouldn’t hold your tongue when it matters. You can’t learn to collaborate unless you learn how to fight.

Baseball: Major League Baseball
New York Yankees* 1936–41 Captained until 1939 by Lou Gehrig, this team won four World Series titles in a row and five of six but failed to match the record of five straight.

Atlanta Braves 1991–2005 Won fourteen division titles in fifteen seasons and appeared in the World Series five times but only won it once.

New York Yankees 1996–2000 Won four World Series titles in five seasons, falling one win short of the record. This team did not name a captain, although many say outfielder Paul O’Neill was its unofficial leader.

Basketball: National Basketball Association
Los Angeles Lakers 1980–88 Won five NBA titles in nine seasons behind Kareem Abdul-Jabbar but lost in the first round of the playoffs in 1981.

Boston Celtics 1983–87 Won two NBA titles in four straight Finals appearances under captain Larry Bird.

Chicago Bulls* 1991–98 Won six NBA titles in eight seasons with a 79 percent win rate in its title-winning seasons under captains Michael Jordan, Bill Cartwright, and Scottie Pippen but finished second and third in its division in ’94 and ’95 and dropped out of the quarterfinals of the NBA playoffs in those seasons.

Miami Heat 2010–14 Won two NBA championships in four straight Finals appearances with four captains, including LeBron James and Dwyane Wade.

Field Hockey: Men’s International
Netherlands 1996–2000 Won two Olympic gold medals but only one World Cup and three of five Champions Trophies.

Australia* 2008–14 Won two World Cups, two consecutive Commonwealth Games, and five straight Champions Trophies but lost the ’12 Olympics and ’14 Champions Trophy.

Field Hockey: Women’s International
Netherlands 1983–87 Won one Olympic title, two World Cups, and two European titles but fell short of Australia’s marks.

Netherlands 2009–12 Won one Olympic gold medal, one of two World Cups, and two straight European titles under captain Maartje Paumen but won only one of four Champions Trophies.

Football: National Football League
Miami Dolphins 1971–74 Won two Super Bowls, four division titles, and 84 percent of its regular-season games and recorded the modern NFL’s first undefeated season behind captains Nick Buoniconti, Bob Griese, and Larry Little but lost the 1971 Super Bowl by three touchdowns and lost in the conference playoffs in ’74.

San Francisco 49ers* 1981–95 Won five Super Bowls and thirteen division titles in eighteen seasons, amassing a .742 win percentage and earning the highest single-season Elo rating for a modern NFL team under a list of captains that included Joe Montana, Ronnie Lott, Spencer Tillman, and Steve Young. But it fell short of Pittsburgh’s record of four titles in six years and failed to match the record of the 2001–18 New England Patriots, who appeared in eight Super Bowls in seventeen seasons.

Dallas Cowboys 1992–95 Won three Super Bowls in four seasons while amassing the best overall Elo rating in NFL history for any team during a four-season stretch.

WATCH LIST
Barça Dreams: A True Story of FC Barcelona. Entropy Studio, Gen Image Media, 2015.
Bill Russell: My Life, My Way. HBO Sports, 2000.
Capitão Bellini: Herói Itapirense. HBR TV, 2012.
Carles Puyol: 15 Años, 15 Momentos. Barça TV, 2014.
Dare to Dream: The Story of the U.S. Women’s Soccer Team. HBO Studios, 2005.
Die Mannschaft (Germany at the 2014 World Cup). Little Shark Entertainment, 2014.
England v Hungary 1953: The Full Match at Wembley. Mastersound, 2007.
Fire and Ice: The Rocket Richard Riot. Barna-Alper and Galafilm Productions, 2000.
Height of Passion: FC Barcelona vs. Real Madrid. Forza Productions, 2004.
Hockeyroos Win Gold (2000 Olympic Final). Australian Olympic Committee, 2013.
Inside Bayern Munich. With Owen Hargreaves. BT Sport, 2015.
Legends of All Blacks Rugby. Go Entertain, 1999.
Les Experts: Le Doc (French handball at the 2009 World Championships). Canal+ TV, 2009.
Les Yeux Dans Les Bleus (France in the 1998 World Cup). 2P2L Télévision, 1998.
Mud & Glory: Buck Shelford. TVNZ, 1990.
Nine for IX: The 99ers. ESPN Films, 2013.
Of Miracles and Men (Soviet hockey at the 1980 Olympics). ESPN Films 30 for 30, 2015.
Pelé: The King of Brazil. Janson Media, 2010.
Pelé and Garrincha: Gods of Brazil. Storyville, BBC Four, 2002.
Puskás Hungary. Filmplus, 2009.
Red Army (Soviet hockey). Sony Pictures Classics, 2014.
Tim Duncan and Bill Russell Go One on One. 2009.
Tim Duncan: Inside Stuff. NBA Inside Stuff, ABC, November 2004.
Weight of a Nation (2011 New Zealand All Blacks World Cup campaign). Sky Network Television, 2012.
Yogi Berra: American Sports Legend. Time Life Records, 2004.

Bowling Alone

Robert D. Putnam
Simon and Schuster
August 7, 2001

Excellent book, especially considering the times in which we are living (2020). I took so many notes that I decided to show only the summaries and passages that really jumped out at me. Much to think about. Shows how changes in work, family structure, women’s roles, and other factors have caused people to become increasingly disconnected from family, friends, neighbors, and democratic structures—and how they may reconnect.

SECTION ONE Introduction
CHAPTER 1 Thinking about Social Change in America


SECTION TWO Trends in Civic Engagement and Social Capital
CHAPTER 2 Political Participation
Let’s sum up what we’ve learned about trends in political participation. On the positive side of the ledger, Americans today score about as well on a civics test as our parents and grandparents did, though our self-congratulation should be restrained, since we have on average four more years of formal schooling than they had.33 Moreover, at election time we are no less likely than they were to talk politics or express interest in the campaign. On the other hand, since the mid-1960s, the weight of the evidence suggests, despite the rapid rise in levels of education Americans have become perhaps 10–15 percent less likely to voice our views publicly by running for office or writing Congress or the local newspaper, 15–20 percent less interested in politics and public affairs, roughly 25 percent less likely to vote, roughly 35 percent less likely to attend public meetings, both partisan and nonpartisan, and roughly 40 percent less engaged in party politics and indeed in political and civic organizations of all sorts. We remain, in short, reasonably well-informed spectators of public affairs, but many fewer of us actually partake in the game.

In the 1990s roughly three in four Americans didn’t trust the government to do what is right most of the time.

CHAPTER 3 Civic Participation
To summarize: Organizational records suggest that for the first two-thirds of the twentieth century Americans’ involvement in civic associations of all sorts rose steadily, except for the parenthesis of the Great Depression. In the last third of the century, by contrast, only mailing list membership has continued to expand, with the creation of an entirely new species of “tertiary” association whose members never actually meet. At the same time, active involvement in face-to-face organizations has plummeted, whether we consider organizational records, survey reports, time diaries, or consumer expenditures. We could surely find individual exceptions—specific organizations that successfully sailed against the prevailing winds and tides—but the broad picture is one of declining membership in community organizations. During the last third of the twentieth century formal membership in organizations in general has edged downward by perhaps 10–20 percent. More important, active involvement in clubs and other voluntary associations has collapsed at an astonishing rate, more than halving most indexes of participation within barely a few decades. Many Americans continue to claim that we are “members” of various organizations, but most Americans no longer spend much time in community organizations—we’ve stopped doing committee work, stopped serving as officers, and stopped going to meetings. And all this despite rapid increases in education that have given more of us than ever before the skills, the resources, and the interests that once fostered civic engagement. In short, Americans have been dropping out in droves, not merely from political life, but from organized community life more generally.

CHAPTER 4 Religious Participation

LET US SUMMARIZE what we have learned about the religious entry in America’s social capital ledger. First, religion is today, as it has traditionally been, a central fount of American community life and health. Faith-based organizations serve civic life both directly, by providing social support to their members and social services to the wider community, and indirectly, by nurturing civic skills, inculcating moral values, encouraging altruism, and fostering civic recruitment among church people.

Second, the broad oscillations in religious participation during the twentieth century mirror trends in secular civic life—flowering during the first six decades of the century and especially in the two decades after World War II, but then fading over the last three or four decades.

Moreover, as in politics and society generally, this disengagement appears tied to generational succession. For the most part younger generations (“younger” here includes the boomers) are less involved both in religious and in secular social activities than were their predecessors at the same age.

Finally, American religious life over this period has also reenacted the historically familiar drama by which more dynamic and demanding forms of faith have surged to supplant more mundane forms. At least so far, however, the community-building efforts of the new denominations have been directed inward rather than outward, thus limiting their otherwise salutary effects on America’s stock of social capital. In short, as the twenty-first century opens, Americans are going to church less often than we did three or four decades ago, and the churches we go to are less engaged with the wider community. Trends in religious life reinforce rather than counterbalance the ominous plunge in social connectedness in the secular community.

CHAPTER 5 Connections in the Workplace

CHAPTER 6 Informal Social Connections

Virtually alone among major sports, only bowling has come close to holding its own in recent years.53 Bowling is the most popular competitive sport in America. Bowlers outnumber joggers, golfers, or softball players more than two to one, soccer players (including kids) by more than three to one, and tennis players or skiers by four to one.

CHAPTER 7 Altruism, Volunteering, and Philanthropy

When Aristotle observed that man is by nature a political animal, he was almost surely not thinking of schmoozing. Nevertheless, our evidence suggests that most Americans connect with their fellows in myriad informal ways. Human nature being what it is, we are unlikely to become hermits. On the other hand, our evidence also suggests that across a very wide range of activities, the last several decades have witnessed a striking diminution of regular contacts with our friends and neighbors. We spend less time in conversation over meals, we exchange visits less often, we engage less often in leisure activities that encourage casual social interaction, we spend more time watching (admittedly, some of it in the presence of others) and less time doing. We know our neighbors less well, and we see old friends less often. In short, it is not merely “do good” civic activities that engage us less, but also informal connecting.

The rise in volunteering is sometimes interpreted as a natural counterweight to the decline in other forms of civic participation. Disenchanted with government, it is said, members of the younger generation are rolling up their sleeves to get the job done themselves. The profile of the new volunteerism directly contradicts that optimistic thesis. First, the rise in volunteering is concentrated among the boomers’ aging, civic parents, whereas the civic dropouts are drawn disproportionately from the boomers. Second, volunteering is part of the syndrome of good citizenship and political involvement, not an alternative to it. Volunteers are more interested in politics and less cynical about political leaders than nonvolunteers are. Volunteering is a sign of positive engagement with politics, not a sign of rejection of politics.

This evidence also deflates any easy optimism about the future of volunteerism, for the recent growth has depended upon a generation fated to pass from the scene over the next decade or two.

Compared with their elders, however, they probably will not. So far, the boomer cohort continues to be less disposed to civic engagement than their parents and even to some extent less than their own children, so it is hazardous to assume that the rising tide of volunteerism of the past two decades will persist in the next two.

young Americans in the 1990s displayed a commitment to volunteerism without parallel among their immediate predecessors. This development is the most promising sign of any that I have discovered that America might be on the cusp of a new period of civic renewal, especially if this youthful volunteerism persists into adulthood and begins to expand beyond individual caregiving to broader engagement with social and political issues.

CHAPTER 8 Reciprocity, Honesty, and Trust

Almost imperceptibly, the treasure that we spend on getting it in writing has steadily expanded since 1970, as has the amount that we spend on getting lawyers to anticipate and manage our disputes. In some respects, this development may be one of the most revealing indicators of the fraying of our social fabric. For better or worse, we rely increasingly—we are forced to rely increasingly—on formal institutions, and above all on the law, to accomplish what we used to accomplish through informal networks reinforced by generalized reciprocity—that is, through social capital.

CHAPTER 9 Against the Tide? Small Groups, Social Movements, and the Net

The evidence on small groups, social movements, and telecommunications is more ambiguous than the evidence in earlier chapters. All things considered, the clearest exceptions to the trend toward civic disengagement are 1) the rise in youth volunteering discussed in chapter 7; 2) the growth of telecommunication, particularly the Internet; 3) the vigorous growth of grassroots activity among evangelical conservatives; and 4) the increase in self-help support groups. These diverse countercurrents are a valuable reminder that society evolves in multiple ways simultaneously.

CHAPTER 10 Introduction

During the first two-thirds of the century Americans took a more and more active role in the social and political life of their communities—in churches and union halls, in bowling alleys and clubrooms, around committee tables and card tables and dinner tables. Year by year we gave more generously to charity, we pitched in more often on community projects, and (insofar as we can still find reliable evidence) we behaved in an increasingly trustworthy way toward one another. Then, mysteriously and more or less simultaneously, we began to do all those things less often.

compared with our own recent past, we are less connected. We remain interested and critical spectators of the public scene. We kibitz, but we don’t play. We maintain a facade of formal affiliation, but we rarely show up. We have invented new ways of expressing our demands that demand less of us. We are less likely to turn out for collective deliberation—whether in the voting booth or the meeting hall—and when we do, we find that discouragingly few of our friends and neighbors have shown up. We are less generous with our money and (with the important exception of senior citizens) with our time, and we are less likely to give strangers the benefit of the doubt. They, of course, return the favor.

Thin, single-stranded, surf-by interactions are gradually replacing dense, multistranded, well-exercised bonds.

Large groups with local chapters, long histories, multiple objectives, and diverse constituencies are being replaced by more evanescent, single-purpose organizations, smaller groups that “reflect the fluidity of our lives by allowing us to bond easily but to break our attachments with equivalent ease.”

vertiginous rise of staff-led interest groups purpose-built to represent our narrower selves. Place-based social capital is being supplanted by function-based social capital. We are withdrawing from those networks of reciprocity that once constituted our communities.

Why, beginning in the 1960s and 1970s and accelerating in the 1980s and 1990s, did the fabric of American community life begin to unravel? Before we can consider reweaving the fabric, we need to address this mystery. It is, if I am right, a puzzle of some importance to the future of American democracy.

social scientists, faced with a trend like declining social participation, look for concentrations of effects.

First, effects triggered by social change often spread well beyond the point of initial contact. If, for example, the dinner party has been undermined by the movement of women into the paid labor force—and we shall find some evidence for that view—such a development might well inhibit dinner parties not only among women who work outside the home, but also among stay-at-homes, tired of doing all the inviting.

Second, in our routine screening of the usual suspects, none stands out in the initial lineup. Civic disengagement appears to be an equal opportunity affliction.

To be sure, the levels of civic engagement differ across these categories, as we have already noted—more informal socializing among women, more civic involvement among the well-to-do, less social trust among African Americans, less voting among independents, more altruism in small towns, more church attendance among parents, and so on.

Education is one of the most important predictors—usually, in fact, the most important predictor—of many forms of social participation—from voting to associational membership, to chairing a local committee to hosting a dinner party to giving blood.

education is an especially powerful predictor of participation in public, formally organized activities.

Education is in part a proxy for privilege—for social class and economic advantage—but when income, social status, and education are used together to predict various forms of civic engagement, education stands out as the primary influence.

Thus education boosts civic engagement sharply, and educational levels have risen massively. Unfortunately, these two plain facts only deepen our central mystery. If anything, the growth in education should have increased civic engagement.

CHAPTER 11 Pressures of Time and Money

Contrary to expectations that unemployment would radicalize its victims, social psychologists found that the jobless became passive and withdrawn, socially as well as politically.13 As my economic situation becomes more dire, my focus narrows to personal and family survival. People with lower incomes and those who feel financially strapped are much less engaged in all forms of social and community life than those who are better off.

To sum up: The available evidence suggests that busyness, economic distress, and the pressures associated with two-career families are a modest part of the explanation for declining social connectedness. These pressures have targeted the kinds of people (especially highly educated women) who in the past bore a disproportionate share of the responsibility for community involvement, and in that sense this development has no doubt had synergistic effects that spread beyond those people themselves. With fewer educated, dynamic women with enough free time to organize civic activity, plan dinner parties, and the like, the rest of us, too, have gradually disengaged. At the same time, the evidence also suggests that neither time pressures nor financial distress nor the movement of women into the paid labor force is the primary cause of civic disengagement over the last two decades.39 The central exculpatory fact is that civic engagement and social connectedness have diminished almost equally for both women and men, working or not, married or single, financially stressed or financially comfortable.

CHAPTER 12 Mobility and Sprawl

Americans chose to move to the suburbs and to spend more time driving, presumably because we found the greater space, larger homes, lower-cost shopping and housing—and perhaps, too, the greater class and racial segregation—worth the collective price we have paid in terms of community. On the other hand, DDB Needham Life Style survey data on locational preferences suggest that during the last quarter of the twentieth century—the years of rapid suburbanization—suburban living gradually became less attractive compared to residence in either the central city or smaller towns.24 Whatever our private preferences, however, metropolitan sprawl appears to have been a significant contributor to civic disengagement over the last three or four decades for at least three distinct reasons.

First, sprawl takes time. More time spent alone in the car means less time for friends and neighbors, for meetings, for community projects, and so on. Though this is the most obvious link between sprawl and disengagement, it is probably not the most important.

Second, sprawl is associated with increasing social segregation, and social homogeneity appears to reduce incentives for civic involvement, as well as opportunities for social networks that cut across class and racial lines. Sprawl has been especially toxic for bridging social capital.

Third, most subtly but probably most powerfully, sprawl disrupts community “boundedness.” Commuting time is important in large part as a proxy for the growing separation between work and home and shops.

“communities that appear to foster participation—the small and relatively independent communities—are becoming rarer and rarer.”25 Three decades later this physical fragmentation of our daily lives has had a visible dampening effect on community involvement.

CHAPTER 13 Technology and Mass Media
Political communications specialist Roderick Hart argues that television as a medium creates a false sense of companionship, making people feel intimate, informed, clever, busy, and important. The result is a kind of “remote-control politics,” in which we as viewers feel engaged with our community without the effort of actually being engaged.

Americans at the end of the twentieth century were watching more TV, watching it more habitually, more pervasively, and more often alone, and watching more programs that were associated specifically with civic disengagement (entertainment, as distinct from news). The onset of these trends coincided exactly with the national decline in social connectedness, and the trends were most marked among the younger generations that are (as we shall see in more detail in the next chapter) distinctively disengaged. Moreover, it is precisely those Americans most marked by this dependence on televised entertainment who were most likely to have dropped out of civic and social life—who spent less time with friends, were less involved in community organizations, and were less likely to participate in public affairs.

The evidence is powerful but circumstantial; because it does not derive from randomized experiments, it cannot be fully conclusive about the causal effects of television and other forms of electronic entertainment. Heavy users of these new forms of entertainment are certainly isolated, passive, and detached from their communities, but we cannot be entirely certain that they would be more sociable in the absence of television. At the very least, television and its electronic cousins are willing accomplices in the civic mystery we have been unraveling, and more likely than not, they are ringleaders.

CHAPTER 14 From Generation to Generation

And reversing the effects of their departure will be as difficult as trying to heat a tubful of bathwater that has become cold: a lot of really hot water will have to be added to raise the average temperature. Unless America experiences a dramatic upward boost in civic engagement in the next few years, Americans in the twenty-first century will join, trust, vote, and give even less than we did at the end of the twentieth.

To sum up: Much of the decline of civic engagement in America during the last third of the twentieth century is attributable to the replacement of an unusually civic generation by several generations (their children and grandchildren) that are less embedded in community life. In speculating about explanations for this sharp generational discontinuity, I am led to the conclusion that the dynamics of civic engagement in the last several decades have been shaped in part by social habits and values influenced in turn by the great mid-century global cataclysm. It is not, however, my argument that world war is a necessary or a praiseworthy means toward the goal of civic reengagement. We must acknowledge the enduring consequences—some of them, I have argued, powerfully positive—of what we used to call “the war,” without at the same time glorifying martial virtues or mortal sacrifice. (This is precisely the dilemma addressed so effectively by director Steven Spielberg in Saving Private Ryan.) When a generation of Americans early in the twentieth century reflected on both the horrors of war and the civic virtues that it inculcated, they framed their task as the search for “the moral equivalent of war.”59 Insofar as the story of this chapter contains any practical implication for civic renewal, it is that.

CHAPTER 15 What Killed Civic Engagement? Summing Up

Decline in civic commitment on the part of business leaders. As Wal-Mart replaces the corner hardware store, Bank of America takes over the First National Bank, and local owners are succeeded by impersonal markets, the incentives for business elites to contribute to community life atrophy.

“Where are the power elite when you need them?” he said. “They’re all off at corporate headquarters in some other state.”9

First, pressures of time and money, including the special pressures on two-career families, contributed measurably to the diminution of our social and community involvement during these years.

Second, suburbanization, commuting, and sprawl also played a supporting role. Again, a reasonable estimate is that these factors together might account for perhaps an additional 10 percent of the problem.

Third, the effect of electronic entertainment—above all, television—in privatizing our leisure time has been substantial.

Fourth and most important, generational change—the slow, steady, and ineluctable replacement of the long civic generation by their less involved children and grandchildren—has been a very powerful factor. The effects of generational succession vary significantly across different measures of civic engagement—greater for more public forms, less for private schmoozing—but as a rough rule of thumb we concluded in chapter 14 that this factor might account for perhaps half of the overall decline.

CHAPTER 17 Education and Children’s Welfare

Conversely, where social connectedness is lacking, schools work less well, no matter how affluent the community. Moreover, social capital continues to have powerful effects on education during the college years. Extracurricular activities and involvement in peer social networks are powerful predictors of college dropout rates and college success, even holding constant precollegiate factors, including aspirations.28 In other words, at Harvard as well as in Harlem, social connectedness boosts educational attainment. One of the areas in which America’s diminished stock of social capital is likely to have the most damaging consequences is the quality of education (both in school and outside) that our children receive.

CHAPTER 18 Safe and Productive Neighborhoods
As sociologist Robert Sampson states: “Lack of social capital is one of the primary features of socially disorganized communities.”14 The best evidence available on changing levels of neighborhood connectedness suggests that most Americans are less embedded in their neighborhood than they were a generation ago.15 This is due in part to the fact that women—long the stalwart neighborhood builders—are now much more likely to be away at work during the day than their mothers were. And professional men, who once lent their skills to neighborhood associations, are spending longer hours on the job than their fathers did.

William Julius Wilson described the downward spiral in his 1987 classic The Truly Disadvantaged: “The basic thesis is not that ghetto culture went unchecked following the removal of higher-income families in the inner city, but that the removal of these families made it more difficult to sustain the basic institutions in the inner city (including churches, stores, schools, recreational facilities, etc.) in the face of prolonged joblessness. And as the basic institutions declined, the social organization of inner-city neighborhoods (defined here to include a sense of community, positive neighborhood identification, and explicit norms and sanctions against aberrant behavior) likewise declined.”

However, it is worth underlining that had the “culture of suburbia” and the social pathologies of middle-class white communities attracted equal attention, we would be able to draw a more balanced assessment of the impact of social-capital deficits in Grosse Pointe as well as in the central city of Detroit. There is no reason to suppose that the effects (good and bad) of social capital on neighborhood life are limited to poor or minority communities. A second reason for emphasizing the role of social capital in poor communities is this: Precisely because poor people (by definition) have little economic capital and face formidable obstacles in acquiring human capital (that is, education), social capital is disproportionately important to their welfare.

CHAPTER 19 Economic Prosperity

Understanding the detailed linkages between social capital and economic performance is a lively field of inquiry at the moment, so it would be premature to claim too much for the efficacy of social capital or to describe exactly when and how networks of social connectedness boost the aggregate productivity of an economy. Research on social capital and economic development in what we once called the “Third World” is appearing at a rapid rate, based on work in such far-flung sites as South Africa, Indonesia, Russia, India, and Burkina Faso. Similarly rich work is under way on how Americans might improve the plight of our poorest communities by enabling those communities to invest in social capital and empowering them to capitalize on the social assets they already have.31 For the moment, the links between social networks and economic success at the individual level are understood. You can be reasonably confident that you will benefit if you acquire a richer social network, but it is not yet entirely clear whether that reflects merely your ability to grab a larger share of a fixed pie, or whether if we all have richer social networks, we all gain. The early returns, however, encourage the view that social capital of the right sort boosts economic efficiency, so that if our networks of reciprocity deepen, we all benefit, and if they atrophy, we all pay dearly.

CHAPTER 20 Health and Happiness
In the decades since the Fab Four topped the charts, life satisfaction among adult Americans has declined steadily. Roughly half the decline in contentment is associated with financial worries, and half is associated with declines in social capital: lower marriage rates and decreasing connectedness to friends and community. Not all segments of the population are equally gloomy. Survey data show that the slump has been greatest among young and middle-aged adults (twenty to fifty-five). People over fifty-five—our familiar friends from the long civic generation—are actually happier than were people their age a generation ago.24 Some of the generational discrepancy is due to money worries: despite rising prosperity, young and middle-aged people feel less secure financially. But some of the disparity is also due to social connectedness. Young and middle-aged adults today are simply less likely to have friends over, attend church, or go to club meetings than were earlier generations. Psychologist Martin Seligman argues that more of us are feeling down because modern society encourages a belief in personal control and autonomy more than a commitment to duty and common enterprise. This transformation heightens our expectations about what we can achieve through choice and grit and leaves us unprepared to deal with life’s inevitable failures. Where once we could fall back on social capital—families, churches, friends—these no longer are strong enough to cushion our fall.25 In our personal lives as well as in our collective life, the evidence of this chapter suggests, we are paying a significant price for a quarter century’s disengagement from one another.

CHAPTER 21 Democracy
Without such face-to-face interaction, without immediate feedback, without being forced to examine our opinions under the light of other citizens’ scrutiny, we find it easier to hawk quick fixes and to demonize anyone who disagrees. Anonymity is fundamentally anathema to deliberation.

One of the best predictors of cooperation with the decennial census is one’s level of civic participation. Even more striking is the finding that communities that rank high on measures of social capital, such as turnout and social trust, provide significantly higher contributions to public broadcasting, even when we control for all the other factors that are said to affect audience preferences and expenditures—education, affluence, race, tax deductibility, and public spending.55 Public broadcasting is a classic example of a public good—I obtain the benefit whether or not I pay, and my contribution in itself is unlikely to keep the station on the air. Why should any rational, self-interested listener, even one addicted to Jim Lehrer, send off a check to the local station? The answer appears to be that, at least in communities that are rich in social capital, civic norms sustain an expanded sense of “self-interest” and a firmer confidence in reciprocity.

Similarly, research has found that military units are more effective when bonds of solidarity and trust are high, and that communities with strong social networks and grassroots associations are better at confronting unexpected crises than communities that lack such civic resources.56 In all these instances our collective interest requires actions that violate our immediate self-interest and that assume our neighbors will act collectively, too. Modern society is replete with opportunities for free-riding and opportunism. Democracy does not require that citizens be selfless saints, but in many modest ways it does assume that most of us much of the time will resist the temptation to cheat. Social capital, the evidence increasingly suggests, strengthens our better, more expansive selves. The performance of our democratic institutions depends in measurable ways upon social capital.

CHAPTER 22 The Dark Side of Social Capital
Something in the first half of the twentieth century made successive cohorts of Americans more tolerant, but that generational engine failed to produce further increases in tolerance among those born in the second half of the century. The late X’ers are no more tolerant than the early boomers. So the biggest generational gains in tolerance are already behind us. By contrast, something happened in America in the second half of the twentieth century to make people less civically engaged. The late X’ers are a lot less engaged than the early boomers. As a result, the biggest generational losses in engagement still lie ahead.

Virtually no cohort in America is more engaged or more tolerant than those born around 1940–45. They are the liberal communitarians par excellence. Their parents were as engaged, but less tolerant. Their children are as tolerant, but less engaged. For some reason, that cohort inherited most of their parents’ sense of community, but they discarded their parents’ intolerance.

Community-mongers have fostered intolerance in the past, and their twenty-first-century heirs need to be held to a higher standard. That said, the greatest threat to American liberty comes from the disengaged, not the engaged. The most intolerant individuals and communities in America today are the least connected, not the most connected.

Race is the most important embodiment of the ethical crosscurrents that swirl around the rocks of social capital in contemporary America. It is perhaps foolhardy to offer a brief interpretation of those issues here, but it would be irresponsible to avoid them.

Here is one way of framing the central issue facing America as we become ever more diverse ethnically. If we had a golden magic wand that would miraculously create more bridging social capital, we would surely want to use it. But suppose we had only an aluminum magic wand that could create more social capital, but only of a bonding sort. This second-best magic wand would bring more blacks and more whites to church, but not to the same church, more Hispanics and Anglos to the soccer field, but not the same soccer field. Should we use it? As political scientist Eileen McDonagh has put the point vividly: “Is it better to have neighborhoods legally restricted on the basis of race, but with everyone having everyone else over for dinner, or is it better to have neighborhoods unrestricted on the basis of race, but with very little social interaction going on between neighbors?”

SECTION FIVE What Is to Be Done?
CHAPTER 23 Lessons of History: The Gilded Age and the Progressive Era

Then, as now, new concentrations of wealth and corporate power raised questions about the real meaning of democracy. Then, as now, massive urban concentrations of impoverished ethnic minorities posed basic questions of social justice and social stability. Then, as now, the comfortable middle class was torn between the seductive attractions of escape and the deeper demands of redemptive social solidarity.

Then, as now, new forms of commerce, a restructured workplace, and a new spatial organization of human settlement threatened older forms of solidarity. Then, as now, waves of immigration changed the complexion of America and seemed to imperil the unum in our pluribus. Then, as now, materialism, political cynicism, and a penchant for spectatorship rather than action seemed to thwart idealistic reformism.

Above all, then, as now, older strands of social connection were being abraded—even destroyed—by technological and economic and social change. Serious observers understood that the path from the past could not be retraced, but few saw clearly the path to a better future.

The institutions of civil society formed between roughly 1880 and 1910 have lasted for nearly a century. In those few decades the voluntary structures of American society assumed modern form. Essentially, the trends toward civic disengagement reviewed in section II of this book register the decay of that very structure over the last third of the twentieth century.

For all the difficulties, errors, and misdeeds of the Progressive Era, its leaders and their immediate forebears in the late nineteenth century correctly diagnosed the problem of a social-capital or civic engagement deficit. It must have been tempting in 1890 to say, “Life was much nicer back in the village. Everybody back to the farm.” They resisted that temptation to reverse the tide, choosing instead the harder but surer path of social innovation. Similarly, among those concerned about the social-capital deficit today, it would be tempting to say, “Life was much nicer back in the fifties. Would all women please report to the kitchen, and turn off the TV on the way?” Social dislocation can easily breed a reactionary form of nostalgia. On the contrary, my message is that we desperately need an era of civic inventiveness to create a renewed set of institutions and channels for a reinvigorated civic life that will fit the way we have come to live. Our challenge now is to reinvent the twenty-first-century equivalent of the Boy Scouts or the settlement house or the playground or Hadassah or the United Mine Workers or the NAACP. What we create may well look nothing like the institutions Progressives invented a century ago, just as their inventions were not carbon copies of the earlier small-town folkways whose passing they mourned. We need to be as ready to experiment as the Progressives were. Willingness to err—and then correct our aim—is the price of success in social reform.

CHAPTER 24 Toward an Agenda for Social Capitalists
our current plight is a pervasive and continuing generational decline in almost all forms of civic engagement.

Let us find ways to ensure that by 2010 the level of civic engagement among Americans then coming of age in all parts of our society will match that of their grandparents when they were that same age, and that at the same time bridging social capital will be substantially greater than it was in their grandparents’ era.

So I challenge America’s employers, labor leaders, public officials, and employees themselves: Let us find ways to ensure that by 2010 America’s workplace will be substantially more family-friendly and community-congenial, so that American workers will be enabled to replenish our stocks of social capital both within and outside the workplace.

Let us act to ensure that by 2010 Americans will spend less time traveling and more time connecting with our neighbors than we do today, that we will live in more integrated and pedestrian-friendly areas, and that the design of our communities and the availability of public space will encourage more casual socializing with friends and neighbors.

So I challenge America’s clergy, lay leaders, theologians, and ordinary worshipers: Let us spur a new, pluralistic, socially responsible “great awakening,” so that by 2010 Americans will be more deeply engaged than we are today in one or another spiritual community of meaning, while at the same time becoming more tolerant of the faiths and practices of other Americans.

NO SECTOR OF AMERICAN SOCIETY will have more influence on the future state of our social capital than the electronic mass media and especially the Internet. If we are to reverse the adverse trends of the last three decades in any fundamental way, the electronic entertainment and telecommunications industry must become a big part of the solution instead of a big part of the problem. So I challenge America’s media moguls, journalists, and Internet gurus, along with viewers like you (and me): Let us find ways to ensure that by 2010 Americans will spend less leisure time sitting passively alone in front of glowing screens and more time in active connection with our fellow citizens. Let us foster new forms of electronic entertainment and communication that reinforce community engagement rather than forestalling it.

Let us find ways to ensure that by 2010 significantly more Americans will participate in (not merely consume or “appreciate”) cultural activities from group dancing to songfests to community theater to rap festivals. Let us discover new ways to use the arts as a vehicle for convening diverse groups of fellow citizens.

So I challenge America’s government officials, political consultants, politicians, and (above all) my fellow citizens: Let us find ways to ensure that by 2010 many more Americans will participate in the public life of our communities—running for office, attending public meetings, serving on committees, campaigning in elections, and even voting.

social capitalists need to avoid false debates. One such debate is “top-down versus bottom-up.” The roles of national and local institutions in restoring American community need to be complementary; neither alone can solve the problem. Another false debate is whether government is the problem or the solution. The accurate answer, judging from the historical record (as I argued in chapter 15), is that it can be both.

The final false debate to be avoided is whether what is needed to restore trust and community bonds in America is individual change or institutional change. Again, the honest answer is “Both.”

Thoughts in Solitude

Thoughts in Solitude
Thomas Merton
Farrar, Straus & Giroux
October 1, 1998

This is a short book, but it is full of impactful thoughts. I like to read books like this slowly: one page, one small chapter at a time, spending the day reflecting on it before moving on. There were many instances where Merton's reflections really got in my head and stuck there.

Teach me to bear a humility which shows me, without ceasing, that I am a liar and a fraud and that, even though this is so, I have an obligation to strive after truth, to be as true as I can, even though I will inevitably find all my truth half poisoned with deceit.

Chew on that for a little while.

The murderous din of our materialism cannot be allowed to silence the independent voices which will never cease to speak: whether they be the voice of the Christian Saints, or the voices of Oriental sages like Lao-Tse or the Zen Masters, or the voices of men like Thoreau or Martin Buber, or Max Picard.

To be a person implies responsibility and freedom, and both these imply a certain interior solitude, a sense of personal integrity, a sense of one’s own reality and of one’s ability to give himself to society – or to refuse that gift.

When men are merely submerged in a mass of impersonal human beings pushed around by automatic forces, they lose their true humanity, their integrity, their ability to love, their capacity for self-determination. When society is made up of men who know no interior solitude it can no longer be held together by love: and consequently it is held together by a violent and abusive authority. But when men are violently deprived of the solitude and freedom which are their due, the society in which they live becomes putrid, it festers with servility, resentment and hate.

If we make good use of what we have, if we make it serve our good desires, we can do better than another who merely serves his temperament instead of making it serve him.

…the things that we love tell us what we are.

We must return from the desert like Jesus or St. John, with our capacity for feeling expanded and deepened, strengthened against the appeals of falsity, warned against temptation, great, noble and pure.

Living is the constant adjustment of thought to life and life to thought in such a way that we are always growing, always experiencing new things in the old and old things in the new. Thus life is always new.

Self-conquest is really self-surrender.

Hope is the secret of true asceticism. It denies our own judgements and desires and rejects the world in its present state, not because either we or the world are evil, but because we are not in a condition to make the best use of our own or of the world’s goodness.

There is no neutrality between gratitude and ingratitude. Those who are not grateful soon begin to complain of everything. Those who do not love, hate.

True gratitude and hypocrisy cannot exist together. They are totally incompatible. Gratitude of itself makes us sincere – or if it does not, then it is not true gratitude.

Gratitude therefore, takes nothing for granted, is never unresponsive, is constantly awakening to new wonder and to praise of the goodness of God.

Knowing that he has nothing he also knows that he NEEDS everything and he is not afraid to beg for what he needs and to get it where he can.

The proud man loves his own illusion and self-sufficiency.

The humble man begs for a share in what everybody else has received.

Meditative prayer is a stern discipline, and one which cannot be learned by violence. It requires unending courage and perseverance, and those who are not willing to work at it patiently will finally end in compromise. Here, as elsewhere, compromise is only another name for failure.

To meditate is to think. And yet successful meditation is much more than “affections,” much more than a series of prepared “acts” which one goes through.

In meditative prayer, one thinks and speaks not only with his mind and lips, but in a certain sense with his WHOLE BEING. Prayer is then not just a formula of words, or a series of desires springing up in the heart – it is the orientation of our whole body, mind and spirit to God in silence, attention, and adoration.

One cannot then enter into meditation, in this sense, without a kind of inner upheaval. By upheaval I do not mean a disturbance, but a breaking out of routine, a liberation of the heart from the cares and preoccupations of one’s daily business. The reason why so few people apply themselves seriously to mental prayer is precisely that this inner upheaval is necessary, and they are usually incapable of the effort required to make it.

If we try to contemplate God without having turned the face of our inner self entirely in His direction, we will end up inevitably by contemplating ourselves, and we will perhaps plunge into the abyss of warm darkness which is our own sensible nature. This is not a darkness in which one can safely remain passive.

The “turning” of our whole self to God can be achieved only by deep and sincere and simple faith, enlivened by a hope which knows that contact with God is possible, and love which desires above all things to do His will.

Poverty is the door to freedom, not because we remain imprisoned in the anxiety and constraint which poverty of itself implies, but because, finding nothing in ourselves that is a source of hope, we know there is nothing in ourselves worth defending.

Your life is shaped by the end you live for. You are made in the image of what you desire. To unify your life, unify your desires. To spiritualize your life, spiritualize your desires. To spiritualize your desires, desire to be without desire.

Poverty is not merely a matter of not having “things.” It is an attitude which leads us to renounce some of the advantages which come from the use of things.

…anything that tends to affirm us as distinct from others, as superior to others in such a way that we take satisfaction in these peculiarities and treat them as “possessions.”

Even the ability to help others, to give our time and possessions to them, can be “possessed” with attachment if by these actions we are really forcing ourselves on others and obligating them to ourselves.

Poverty means need.

If we were really humble, we would know to what extent we are liars!

Teach me to bear a humility which shows me, without ceasing, that I am a liar and a fraud and that, even though this is so, I have an obligation to strive after truth, to be as true as I can, even though I will inevitably find all my truth half poisoned with deceit.

It is not speaking that breaks our silence, but the anxiety to be heard.

The words of the proud man impose silence on all others, so that he alone may be heard. The humble man speaks only in order to be spoken to. The humble man asks nothing but an alms, then waits and listens.

If our life is poured out in useless words, we will never hear anything, will never become anything, and in the end, because we have said everything before we had anything to say we shall be left speechless at the moment of our greatest decision.

Let me seek, then, the gift of silence, and poverty, and solitude, where everything I touch is turned into prayer: where the sky is my prayer, the birds are my prayer, the wind in the trees is my prayer, for God is in all.

For this to be so I must be really poor. I must seek nothing: but I must be most content with whatever I have from God. True poverty is that of the beggar who is glad to receive alms from anyone, but especially from God.

Gratitude is therefore the heart of the solitary life, as it is the heart of the Christian life.


Walden and Civil Disobedience
Henry David Thoreau, W. S. Merwin
Signet Classic

I first read Walden in the late 1980s in college. I had no idea what I was doing. This is a fact that becomes more and more clear as I age. Thoreau wrote this to be like a "bible". It is not meant to be read over a week or so for a college class. It is meant to be absorbed. Slowly. And that is exactly how I approached it this time around. I am glad I did.

I went to the woods because I wished to live deliberately, to front only the essential facts of life, and see if I could not learn what it had to teach, and not, when I came to die, discover that I had not lived. I did not wish to live what was not life, living is so dear; nor did I wish to practise resignation, unless it was quite necessary. I wanted to live deep and suck out all the marrow of life, to live so sturdily and Spartan-like as to put to rout all that was not life, to cut a broad swath and shave close, to drive life into a corner, and reduce it to its lowest terms, and, if it proved to be mean, why then to get the whole and genuine meanness of it, and publish its meanness to the world; or if it were sublime, to know it by experience, and be able to give a true account of it in my next excursion.

Moreover, I, on my side, require of every writer, first or last, a simple and sincere account of his own life, and not merely what he has heard of other men’s lives; some such account as he would send to his kindred from a distant land; for if he has lived sincerely, it must have been in a distant land to me. Perhaps these pages are more particularly addressed to poor students. As for the rest of my readers, they will accept such portions as apply to them. I trust that none will stretch the seams in putting on the coat, for it may do good service to him whom it fits.

I have travelled a good deal in Concord; and everywhere, in shops, and offices, and fields, the inhabitants have appeared to me to be doing penance in a thousand remarkable ways.

I see young men, my townsmen, whose misfortune it is to have inherited farms, houses, barns, cattle, and farming tools; for these are more easily acquired than got rid of. Better if they had been born in the open pasture and suckled by a wolf, that they might have seen with clearer eyes what field they were called to labor in.

Most men, even in this comparatively free country, through mere ignorance and mistake, are so occupied with the factitious cares and superfluously coarse labors of life that its finer fruits cannot be plucked by them. Their fingers, from excessive toil, are too clumsy and tremble too much for that.

how vaguely all the day he fears, not being immortal nor divine, but the slave and prisoner of his own opinion of himself, a fame won by his own deeds. Public opinion is a weak tyrant compared with our own private opinion. What a man thinks of himself, that it is which determines, or rather indicates, his fate.

The mass of men lead lives of quiet desperation. What is called resignation is confirmed desperation. From the desperate city you go into the desperate country, and have to console yourself with the bravery of minks and muskrats. A stereotyped but unconscious despair is concealed even under what are called the games and amusements of mankind. There is no play in them, for this comes after work. But it is a characteristic of wisdom not to do desperate things.

Old deeds for old people, and new deeds for new.

One farmer says to me, “You cannot live on vegetable food solely, for it furnishes nothing to make bones with”; and so he religiously devotes a part of his day to supplying his system with the raw material of bones; walking all the while he talks behind his oxen, which, with vegetable-made bones, jerk him and his lumbering plow along in spite of every obstacle. Some things are really necessaries of life in some circles, the most helpless and diseased, which in others are luxuries merely, and in others still are entirely unknown.

When a man is warmed by the several modes which I have described, what does he want next? Surely not more warmth of the same kind, as more and richer food, larger and more splendid houses, finer and more abundant clothing, more numerous, incessant, and hotter fires, and the like. When he has obtained those things which are necessary to life, there is another alternative than to obtain the superfluities; and that is, to adventure on life now, his vacation from humbler toil having commenced.

In any weather, at any hour of the day or night, I have been anxious to improve the nick of time, and notch it on my stick too; to stand on the meeting of two eternities, the past and future, which is precisely the present moment; to toe that line. You will pardon some obscurities, for there are more secrets in my trade than in most men’s, and yet not voluntarily kept, but inseparable from its very nature. I would gladly tell all that I know about it, and never paint “No Admittance” on my gate.

I long ago lost a hound, a bay horse, and a turtle dove, and am still on their trail. Many are the travellers I have spoken concerning them, describing their tracks and what calls they answered to. I have met one or two who had heard the hound, and the tramp of the horse, and even seen the dove disappear behind a cloud, and they seemed as anxious to recover them as if they had lost them themselves.

I sometimes try my acquaintances by such tests as this—Who could wear a patch, or two extra seams only, over the knee? Most behave as if they believed that their prospects for life would be ruined if they should do it. It would be easier for them to hobble to town with a broken leg than with a broken pantaloon.

It is an interesting question how far men would retain their relative rank if they were divested of their clothes.

Even in our democratic New England towns the accidental possession of wealth, and its manifestation in dress and equipage alone, obtain for the possessor almost universal respect.

All men want, not something to do with, but something to do, or rather something to be.

The head monkey at Paris puts on a traveller’s cap, and all the monkeys in America do the same.

I cannot believe that our factory system is the best mode by which men may get clothing. The condition of the operatives is becoming every day more like that of the English; and it cannot be wondered at, since, as far as I have heard or observed, the principal object is, not that mankind may be well and honestly clad, but, unquestionably, that corporations may be enriched.

From the cave we have advanced to roofs of palm leaves, of bark and boughs, of linen woven and stretched, of grass and straw, of boards and shingles, of stones and tiles. At last, we know not what it is to live in the open air, and our lives are domestic in more senses than we think. From the hearth the field is a great distance. It would be well, perhaps, if we were to spend more of our days and nights without any obstruction between us and the celestial bodies, if the poet did not speak so much from under a roof, or the saint dwell there so long. Birds do not sing in caves, nor do doves cherish their innocence in dovecots.

Consider first how slight a shelter is absolutely necessary.

Many a man is harassed to death to pay the rent of a larger and more luxurious box who would not have frozen to death in such a box as this.

it is evident that the savage owns his shelter because it costs so little, while the civilized man hires his commonly because he cannot afford to own it; nor can he, in the long run, any better afford to hire.

the cost of a thing is the amount of what I will call life which is required to be exchanged for it, immediately or in the long run.

On applying to the assessors, I am surprised to learn that they cannot at once name a dozen in the town who own their farms free and clear.

What has been said of the merchants, that a very large majority, even ninety-seven in a hundred, are sure to fail, is equally true of the farmers.

And when the farmer has got his house, he may not be the richer but the poorer for it,

only death will set them free.

While civilization has been improving our houses, it has not equally improved the men who are to inhabit them. It has created palaces, but it was not so easy to create noblemen and kings.

But how do the poor minority fare? Perhaps it will be found that just in proportion as some have been placed in outward circumstances above the savage, others have been degraded below him. The luxury of one class is counterbalanced by the indigence of another. On the one side is the palace, on the other are the almshouse and “silent poor.”

I should not need to look farther than to the shanties which everywhere border our railroads, that last improvement in civilization;

Most men appear never to have considered what a house is, and are actually though needlessly poor all their lives because they think that they must have such a one as their neighbors have.

Shall we always study to obtain more of these things, and not sometimes to be content with less?

By the blushes of Aurora and the music of Memnon, what should be man’s morning work in this world?

I would rather sit in the open air, for no dust gathers on the grass, unless where man has broken ground.

we are inclined to spend more on luxury than on safety and convenience, and it threatens without attaining these to become no better than a modern drawing-room, with its divans, and ottomans, and sun-shades, and a hundred other oriental things, which we are taking west with us, invented for the ladies of the harem and the effeminate natives of the Celestial Empire,

The very simplicity and nakedness of man’s life in the primitive ages imply this advantage, at least, that they left him still but a sojourner in nature. When he was refreshed with food and sleep, he contemplated his journey again.

We have adopted Christianity merely as an improved method of agriculture.

gewgaws upon the mantelpiece,

Before we can adorn our houses with beautiful objects the walls must be stripped, and our lives must be stripped, and beautiful housekeeping and beautiful living be laid for a foundation: now, a taste for the beautiful is most cultivated out of doors, where there is no house and no housekeeper.

There were some slight flurries of snow during the days that I worked there; but for the most part when I came out on to the railroad, on my way home, its yellow sand heap stretched away gleaming in the hazy atmosphere, and the rails shone in the spring sun, and I heard the lark and pewee and other birds already come to commence another year with us. They were pleasant spring days, in which the winter of man’s discontent was thawing as well as the earth, and the life that had lain torpid began to stretch itself.

In those days, when my hands were much employed, I read but little, but the least scraps of paper which lay on the ground, my holder, or tablecloth, afforded me as much entertainment, in fact answered the same purpose as the Iliad.

I have thus a tight shingled and plastered house, ten feet wide by fifteen long, and eight-feet posts, with a garret and a closet, a large window on each side, two trap doors, one door at the end, and a brick fireplace opposite.

my excuse is that I brag for humanity rather than for myself; and my shortcomings and inconsistencies do not affect the truth of my statement.

Those things for which the most money is demanded are never the things which the student most wants. Tuition, for instance, is an important item in the term bill, while for the far more valuable education which he gets by associating with the most cultivated of his contemporaries no charge is made.

I mean that they should not play life, or study it merely, while the community supports them at this expensive game, but earnestly live it from beginning to end. How could youths better learn to live than by at once trying the experiment of living?

I should have known more about it. Even the poor student studies and is taught only political economy, while that economy of living which is synonymous with philosophy is not even sincerely professed in our colleges. The consequence is, that while he is reading Adam Smith, Ricardo, and Say, he runs his father in debt irretrievably.

As if the main object were to talk fast and not to talk sensibly.

I have learned that the swiftest traveller is he that goes afoot.

To make a railroad round the world available to all mankind is equivalent to grading the whole surface of the planet.

This spending of the best part of one’s life earning money in order to enjoy a questionable liberty during the least valuable part of it reminds me of the Englishman who went to India to make a fortune first, in order that he might return to England and live the life of a poet.

My farm outgoes for the first season were, for implements, seed, work, etc., $14.72-1/2.

Arthur Young among the rest, that if one would live simply and eat only the crop which he raised, and raise no more than he ate, and not exchange it for an insufficient quantity of more luxurious and expensive things, he would need to cultivate only a few rods of ground, and that it would be cheaper to spade up that than to use oxen to plow it, and to select a fresh spot from time to time than to manure the old, and he could do all his necessary farm work as it were with his left hand at odd hours in the summer; and thus he would not be tied to an ox, or horse, or cow, or pig, as at present.

I was more independent than any farmer in Concord, for I was not anchored to a house or farm, but could follow the bent of my genius, which is a very crooked one, every moment.

I am wont to think that men are not so much the keepers of herds as herds are the keepers of men,

the prosperity of the farmer is still measured by the degree to which the barn overshadows the house.

Most of the stone a nation hammers goes toward its tomb only. It buries itself alive. As for the Pyramids, there is nothing to wonder at in them so much as the fact that so many men could be found degraded enough to spend their lives constructing a tomb for some ambitious booby, whom it would have been wiser and manlier to have drowned in the Nile, and then given his body to the dogs.

By surveying, carpentry, and day-labor of various other kinds in the village in the meanwhile, for I have as many trades as fingers, I had earned $13.34.

Thus I could avoid all trade and barter, so far as my food was concerned, and having a shelter already, it would only remain to get clothing and fuel.

My furniture, part of which I made myself—and the rest cost me nothing of which I have not rendered an account—consisted of a bed, a table, a desk, three chairs, a looking-glass three inches in diameter, a pair of tongs and andirons, a kettle, a skillet, and a frying-pan, a dipper, a wash-bowl, two knives and forks, three plates, one cup, one spoon, a jug for oil, a jug for molasses, and a japanned lamp.

The customs of some savage nations might, perchance, be profitably imitated by us, for they at least go through the semblance of casting their slough annually; they have the idea of the thing, whether they have the reality or not. Would it not be well if we were to celebrate such a “busk,” or “feast of first fruits,” as Bartram describes to have been the custom of the Mucclasse Indians? “When a town celebrates the busk,” says he, “having previously provided themselves with new clothes, new pots, pans, and other household utensils and furniture, they collect all their worn out clothes and other despicable things, sweep and cleanse their houses, squares, and the whole town of their filth, which with all the remaining grain and other old provisions they cast together into one common heap, and consume it with fire. After having taken medicine, and fasted for three days, all the fire in the town is extinguished. During this fast they abstain from the gratification of every appetite and passion whatever. A general amnesty is proclaimed; all malefactors may return to their town.”

“On the fourth morning, the high priest, by rubbing dry wood together, produces new fire in the public square, from whence every habitation in the town is supplied with the new and pure flame.”

They then feast on the new corn and fruits, and dance and sing for three days, “and the four following days they receive visits and rejoice with their friends from neighboring towns who have in like manner purified and prepared themselves.”

For more than five years I maintained myself thus solely by the labor of my hands, and I found that, by working about six weeks in a year, I could meet all the expenses of living.

trade curses everything it handles; and though you trade in messages from heaven, the whole curse of trade attaches to the business.

I did not wish to spend my time in earning rich carpets or other fine furniture, or delicate cookery, or a house in the Grecian or the Gothic style just yet.

For myself I found that the occupation of a day-laborer was the most independent of any, especially as it required only thirty or forty days in a year to support one.

I would have each one be very careful to find out and pursue his own way, and not his father’s or his mother’s or his neighbor’s instead.

The Jesuits were quite balked by those Indians who, being burned at the stake, suggested new modes of torture to their tormentors. Being superior to physical suffering, it sometimes chanced that they were superior to any consolation which the missionaries could offer;

What is a house but a sedes, a seat?—better if a country seat.

for a man is rich in proportion to the number of things which he can afford to let alone.

but before the owner gave me a deed of it, his wife—every man has such a wife—changed her mind and wished to keep it, and he offered me ten dollars to release him.

The real attractions of the Hollowell farm, to me, were: its complete retirement, being, about two miles from the village, half a mile from the nearest neighbor, and separated from the highway by a broad field; its bounding on the river, which the owner said protected it by its fogs from frosts in the spring, though that was nothing to me; the gray color and ruinous state of the house and barn, and the dilapidated fences, which put such an interval between me and the last occupant; the hollow and lichen-covered apple trees, gnawed by rabbits, showing what kind of neighbors I should have; but above all, the recollection I had of it from my earliest voyages up the river, when the house was concealed behind a dense grove of red maples, through which I heard the house-dog bark.

Old Cato, whose “De Re Rusticâ” is my “Cultivator,” says—and the only translation I have seen makes sheer nonsense of the passage—“When you think of getting a farm turn it thus in your mind, not to buy greedily; nor spare your pains to look at it, and do not think it enough to go round it once. The oftener you go there the more it will please you, if it is good.”

I was seated by the shore of a small pond, about a mile and a half south of the village of Concord and somewhat higher than it, in the midst of an extensive wood between that town and Lincoln, and about two miles south of that our only field known to fame, Concord Battle Ground;

They say that characters were engraven on the bathing tub of King Tchingthang to this effect: “Renew thyself completely each day; do it again, and again, and forever again.”

It was Homer’s requiem; itself an Iliad and Odyssey in the air, singing its own wrath and wanderings.

Little is to be expected of that day, if it can be called a day, to which we are not awakened by our Genius, but by the mechanical nudgings of some servitor, are not awakened by our own newly acquired force and aspirations from within, accompanied by the undulations of celestial music, instead of factory bells, and a fragrance filling the air—to a higher life than we fell asleep from; and thus the darkness bear its fruit, and prove itself to be good, no less than the light.

All memorable events, I should say, transpire in morning time and in a morning atmosphere. The Vedas say, “All intelligences awake with the morning.” Poetry and art, and the fairest and most memorable of the actions of men, date from such an hour.

To be awake is to be alive.

We must learn to reawaken and keep ourselves awake, not by mechanical aids, but by an infinite expectation of the dawn, which does not forsake us in our soundest sleep.

I went to the woods because I wished to live deliberately, to front only the essential facts of life, and see if I could not learn what it had to teach, and not, when I came to die, discover that I had not lived. I did not wish to live what was not life, living is so dear; nor did I wish to practise resignation, unless it was quite necessary. I wanted to live deep and suck out all the marrow of life, to live so sturdily and Spartan-like as to put to rout all that was not life, to cut a broad swath and shave close, to drive life into a corner, and reduce it to its lowest terms, and, if it proved to be mean, why then to get the whole and genuine meanness of it, and publish its meanness to the world; or if it were sublime, to know it by experience, and be able to give a true account of it in my next excursion.

Our life is frittered away by detail.

Simplicity, simplicity, simplicity!

In the midst of this chopping sea of civilized life, such are the clouds and storms and quicksands and thousand-and-one items to be allowed for, that a man has to live, if he would not founder and go to the bottom and not make his port at all, by dead reckoning, and he must be a great calculator indeed who succeeds. Simplify, simplify.

If we do not get out sleepers, and forge rails, and devote days and nights to the work, but go to tinkering upon our lives to improve them, who will build railroads?

We do not ride on the railroad; it rides upon us.

Hardly a man takes a half-hour’s nap after dinner, but when he wakes he holds up his head and asks, “What’s the news?” as if the rest of mankind had stood his sentinels.

never dreaming the while that he lives in the dark unfathomed mammoth cave of this world, and has but the rudiment of an eye himself.

And I am sure that I never read any memorable news in a newspaper. If we read of one man robbed, or murdered, or killed by accident, or one house burned, or one vessel wrecked, or one steamboat blown up, or one cow run over on the Western Railroad, or one mad dog killed, or one lot of grasshoppers in the winter—we never need read of another. One is enough. If you are acquainted with the principle, what do you care for a myriad instances and applications? To a philosopher all news, as it is called, is gossip, and they who edit and read it are old women over their tea.

Children, who play life, discern its true law and relations more clearly than men, who fail to live it worthily, but who think that they are wiser by experience, that is, by failure.

We think that that is which appears to be.

Let us spend one day as deliberately as Nature, and not be thrown off the track by every nutshell and mosquito’s wing that falls on the rails. Let us rise early and fast, or break fast, gently and without perturbation; let company come and let company go, let the bells ring and the children cry—determined to make a day of it. Why should we knock under and go with the stream? Let us not be upset and overwhelmed in that terrible rapid and whirlpool called a dinner, situated in the meridian shallows. Weather this danger and you are safe, for the rest of the way is down hill. With unrelaxed nerves, with morning vigor, sail by it, looking another way, tied to the mast like Ulysses. If the engine whistles, let it whistle till it is hoarse for its pains. If the bell rings, why should we run? We will consider what kind of music they are like. Let us settle ourselves, and work and wedge our feet downward through the mud and slush of opinion, and prejudice, and tradition, and delusion, and appearance, that alluvion which covers the globe, through Paris and London, through New York and Boston and Concord, through Church and State, through poetry and philosophy and religion, till we come to a hard bottom and rocks in place, which we can call reality, and say, This is, and no mistake; and then begin, having a point d’appui, below freshet and frost and fire, a place where you might found a wall or a state, or set a lamp-post safely, or perhaps a gauge, not a Nilometer, but a Realometer, that future ages might know how deep a freshet of shams and appearances had gathered from time to time.

I kept Homer’s Iliad on my table through the summer, though I looked at his page only now and then.

The heroic books, even if printed in the character of our mother tongue, will always be in a language dead to degenerate times; and we must laboriously seek the meaning of each word and line, conjecturing a larger sense than common use permits out of what wisdom and valor and generosity we have. The modern cheap and fertile press, with all its translations, has done little to bring us nearer to the heroic writers of antiquity.

Men sometimes speak as if the study of the classics would at length make way for more modern and practical studies; but the adventurous student will always study classics, in whatever language they may be written and however ancient they may be. For what are the classics but the noblest recorded thoughts of man?

We might as well omit to study Nature because she is old. To read well, that is, to read true books in a true spirit, is a noble exercise, and one that will task the reader more than any exercise which the customs of the day esteem. It requires a training such as the athletes underwent, the steady intention almost of the whole life to this object. Books must be read as deliberately and reservedly as they were written.

No wonder that Alexander carried the Iliad with him on his expeditions in a precious casket.

The works of the great poets have never yet been read by mankind, for only great poets can read them.

Moreover, with wisdom we shall learn liberality.

It is time that villages were universities, and their elder inhabitants the fellows of universities, with leisure—if they are, indeed, so well off—to pursue liberal studies the rest of their lives.

Instead of noblemen, let us have noble villages of men.

What is a course of history or philosophy, or poetry, no matter how well selected, or the best society, or the most admirable routine of life, compared with the discipline of looking always at what is to be seen? Will you be a reader, a student merely, or a seer?

my life itself was become my amusement and never ceased to be novel.

Housework was a pleasant pastime. When my floor was dirty, I rose early, and, setting all my furniture out of doors on the grass, bed and bedstead making but one budget, dashed water on the floor, and sprinkled white sand from the pond on it, and then with a broom scrubbed it clean and white; and by the time the villagers had broken their fast the morning sun had dried my house sufficiently to allow me to move in again, and my meditations were almost uninterrupted.

The Fitchburg Railroad touches the pond about a hundred rods south of where I dwell.

The whistle of the locomotive penetrates my woods summer and winter, sounding like the scream of a hawk sailing over some farmer’s yard, informing me that many restless city merchants are arriving within the circle of the town, or adventurous country traders from the other side.

traveling demigod,

I hear the iron horse make the hills echo with his snort like thunder, shaking the earth with his feet, and breathing fire and smoke from his nostrils (what kind of winged horse or fiery dragon they will put into the new Mythology I don’t know),

The stabler of the iron horse was up early this winter morning by the light of the stars amid the mountains, to fodder and harness his steed. Fire, too, was awakened thus early to put the vital heat in him and get him off.

All day the fire-steed flies over the country, stopping only that his master may rest, and I am awakened by his tramp and defiant snort at midnight, when in some remote glen in the woods he fronts the elements incased in ice and snow; and he will reach his stall only with the morning star, to start once more on his travels without rest or slumber. Or perchance, at evening, I hear him in his stable blowing off the superfluous energy of the day, that he may calm his nerves and cool his liver and brain for a few hours of iron slumber.

The startings and arrivals of the cars are now the epochs in the village day. They go and come with such regularity and precision, and their whistle can be heard so far, that the farmers set their clocks by them, and thus one well-conducted institution regulates a whole country.

Do they not talk and think faster in the depot than they did in the stage-office?

On this morning of the Great Snow, perchance, which is still raging and chilling men’s blood, I hear the muffled tone of their engine bell from out the fog bank of their chilled breath, which announces that the cars are coming, without long delay, notwithstanding the veto of a New England northeast snow-storm, and I behold the plowmen covered with snow and rime, their heads peering, above the mould-board which is turning down other than daisies and the nests of field mice, like bowlders of the Sierra Nevada, that occupy an outside place in the universe.

I will not have my eyes put out and my ears spoiled by its smoke and steam and hissing.

Now that the cars are gone by and all the restless world with them, and the fishes in the pond no longer feel their rumbling, I am more alone than ever.

They are the spirits, the low spirits and melancholy forebodings, of fallen souls that once in human shape night-walked the earth and did the deeds of darkness, now expiating their sins with their wailing hymns or threnodies in the scenery of their transgressions.

I kept neither dog, cat, cow, pig, nor hens, so that you would have said there was a deficiency of domestic sounds; neither the churn, nor the spinning-wheel, nor even the singing of the kettle, nor the hissing of the urn, nor children crying, to comfort one. An old-fashioned man would have lost his senses or died of ennui before this.

Instead of no path to the front-yard gate in the Great Snow—no gate—no front-yard—and no path to the civilized world.

My nearest neighbor is a mile distant, and no house is visible from any place but the hill-tops within half a mile of my own.

But for the most part it is as solitary where I live as on the prairies. It is as much Asia or Africa as New England.

There can be no very black melancholy to him who lives in the midst of Nature and has his senses still.

But I was at the same time conscious of a slight insanity in my mood, and seemed to foresee my recovery.

I have found that no exertion of the legs can bring two minds much nearer to one another. What do we want most to dwell near to? Not to many men surely, the depot, the post-office, the bar-room, the meeting-house, the school-house, the grocery, Beacon Hill, or the Five Points, where men most congregate, but to the perennial source of our life, whence in all our experience we have found that to issue, as the willow stands near the water and sends out its roots in that direction.

With thinking we may be beside ourselves in a sane sense. By a conscious effort of the mind we can stand aloof from actions and their consequences; and all things, good and bad, go by us like a torrent. We are not wholly involved in Nature. I may be either the driftwood in the stream, or Indra in the sky looking down on it. I may be affected by a theatrical exhibition; on the other hand, I may not be affected by an actual event which appears to concern me much more. I only know myself as a human entity; the scene, so to speak, of thoughts and affections; and am sensible of a certain doubleness by which I can stand as remote from myself as from another. However intense my experience, I am conscious of the presence and criticism of a part of me, which, as it were, is not a part of me, but spectator, sharing no experience, but taking note of it, and that is no more I than it is you. When the play, it may be the tragedy, of life is over, the spectator goes his way. It was a kind of fiction, a work of the imagination only, so far as he was concerned. This doubleness may easily make us poor neighbors and friends sometimes.

We are for the most part more lonely when we go abroad among men than when we stay in our chambers.

Am I not partly leaves and vegetable mould myself?

I am naturally no hermit,

I had three chairs in my house; one for solitude, two for friendship, three for society.

I have had twenty-five or thirty souls, with their bodies, at once under my roof, and yet we often parted without being aware that we had come very near to one another.

One inconvenience I sometimes experienced in so small a house, the difficulty of getting to a sufficient distance from my guest when we began to utter the big thoughts in big words. You want room for your thoughts to get into sailing trim and run a course or two before they make their port.

Also, our sentences wanted room to unfold and form their columns in the interval.

but if we speak reservedly and thoughtfully, we want to be farther apart, that all animal heat and moisture may have a chance to evaporate.

My “best” room, however, my withdrawing room, always ready for company, on whose carpet the sun rarely fell, was the pine wood behind my house.

You need not rest your reputation on the dinners you give.

I had withdrawn so far within the great ocean of solitude, into which the rivers of society empty, that for the most part, so far as my needs were concerned, only the finest sediment was deposited around me.

He had been instructed only in that innocent and ineffectual way in which the Catholic priests teach the aborigines, by which the pupil is never educated to the degree of consciousness, but only to the degree of trust and reverence, and a child is not made a man, but kept a child.

He was so genuine and unsophisticated that no introduction would serve to introduce him, more than if you introduced a woodchuck to your neighbor.

and I did not know whether he was as wise as Shakespeare or as simply ignorant as a child, whether to suspect him of a fine poetic consciousness or of stupidity.

Men who did not know when their visit had terminated, though I went about my business again, answering them from greater and greater remoteness.

Girls and boys and young women generally seemed glad to be in the woods. They looked in the pond and at the flowers, and improved their time. Men of business, even farmers, thought only of solitude and employment, and of the great distance at which I dwelt from something or other; and though they said that they loved a ramble in the woods occasionally, it was obvious that they did not.

My enemies are worms, cool days, and most of all woodchucks. The last have nibbled for me a quarter of an acre clean. But what right had I to oust johnswort and the rest, and break up their ancient herb garden?

it appeared by the arrowheads which I turned up in hoeing, that an extinct nation had anciently dwelt here and planted corn and beans ere white men came to clear the land, and so, to some extent, had exhausted the soil for this very crop.

Mine was, as it were, the connecting link between wild and cultivated fields; as some states are civilized, and others half-civilized, and others savage or barbarous, so my field was, though not in a bad sense, a half-cultivated field. They were beans cheerfully returning to their wild and primitive state that I cultivated, and my hoe played the Ranz des Vaches for them.

A long war, not with cranes, but with weeds, those Trojans who had sun and rain and dews on their side. Daily the beans saw me come to their rescue armed with a hoe, and thin the ranks of their enemies, filling up the trenches with weedy dead. Many a lusty crest-waving Hector, that towered a whole foot above his crowding comrades, fell before my weapon and rolled in the dust.

Ancient poetry and mythology suggest, at least, that husbandry was once a sacred art; but it is pursued with irreverent haste and heedlessness by us, our object being to have large farms and large crops merely. We have no festival, nor procession, nor ceremony, not excepting our cattle-shows and so-called Thanksgivings, by which the farmer expresses a sense of the sacredness of his calling, or is reminded of its sacred origin.

of regarding the soil as property, or the means of acquiring property chiefly, the landscape is deformed, husbandry is degraded with us, and the farmer leads the meanest of lives. He knows Nature but as a robber. Cato says that the profits of agriculture are particularly pious or just (maximeque pius quaestus), and according to Varro the old Romans “called the same earth Mother and Ceres, and thought that they who cultivated it led a pious and useful life, and that they alone were left of the race of King Saturn.”

As I walked in the woods to see the birds and squirrels, so I walked in the village to see the men and boys; instead of the wind among the pines I heard the carts rattle.

I observed that the vitals of the village were the grocery, the bar-room, the post-office, and the bank; and, as a necessary part of the machinery, they kept a bell, a big gun, and a fire-engine, at convenient places; and the houses were so arranged as to make the most of mankind, in lanes and fronting one another, so that every traveller had to run the gauntlet, and every man, woman, and child might get a lick at him.

Often in a snow-storm, even by day, one will come out upon a well-known road and yet find it impossible to tell which way leads to the village.

for a man needs only to be turned round once with his eyes shut in this world to be lost—do we appreciate the vastness and strangeness of nature. Every man has to learn the points of compass again as often as he awakes, whether from sleep or any abstraction. Not till we are lost, in other words not till we have lost the world, do we begin to find ourselves, and realize where we are and the infinite extent of our relations.

One afternoon, near the end of the first summer, when I went to the village to get a shoe from the cobbler’s, I was seized and put into jail, because, as I have elsewhere related, I did not pay a tax to, or recognize the authority of, the State which buys and sells men, women, and children, like cattle, at the door of its senate-house. I had gone down to the woods for other purposes. But, wherever a man goes, men will pursue and paw him with their dirty institutions, and, if they can, constrain him to belong to their desperate odd-fellow society. It is true, I might have resisted forcibly with more or less effect, might have run “amok” against society; but I preferred that society should run “amok” against me, it being the desperate party. However, I was released the next day, obtained my mended shoe, and returned to the woods in season to get my dinner of huckleberries on Fair Haven Hill. I was never molested by any person but those who represented the State.

I am convinced, that if all men were to live as simply as I then did, thieving and robbery would be unknown. These take place only in communities where some have got more than is sufficient while others have not enough.

Love virtue, and the people will be virtuous.

There are few traces of man’s hand to be seen. The water laves the shore as it did a thousand years ago.

A lake is the landscape’s most beautiful and expressive feature. It is earth’s eye; looking into which the beholder measures the depth of his own nature.

How peaceful the phenomena of the lake!

In such a day, in September or October, Walden is a perfect forest mirror, set round with stones as precious to my eye as if fewer or rarer. Nothing so fair, so pure, and at the same time so large, as a lake, perchance, lies on the surface of the earth. Sky water. It needs no fence.

Sometimes I rambled to pine groves, standing like temples, or like fleets at sea, full-rigged, with wavy boughs, and rippling with light, so soft and green and shady that the Druids would have forsaken their oaks to worship in them; or to the cedar wood beyond Flint’s Pond, where the trees, covered with hoary blue berries, spiring higher and higher, are fit to stand before Valhalla,

But the only true America is that country where you are at liberty to pursue such a mode of life as may enable you to do without these, and where the state does not endeavor to compel you to sustain the slavery and war and other superfluous expenses which directly or indirectly result from the use of such things.

the rain, bending my steps again to the pond, my haste to catch pickerel, wading in retired meadows, in sloughs and bog-holes, in forlorn and savage places, appeared for an instant trivial to me who had been sent to school and college; but as I ran down the hill toward the reddening west, with the rainbow over my shoulder, and some faint tinkling sounds borne to my ear through the cleansed air, from I know not what quarter, my Good Genius seemed to say—Go fish and hunt far and wide day by day—farther and wider—and rest thee by many brooks and hearth-sides without misgiving. Remember thy Creator in the days of thy youth. Rise free from care before the dawn, and seek adventures. Let the noon find thee by other lakes, and the night overtake thee everywhere at home.

I caught a glimpse of a woodchuck stealing across my path, and felt a strange thrill of savage delight, and was strongly tempted to seize and devour him raw; not that I was hungry then, except for that wildness which he represented. Once or twice, however, while I lived at the pond, I found myself ranging the woods, like a half-starved hound, with a strange abandonment, seeking some kind of venison which I might devour, and no morsel could have been too savage for me. The wildest scenes had become unaccountably familiar. I found in myself, and still find, an instinct toward a higher, or, as it is named, spiritual life, as do most men, and another toward a primitive rank and savage one, and I reverence them both.

I like sometimes to take rank hold on life and spend my day more as the animals

Chaucer’s nun,

No humane being, past the thoughtless age of boyhood, will wantonly murder any creature which holds its life by the same tenure that he does.

but always when I have done I feel that it would have been better if I had not fished. I think that I do not mistake. It is a faint

There is unquestionably this instinct in me which belongs to the lower orders of creation;

Beside, there is something essentially unclean about this diet and all flesh, and I began to see where housework commences, and whence the endeavor, which costs so much, to wear a tidy and respectable appearance each day, to keep the house sweet and free from all ill odors and sights.

The practical objection to animal food in my case was its uncleanness; and besides, when I had caught and cleaned and cooked and eaten my fish, they seemed not to have fed me essentially. It was insignificant and unnecessary, and cost more than it came to.

The repugnance to animal food is not the effect of experience, but is an instinct.

Is it not a reproach that man is a carnivorous animal? True, he can and does live, in a great measure, by preying on other animals; but this is a miserable way—as any one who will go to snaring rabbits, or slaughtering lambs, may learn—and he will be regarded as a benefactor of his race who shall teach man to confine himself to a more innocent and wholesome diet. Whatever my own practice may be, I have no doubt that it is a part of the destiny of the human race, in its gradual improvement, to leave off eating animals, as surely as the savage tribes have left off eating each other when they came in contact with the more civilized.

I could sometimes eat a fried rat with a good relish, if it were necessary.

I believe that water is the only drink for a wise man;

Such apparently slight causes destroyed Greece and Rome, and will destroy England and America.

Our whole life is startlingly moral. There is never an instant’s truce between virtue and vice. Goodness is the only investment that never fails. In the music of the harp which trembles round the world it is the insisting on this which thrills us. The harp is the travelling patterer for the Universe’s Insurance Company, recommending its laws, and our little goodness is all the assessment that we pay. Though the youth at last grows indifferent, the laws of the universe are not indifferent, but are forever on the side of the most sensitive. Listen to every zephyr for some reproof, for it is surely there, and he is unfortunate who does not hear it. We cannot touch a string or move a stop but the charming moral transfixes us. Many an irksome noise, go a long way off, is heard as music, a proud, sweet satire on the meanness of our lives.

We are conscious of an animal in us, which awakens in proportion as our higher nature slumbers. It is reptile and sensual, and perhaps cannot be wholly expelled; like the worms which, even in life and health, occupy our bodies.

He is blessed who is assured that the animal is dying out in him day by day, and the divine being established.

Every man is the builder of a temple, called his body, to the god he worships, after a style purely his own, nor can he get off by hammering marble instead. We are all sculptors and painters, and our material is our own flesh and blood and bones. Any nobleness begins at once to refine a man’s features, any meanness or sensuality to imbrute them.

A bellum, a war between two races of ants, the red always pitted against the black, and frequently two red ones to one black. The legions of these Myrmidons covered all the hills and vales in my wood-yard, and the ground was already strewn with the dead and dying, both red and black. It was the only battle which I have ever witnessed, the only battle-field I ever trod while the battle was raging; internecine war; the red republicans on the one hand, and the black imperialists on the other.

It was evident that their battle-cry was “Conquer or die.”

whose mother had charged him to return with his shield or upon it. Or perchance he was some Achilles, who had nourished his wrath apart, and had now come to avenge or rescue his Patroclus.

I was myself excited somewhat even as if they had been men. The more you think of it, the less the difference.

I have no doubt that it was a principle they fought for, as much as our ancestors, and not to avoid a three-penny tax on their tea; and the results of this battle will be as important and memorable to those whom it concerns as those of the battle of Bunker Hill, at least.

I never learned which party was victorious, nor the cause of the war; but I felt for the rest of that day as if I had had my feelings excited and harrowed by witnessing the struggle, the ferocity and carnage, of a human battle before my door.

why should not a poet’s cat be winged as well as his horse?

When I came to build my chimney I studied masonry.

All the attractions of a house were concentrated in one room; it was kitchen, chamber, parlor, and keeping-room; and whatever satisfaction parent or child, master or servant, derive from living in a house, I enjoyed it all.

In 1845 Walden froze entirely over for the first time on the night of the 22d of December,

The snow had already covered the ground since the 25th of November, and surrounded me suddenly with the scenery of winter. I withdrew yet farther into my shell, and endeavored to keep a bright fire both within my house and within my breast. My employment out of doors now was to collect the dead wood in the forest, bringing it in my hands or on my shoulders, or sometimes trailing a dead pine tree under each arm to my shed.

I too gave notice to the various wild inhabitants of Walden vale, by a smoky streamer from my chimney, that I was awake.—

Light-winged Smoke, Icarian bird,
Melting thy pinions in thy upward flight,
Lark without song, and messenger of dawn,
Circling above the hamlets as thy nest;
Or else, departing dream, and shadowy form
Of midnight vision, gathering up thy skirts;
By night star-veiling, and by day
Darkening the light and blotting out the sun;
Go thou my incense upward from this hearth,
And ask the gods to pardon this clear flame.

pranks of a demon not distinctly named in old mythology, who has acted a prominent and astounding part in our New England life, and deserves, as much as any mythological character, to have his biography written one day; who first comes in the guise of a friend or hired man, and then robs and murders the whole family—New-England Rum.

I am not aware that any man has ever built on the spot which I occupy. Deliver me from a city built on the site of a more ancient city, whose materials are ruins, whose gardens cemeteries.

The Great Snow!

For a week of even weather I took exactly the same number of steps, and of the same length, coming and going, stepping deliberately and with the precision of a pair of dividers in my own deep tracks—to such routine the winter reduces us—yet often they were filled with heaven’s own blue.

What is a country without rabbits and partridges? They are among the most simple and indigenous animal products; ancient and venerable families known to antiquity as to modern times; of the very hue and substance of Nature, nearest allied to leaves and to the ground—and to one another; it is either winged or it is legged.

Forward! Nature puts no question and answers none which we mortals ask. She has long ago taken her resolution. “O Prince, our eyes contemplate with admiration and transmit to the soul the wonderful and varied spectacle of this universe. The night veils without doubt a part of this glorious creation; but day comes to reveal to us this great work, which extends from earth even into the plains of the ether.”

If we knew all the laws of Nature, we should need only one fact, or the description of one actual phenomenon, to infer all the particular results at that point. Now we know only a few laws, and our result is vitiated, not, of course, by any confusion or irregularity in Nature, but by our ignorance of essential elements in the calculation. Our notions of law and harmony are commonly confined to those instances which we detect; but the harmony which results from a far greater number of seemingly conflicting, but really concurring, laws, which we have not detected, is still more wonderful. The particular laws are as our points of view, as, to the traveller, a mountain outline varies with every step, and it has an infinite number of profiles, though absolutely but one form. Even when cleft or bored through it is not comprehended in its entireness.

To speak literally, a hundred Irishmen, with Yankee overseers, came from Cambridge every day to get out the ice.

Thus it appears that the sweltering inhabitants of Charleston and New Orleans, of Madras and Bombay and Calcutta, drink at my well. In the morning I bathe my intellect in the stupendous and cosmogonal philosophy of the Bhagvat-Geeta, since whose composition years of the gods have elapsed, and in comparison with which our modern world and its literature seem puny and trivial; and I doubt if that philosophy is not to be referred to a previous state of existence, so remote is its sublimity from our conceptions.

The pure Walden water is mingled with the sacred water of the Ganges. With favoring winds it is wafted past the site of the fabulous islands of Atlantis and the Hesperides, makes the periplus of Hanno, and, floating by Ternate and Tidore and the mouth of the Persian Gulf, melts in the tropic gales of the Indian seas, and is landed in ports of which Alexander only heard the names.

The phenomena of the year take place every day in a pond on a small scale. Every morning, generally speaking, the shallow water is being warmed more rapidly than the deep, though it may not be made so warm after all, and every evening it is being cooled more rapidly until the morning. The day is an epitome of the year. The night is the winter, the morning and evening are the spring and fall, and the noon is the summer. The cracking and booming of the ice indicate a change of temperature.

Few phenomena gave me more delight than to observe the forms which thawing sand and clay assume in flowing down the sides of a deep cut on the railroad through which I passed on my way to the village, a phenomenon not very common on so large a scale, though the number of freshly exposed banks of the right material must have been greatly multiplied since railroads were invented. The material was sand of every degree of fineness and of various rich colors, commonly mixed with a little clay. When the frost comes out in the spring, and even in a thawing day in the winter, the sand begins to flow down the slopes like lava, sometimes bursting out through the snow and overflowing it where no sand was to be seen before. Innumerable little streams overlap and interlace one with another, exhibiting a sort of hybrid product, which obeys half way the law of currents, and half way that of vegetation. As it flows it takes the forms of sappy leaves or vines, making heaps of pulpy sprays a foot or more in depth, and resembling, as you look down on them, the laciniated, lobed, and imbricated thalluses of some lichens; or you are reminded of coral, of leopard’s paws or birds’ feet, of brains or lungs or bowels, and excrements of all kinds. It is a truly grotesque vegetation, whose forms and color we see imitated in bronze, a sort of architectural foliage more ancient and typical than acanthus, chiccory, ivy, vine, or any vegetable leaves; destined perhaps, under some circumstances, to become a puzzle to future geologists. The whole cut impressed me as if it were a cave with its stalactites laid open to the light. The various shades of the sand are singularly rich and agreeable, embracing the different iron colors, brown, gray, yellowish, and reddish. 
When the flowing mass reaches the drain at the foot of the bank it spreads out flatter into strands, the separate streams losing their semi-cylindrical form and gradually becoming more flat and broad, running together as they are more moist, till they form an almost flat sand, still variously and beautifully shaded, but in which you can trace the original forms of vegetation; till at length, in the water itself, they are converted into banks, like those formed off the mouths of rivers, and the forms of vegetation are lost in the ripple marks on the bottom.

The whole bank, which is from twenty to forty feet high, is sometimes overlaid with a mass of this kind of foliage, or sandy rupture, for a quarter of a mile on one or both sides, the produce of one spring day. What makes this sand foliage remarkable is its springing into existence thus suddenly. When I see on the one side the inert bank—for the sun acts on one side first—and on the other this luxuriant foliage, the creation of an hour, I am affected as if in a peculiar sense I stood in the laboratory of the Artist who made the world and me—had come to where he was still at work, sporting on this bank, and with excess of energy strewing his fresh designs about. I feel as if I were nearer to the vitals of the globe, for this sandy overflow is something such a foliaceous mass as the vitals of the animal body. You find thus in the very sands an anticipation of the vegetable leaf. No wonder that the earth expresses itself outwardly in leaves, it so labors with the idea inwardly. The atoms have already learned this law, and are pregnant by it. The overhanging leaf sees here its prototype. Internally, whether in the globe or animal body, it is a moist thick lobe, a word especially applicable to the liver and lungs and the leaves of fat (λείβω, labor, lapsus, to flow or slip downward, a lapsing; λοβός, globus, lobe, globe; also lap, flap, and many other words); externally a dry thin leaf, even as the f and v are a pressed and dried b. The radicals of lobe are lb, the soft mass of the b (single lobed, or B, double lobed), with the liquid l behind it pressing it forward. In globe, glb, the guttural g adds to the meaning the capacity of the throat. The feathers and wings of birds are still drier and thinner leaves. Thus, also, you pass from the lumpish grub in the earth to the airy and fluttering butterfly. The very globe continually transcends and translates itself, and becomes winged in its orbit.
Even ice begins with delicate crystal leaves, as if it had flowed into moulds which the fronds of waterplants have impressed on the watery mirror. The whole tree itself is but one leaf, and rivers are still vaster leaves whose pulp is intervening earth, and towns and cities are the ova of insects in their axils.

When the sun withdraws the sand ceases to flow, but in the morning the streams will start once more and branch and branch again into a myriad of others. You here see perchance how blood-vessels are formed. If you look closely you observe that first there pushes forward from the thawing mass a stream of softened sand with a drop-like point, like the ball of the finger, feeling its way slowly and blindly downward, until at last with more heat and moisture, as the sun gets higher, the most fluid portion, in its effort to obey the law to which the most inert also yields, separates from the latter and forms for itself a meandering channel or artery within that, in which is seen a little silvery stream glancing like lightning from one stage of pulpy leaves or branches to another, and ever and anon swallowed up in the sand. It is wonderful how rapidly yet perfectly the sand organizes itself as it flows, using the best material its mass affords to form the sharp edges of its channel. Such are the sources of rivers. In the silicious matter which the water deposits is perhaps the bony system, and in the still finer soil and organic matter the fleshy fiber or cellular tissue. What is a man but a mass of thawing clay.

this suggests at least that Nature has some bowels, and there again is mother of humanity. This is the frost coming out of the ground; this is Spring. It precedes the green and flowery spring, as mythology precedes regular poetry. I know of nothing more purgative of winter fumes and indigestions.

The earth is not a mere fragment of dead history, stratum upon stratum like the leaves of a book, to be studied by geologists and antiquaries chiefly, but living poetry like the leaves of a tree, which precede flowers and fruit—not a fossil earth, but a living earth; compared with whose great central life all animal and vegetable life is merely parasitic.

Many of the phenomena of Winter are suggestive of an inexpressible tenderness and fragile delicacy. We are accustomed to hear this king described as a rude and boisterous tyrant; but with the gentleness of a lover he adorns the tresses of Summer.

The first sparrow of spring!

Walden was dead and is alive again.

The change from storm and winter to serene and mild weather, from dark and sluggish hours to bright and elastic ones, is a memorable crisis which all things proclaim. It is seemingly instantaneous at last. Suddenly an influx of light filled my house, though the evening was at hand, and the clouds of winter still overhung it, and the eaves were dripping with sleety rain. I looked out the window, and lo! where yesterday was cold gray ice there lay the transparent pond already calm and full of hope as in a summer evening, reflecting a summer evening sky in its bosom, though none was visible overhead, as if it had intelligence with some remote horizon.

We need the tonic of wildness—to wade sometimes in marshes where the bittern and the meadow-hen lurk, and hear the booming of the snipe; to smell the whispering sedge where only some wilder and more solitary fowl builds her nest, and the mink crawls with its belly close to the ground. At the same time that we are earnest to explore and learn all things, we require that all things be mysterious and unexplorable, that land and sea be infinitely wild, unsurveyed and unfathomed by us because unfathomable. We can never have enough of nature.

I finally left Walden September 6th, 1847.

“Direct your eye right inward, and you’ll find
A thousand regions in your mind
Yet undiscovered. Travel them, and be
Expert in home-cosmography.”

it is easier to sail many thousand miles through cold and storm and cannibals, in a government ship, with five hundred men and boys to assist one, than it is to explore the private sea, the Atlantic and Pacific Ocean of one’s being alone.

I left the woods for as good a reason as I went there. Perhaps it seemed to me that I had several more lives to live, and could not spare any more time for that one.

How worn and dusty, then, must be the highways of the world, how deep the ruts of tradition and conformity!

I learned this, at least, by my experiment: that if one advances confidently in the direction of his dreams, and endeavors to live the life which he has imagined, he will meet with a success unexpected in common hours.

he will live with the license of a higher order of beings. In proportion as he simplifies his life, the laws of the universe will appear less complex, and solitude will not be solitude, nor poverty poverty, nor weakness weakness.

Shall a man go and hang himself because he belongs to the race of pygmies, and not be the biggest pygmy that he can? Let every one mind his own business, and endeavor to be what he was made.

Why should we be in such desperate haste to succeed and in such desperate enterprises? If a man does not keep pace with his companions, perhaps it is because he hears a different drummer. Let him step to the music which he hears, however measured or far away. It is not important that he should mature as soon as an apple tree or an oak. Shall he turn his spring into summer?

However mean your life is, meet it and live it; do not shun it and call it hard names. It is not so bad as you are. It looks poorest when you are richest. The fault-finder will find faults even in paradise. Love your life, poor as it is. You may perhaps have some pleasant, thrilling, glorious hours, even in a poorhouse.

Things do not change; we change.

Humility like darkness reveals the heavenly lights.

It is life near the bone where it is sweetest.

Rather than love, than money, than fame, give me truth.

It Doesn’t Have to Be Crazy at Work

Jason Fried, David Heinemeier Hansson
Business & Economics
October 2, 2018

This is a quick must-read.

Chaos should not be the norm.

They run projects in cycles of six weeks on, two weeks off: a six-week sprint/release cycle, then two weeks to review and plan the next six.

They don't use goals. They don't set targets for the sake of setting a target. They don't have a product road map. Promises lead to rushing. Promises pile up like debt and they accrue interest.

They opt for depth, not breadth.

An unhealthy obsession with growth at any cost sets towering, unrealistic expectations that stress people out.

There’s not more work to be done all of a sudden. The problem is that there’s hardly any uninterrupted, dedicated time to do it. People are working more but getting less done. It doesn’t add up—until you account for the majority of time being wasted on things that don’t matter.

The answer isn’t more hours, it’s less bullshit. Less waste, not more production. And far fewer distractions, less always-on anxiety, and avoiding stress.

The modern workplace is sick. Chaos should not be the natural state at work. Anxiety isn’t a prerequisite for progress. Sitting in meetings all day isn’t required for success. These are all perversions of work — side effects of broken models and follow-the-lemming-off-the-cliff worst practices.

It begins with this idea: Your company is a product.

Like product development, progress is achieved through iteration.

But when you think of the company as a product, you ask different questions: Do people who work here know how to use the company? Is it simple? Complex? Is it obvious how it works? What’s fast about it? What’s slow about it? Are there bugs? What’s broken that we can fix quickly and what’s going to take a long time?

We work on projects for six weeks at a time, then we take two weeks off from scheduled work to roam and decompress.

Calm is a destination

What’s our market share? Don’t know, don’t care. It’s irrelevant. Do we have enough customers paying us enough money to cover our costs and generate a profit? Yes. Is that number increasing every year? Yes. That’s good enough for us. Doesn’t matter if we’re 2 percent of the market or 4 percent or 75 percent. What matters is that we have a healthy business with sound economics that work for us. Costs under control, profitable sales.

Mark Twain nailed it: “Comparison is the death of joy.”

we don’t do goals. At all. No customer-count goals, no sales goals, no retention goals, no revenue goals, no specific profitability goals (other than to be profitable). Seriously.

Do we want to make things better? All the time. But do we want to maximize “better” through constantly chasing goals? No thanks.

Goals are fake. Nearly all of them are artificial targets set for the sake of setting targets. These made-up numbers then function as a source of unnecessary stress until they’re either achieved or abandoned. And when that happens, you’re supposed to pick new ones and start stressing again.

Short-term planning has gotten a bum rap, but we think it’s undeserved. Every six weeks or so, we decide what we’ll be working on next. And that’s the only plan we have. Anything further out is considered a “maybe, we’ll see.”

Seeing a bad idea through just because at one point it sounded like a good idea is a tragic waste of energy and talent.

Oftentimes it’s not breaking out, but diving in, digging deeper, staying in your rabbit hole that brings the biggest gains. Depth, not breadth, is where mastery is often found.

we don’t cram. We don’t rush. We don’t stuff. We work at a relaxed, sustainable pace. And what doesn’t get done in 40 hours by Friday at 5 picks up again Monday morning at 9.

If you can’t fit everything you want to do within 40 hours per week, you need to get better at picking what to do, not work longer hours. Most of what we think we have to do, we don’t have to do at all. It’s a choice, and often it’s a poor one.

We don’t have status meetings at Basecamp.

Eight people in a room for an hour doesn’t cost one hour, it costs eight hours.

And between all those context switches and attempts at multitasking, you have to add buffer time. Time for your head to leave the last thing and get into the next thing.

A great work ethic isn’t about working whenever you’re called upon. It’s about doing what you say you’re going to do, putting in a fair day’s work, respecting the work, respecting the customer, respecting coworkers, not wasting time, not creating unnecessary work for other people, and not being a bottleneck. Work ethic is about being a fundamentally good person that others can count on and enjoy working with.

No one can see anyone else’s calendar at Basecamp.

We don’t require anyone to broadcast their whereabouts or availability at Basecamp. No butts-in-seats requirement for people at the office, no virtual-status indicator when they’re working remotely.

At Basecamp, we’ve tried to create a culture of eventual response rather than immediate response. One where everyone doesn’t lose their shit if the answer to a nonurgent question arrives three hours later.

People should be missing out! Most people should miss out on most things most of the time.

JOMO! The joy of missing out. It’s JOMO that lets you turn off the firehose of information and chatter and interruptions to actually get the right shit done.

One way we push back against this at Basecamp is by writing monthly “Heartbeats.” Summaries of the work and progress that’s been done and had by a team, written by the team lead, to the entire company. All the minutiae boiled down to the essential points others would care to know. Just enough to keep someone in the loop without having to internalize dozens of details that don’t matter.

The best companies aren’t families. They’re supporters of families. Allies of families. They’re there to provide healthy, fulfilling work environments so that when workers shut their laptops at a reasonable hour, they’re the best husbands, wives, parents, siblings, and children they can be.

If the boss really wants to know what’s going on, the answer is embarrassingly obvious: They have to ask! Not vague, self-congratulatory bullshit questions like “What can we do even better?” but the hard ones like “What’s something nobody dares to talk about?” or “Are you afraid of anything at work?” or “Is there anything you worked on recently that you wish you could do over?” Or even more specific ones like “What do you think we could have done differently to help Jane succeed?” or “What advice would you give before we start on the big website redesign project?”

Posing real, pointed questions is the only way to convey that it’s safe to provide real answers. And even then it’s going to take a while. Maybe you get 20 percent of the story the first time you ask, then 50 percent after a while, and if you’ve really nailed it as a trustworthy boss, you may get to 80 percent. Forget about ever getting the whole story.

The further away you are from the fruit, the lower it looks. Once you get up close, you see it’s quite a bit higher than you thought. We assume that picking it will be easy only because we’ve never tried to do it before.

Declaring that an unfamiliar task will yield low-hanging fruit is almost always an admission that you have little insight about what you’re setting out to do.

And any estimate of how much work it’ll take to do something you’ve never tried before is likely to be off by orders of magnitude.

All-nighters are red flags, not green lights. If people are pulling them, pull back. Nearly everything can wait until morning.

It doesn’t matter how good you are at the job if you’re an ass. Nothing you can do for us would make up for that.

We don’t need 50 twentysomething clones in hoodies with all of the same cultural references. We do better work, broader work, and more considered work when the team reflects the diversity of our customer base. “Not exactly what we already have” is a quality in itself.

For example, when we’re choosing a new designer, we hire each of the finalists for a week, pay them $1,500 for that time, and ask them to do a sample project for us. Then we have something to evaluate that’s current, real, and completely theirs.

What we don’t do are riddles, blackboard problem solving, or fake “come up with the answer on the spot” scenarios. We don’t answer riddles all day, we do real work. So we give people real work to do and the appropriate time to do it in. It’s the same kind of work they’d be doing if they got the job.

They hire someone based on a list of previous qualifications, not on their current abilities.

Stop thinking of talent as something to be plundered and start thinking of it as something to be grown and nurtured, the seeds for which are readily available all over the globe for companies willing to do the work.

We no longer negotiate salaries or raises at Basecamp. Everyone in the same role at the same level is paid the same. Equal work, equal pay.

We assess new hires on a scale that goes from junior programmer, to programmer, to senior programmer, to lead programmer, to principal programmer (or designer or customer support or ops or whatever role we’re hiring for). We use the same scale to assess when someone is in line for a promotion. Every employee, new or old, fits into a level on the scale, and there is a salary pegged to each level per role. Once every year we review market rates and issue raises automatically. Our target is to pay everyone at the company at the top 10 percent of the market regardless of their role.
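The leveling scheme above amounts to a lookup table keyed by role and level. A minimal sketch, with invented dollar figures (the text specifies only the level names, equal pay per level, and an automatic annual market review):

```python
# Salary bands per role: everyone at the same level in the same role is
# paid the same. Level names come from the text; the dollar amounts are
# placeholders invented purely for illustration.
SALARY_BANDS = {
    "programmer": {
        "junior": 90_000,
        "programmer": 120_000,
        "senior": 150_000,
        "lead": 180_000,
        "principal": 210_000,
    },
}

def salary(role: str, level: str) -> int:
    # Equal work, equal pay: the band alone determines the number,
    # with no individual negotiation.
    return SALARY_BANDS[role][level]

def annual_market_review(bands: dict, market_adjustment: float) -> dict:
    # Once a year, every band moves by the same market-driven factor,
    # so raises are issued automatically rather than negotiated.
    return {
        role: {lvl: round(pay * market_adjustment) for lvl, pay in levels.items()}
        for role, levels in bands.items()
    }
```

A 5 percent market adjustment would move the hypothetical junior band from $90,000 to $94,500 for every junior programmer at once, which is the point: the band changes, not the individual deal.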

We encourage remote work and have many employees who’ve lived all over the world while continuing to work for Basecamp.

There’s a fountain of happiness and productivity in working with a stable crew. It’s absolutely key to how we’re able to do so much with so few at Basecamp. We’re baffled that such a competitive advantage isn’t more diligently sought.

Rather than thinking of it as an office, we think of it as a library. In fact, we call our guiding principle Library Rules.

In our office, if someone’s at their desk, we assume they’re deep in thought and focused on their work. That means we don’t walk up to them and interrupt them. It also means conversations should be kept to a whisper so as not to disturb anyone who could possibly hear you. Quiet runs the show.

To account for the need for the occasional full-volume collaboration, we’ve designated a handful of small rooms in the center of the office where people can go if they need to work on something together (or make a private call).

In our industry, it’s become common practice to offer “unlimited vacation days.” It sounds so appealing! But peel back the label and it’s a pretty rotten practice.

Unlimited vacation is a stressful benefit because it’s not truly unlimited.

“Basecamp offers three weeks of paid vacation, a few extra personal days to use at your discretion, and the standard national holidays every year. This is a guideline, so if you need a couple extra days, no problem. We don’t track your days off, we use the honor system. Just make sure to check with your team before taking any extended absence, so they’re not left in the lurch.”

If you don’t clearly communicate to everyone else why someone was let go, the people who remain at the company will come up with their own story to explain it. Those stories will almost certainly be worse than the real reason.

A dismissal opens a vacuum, and unless you fill that vacuum with facts, it’ll quickly fill with rumors, conjecture, anxiety, and fear. If you want to avoid that, you simply have to be honest and clear with everyone about what just happened. Even if it’s hard. That’s why whenever someone leaves Basecamp, an immediate goodbye announcement is sent out companywide.

When it comes to chat, we have two primary rules of thumb: “Real-time sometimes, asynchronous most of the time” and “If it’s important, slow down.”

Important topics need time, traction, and separation from the rest of the chatter. If something is being discussed in a chat room and it’s clearly too important to process one line at a time, we ask people to “write it up” instead. This goes together with the rule “If everyone needs to see it, don’t chat about it.” Give the discussion a dedicated, permanent home that won’t scroll away in five minutes.

You can’t fix a deadline and then add more work to it. That’s not fair. Our projects can only get smaller over time, not larger. As we progress, we separate the must-haves from the nice-to-haves and toss out the nonessentials.

And who makes the decision about what stays and what goes in a fixed period of time? The team that’s working on it. Not the CEO, not the CTO. The team that’s doing the work has control over the work. They wield the “scope hammer,” as we call it. They can crush the big must-haves into smaller pieces and then judge each piece individually and objectively. Then they can sort, sift, and decide what’s worth keeping and what can wait.

Here are some of the telltale signs that your deadline is really a dreadline: An unreasonably large amount of work that needs to be done in an unreasonably short amount of time. “This massive redesign and reorganization needs to happen in two weeks. Yeah, I know half the team is out on vacation next week, but that’s not my problem.” An unreasonable expectation of quality given the resources and time. “We can’t compromise on quality—every detail must be perfect by Friday. Whatever it takes.” An ever-expanding amount of work in the same time frame as originally promised. “The CEO just told me that we also need to launch this in Spanish and Italian, not just English.”

When we present work, it’s almost always written up first. A complete idea in the form of a carefully composed multipage document. Illustrated, whenever possible. And then it’s posted to Basecamp, which lets everyone involved know there’s a complete idea waiting to be considered.

We don’t want reactions. We don’t want first impressions. We don’t want knee-jerks. We want considered feedback. Read it over. Read it twice, three times even. Sleep on it. Take your time to gather and present your thoughts—just like the person who pitched the original idea took their time to gather and present theirs.

That’s how you go deep on an idea.

Friday is the worst day to release anything.

So instead of shipping big software updates on Fridays, we now wait until Monday the following week to do it. Yes, this introduced other risks—if we somehow make a big mistake, we’re introducing it on the busiest day of the week. But knowing that also helps us be better prepared for the release. When there’s more at stake, you tend to measure twice, cut once.

This encouraged us to take quality assurance more seriously, so we can catch more issues ahead of time.

We hire when it hurts. Slowly, and only after we clearly need someone. Not in anticipation of possibly maybe.

When calm starts early, calm becomes the habit. But if you start crazy, it’ll define you. You have to keep asking yourself if the way you’re working today is the way you’d want to work in 10, 20, or 30 years. If not, now is the time to make a change, not “later.”

Today we ship things when they’re ready rather than when they’re coordinated. If it’s ready for the web, ship it! iOS will catch up when they’re ready. Or if iOS is first, Android will get there when they’re ready. The same is true for the web. Customers get the value when it’s ready wherever, not when it’s ready everywhere.

So don’t tie more knots, cut more ties. The fewer bonds, the better.

We’ve been practicing disagree and commit since the beginning, but it took Bezos’s letter to name the practice. Now we even use that exact term in our discussions. “I disagree, but let’s commit” is something you’ll hear at Basecamp after heated debates about specific products or strategy decisions.

Companies waste an enormous amount of time and energy trying to convince everyone to agree before moving forward on something. What they’ll often get is reluctant acceptance that masks secret resentment.

Instead, they should allow everyone to be heard and then turn the decision over to one person to make the final call. It’s their job to listen, consider, contemplate, and decide.

Calm companies get this. Everyone’s invited to pitch their ideas, make their case, and have their say, but then the decision is left to someone else.

Knowing when to embrace Good Enough is what gives you the opportunity to be truly excellent when you need to be.

Change makes things worse all the time. It’s easier to fuck up something that’s working well than it is to genuinely improve it. But we commonly delude ourselves into thinking that more time, more investment, more attention is always going to win.

Calm requires getting comfortable with enough.

The only way to get more done is to have less to do.

It’s not time management, it’s obligation elimination. Everything else is snake oil.

Nearly all product work at Basecamp is done by teams of three people. It’s our magic number. A team of three is usually composed of two programmers and one designer. And if it’s not three, it’s one or two rather than four or five. We don’t throw more people at problems, we chop problems down until they can be carried across the finish line by teams of three.

We rarely have meetings at Basecamp, but when we do, you’ll hardly ever find more than three people around a table. Same with conference calls or video chats. Any conversation with more than three people is typically a conversation with too many people.

What if there are five departments involved in a project or a decision? There aren’t. We don’t work on projects like that—intentionally.

Three is a wedge, and that’s why it works. Three has a sharp point.

You can do big things with small teams, but it’s a whole hell of a lot harder to do small things with big teams.

If the boss is constantly pulling people off one project to chase another, nobody’s going to get anything done.

“Pull-offs” can happen for a number of reasons, but the most common one is that someone senior has a new idea that Just Can’t Wait.

We make every idea wait a while. Generally a few weeks, at least. That’s just enough time either to forget about it completely or to realize you can’t stop thinking about it.

What makes this pause possible is that our projects don’t go on forever. Six weeks max, and generally shorter.

That means we have natural opportunities to consider new ideas every few weeks. We don’t have to cut something short to start something new. First we finish what we started, then we consider what we want to tackle next. When the urgency of now goes away, so does the anxiety. This approach also prevents unfinished work from piling up.

Having a box full of stale work is no fun. Happiness is shipping: finishing good work, sending it off, and then moving on to the next idea.

Winter is when we buckle down and take on larger, more challenging projects. Summer, with its shorter 4-day weeks, is when we tackle simpler, lighter projects.

People grow dull and stiff if they stay in the same swing for too long.

We’ve also intentionally never gotten ahead of ourselves. We’ve always kept our costs in check and never made a move that would push us back from black to red. Why? Because crazy’s in the red. Calm’s in the black.

Revenue alone is no defense, either, because revenue without a profit margin isn’t going to save you. You can easily go broke generating revenue—many companies have. But you can’t go broke generating a profit.

The worst customer is the one you can’t afford to lose. The big whale that can crush your spirit and fray your nerves with just a hint of their dissatisfaction. These are the customers who keep you up at night.

We’ve rejected the per-seat business model from day one. It’s not because we don’t like money, but because we like our freedom more!

So we take the opposite approach. Buy Basecamp today and it’s just $99/month, flat and fixed. It doesn’t matter if you have 5 employees, 50, 500, or 5,000—it’s still just $99/month total. You can’t pay us more than that.
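The flat-rate model can be stated as a one-line function. A toy comparison, where the $10/seat competitor rate is purely hypothetical (only the $99 flat fee comes from the text):

```python
# Flat-rate vs. per-seat pricing. Basecamp's $99/month figure is from
# the text; the $10/seat rate is an invented competitor price used
# only for comparison.
FLAT_RATE = 99

def per_seat_cost(users: int, rate_per_seat: int = 10) -> int:
    # Per-seat pricing scales linearly with headcount.
    return users * rate_per_seat

def flat_cost(users: int) -> int:
    # Flat pricing ignores headcount entirely, so no single customer
    # can ever pay an outsized share of revenue.
    return FLAT_RATE

for users in (5, 50, 500, 5000):
    print(f"{users:>5} users: per-seat ${per_seat_cost(users):>6}, flat ${flat_cost(users)}")
```

At 5,000 employees, the hypothetical per-seat bill would be $50,000/month against a flat $99, which illustrates why no single customer’s demands can dominate the product.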

First, since no one customer can pay us an outsized amount, no one customer’s demands for features or fixes or exceptions will automatically rise to the top. This leaves us free to make software for ourselves and on behalf of a broad base of customers, not at the behest of any single one or a privileged few. It’s a lot easier to do the right thing for the many when you don’t fear displeasing a few super customers.

Third, we didn’t want to get sucked into the mechanics that chasing big contracts inevitably leads to. Key account managers. Sales meetings. Schmoozing. The enterprise sales playbook is well established and repulsive to us. But it’s also unavoidable once you open the door to the big bucks from the big shots. Again, no thank you.

Becoming a calm company is all about making decisions about who you are, who you want to serve, and who you want to say no to. It’s about knowing what to optimize for. It’s not that any particular choice is the right one, but not making one or dithering is definitely the wrong one.

At Basecamp we live this philosophy to the extreme. We don’t show any customers anything until every customer can see it. We don’t beta-test with customers. We don’t ask people what they’d pay for something. We don’t ask anyone what they think of something. We do the best job we know how to do and then we launch it into the market. The market will tell us the truth.

Putting everything we build in front of customers beforehand is slow, costly, and results in a mountain of prerelease feedback that has to be sifted through, considered, debated, discussed, and decided upon. And yet it’s still all just a guess! That’s a lot of energy to spend guessing.

Since the beginning of Basecamp, we’ve been loath to make promises about future product improvements. We’ve always wanted customers to judge the product they could buy and use today, not some imaginary version that might exist in the future.

It’s why we’ve never committed to a product road map. It’s not because we have a secret one in the back of some smoky room we don’t want to share, but because one doesn’t actually exist. We honestly don’t know what we’ll be working on in a year, so why act like we do?

Promises lead to rushing, dropping, scrambling, and a tinge of regret at the earlier promise that was a bit too easy to make.

Promises pile up like debt, and they accrue interest, too. The longer you wait to fulfill them, the more they cost to pay off and the worse the regret. When it’s time to do the work, you realize just how expensive that yes really was.

“And if the whole world’s singing your songs / And all of your paintings have been hung / Just remember what was yours / Is everyone’s from now on / And that’s not wrong or right / But you can struggle with it all you like / You’ll only get uptight” —Wilco, “What Light”

What people don’t like is forced change—change they didn’t request on a timeline they didn’t choose. Your “new and improved” can easily become their “what the fuck?” when it is dumped on them as a surprise.

We still run three completely different versions of Basecamp: our original software that we sold from 2004 to 2012, our second version that we sold from 2012 to 2015, and our third version that launched in 2015. Every new version was “better,” but we never force anyone to upgrade to a new version. If you signed up for the original version back in 2007, you can keep using that forever.

Things get harder as you go, not easier. The easiest day is day one. That’s the dirty little secret of business.

If you understand what the future might look like, you can visualize it and be ready when the rain doesn’t let up. It’s all about setting expectations.

Startups are easy, stayups are hard.

Turning down growth, turning down revenue. Companies are culturally and structurally encouraged to get bigger and bigger.

Maintain a sustainable, manageable size. We’d still grow, but slowly and in control.

America Before

Graham Hancock
Body, Mind & Spirit
St. Martin's Press
April 23, 2019

Hancock challenges the archaeological orthodoxy's view that North and South America were the last places to be settled by humans.

Hancock posits an ancient, globally distributed system of astro-architecture that created monuments on the ground mimicking the patterns of certain constellations in the sky. Since before he wrote Fingerprints of the Gods, Hancock has been searching for a lost ancient high civilization.

The design of the sacred architecture of the world is ruled by geometry. Hancock uses Richard Dawkins’ term “meme” to describe this system of behavior being passed from one individual to another.

Stonehenge, the Pyramids, Angkor Wat, and Serpent Mound in Ohio are all concerned with deliberate orientation to the sky, and some honor the solstices. Architectural, astronomical, and geometrical memes recur across different parts of the world and across many different time periods.

A quote from the book:

"Contrary to the mainstream, my broad conclusion is that an advanced global seafaring civilization existed during the Ice Age, that it mapped the earth as it looked then with stunning accuracy, and that it had solved the problem of longitude, which our own civilization failed to do until the invention of Harrison’s marine chronometer in the late eighteenth century. As masters of celestial navigation, as explorers, as geographers, and as cartographers, therefore, this lost civilization of 12,800 years ago was not outstripped by Western science until less than 300 years ago at the peak of the Age of Discovery."

Plato, in the oldest-surviving written source of the Atlantis tradition, describes it as an island “larger than Libya and Asia put together” situated far to the west of Europe across the Atlantic Ocean.

Archaeology teaches us that the vast, inviting, resource-rich continents of North and South America were among the very last places on earth to have been inhabited by human beings. Only a handful of remote islands were settled later. This is the orthodoxy, but it is crumbling under an onslaught of compelling new evidence revealed by new technologies, notably the effective sequencing of ancient DNA.

Far from being very recent, it is beginning to look as though the human presence in the Americas may be very old—perhaps more than 100,000 years older than has hitherto been believed.

Moreover, the New World was physically, genetically, and culturally separated from the Old around 12,000 years ago when rising sea levels submerged the land bridge that formerly connected Siberia to Alaska.

“Serpent Mound Cryptoexplosion Structure.” Only since the late 1990s has mounting evidence led to today’s widespread consensus that it was, as many had long suspected, formed by a hypervelocity cosmic impact. Dating back to the time of the impact, an intense magnetic anomaly centered on the site causes compasses to give wildly inaccurate readings. “I’d go further. I’d say our Serpent is Gitché Manitou—the Great Spirit and ancestral guardian of the ancient people.”

At Serpent Mound, however, as Ross Hamilton points out, these so-called superstitious primitives were demonstrably the masters of some very exacting scientific techniques. He gives me a penetrating look. “Just consider the precision with which they found true north and balanced the whole effigy around that north–south line. It was a long while before modern surveyors could match it.”

All these places are man-made sanctuaries that speak to the union of heaven and earth at key moments of the year. They might rightly be described as hierophanies because their fundamental purpose is to reveal and manifest the sacred connection between macrocosm and microcosm, sky and ground, “above” and “below.”

North America has its Great Serpent Mound, a natural ridge, modified and enhanced by humans to join heaven and earth at sunset on the summer solstice.

archaeoastronomer Anthony F. Aveni,

In the 1980s, as we’ll see in part 2, there was a general acceptance that humans might have first arrived in the Americas by 12,000 or even 13,000 years ago. But those earliest migrants were deemed by archaeologists to have been scattered hunter-gatherer groups, living from hand to mouth and lacking the vision, sophistication, and level of organization required to create a monument on the scale of Serpent Mound.

No carbon dating had been done.

The first carbon dating of Serpent Mound found it to be much younger than everyone had supposed—not 2,000 years old or more, but 1,000 years old or less.

Instead he focuses on the form of the serpent, which he perceives as a terrestrial image of the constellation Draco.

In my 1998 book Heaven’s Mirror, for example, I present evidence that this enormous constellation, widely depicted as a serpent by many ancient cultures, served as the celestial blueprint according to which the temples of Angkor in Cambodia were laid out on the ground—with each temple “below” matching a star “above.” The essence of my case is that the notion of “as above so below” expressed in the architecture of Angkor is part of an ancient globally distributed doctrine—or “system”—that set out quite deliberately to create monuments on the ground, all around the world, to mimic the patterns of certain significant constellations in the sky.

Thus around 3000 BC, just before the start of the Pyramid Age in Egypt, the pole star was Thuban (Alpha Draconis) in the constellation Draco. At the time of the Greeks it was Beta Ursae Minoris. In AD 14000 it will be Vega.

What makes Draco particularly significant and remarkable was summed up in 1791 in two lines from a poem by Charles Darwin’s grandfather, the physician and natural philosopher Erasmus Darwin: “With vast convolutions Draco holds / Th’ ecliptic axis in his scaly folds.” This “ecliptic axis”—astronomers today call it the “pole of the ecliptic”—is the still, fixed point in the celestial vault around which the vast circle of the north celestial pole makes its endlessly repeated 25,920-year journey. It is the one place in the sky that never moves or changes while everything else about it dances and shifts, and once you recognize it for what it is—nothing less than the very heart of heaven—it’s striking how the serpentine constellation of Draco seems to coil protectively around it.
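The 25,920-year figure is the traditional round number for the precession of the equinoxes; the arithmetic behind it is simple (the modern measured rate is about 50.3 arc seconds per year, giving a period closer to 25,772 years):

```latex
\frac{360^\circ}{25\,920~\text{yr}} = \frac{1^\circ}{72~\text{yr}} \approx 50''~\text{of precession per year}
```

That is, the celestial pole drifts by roughly one degree every 72 years, slow enough that a single human lifetime barely notices it.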

I know what Ross is reminding me of here is a connection he’s written about between the geometry of Stonehenge and the geometry of Serpent Mound, which he regards as “two elements comprising a larger picture pointing to a highly evolved school of astro-architecture, the origin of which is not known.”9

My whole focus, since long before the publication of Fingerprints of the Gods in 1995, has been a quest for a high civilization of remote antiquity, a civilization that can rightly be described as “lost” because the very fact that it existed at all has been overlooked by archaeologists.

At a site called Blackwater Draw near the town of Clovis, New Mexico, bones of extinct Ice Age mammals were found in 1929 and assumed, rightly, to be very old.

Anthropologist Edgar B. Howard of the University of Pennsylvania disagreed.17 He began excavations at Blackwater Draw in 1933 and concluded that it was possible that humans had been in North America for tens of thousands of years.

There are now two schools of thought about its proposed antiquity and duration. The so-called “long interval” school dates the first appearance of Clovis in North America to 13,400 years ago and its mysterious extinction and disappearance from the archaeological record to around 12,800 years ago—a period of 600 years. The “short interval” school also accepts 12,800 years ago for the end date of Clovis but sets the start date at 13,000 years ago—therefore allowing it an existence of just 200 years. Both schools agree that this unique and distinctive culture must have originated somewhere else because, from the first evidence for its presence, it is already sophisticated and fully formed, deploying advanced weapons and hunting tactics.

No traces of the early days of Clovis, of the previous evolution and development of its characteristic tools, weapons, and lifeways, have been found anywhere in Asia. All we can say for sure is that once it had made its presence felt in North America the Clovis culture spread very widely across a huge swath of the continent, with sites as far apart as Alaska, northern Mexico, New Mexico, South Carolina, Florida, Montana, Pennsylvania, and Washington state.28 Such an expansion would have been extremely rapid were it to have occurred in 600 years and seems almost miraculously fast if it was in fact accomplished in 200 years.29

A consensus soon began to emerge that no older cultures would ever be found—and what is now known as the “Clovis First” paradigm was conceived.

September 1964. That was when archaeologist C. Vance Haynes, today Regents Professor Emeritus of Anthropology at the University of Arizona and a senior member of the National Academy of Sciences, published a landmark paper in the journal Science. Snappily titled “Fluted Projectile Points: Their Age and Dispersion,”31

First, Haynes pointed out that, because of lowered sea level during the Ice Age, much of the area occupied today by the Bering Sea was above water, and where the Bering Strait now is, a tundra-covered landscape connected eastern Siberia and western Alaska.

Things changed around 14,100 years ago, Haynes claimed, when a generalized warming of global climate caused an ice-free corridor to open up between the Laurentide and the Cordilleran ice caps, allowing entry for the first time in many millennia to the rich, unglaciated plains, teeming with game, that lay to the south.34

Some 700 years later, around 13,400 years ago, the stratigraphic record of those plains starts to include Clovis artifacts. Their “abrupt appearance,” Haynes argued, supports the view “that Clovis progenitors passed through Canada” and that “from the seemingly rapid and wide dispersal of Clovis points … it appears these people may have brought the technique of fluting with them.”

“If Clovis progenitors traversed a corridor through Canada … and dispersed through the United States south of the … ice border in the ensuing 700 years, then they were probably in Alaska some 500 years earlier…. The Alaskan fluted points … could represent this occupation and could, therefore, be ancestral to Clovis points and blades.”

TOM DILLEHAY, PROFESSOR OF ANTHROPOLOGY at Vanderbilt University in Tennessee, began excavations at Monte Verde in southern Chile in 1977 and found evidence that humans had been present there as far back as 18,500 years ago.

He was attacked because there are no Clovis artifacts at Monte Verde, it is 5,000 years older than the oldest securely dated Clovis sites, and it is located more than 8,000 miles south of the Bering Strait.

Likewise, in the 1990s, Canadian archaeologist Jacques Cinq-Mars excavated Bluefish Caves in the Yukon and found evidence of human activity there dating back more than 24,000 years—older than Meadowcroft and much older than Clovis.

On April 27, 2017, Tom Deméré’s paper announcing the discovery of “a 130,000-year-old archaeological site in southern California, USA,” appeared in Nature. That’s about ten times older than Clovis, eight times older than Meadowcroft, and more than five times as old as Bluefish Caves.

Like the mammoths, to which they were closely related, mastodons were swept from the face of the earth in the sudden and mysterious extinction of America’s Ice Age megafauna that took place around 12,800 years ago—the same epoch exactly that saw the equally abrupt and equally mysterious disappearance of the Clovis culture.

Deméré therefore sent several of the mastodon bones to the US Geological Survey in Colorado, where geologist Jim Paces, using the updated and refined technique, established beyond reasonable doubt that the bones were buried 130,000 years ago.19

At that point in 2017 it was still believed—though new evidence would soon substantially change the picture—that anatomically modern humans had not even left their African homeland 140,000 years ago.

“The way it was set into the ground so it would have stood upright. The other one lay in a natural horizontal position beside it but this one was found like you see it in the display. Vertical. And that, to us, immediately looked like an anomaly.” “Why?” “One suggestion is that it was perhaps left there as a marker to come back to the site on a floodplain where everything is low relief….

“This looks like the result of human behavior? That it’s evidence of a deliberate, intelligent act?”

The anomalous tusk is just a small part of the story, he says. The stronger evidence comes from the mastodon’s fossilized bones, and from the rocks and stones of various sizes found distributed around the site.

“We suggest that this was a work station, that both femora were hammered and broken here on the anvil stone and that the heads were detached and just set off to the side. It feels purposeful, like the tusk. It feels like humans were breaking these bones and it’s not only what’s here that’s important but also what’s not here. I mean, originally the femora from which these heads came were three feet in length and massively thick, yet we have just a few pieces of them …”

“So the fact that we have missing bits suggests to us that they were taken away, which fits this idea of human processing and transportation.”

“But if I’m correct, you’re arguing that can be explained—because what these ancient humans were doing was extracting the marrow from the bones. They were smashing up the bones. They didn’t particularly need fine tools for this.”

“We’re saying that this was a carcass. It wasn’t killed by these humans. It wasn’t even butchered by these humans. Most likely it was a carcass at an advanced stage of decomposition but it still had potential for the extraction of marrow from the bones.”

The presence of spiral fractures among the bones of the Cerutti mastodon therefore leads to the inevitable conclusion that they must have been broken 130,000 years ago, when they were fresh.

Meanwhile, the presence of the hammer and anvil stones, and the evidence of how they were used to break the bones, makes it equally certain that humans were involved.

“Because,” I muse, “nothing else is going to smash up those bones and take out the marrow in that way.”

“Open your mind to the possibility that instead of the peopling of the Americas being associated with the last deglaciation event [the so-called Bølling-Allerød interstadial, dated from around 14,700 years ago to around 12,800 years ago34] what we should actually be looking at is the deglaciation event before that—between 140,000 and 120,000 years ago. You get the same sort of scenario with a land bridge and ice sheets retreating and you get that same sweet spot between really low sea levels and a blockage by ice sheets, and ice sheets gone and the flooding of the land bridge.”

“As a paleontologist,” he muses, “I ask the question—why weren’t there humans here earlier? I mean, we have dispersal of Eurasian animal species into North America and dispersal of North American species into Eurasia at earlier times. So why shouldn’t humans have been here as well?”

“In other words, only humans could have done this.” “Right. Human beings who understood the properties of the stone and how to work it. If nature can’t break it, it can’t make it.”

The most important pre-Clovis sites in North America, in addition to Cerutti and Topper, include: Hueyatlaco, Mexico;19 Old Crow and Bluefish Caves, Canada; Calico Mountain, California; Pendejo Cave, New Mexico; Tule Springs, Nevada; Meadowcroft Rockshelter, Pennsylvania; Cactus Hill, Virginia; Paisley Five Mile Point Caves, Oregon; the Schaefer and Hebior mammoth sites, Wisconsin; Buttermilk Creek, Texas; and Saltville, Virginia.

They matter because they offer compelling proof of the enduring presence of humans of some kind in the Americas from perhaps as far back as 130,000 years ago until today.

That’s a very long time. It might even be long enough—speaking entirely hypothetically, of course—for something that we would recognize as an advanced civilization to have emerged in the Americas alongside the hunter-gatherers, foragers, and scavengers whose simple tools dominate the pre-Clovis horizons so far excavated.

American archaeology was so riddled with pre-formed opinions about how the past should look, and about the orderly, linear way in which civilizations should evolve, that it repeatedly missed, sidelined, and downright ignored evidence for any human presence at all prior to Clovis—until, at any rate, the mass of that evidence became so overwhelming that it took the existing paradigm by storm.

If we don’t ever look for a lost civilization—because of a preconception that none could have existed—then we won’t find one.

At some point in the remote past, in some unknown location or locations, the ancestors of Native Americans interbred with an archaic—and now extinct—human species. This species, only recently discovered and closely related to the more famous Neanderthals who also produced offspring with our ancestors, has been named by geneticists “the Denisovans.”

It was the consensus view of archaeologists and anthropologists during the period of “Clovis First” dominance that the Americas were settled exclusively by the overland route from Siberia via “Beringia” and southward through the ice-free corridor. Despite the collapse of “Clovis First,” this remains the consensus view today.

Several subsequent studies have pointed out that for much of its duration long stretches of the supposed ice-free corridor would have been completely uninhabitable and thus most unpromising territory for a lengthy migration.

It is certain, however, that Denisova Cave has been used and occupied by various species of human for at least 280,000 years, making it an unrivaled archive—a sort of “hall of records”—of our largely unremembered ancestral story.

At certain times during the past 280,000 years, not continuously but at intervals, it had been occupied by Neanderthals—our extinct cousins with whom, as is now widely known, our ancestors interbred and from whom some extant modern human populations have inherited as much as 1–4 percent of their DNA. Neanderthals were probably still using the cave 50,000 years ago. It wasn’t until 2010, however, when proof emerged that a human species hitherto unrecognized by science had been present at Denisova—a species now also known to have interbred with our ancestors—that the true global significance of this very obscure and remote place could begin to be fully realized. The sensational news was broken first in the pages of Nature in December 2010 in a benchmark paper, “Genetic History of an Archaic Hominin Group from Denisova Cave in Siberia.”

The mass of archaeological evidence suggests that for extraordinarily long periods of time it functioned as a “factory” or “workshop,” and that raw materials were brought here from far-off places to be worked and fashioned.

Unusual and beautiful pieces of jewelry were found, including pendants featuring biconical drilled-out holes, cylindrical beads, a ring carved from marble, a ring carved from mammoth ivory, and bone tubes perhaps designed to hold bone needles so they could be carried safely.

Found in the entrance zone of the East Gallery, specifically in Level 11.1,22 were two broken pieces of a dark green chloritolite bracelet.

“This artifact was manufactured with the help of various technical methods of stone working including those that are considered non-typical for the Paleolithic period…. The bracelet demonstrates a high level of technological skills.”

In particular, they draw attention to “a hole drilled close to one of the edges” of the bracelet and report that “drilling was carried out with a stable drill over the course of at least three stages. Judging by traces on the surface, the speed of drill running was considerable. Vibrations of the rotation axis of the drill are minor, and the drill made multiple rotations around its axis.”

They therefore conclude that the bracelet “constitutes unique evidence of an unexpectedly early employment of two-sided fast stationary drilling during the Early Upper Paleolithic.” This is a big deal!

At least some of these skills and technologies, like “stationary drilling” with the use of a bow drill that does not leave signs of drill vibration, would not be seen again until the Neolithic many thousands of years later. The bracelet thus refutes what the authors describe as “a common assumption” held by archaeologists that “stone drilling originated during the Upper Paleolithic, but gained the features of a well-developed technology only during the Neolithic.”

So not only was this curious bracelet unequivocally the work of anatomically archaic human beings—the Denisovans—but also it testified to their mastery of advanced manufacturing techniques in the Upper Paleolithic, many millennia ahead of the earliest use of these techniques in the Neolithic by our own supposedly “advanced” species, Homo sapiens. Also made crystal clear was the realization that the Denisovans must have possessed the same kinds of artistic sensibility and self-awareness that we habitually associate only with our own kind—for there can be no doubt that very real, conscious, aware, and unmistakably human beings had interacted with this bracelet at every stage of its conception, design, and manufacture, all the way through to its end use.

Reconstruction: The bracelet formed a torque. “It brightly shimmers in broad daylight and reveals a rich play of hunter green shades in the light of a campfire. The bracelet was hardly an everyday item. Fragile and elegant, it was apparently worn on very special occasions.”

THE LOWER PART OF LEVEL 11 dates back, as we’ve seen, to around 50,000 years ago, but the bracelet was found in the upper part, officially designated Level 11.1 and provisionally dated to the Upper Paleolithic about 30,000 years ago—making it, because of its “Neolithic” characteristics, roughly 20,000 years ahead of its time.

Gone with it was a second anomalous object, an exquisite bone needle 7.6 centimeters in length, with a near-microscopic eye less than 1 millimeter in diameter drilled out at the head.43

What put an end to such speculation was the discovery of the longer, even finer and more technically perfect needle in 2016 and its location not in the upper—younger—part of Level 11 near its contact with Level 10, but instead in the much older lower part near its contact with Level 12.

Level 11 had been reassessed and its various internal strata reexamined and re-dated. The result of these new investigations was that the bracelet was no longer thought to be 30,000 years old as had originally been supposed, but 50,000 years old! A year later the Siberian Times published speculation that it might be even older—perhaps as much as “65,000 to 70,000 years old.”

What now appears to be certain is that Neanderthals, Homo sapiens (as modern humans are classified taxonomically), and Denisovans all shared and descended from a common ancestor a million years or so ago. The divergence of the Neanderthal line from the modern human line began at least 430,000 years ago, and perhaps as early as 765,000 years ago. The divergence of the Neanderthal line from the Denisovan line occurred between 381,000 and 473,000 years ago. Humans today are therefore, to a greater or lesser degree, hybrids who have inherited genes from Neanderthals, Denisovans, and archaic Homo sapiens.

1.  DNA is the genetic mechanism of inheritance, and the various types of DNA present in our cells have, as a result of scientific advances in the late twentieth and early twenty-first centuries, been subject to close investigation by a range of highly sophisticated techniques. The results of these investigations have shed light on the degree of genetic relatedness that exists between individuals and, on a larger scale, between entire populations.

2.  Located in the fluid surrounding the nucleus of every cell in our bodies, mtDNA (mitochondrial DNA) is inherited by both males and females but is passed on to offspring only by females.3 MtDNA can identify lines of descent from shared maternal, but not paternal, ancestors.4 What geneticists like about mtDNA is its abundance, being present in multiple copies per cell, giving plenty of material to work with.

3.  The same cannot be said of nuclear DNA, inherited equally from both parents, which has only two copies per cell but which encodes far more genetic information than mtDNA, allowing for far more robust and precise analyses of genetic relatedness.

4.  Within the cell nucleus are also located the chromosomes—segments of DNA that determine sex. If you have two X chromosomes you’re a female; if you have an X and a Y you’re male. Y-DNA is passed on only by males, thus facilitating the determination only of shared paternal ancestry, whereas X-DNA is inherited both through the maternal and paternal lines (since males and females both have X chromosomes) and can therefore be useful in isolating shared common ancestors along particular branches of inheritance.

In other words, genetics, unlike archaeology, is a hard science where the pronouncements of experts are based on facts, measurements, and replicable experimentation rather than inferences or preconceived opinions.

The Siberian site lies to the west of Lake Baikal near the village of Mal’ta on the banks of the Bolshaya Belaya River, about 1,000 kilometers east of Denisova Cave.

Mal’ta has been known for many years as the home of an Upper Paleolithic culture—archaeologists call it the Mal’ta-Buret culture—that left behind many beautiful and mysterious works of art thought to be more than 20,000 years old. Among them, done in bone and mammoth ivory, are carvings of elegant, long-necked waterfowl and a collection of thirty human Venus figures that are “rare for Siberia but found at a number of Upper Paleolithic sites across western Eurasia.” The primary excavations at Mal’ta, which took place between 1928 and 1958, also uncovered two burials, both of young children interred with curious and beautiful grave goods including pendants, badges, and ornamental beads.11 One of these children, a boy aged 3–4 years and now known to archaeologists as MA-1, had been buried beneath a stone slab with a Venus figurine beside him,12 “wearing an ivory diadem, a bead necklace and a bird-shaped pendant.”

C-14 dating showed them, give or take a few hundred years, to be 24,000 years old.

Researchers successfully sequenced MA-1’s entire genome—making it, when a full account of the investigation was published in Nature in 2014, “the oldest anatomically modern human genome reported to date.”

Known to archaeologists as the Anzick-1 burial site and dated to 12,600 years ago (which makes it 11,400 years younger than MA-1), it is also a child’s grave—in this case a boy aged 1–2 years who was interred with more than 100 tools of stone and antler, all sprinkled with red ochre. One thing we see for sure in both these ancient burials, separated by thousands of miles and thousands of years, is that the human capacity to love and cherish family members, and to regret and mourn those who pass prematurely, is not diminished by time; indeed, we instantly recognize and identify with it today because we share it.

All authorities agree that MA-1 and Anzick-1 are closely related, sharing large sequences of DNA.19 Anzick-1, however, “belonged to a population directly ancestral to many contemporary Native Americans” and thus, unsurprisingly—despite his proximity to MA-1—is “more closely related to all indigenous American populations than to any other group.”

The investigators discovered that MA-1 also stands “near the root of most Native American lineages,” and “14 to 38% of Native American ancestry may originate through gene flow from this ancient population [the population from which MA-1 stemmed]. This is likely to have occurred after the divergence of Native American ancestors from east Asian ancestors, but before the diversification of Native American populations in the New World.”

The investigators note that this “therefore suggests a connection between pre-agricultural Europe and Upper Paleolithic Siberia.”

SOMETHING I HAVEN’T MENTIONED YET—the ochre-dusted stone and antler tools found buried with Anzick-1 were unmistakably Clovis artifacts.

First, the Anzick-1 burial was originally dated to around 12,600 years ago—or, more exactly, within the limits of resolution of C-14, to between 12,707 and 12,556 years ago.31 This suggested that the grave was dug and the grave goods placed with the remains of the deceased infant a century or two after the abrupt and mysterious disappearance of the Clovis culture from the archaeological record around 12,800 years ago. That disappearance testifies to a sudden cessation of previously widespread cultural activities, suggestive of interruption by some far-reaching cataclysmic event. What it does not mean, however, is that every member of the Clovis population died out overnight.

One possibility that has been considered is that Anzick-1 himself may have belonged to just such a remnant group.

Anzick-1’s bones were initially dated between 12,707 and 12,556 years ago. The antler foreshafts among his grave goods are a century or two older than that—in the range of 12,800 to 13,000 years ago32—“a much more typical and acceptable age for Clovis.”

“the foreshafts were 100 to 200-year-old antique heirlooms interred with the infant by the very last Clovis folks in the region.”

While Clovis did, at the limits of its range, extend into some northern areas of South America, its heartland was in North America. Intuitively, therefore, we would expect the Montana infant, a Clovis individual, to be much more closely related to Native North Americans than to Native South Americans. Further investigations, however, while reconfirming that Anzick-1’s genome had a greater affinity to all Native Americans than to any extant Eurasian population, revealed it to be much more closely related to Native South Americans than to Native North Americans!

IN SUMMARY, ANZICK-1 IS A paradox clothed in a conundrum, wrapped up in a mystery—an individual in a North American Clovis culture grave who is closely related to Native South Americans, to the Siberian Mal’ta population, and to ancient western Europeans.

Some Amazonian Native Americans descend partly from a Native American founding population that carried ancestry more closely related to indigenous Australians, New Guineans and Andaman Islanders than to any present-day Eurasians or Native Americans. This suggests a more diverse set of founding populations of the Americas than previously accepted.

In the end “a statistically clear signal linking Native Americans in the Amazonian region of Brazil to present-day Australo-Melanesians and Andaman Islanders” was confirmed.

What they propose is a second founding population of the Americas. It is very old, in their view, and almost all traces of it have been overwritten almost everywhere by later genetic “noise.”

The investigators have given their “putative ancient Native American lineage” a name: “Population Y,” after Ypykuéra, which means “ancestor” in the Tupi language family.

“A Population Y that had ancestry from a lineage more closely related to present-day Australasians than to present-day East Asians and Siberians likely contributed to the DNA of Native Americans from Amazonia and the Central Brazilian Plateau.”

Congregating in that original northeast Asian—that is, Siberian—melting pot we are now being asked to envisage not only people with European genes and people with east Asian genes, but also people with Australasian genes. Neanderthals were part of the mix, too, interbreeding vigorously with Homo sapiens, and there were people carrying Denisovan genes and of course the Denisovans themselves. We’re asked to see these groups as essentially divided and separate from one another—despite the obvious evidence of their liaisons—and we’re asked to accept that they remained divided and separate, already conveniently prearranging themselves into what would become the “NA” and “SA” lineages, as they trekked across the Bering land bridge.

What has been preserved in those isolated, unadulterated Amazonian genomes that speaks to an ancient connection with Australasia might not be the traces of a full-scale migration but something more like a one-off settlement by a relatively small group.

Raghavan and Willerslev—just like Skoglund and Reich—could not ignore the persistent “Australasian signal” that kept cropping up in the data: We found that some American populations—including the Aleutian Islanders, Surui, and Athabascans—are closer to Australo-Melanesians as compared with other Native Americans, such as North American Ojibwa, Cree, and Algonquin and the South American Purepecha, Arhuaco, and Wayuu. The Surui are, in fact, one of the closest Native American populations to East Asians and Australo-Melanesians, the latter including Papuans, non-Papuan Melanesians, Solomon Islanders, and South East Asian hunter-gatherers such as Aeta.6

For orthodox thinkers, it is literally inconceivable that prehistoric settlers from the general vicinity of Papua New Guinea could have crossed the entire width of the Pacific Ocean to South America, and thence made their way to the Amazon to leave evidence of their presence in the DNA of people still living there today.

Likewise, and significantly earlier, bones and artifacts of Homo erectus dated to 800,000 years before the present have been found on the Indonesian islands of Flores and Timor, again making open-water crossings by these supposed “subhumans” a certainty even during periods of lowered sea level.

What archaeology does not concede is that the human species could have developed and refined those early nautical skills to the extent of being able to cross a vast ocean like the Pacific or the Atlantic from one side to the other. In the case of the former, extensive transoceanic journeys are not believed to have been undertaken until about 3,500 years ago, during the so-called Polynesian expansion.

The notion that long transoceanic voyages were a technological impossibility during the Stone Age remains one of the central structural elements of the dominant reference frame of archaeology.

Since that reference frame rules out, a priori, the option of a direct ocean crossing between Australasia and South America during the Paleolithic and instead is adamant that all settlement came via northeast Asia, geneticists tend to approach the data from that perspective.

The widely scattered and differential affinity of Native Americans to the Australo-Melanesians, ranging from a strong signal in the Surui to a much weaker signal in northern Amerindians such as Ojibwa, points to this gene flow occurring after the initial peopling by Native American ancestors.

In summary, therefore, taking into account all of the above, the situation seems to be that the Denisovan signal remains at a constant and fairly low level throughout present-day indigenous populations so far sequenced in both North and South America. The Australasian signal, by contrast, is definitely and notably much stronger among populations in the Amazon, such as the Surui, and much weaker among other Native Americans such as the Arhuaco (of non-Amazonian northern Colombia), the Wayuu (of non-Amazonian northern Venezuela), the Purepecha (of Mexico), and the Ojibwa, Cree, and Algonquin of north and northeast North America. While never reaching the high levels found among Amazonian populations, the signal among Aleutian Islanders and Athabascans is relatively stronger than in other Native North American groups and relatively stronger in Aleutian Islanders than it is in Athabascans—though Raghavan and Willerslev warn in their Science paper that the Aleutian Islander data must be interpreted with some caution since it “is heavily masked owing to recent admixture with Europeans.”

We know from the evidence of Denisova Cave itself that their technology—while undoubtedly “Stone Age”—was far ahead of its time and in some ways much more akin to the Neolithic than to the Upper Paleolithic. We know that they could make sea crossings and that they ranged over a vast area, at least from the Altai Mountains in the west to Australo-Melanesia in the east. Last but not least, we know that their DNA survives most strongly today in people of Australo-Melanesian descent, and there’s informed speculation that Australo-Melanesia may have been their original homeland.

However many times by however many hands they have been copied and recopied down the ages, it is my contention that these anomalous maps can be traced back to lost source documents that could only have originated with a civilization at least advanced enough to have explored the world, and to have mapped and measured it, when it was still in the grip of the Ice Age. A civilization capable of such feats must, at the very least, have had its own adepts in the techniques of boat-building, sailing, navigating, cartography, and geometry—none of these being among the skills that archaeologists are normally willing to attribute to Ice Age hunter-gatherers.

The reason Carvajal’s account was disbelieved for most of the twentieth century by almost everyone who reviewed it is therefore plain to see. The picture he painted of the pre-contact state of the peoples and cultures of the Amazon flew in the face of a dominant (and domineering) scholarly theory.

As Wilkinson goes on to note in his study of Amazonian civilization: Towards the end of the 20th century, the archaeological pendulum began to swing back toward crediting the early explorers’ accounts. Even Meggers [in Amazonia: Man and Culture in a Counterfeit Paradise] had passed on without comment a report [dated approximately 1662] by Mauricio de Heriarte that the capital of the Tapajós (at today’s Santarem) could field 60,000 warriors. Any such number of militia would by … comparative-civilizational standards have implied an urban population of 300,000 to 360,000!

Wilkinson cites an important study by anthropologist Thomas P. Myers that documents “more than 30 epidemics—smallpox, measles, and other outbreaks—some ‘on a massive scale’—in 16th–18th century South America.” Myers finds evidence of “very substantial depopulation between the Orellana and Teixeira expeditions” and estimates that in many areas it ran as high as 99 percent.62 This, he further suggests, “may have been the reason why the missionaries later transmitted the idea of a relatively uninhabited Amazon region. The people they found were the survivors of the diseases and epidemics.”

Once left deserted, the great cities and monuments and other public works of any hypothetical Amazonian civilization would quickly have been encroached upon and soon completely hidden by the jungle while, at the same time, cultural memory banks would have been wiped almost clean and vast resources of skills, knowledge, and potential would have been lost forever.

THE DNA EVIDENCE PRESENTED IN part 3 reveals an astonishing anomaly. At some point during the Ice Age, perhaps as early as 13,000 years ago, a group of people carrying Australo-Melanesian genes settled in what is now the Amazon jungle.

When, I wonder, will archaeologists take to heart the old dictum that absence of evidence is not the same thing as evidence of absence, and learn the lessons that their own profession has repeatedly taught—namely that the next turn of the excavator’s spade can change everything?

Niède Guidon has spent 40 years excavating hundreds—literally hundreds!—of richly painted prehistoric rock shelters in Serra da Capivara National Park in the Brazilian state of Piauí. While everyone else is playing catch-up, she has long been confident that humans arrived in South America much earlier than 20,000 years ago. In 1986, 3 years before Dillehay first began to offer his own cautious dissent from the Clovis First paradigm, she published a paper in Nature boldly titled “Carbon-14 Dates Point to Man in the Americas 32,000 Years Ago.”

The paper documented continuous human occupation over the entire period from 6,160 years ago to 32,160 years ago.

But this was just the beginning, and in 2003 Guidon and other researchers completed a further study. The results pushed back the date of the human presence at Pedra Furada to 48,500 years ago, and of the paintings themselves, to at least 36,000 years ago.

Huge swaths of the Amazon, encompassing millions of square kilometers, have never been subject to any kind of archaeological investigation at all.

This is a wider problem than the Amazon. For example, sea level rose 120 meters when the Ice Age came to an end with the result that 27 million square kilometers of land that was above water at the last glacial maximum 21,000 years ago is under water today. These submerged continental shelves were prime seafront real estate during the Ice Age, yet only a few tiny slivers of them have ever been subject to any kind of marine archaeological investigation.

I’ll say nothing about Antarctica, with its 14 million square kilometers entirely virgin to the archaeologist’s spade.

We do know that the Sahara desert, presently occupying an area of about 9 million square kilometers, had a very different climate during the Ice Age, and in the early millennia of the Holocene, than it experiences today and that there were long periods when it was well watered and fertile, with extensive lakes and grasslands and abundant wildlife.

Part of our predicament, therefore, as a species with amnesia, is that huge areas of the planet that we know for sure were used by and lived upon by our ancestors—the submerged continental shelves, the Sahara desert, the Amazon rainforest—have, for a variety of practical and ideological reasons, been badly served by archaeology.

Was some advanced but unseen presence capable of spanning the globe at work behind the scenes of prehistory that might help to explain how Australasian genes reached the Amazon during the Ice Age?

But Wilkinson is not speaking of the base soils. His “exemplary agronomy,” as we shall see, refers to an artificial, man-made soil that first suddenly and inexplicably appeared in the Amazon many thousands of years ago but that has such miraculous properties of self-regeneration that it is still in use for agriculture and still incredibly productive today. It is called terra preta.

Terra preta feels like the work of scientists, but if there was a civilization in the Amazon, then why should we be surprised to find scientific achievements to its credit?

THE EXISTENCE OF TERRA PRETA was first reported by Europeans in colonial-period Brazil who called it terra preta de Índio (Indian Black Earth),

“Black Earth,” “Amazonian Anthropogenic Dark Earths,” or simply as “Amazonian Dark Earths”—ADEs for short.

Across the rainforest there are many thousands of expanses of terra preta on a similar range of scales, covering a total area that is in all honesty unknown but that various authorities have guesstimated at 6,000 km2, 18,000 km2, 154,063 km2, and “an area the size of France” (i.e., around 640,000 km2).

Almost without exception the riverine people of the Xingu today “inhabit and plant in dark earths,” and make use of resources, such as “Brazil nuts, babassu palm, dark earths and vine forests” that are “indicators or products of this earlier occupation.”

Nobody doubts that they are “anthropogenic”—man-made in some way—and everyone agrees that they’re an amazing success story. So fecund is terra preta, even after thousands of years of use, that it can still regenerate barren soils it is added to, and has been described as “miracle earth.”

Most researchers believe that terra preta soils formed as composted material accumulated via incidental human activity (often in debris piles referred to as middens). University of São Paulo archaeologist Eduardo Neves reportedly favors a scenario in which successive generations could have swept food refuse—especially fish and animal bones—from their dwellings and then added human and animal excrement.

Their argument depicts the ancient Amazonians as living amid a shitscape (euphemistically referred to as a “middenscape”), dumping their excretions, rubbish, broken crockery, and fish bones into the middens and—most importantly—burning wet vegetation on top of the middens, and always conscientiously making sure, without any long-term planning or purpose in mind, to keep the fires damped down under a blanket of dirt and straw.

I think the evidence supports another possibility—that this remarkable soil was invented, making excellent use of freely available local resources, as an ingenious, low-tech, and environmentally friendly way to increase agricultural yield in areas that would otherwise not have been able to sustain agriculture, and thus large populations, even for a few decades, let alone for several thousands of years—as the Amazonian Dark Earths have consistently demonstrated a “miraculous” ability to do.

In summary, concedes Professor WinklerPrins, the microbial complexes associated with ADEs are “poorly understood” and “quite mysterious actually.” Likewise, even the authors of the shitscape/middenscape theory of ADE formation admit that “despite the importance of research on terra preta, we still lack a firm understanding of the specific formation processes that led to the diversity inherent in these anthrosols.”

It turns out that while “Amazonian forests in different regions differ significantly from one another in topography, climate, geology, hydrology, structure, seasonality, and history,” they nonetheless “often resemble each other” in showing a “pattern of unexpected dominance and density of a small group of plant species.”

The best current estimate is that the Amazon is presently home to about 16,000 woody tree species. Out of this total, however, “only 227 hyperdominant species dominate Amazonian forests.”3 These so-called oligarchs (from the Greek for “rule by a few”) “make up only 1.4% of all the Amazon forest species but almost half of the trees in any given forest.”

In almost every case where clusters of hyperdominants were inventoried, ancient archaeological sites were found among them6—a correlation so frequent and reliable that the presence and concentration of oligarchs could, in theory, be used to “predict the occurrence of archaeological sites in Amazonian forests.”

The team’s detailed analysis, published in Science, therefore concludes that “modern tree communities in Amazonia are structured to an important extent by a long history of plant domestication by Amazonian peoples…. Detecting the widespread effect of ancient societies in modern forests … strongly refutes ideas of Amazonian forests being untouched by man. Domestication shapes Amazonian forests.”

What I have in mind is the possibility that a deep knowledge of plants and of their nutritional and other properties might have preceded the first domestication activities that we have evidence for. Surely it is only on the basis of such foreknowledge that crops like groundnuts and manioc could be selected, domesticated, planned, and planted to complement each other’s nutritional contribution to human welfare?

The whole mystery of the Amazonian plant medicines, notably the vision-inducing brew ayahuasca (which itself is a mixture of several plants that are most unlikely to have been fortuitously brought together) is explored in depth in my 2005 book Supernatural: Meetings with the Ancient Teachers of Mankind. In these medicines, as in curare, as in terra preta, and as in the incredible burst of domestication of plants and trees in the Amazon that followed the end of the Ice Age, could we be looking at the cultural DNA not only of a civilization but of a sophisticated civilization that had developed sciences of its own that it began to share with other people—very much including the peoples of the Amazon basin—around the time that the last Ice Age came cataclysmically to its end?

Scientists at the beginning of the twenty-first century were nonetheless taken aback to be presented with overwhelming evidence of an ancient practice of geometry in the rainforest. There is also compelling evidence—mysterious in itself—that “the conceptual principles of geometry are inherent in the human mind.”

Mundurukú children and adults spontaneously made use of … the core concepts of topology (e.g., connectedness), Euclidean geometry (e.g., line, point, parallelism, and right angle), and basic geometrical figures (e.g., square, triangle, and circle) … and they used distance, angle, and sense relationships in geometrical maps to locate hidden objects.

In summary, therefore, isolated peoples in remote parts of the Amazon today, whose contact with technological civilization is extremely limited, possess innate geometrical knowledge and are able to deploy it “independently of instruction, experience with maps, or measurement devices.”

From England’s Stonehenge, to the Great Pyramid of Egypt, to India’s Madurai Meenakshi Temple, to Borobudur in Indonesia, to Angkor Wat in Cambodia, to Tikal in Guatemala, to Tiahuanaco in Bolivia—and to countless other sites too numerous to mention—the design of the sacred architecture of the world is entirely governed by geometry.

I suggest that the similarities and differences between certain ancient monumental structures, created around the world at different times by different cultures, are best explained by a remote common ancestor civilization that left a legacy of ideas and knowledge in which they all shared, which their priests, shamans, and sages sought to preserve, and which they in due course deployed in their own different ways.

In summary, therefore, just 3 years of research between 2009 and 2012 witnessed a profound change in archaeological understanding of the geoglyphs of the southwestern Amazon. Previously they’d been thought to be just 750 years old; now, without any real attention being drawn to the implications, they’d become 2,000 years old.

The two other dates come from Severino Calazans. Again, there are margins of error, but these dates were, respectively, 1211 BC (from Unit 5) and 2577 BC (from Unit 3)48—the latter suggesting that this geoglyph might not only have the same footprint as the Great Pyramid of Egypt but might also be about the same age.

IT’S A CURIOSITY—I CLAIM nothing more at this point—that the square enclosure ditch at Severino Calazans shares the ground plan, base dimensions, and cardinality of the Great Pyramid of Egypt, as well as a carbon date from the epoch of the Great Pyramid. That epoch, moreover, around 2500 BC, coincides and overlaps with the megalithic epoch in Europe, so another curiosity is the way that the circular geoglyphs of Amazonia resemble “henges”—the circular embankments with deep internal ditches that surround the great stone circles of the British Isles.

It is NOT my purpose here to insinuate that the Amazonian geoglyphs were in any way inspired by Britain’s stone circles, or by the Great Pyramid of Egypt or by other known Old World monuments—or, for that matter, vice versa. Where there are similarities, my suggestion is that it might be more fruitful to look for their origins in a remote ancestral civilization that passed down a common inheritance all around the globe—an inheritance of knowledge, an inheritance of science, an inheritance of “earth-measuring” that was then put into practice in many different environments by the many different cultures receiving it.

More research was done, and of the roughly 200 prehistoric sites identified across the state of Amapá, 30 were found to have megalithic monuments of one kind or another.

Rego Grande. There, the principal stone circle, which has a diameter of 30 meters, consists of 127 upright megaliths. Brought from a quarry 3 kilometers away, the megaliths weigh up to 4 tons each and stand between 2.5 meters (just over 8 feet) and 4 meters (just over 13 feet) tall.6 Areas within the circle were used for elaborate human burials involving funerary urns and vases in a known pottery style of the region.

I’m concerned here, rather, with the manifestation of a legacy of ideas that may be of Ice Age antiquity—ideas involving geometry and ideas also very much involving astronomy. It’s the ideas that matter, whether we encounter them in the Amazon, or at Serpent Mound in Ohio, or at Angkor in Cambodia, or at Stonehenge in the British Isles, or among the monuments of Egypt’s Giza plateau. If mechanisms to carry, preserve, and transmit them down the generations have been introgressed into the local cultural DNA, then I see no reason why they should not manifest, and reveal their fundamental similarities, wherever and whenever conducive circumstances arise.

Coined by Richard Dawkins in his 1976 book The Selfish Gene,10 the word “meme” refers to “An element of a culture or system of behavior passed from one individual to another by imitation or other non-genetic means.”

In the case of Stonehenge, Serpent Mound, and Rego Grande, the meme concerns the orientation of the sites—which in all three cases honors the sun on the June and December solstices.

The total number of geometric ditched enclosures discovered in the southwestern Amazon survey area had increased from “over 210,” the figure on record in 2009, to “over 450” by 2017.

Then in 2018 a further study by Denise Schaan and colleagues reported an extension of the survey area across much of the southern rim of the Amazon basin: The results show that an 1800 km stretch of southern Amazonia was occupied by earth-building cultures.34 In one area alone, the Upper Tapajos Basin, 81 previously unknown pre-Columbian sites were discovered, with a total of 104 earthworks. Among them were many complex enclosures including one, 390 meters in diameter, featuring 11 mounds circularly arranged at the center of the enclosure. The researchers suggest that at least 1,300 further sites remain hidden within the jungles of the Amazon’s southern rim—a number, they add, that is “likely to be an underestimation”37 while “huge swaths of the rainforest are still unexplored.”

Given that such civilizations existed in ancient Amazonia, and clearly had the capacity to manifest their ideas in great public projects, it is intriguing that the end result was the vigorous, flamboyant, and extensive expression of the very same architectural, astronomical, and geometrical “memes” that characterize sacred architecture in many other parts of the world, and at many different periods.

Nonetheless, what Labre tells us feels significant. He didn’t see the geoglyphs, which were then entirely overgrown by jungle, but he was in the midst of them on August 17, 1887, when he stayed overnight at an Aroana village called Mamuceyada. He describes there being, as well as plantations, “about 200 inhabitants … a form of government, temples and a form of worship”—from which, together with “knowing the name of the idols,” women were excluded. Of particular importance and relevance here is Labre’s report: The idols are not of human form, but are geometrical figures made of wood and polished. The father of the gods is called Epymara, his image has an elliptical form, and is about 16 inches high…. Although they have “medicine-men” charged with religious duties and remaining celibates, the chief is nevertheless pontifex of the church.

Here in a landscape mysteriously inscribed in antiquity with vast geometrical earthworks, at a time when the earthworks themselves had long since been swallowed by jungle, we find a Native American tribe whose gods take the form of polished wooden “geometrical figures.” The tribal chief is the religious leader but there are also “medicine-men” who likewise have religious duties. It already sounds exactly like the sort of institution for the replication and transmission of geometric memes that I proposed as a hypothesis earlier, but it gets even more interesting when the shamans involved, and often the population, are drinking ayahuasca.

It is a phenomenon in itself that the same memes appear again and again among seemingly unrelated cultures of both the Old World and the New World, separated sometimes not only by thousands of miles but by thousands of years.

What would help would be a much more thorough and detailed archaeoastronomical survey of Rego Grande, and of other stone circles in its vicinity, than has already been undertaken.

All four of these squares—the two at Fazenda Parana, the one at Severino Calazans, and of course the Great Pyramid itself—are cardinally oriented, that is, their sides face true north, south, east, and west. The most basic and obvious of the cosmic alignments shared across these sites are therefore to the celestial north and south poles (the points on the celestial sphere directly above the earth’s geographic north and south poles, around which the stars and planets appear to rotate during the course of the night1), and to the points of sunrise and sunset on the spring and autumn equinoxes (when the sun rises perfectly due east and sets perfectly due west).

We’ve also seen that other great earthworks of the Amazon feature strong northwest-to-southeast orientations. This would put the investigation of possible solstitial alignments and also of “lunar standstill alignments” (of which more in part 5) at the top of the list of priorities if any proper archaeoastronomical survey should ever be undertaken.

That it should then have later iterations in different media, such as the stone circle at Rego Grande and the great cosmically aligned geoglyphs at Severino Calazans and Fazenda Parana, should not surprise us.

We are dealing, I believe, with deliberately created memes here—memes that have a deeply mysterious purpose and that function in ineffable ways. They are transmitted by repetition and replication, which explains their similarities. But cultures, once separated, tend to evolve and develop in their own distinctive and quirky fashion. We can therefore expect that not only the media and materials through which the memes are made manifest, but also their local interpretation, will vary greatly through time and between one part of the world and another while nonetheless retaining a constant core of unvarying central ideas.

Not only was there no evidence of warfare, but actually very little at all in the way of archaeological materials—pottery, figurines, refuse, et cetera—that would help to decipher the use, meaning, and purpose of the glyphs. The consensus now, therefore, is that they were created for “ritual,” “spiritual,” “religious,” and “ceremonial” purposes.

“Shamans.” This word is NOT derived from, or used in, any Amazonian language. It comes, instead, from the Tungus-Mongol noun saman, meaning, broadly, “one who knows.”

The Tungus word entered Western languages through early travelers’ enthusiastic written reports and has subsequently continued to be applied in all parts of the world where systems very similar to Tungus shamanism have been found.

It is the shaman—usually a man but sometimes a woman—who stands at the heart of these systems. And what all shamans have in common, regardless of which culture they come from or what they call themselves, is an ability to enter and control altered states of consciousness. Often, but not always, psychedelic plants or fungi are consumed to attain the necessary trance state. Shamanism, therefore, is not primarily a set of beliefs, nor the result of purposive study. It is, first and foremost, mastery of the techniques needed to attain trance and thus to occasion particular kinds of experiences—shamans call them “visions,” Western psychiatrists call them “hallucinations”—that are then in turn used to interpret events and guide behavior:

The true shaman must attain his knowledge and position through trance, vision and soul-journey to the Otherworld. All these states of enlightenment are reached … during a shamanic state of consciousness, and not by purposive study and application of a corpus of systematic knowledge.

Underlying the whole notion of soul-journeys to the otherworld is a model of reality that is diametrically opposed in every way to the model presently favored by Western science. This remotely ancient shamanistic model holds our material world to be much more complicated than it seems to be. Behind it, beneath it, above it, interpenetrating it, all around it—sometimes symbolized as being “underground” or sometimes “in the sky”—is an otherworld, perhaps multiple otherworlds (spirit worlds, underworlds, netherworlds, etc.) inhabited by supernatural beings.

Meanwhile, the key point, standing right at the heart of the matter and nonsensical to “rational” Western minds, is the notion that the human condition requires interaction with powerful nonphysical beings. Across much of the Amazon the nexus that facilitates such interaction is the extraordinary visionary brew ayahuasca, a plant medicine that has been in use among the indigenous peoples of this vast region for unknown thousands of years. Its active ingredient, derived from the leaves of the chacruna shrub (botanical name Psychotria viridis) is dimethyltryptamine—DMT—an immensely potent hallucinogen. It is from the other ingredient, however, derived from the Banisteriopsis caapi vine, that the brew gets its name.

It is, in my view, a remarkable scientific feat that such a highly effective combination of just 2 out of the estimated 150,000 different species of plants, trees, and vines in the Amazon was discovered by mere trial and error.

Ayahuasca itself is said to be a “doctor,” possessing a strong spirit, and is considered to be “an intelligent being with which it is possible to establish rapport, and from which it is possible to acquire knowledge and power.”11

In a follow-up paper, published in American Anthropologist in August 2017, Saunaluoma and Virtanen take their analysis much further, proposing that the geoglyphs “were systematically constructed as spaces especially laden with visible and invisible entities.” Their argument is that, regardless of scale or medium, the whole process of materializing visionary iconography, in particular geometric patterns, is “related to the fluid forms inhabiting the Amazonian relational world. Different designs ‘bring’ the presence of nonhumans to the visible world of humans for a number of Amazonian Indigenous peoples, while perceiving geometric designs in Amerindian art as paths from one dimension to another allows a viewer to shift between different worlds, from the visible to the invisible.”

“The lines embody a package of ways in which beings move, travel, communicate between themselves, and transmit knowledge, objects, and powers. These paths exist everywhere, from macro to micro scales. Geometric designs are thus about certain ways of thinking, perceiving, and indicating invisible aspects so they can be seen.”27

Saunaluoma and Virtanen further establish that, to the Shipibo-Conibo, the geometric lines open “a window to the macrocosmos” and allow “macro-cosmic order” to be “iconically sketched in the microcosmos here, in landscape designs.” As above, so below.

Once again I suggest we are looking at the remnants of an advanced system that propagates itself through time and across cultures with powerful memes among which geometry and cosmic alignments take a large share. We do not know where or when this system originated. In the ancient Amazon, however, to a greater degree than anywhere else, its dissemination became integrated with the use of vision-inducing plants—and there, up to the present day, the secrets of how to use these plants have been preserved and passed down within indigenous shamanic traditions.

First, what’s being described is dressed up in the language and imagery of myth and may of course be “just a myth.” What it sounds like, however, is a mythologized account of a settlement mission in the Amazon in which a group of migrants were accompanied by a number of more sophisticated people considered to be “supernatural” or “superhuman.”

Second, the Tukano origin myth makes it completely clear that the “supernaturals” departed after they had completed their work of preparing the Amazon for settlement by the migrants in the serpent canoe. Third, we are led to understand that direct contact between humanity and the spirit world would thereafter be broken. However, a portal—ayahuasca—through which humans could still travel to the spirit world, and benefit from its teachings, would be left open.

Such alignments are not the only memes to have propagated from a so far unidentified common source. Intimately connected to them are other ideas that went “viral” in both the Old World and the New, and that therefore somehow transcended the Ice Age separation of peoples.

Considered as a pyramid—and it is indeed a form of step pyramid—it comes third in the Americas after the Pyramid of Quetzalcoatl at Cholula and the Pyramid of the Sun at Teotihuacan, both of which are stone-reinforced monuments and significantly taller.

Considered as an earthwork, and echoing that early explorer’s report, Monks Mound has been described as “stupendous in many ways. It is the tallest mound, covers the most area and contains the most volume of any prehistoric earthen monument in the Americas.” It is, moreover, part of a giant complex with multiple different elements including more than 100 subsidiary earthen mounds, the archaeological traces of what was once a spectacular circle of huge wooden posts (known as Cahokia’s “Woodhenge”), a spacious central plaza, and an 18-meter-wide, 800-meter-long earthwork causeway running arrow-straight between raised embankments.

Enigmatically, but quite deliberately set to an azimuth of 005 degrees—that is, 5 degrees east of true north—it is this causeway, referred to by archaeologists as the “Rattlesnake Causeway,” that defines Cahokia’s principal axis,14 giving the site a certain ambiguity and adding to its air of mystery. Every mound and earthwork is set out upon the ground in strict relation to it, with clusters of structures, dominated by Monks Mound itself, running south to north and other clusters running west to east.

William Romain, whose work at Serpent Mound we encountered in part 1, considers Monks Mound to have been conceived by its designers as a true “axis mundi”—intended to serve as a junction point between heaven and earth. He reminds us of the traditional shamanistic spiritual system of the Native American peoples of the Eastern Woodlands—the region of Cahokia. According to this system, the universe is comprised “of an Above World, This World, and Below World…. Connecting these realms is a vertical vector … the axis mundi that enables shamans to move between cosmic realms…. The axis mundi can be symbolically represented by any number of vertical elements such as a pole, tree, column of smoke, mountain, pyramid, or mound.”

Subsequent excavations revealed that no fewer than five woodhenges had been built on the same site over a period of a couple of centuries in order to accommodate increases in the size and shape of the Mound itself, which affected crucial solar sight lines.

The objective of every realignment and rededication was that an observer at the center of the post circle, looking due east across the “front sight” of a specially placed equinox marker post, should see the sun’s disk appear above the slope of the southern terrace of Monks Mound—an arrangement, says Romain, that establishes an east–west solar-oriented line across the entire Cahokia complex:

[Photograph: Equinox sunrise above the slope of the southern terrace of Monks Mound, photographed from Woodhenge by William Romain.]

The result is that Monks Mound is visually connected to the Above World vis-à-vis the rising sun and its location on the east–west sightline that intersects the major site axis. In this way, Monks Mound is positioned at a center place.19 That assertion and manifestation of centrality is reconfirmed by two other posts at Woodhenge that serve as front sights targeting the horizon azimuths of the summer and winter solstice sunrises.2

Unexplained so far, however, is why Cahokia’s designers made a deliberate choice not to align the main axis of their premier site to the cardinal directions of earth and sky but instead offset it by 5 degrees east of true north.

William Romain offers an intriguing answer. The builders of Cahokia, he argues, were geometricians who made use of a special rectangle, known as a “root-2 rectangle,” in planning the layout of the city.

If you take such a rectangle, orient it to true north (0 degrees azimuth), and then rotate it eastward by 5 degrees to match the azimuth of Cahokia’s principal axis, its diagonals turn out to align closely with important solar and lunar events as viewed from Monks Mound—specifically, the summer solstice sunrise at azimuth 59.7 degrees, the winter solstice sunset at azimuth 239.3 degrees, the moon’s maximum southern rising position at azimuth 130.1 degrees, and the moon’s maximum northern setting position at azimuth 307.1 degrees.
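As a rough check on this geometry, the diagonal directions of a root-2 rectangle rotated 5 degrees east of north can be computed directly. This is a sketch under flat-horizon assumptions of my own, not Romain's procedure; his quoted azimuths for the actual rise and set events will also reflect local horizon elevation, atmospheric refraction, and (for the moon) parallax, so the pure-geometry numbers land close to but not exactly on his figures.

```python
import math

# Diagonal of a 1 : sqrt(2) ("root-2") rectangle with its short side facing
# north, measured as an angle east of the short axis: atan(sqrt(2)) ≈ 54.7 deg.
diag = math.degrees(math.atan(math.sqrt(2)))

rotation = 5.0  # Cahokia's principal axis: 5 degrees east of true north

azimuths = {
    "NE diagonal": (diag + rotation) % 360,        # ≈ 59.7  (summer solstice sunrise)
    "SE diagonal": (180 - diag + rotation) % 360,  # ≈ 130.3 (moon's max southern rise)
    "SW diagonal": (180 + diag + rotation) % 360,  # ≈ 239.7 (winter solstice sunset)
    "NW diagonal": (360 - diag + rotation) % 360,  # ≈ 310.3 (moon's max northern set)
}

for name, az in azimuths.items():
    print(f"{name}: {az:.1f} degrees")
```

The flat-geometry values fall within a fraction of a degree of the quoted solar azimuths and within a few degrees of the lunar ones, consistent with horizon and parallax corrections on the observed events.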

Other great earthwork complexes were built in the Mississippi River basin incorporating complex geometries based almost exclusively on lunar alignments. Two of the most significant such sites to have survived, at least in part, into the twenty-first century are the High Bank Works and Newark Earthworks, both in Ohio. High Bank Works is located near the town of Chillicothe, about 40 miles northeast of Serpent Mound, and Newark Earthworks stands about 60 miles farther to the northeast near the town of Newark.

NEWARK AND HIGH BANK HAVE an almost technological feel to them, resembling gigantic printed circuit boards or wiring diagrams from the innards of some immense and ineffable instrument.

William Romain is more specific. In his view the creators of this extraordinary and in some ways rather otherworldly site “were intrigued by the variety of possible relationships between a circle and a square…. The idea that seems to be expressed is that, for every circular enclosure, a corresponding square … can be related to the circle by geometric means.”25 “Squaring the circle”—constructing a square with the same area as a given circle—was of course a geometrical exercise of great interest to the master mathematicians of ancient Babylon, Egypt, and Greece.26
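For clarity on the term itself (a standard mathematical gloss, not anything specific to Romain's analysis): “squaring the circle” asks for a square whose area equals that of a given circle. For a circle of diameter D, the required side s follows at once:

```latex
\text{Area of circle} = \frac{\pi D^{2}}{4},
\qquad
s^{2} = \frac{\pi D^{2}}{4}
\;\Longrightarrow\;
s = \frac{\sqrt{\pi}}{2}\,D \approx 0.886\,D .
```

Calculating that side length is trivial; what the ancient geometers pursued, and what Lindemann's 1882 proof that π is transcendental finally ruled out, was an exact construction of it by compass and straightedge alone.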

Astronomer Ray Hively and philosopher Robert Horn of Indiana’s Earlham College, whose comprehensive work at Newark and High Bank in the 1980s provided the foundation for all subsequent studies, realized that the same length of 321.3 meters had also been used by the builders to lay out the Octagon.

The conclusion suggested by the geometry of the Observatory Circle–Octagon combination is that both figures have been carefully and skilfully constructed from the same fundamental length.28

This unit of measure, now known by the unfortunate yet strangely appropriate acronym OCD (for Observatory Circle Diameter), was also deployed at High Bank, which, as Hively and Horn remind us, is “the only other circle-octagon combination known to have been constructed by the Hopewell.”29 It cannot be a coincidence, then, that High Bank turns out to conform to a geometric pattern based on a fundamental length of 0.998 OCD.30

Perhaps most striking of all is the fact, noted by archaeologist Bradley Lepper, that “the main axis of High Bank Works—that is, a line projected through the center of the Circle and the Octagon—bears a direct relationship to the axis of Newark’s Observatory Circle and Octagon. Although built more than 60 miles apart, the axis of High Bank Works is oriented at precisely 90 degrees to that of Newark earthworks. This suggests a deliberate attempt to link these sites through geometry and astronomy.”31

Lepper himself makes a strong case that this connection might have been more than symbolic when he presents evidence for the former existence of a causewayed road with some stretches of its parallel walls still in place as late as the mid-nineteenth century. He calls it “the Great Hopewell Road” and speculates that it was perhaps a pilgrim route that once ran between Newark and High Bank.

As a motive for the memorialization of solstitial and equinoctial alignments, however, the arguments in favor of a practical immediate agricultural payoff don’t adequately account for the enormous effort involved in the construction of many of the sites. After all, the same calendrical functions could have been realized almost as effectively and much less expensively with pairs of aligned poles.

The notion that a reliable agricultural calendar was the primary motive for skywatching also fails to explain why we find the same focus on the rising and setting sun on the solstices and the equinoxes in distinctly pre-agricultural sites such as Painel do Pilão in the Amazon, dating back more than 13,000 years.

Likewise, though they can only have been the product of detailed observations of the heavens and would have required meticulous record-keeping over many generations, the lunar alignments manifested in the great earthworks at Newark and High Bank have no obvious practical function in terms of harvests—or, indeed, of any other utilitarian pursuit. Once again, though, what they do require of those who seek deeper knowledge of them is a study of the heavens.

If we make use of such software to observe the behavior of the moon over, say, a period of a century, we will quickly notice that its rising and setting points along the eastern and western horizons are locked to a cycle shifting from farthest north to farthest south and back to farthest north again every month. As more time passes, however, we will also observe that these monthly “boundaries” on the moon’s rising and setting points aren’t fixed from year to year but instead widen and narrow over an 18.6-year cycle. If they are at their widest (“Maximum Extreme”) today, then they will be at their narrowest (“Minimum Extreme”) in 9.3 years and at their widest again 9.3 years after that.

Eight prominent directions are therefore implicated in these celestial events. Four target the maximum and minimum monthly boundaries north of east and the maximum and minimum monthly boundaries south of east between which the moon can rise during its 18.6-year cycle. The other four do the same for moonset on the western horizon. On each occasion as it reaches one of its extremes the moon’s constant motion stops—literally comes to a standstill—before it reverses the direction of its oscillation for the next 9.3 years.

The geometry of the Newark Earthworks—and of High Bank, too—turns out to be very closely fitted to these obscure celestial events, known to astronomers as “lunar standstills,” knowledge of which would appear to have no practical contribution to make to the necessities of everyday life.
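The eight extreme directions described above follow from simple spherical geometry. The sketch below is my own illustration, not Hively and Horn's method; the latitude for Newark, Ohio, is an assumed approximation, and refraction, lunar parallax, and horizon elevation are ignored.

```python
import math

LAT = 40.04          # approx. latitude of Newark, Ohio (assumed for illustration)
OBLIQUITY = 23.44    # tilt of Earth's axis, degrees
LUNAR_INCL = 5.14    # inclination of the Moon's orbit to the ecliptic, degrees

def rise_azimuth(declination_deg, latitude_deg):
    """Geometric rising azimuth, in degrees east of true north, for a body of
    the given declination; ignores refraction, parallax, and horizon height."""
    d = math.radians(declination_deg)
    p = math.radians(latitude_deg)
    return math.degrees(math.acos(math.sin(d) / math.cos(p)))

# Extreme lunar declinations over the 18.6-year nodal cycle:
major = OBLIQUITY + LUNAR_INCL   # ±28.58 deg at a maximum ("major") standstill
minor = OBLIQUITY - LUNAR_INCL   # ±18.30 deg at a minimum ("minor") standstill

for label, dec in [("major standstill, northern moonrise", major),
                   ("minor standstill, northern moonrise", minor),
                   ("minor standstill, southern moonrise", -minor),
                   ("major standstill, southern moonrise", -major)]:
    az = rise_azimuth(dec, LAT)
    print(f"{label}: azimuth {az:.1f} deg (moonset mirrors at {360 - az:.1f} deg)")
```

At Newark's latitude this puts the four moonrise extremes at roughly 51, 66, 114, and 129 degrees east of north, with the four moonset extremes mirrored across the meridian—the eight directions to which the earthwork geometry is fitted.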

THE GREAT CONTRIBUTION OF HIVELY and Horn’s 1982 paper in Archaeoastronomy was that it demonstrated how precisely, and how cleverly, Newark celebrates and embraces the lunar standstills.

And just as at Newark, where deliberate asymmetries were introduced into the side lengths and angles of the Octagon to achieve more perfect lunar alignments, so, too, we find that one of the eight walls of High Bank’s octagon is 16 percent longer than it “should” be to preserve perfect geometrical symmetry.

Recent research by Hively and Horn has raised the intriguing possibility that the very reason Newark’s earthworks are where they are is that four prominent “high-elevation overlooks” in the surrounding landscape serve as natural front and back sights targeting sunrise and sunset on the winter and summer solstices.54 It’s unlikely to be an accident that the point of intersection of these natural alignments “lies in the central region of the earthworks and is equidistant (within 2 percent) from the centers of the Observatory Circle and the Great Circle.”55

The choice of Newark’s natural setting feels designed and deliberate.

A number of different “mound-building cultures” have been identified by archaeologists, who have assembled them into categories based on period, location, types of pottery, types of tools, arts and crafts, and other criteria.

You will not go far in learning about the mound-builders without encountering references to the Woodland Period, which is in turn divided into Early Woodland (1000 BC to 200 BC), Middle Woodland (200 BC to AD 600–800), and Late Woodland (AD 400 to AD 900–1000).

The Adena culture built its mounds and earthworks during the Early Woodland period. The Hopewell culture built its mounds and earthworks during the Middle Woodland period. The Coles Creek culture was prominent during the Late Woodland period. The Late Woodland period in turn overlaps with the Early Mississippian period.

But these are no more than artificial constructs that help tidy-minded archaeologists preserve a sense of order and control over otherwise dangerously unruly data—and, besides, we must question whether the types of utensils and tools used by a culture actually tell us anything of value.

Undoubtedly many different Native American cultures, speaking many different languages, were involved in the construction of the mounds. Undoubtedly their arts and crafts and tools and pottery differed. Undoubtedly they expressed themselves in many different ways. Yet when it came to their earthworks, for some mysterious reason, they all did the same things, in the same ways, repeatedly reiterating the same memes linking great geometrical complexes on the ground to events in the sky.

Poverty Point, a very mysterious archaeological site in northeast Louisiana, is home to the second-biggest earthwork mound in North America. Built around 1430 BC,14 a century before the pharaoh Tutankhamun took the throne in ancient Egypt, the mound is often referred to as “Bird Mound.”

All archaeologists now agree that the half dozen mounds and other earthworks at Poverty Point are man-made.

In 1980, astronomer Kenneth Brecher coauthored a paper in the Bulletin of the American Astronomical Society titled “The Poverty Point Octagon: World’s Largest Prehistoric Solstice Marker.”

The western half is intact and well-defined. It is intersected in four places by broad avenues, radiating out from a common center…. The west-northwest and west-southwest avenues have astronomical azimuths of approximately 299° and 241° respectively, accurately pointing to the summer and winter solstice sunset directions at the latitude of the site (32°37′ N).
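The azimuths quoted here can be checked with the same elementary spherical-astronomy relation used for any rise or set point. This is a hedged sketch, not the paper’s own method: it assumes an ideal flat horizon, no atmospheric refraction, and the present-day solar declination at the solstices:

```python
import math

def sunset_azimuth(latitude_deg, declination_deg):
    """Geometric sunset azimuth (degrees east of north) on an ideal flat
    horizon, ignoring atmospheric refraction and horizon elevation."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    # At rise/set: cos(azimuth) = sin(declination) / cos(latitude)
    rise_az = math.degrees(math.acos(math.sin(dec) / math.cos(lat)))
    return 360.0 - rise_az  # sunset mirrors sunrise across the meridian

LAT = 32.62        # Poverty Point, 32 deg 37 min N, in decimal degrees
OBLIQUITY = 23.44  # solar declination at the solstices today

summer = sunset_azimuth(LAT, OBLIQUITY)   # ~298, close to the reported 299
winter = sunset_azimuth(LAT, -OBLIQUITY)  # ~242, close to the reported 241
print(round(summer), round(winter))
```

The degree-scale differences from the published figures are about what refraction and local horizon height typically contribute.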

Completed in 2011, the survey revealed the traces of no fewer than thirty great circles of wooden posts that had once stood in the plaza east of the geometric ridges, “some built only inches away from the previous ones, as if the posts were erected, removed sometime later, moved a slight distance, then rebuilt.”

One possibility, surely worthy of further investigation, is that what the survey found were the archaeological fingerprints of a series of “woodhenges” at Poverty Point. Very much like the Woodhenge at Cahokia—also constantly moved and adjusted, as we saw in chapter 18—they were perhaps used in conjunction with other features to create sight lines that would manifest sky-ground hierophanies at the solstices and equinoxes.

Poverty Point is “a center place,” Romain and Davis assert, “and also a place of balance in the sense that, in addition to the sunset alignments … conceptually opposite sunrise alignments are also found.”

The overall achievement—the “seamless integration of site orientation, celestial alignments, bilateral symmetry of design points, internal geometry [and] regularities in mensuration”—leads Romain and Davis to conclude that “Poverty Point was built according to a preconceived master plan … or design template … that integrated astronomical alignments, geometric shapes and local topography.”

Excavations by archaeologists Joe Saunders and Thurman Allen have established that the structure known as Lower Jackson Mound is in fact extremely ancient—not from the Poverty Point era around 1700 BC at all, but from fully 3,000 years earlier, specifically between 3955 and 3655 BC.

“That Poverty Point builders were aware of ancient mounds is beyond doubt,” comments John Clark, professor of anthropology at Brigham Young University:

There must have been “an enduring traditional, if not direct ancestral, connection between the Old People and later groups.” This connection, he argues, is “demonstrated by the incorporation of the Middle Archaic Lower Jackson Mound into the principal earthwork axis at Poverty Point. Actually, Lower Jackson Mound was not merely incorporated—it furnished the alpha datum, the anchor, a vivid case of material or implicit memory.”

The suggestion, therefore, is that below the radar of archaeology more than two millennia of continuously transmitted knowledge connected the Coles Creek culture to the Poverty Point culture.

The Judaic faith, for example, carries down a body of traditions and beliefs that are at least 3,000 years old. Hinduism has roots going back to the Indus Valley civilization more than 5,000 years ago. Both religions also create architecture, the design of which is directly influenced by their beliefs and traditions.

There’s no reason in principle why the same sort of thing should not have happened in North America.

But there’s a problem. In the cases of Hinduism and Judaism we have unimpeachable evidence of continuity. Through sacred texts, through teachings passed from one generation to the next, and through cherished and vibrant traditions, there are no broken links in the chain of transmission. Neither Hinduism nor Judaism has ever abruptly vanished from the face of the earth, left zero traces of its presence for millennia, and then equally abruptly reappeared in full flower.

As we’ll see, however, this appears to be exactly what happened in North America.

THE REMOTE EPOCH BETWEEN 6,000 AND 5,000 years ago out of which Lower Jackson Mound emerges is an important one in the story of civilization. It was toward the end of this same millennium that the civilizations of ancient Mesopotamia and ancient Egypt took their first confident steps on the stage of history. They, too, built mounds—for example, Egypt’s predynastic mastabas or the tells of Uruk-period Mesopotamia. They, too, deployed geometry and astronomical alignments in the project of sacralizing architectural spaces. And they, too, participated in an extraordinary and seemingly coordinated burst of early construction—for just like the mounds of ancient Egypt and ancient Mesopotamia, Lower Jackson Mound is not an isolated case but part of what may once have been a very numerous and widespread group of monuments.

Very few of these sites have yet been subject to radiometric dating, but of the 16 that have, with a combined total of 53 mounds and 13 causeways, all are more than 4,700 years old2—and many are much older than that.

As a result, says Joe Saunders, a leading specialist in this field, “the existence of Middle Archaic mound-building is no longer questioned.”

Not a single item has been excavated at Watson Brake that in any way suggests the presence of an advanced material culture.

They were hunter-gatherers, not agriculturalists, and although they did gather plants that would later be domesticated, they did not domesticate these plants themselves. In other words, they lived simply, close to the earth, and were in every way a normal and representative population for this part of North America 5,000 or 6,000 years ago.13 In every way, that is, except one. They built mounds.

Joe Saunders writes: The earliest … earthworks in the Lower Mississippi Valley appear to have been made by autonomous societies.

But there must have been some communion among the autonomous societies because there are too many shared traits that cross the vast expanses of the Lower Mississippi Valley, and there is no evidence of other monuments being made elsewhere.

It was his paper, “A Mound Complex in Louisiana at 5400–5000 Years Before the Present,” published in Science on September 19, 1997, that effectively put Watson Brake on the map.

“I know it sounds pretty Zenlike,” Saunders speculated when he was asked this question in 1997, “but maybe the answer is that building them was the purpose.”

MAYBE. BUT I’M TRYING TO envisage how the community leaders or influencers would have sold that to the population. Somehow, “We want you to build these mounds because building them will be a good thing for you to do” doesn’t sound like a winning line to me.

After years of field research, excavations, and on-site measurements, Kenneth Sassaman of the Laboratory of Southeastern Archaeology and Michael Heckenberger of the University of Florida are convinced that at least three of these sites—Watson Brake, Caney Mounds, and Frenchman’s Bend—share the same basic design: The plan we infer from the spatial arrangement of Archaic mounds consists of a series of proportional and geometric regularities, including (1) a “terrace” line of three or more earthen mounds oriented along an alluvial terrace escarpment; (2) placement of the largest mound of each complex in the terrace-edge group, typically in a central position; (3) placement of the second-largest mound at a distance roughly 1.4 times that between members of the terrace-edge group; (4) a line connecting the largest and second-largest … mound (herein referred to as the “baseline”) set at an angle that deviates roughly 10 degrees from a line orthogonal to [i.e., at right angles to] the terrace line; and (5) an equilateral triangle oriented to the baseline that intercepts other mounds of the complex and appears to have formed a basic unit of proportionality.

It is probably not a coincidence that at Watson Brake the distance along the horizon from where the sun rises (or sets) on the winter solstice to where it rises (or sets) on the summer solstice defines an arc of 59 degrees…. Their triangle was probably derived from [this].

AS AT SERPENT MOUND, AS at Cahokia, as at Newark, as at High Bank, and as at Poverty Point, the primary concern of the designers of Watson Brake seems to have been to manifest, memorialize, and consummate the marriage of heaven and earth at key moments of the year.

“Even if the alignments were not to the sun,” Davis writes, “the ability to establish five perfectly parallel, nearly equidistant sightlines across several hundred meters would be remarkable. The sightlines had to have preceded construction. Their pattern suggests a master site plan, with construction to the plan taking years, or perhaps centuries, to complete.”

Impressively, the alignments target the sun not exactly where it rises and sets today but rather precisely where it would have risen and where it would have set in the epoch of 3400 BC—which, at the latitude of Watson Brake, was at azimuth 119 degrees for the winter solstice sunrise and at azimuth 299 degrees for the summer solstice sunset.
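That epoch shift is itself easy to illustrate: Earth’s axial tilt was slightly greater in 3400 BC than today, so the solstice sun rose and set a little farther from due east and west. A rough sketch, assuming a latitude of about 32.4° N for Watson Brake and an obliquity of about 24.0° for that epoch (both my assumptions, not figures from the text), and ignoring refraction and horizon elevation:

```python
import math

LAT_WATSON_BRAKE = 32.38  # assumed site latitude (degrees north)
OBLIQUITY_3400BC = 24.0   # approximate axial tilt in 3400 BC (vs 23.44 today)

def rise_az(lat_deg, dec_deg):
    # Geometric rise azimuth east of north: cos(az) = sin(dec) / cos(lat)
    return math.degrees(math.acos(math.sin(math.radians(dec_deg)) /
                                  math.cos(math.radians(lat_deg))))

winter_sunrise = rise_az(LAT_WATSON_BRAKE, -OBLIQUITY_3400BC)
summer_sunset = 360.0 - rise_az(LAT_WATSON_BRAKE, OBLIQUITY_3400BC)
print(round(winter_sunrise), round(summer_sunset))  # ~119 and ~299
```

Both values land within a degree of the azimuths cited for the 3400 BC epoch, which is the point of the exercise; precise work would also have to account for refraction and the local horizon.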

Watson Brake appears to be the earliest known celestially aligned mound complex in North America. That’s a big deal.

The mystery, although the sites so far investigated “show no evidence for the development of astronomical knowledge over time,” is that “the people who directed the construction of Watson Brake … had an advanced knowledge of the solar and probably lunar cycles, and they used this knowledge to design and engineer their sites. Who were these directors, and how did they get others to build the sites one container of earth at a time?”

How were these “directors” able to manifest geometrical and astronomical knowledge, and advanced combinations of the two, more than 5,000 years ago when no prior evidence of the existence of such abilities has been found in North America at such an early date?

One minute they’re not there. The next, almost magically, they are. And then, at once, the Middle Archaic mound-building phenomenon bursts into full bloom.

Until sometime around 2700 BC. That was when, for some unexplained reason, the ancient sites were all abandoned and the whole mound-building enterprise came to an abrupt and complete halt.

The abandonment of an ideology or change in ethos can occur simultaneously within a diverse range of environments. Also the absence of environmental change would be consistent with the documented continuity in economy from Early to Late Archaic periods—before, during, and after mound building.

For the next thousand years not a single mound was built and not a single earthwork was raised. There’s not a hint of geometry or of monumental architecture. The only reasonable conclusion is that those skills had been utterly lost.

But then, as suddenly and mysteriously as the “mound-building movement” had vanished, it appeared again, at around 1700 BC, in the spectacular and sophisticated form of Poverty Point. All the old geometrical and astronomical skills were redeployed there—and by practiced hands—as though they’d been in regular use all along.

Despite the fact that different cultures were involved at different periods, every resurgence of mound-building was linked to the reiteration and reimagination of the same geometrical and astronomical memes. This was not “chance” or “coincidence.” Witness, for example, the way that Lower Jackson Mound was used as the base datum from which the entire geometry of Poverty Point was calculated.

Or, at a more human level, consider the case of the highly polished hematite plummet—a valuable item—that was made at Poverty Point at around 1500 BC but that some pilgrim carefully carried to the by then long-abandoned and deserted site of Watson Brake and deliberately buried half a meter deep near the top of Mound E.63 This kind of behavior—the incorporation of ancient sites into younger ones, pilgrimage, an offering—has the feel of a religion about it. Religious institutions have proved themselves throughout history to be extremely efficient vehicles for the preservation and transmission of memes across periods of thousands of years. It’s not unreasonable, therefore, to suppose that some kind of cosmic “sky-ground” religion lay behind the alignments to the solstices and the equinoxes at Watson Brake and at the other early sites—a religion sufficiently robust to ensure the continuous successful transmission of a system of geometry, astronomy, and architecture over thousands of years.

An enigma that I explore in all those books, but in the greatest detail in Heaven’s Mirror, is that traces of the same spiritual concepts and symbolism that enlighten the Egyptian texts are found all around the world among cultures that we can be certain were never in direct contact. Straightforward diffusion from one to the other is therefore not the answer, and “coincidence” doesn’t even begin to account for the level of detail in the similarities. The best explanation, in my view, is that we’re looking at a legacy, shared worldwide, passed down from a single, remotely ancient source.

There are many aspects to this legacy, but I believe its hallmark, as the reader knows by now, is a system of ideas in which geometry, astronomy, and the fate of the soul are all strangely entangled.

Seemingly with the intention of preparing its initiates for this afterlife journey, as Robert Bauval and I showed in our coauthored book Message of the Sphinx, the funerary texts also called for the construction of large-scale geometrical and astronomically aligned structures that were to “copy” or imitate on the ground a region of the sky known as the Duat—the ancient Egyptian name, often translated as “Netherworld,” for the realm of the dead.

The ruler of this Duat realm was the god Osiris, Lord of the Dead, whose figure in the sky was the majestic constellation that the ancient Egyptians called Sahu, and that we know as Orion.2 It is therefore not surprising, as a manifestation of this “as above so below” cosmology, that the three great pyramids of Egypt’s Giza necropolis are laid out on the ground in the form of the three stars of the belt of Orion.

Moundville in Alabama.

An excellent example of a powerful religious image was the hand and eye motif. Moundville’s “Rattlesnake Disk,” pictured on this noticeboard, offers us the best-known version, although numerous variations occur in pottery, copper, stone and shell artifacts. Stories passed down among various tribes tell of the dead entering the afterlife through an opening marked by a great warrior’s hand in the sky. One account describes that hand as the constellation we know as Orion with Orion’s belt as the wrist, its fingers pointing downwards. A faint cluster of stars in the center of the palm is a portal to the path of souls or path to the land of the dead. Researchers speculate that the hand and eye represent this constellation.8

The connection of the constellation of Orion to the land of the dead was a fundamental aspect of the ancient Egyptian religion and it felt weirdly like coming home—that comfortable intimacy of familiar territory—to find it here in a Native North American religion.

As a group the knotted serpents and the hand and eye are believed to be a representation of the night sky. The serpents are the ropes that join the earth and sky. In the palm of the hand is the portal or doorway through which the spirits of the dead can ascend the path of souls … a road or ribbon of light, the Milky Way, stretching out before the traveling souls. This river of light … deposits the souls, after a series of trials, into the realm of the dead.

Thus over time Moundville became, in the minds of its people, not only the symbolic gateway to the realm of the dead but also the materialized image of that sacred domain on earth.

There is a unifying metaphor which argues for a common core of belief across the Eastern Woodlands and Plains, and probably far beyond that area. That unifying notion is an understanding of the Milky Way as the path on which the souls of the deceased must walk.

Elsewhere Lankford reiterates that this belief system was by no means confined to the Plains, the Eastern Woodlands, and the Mississippi Valley. It is better understood, he argues, as part of “a widespread religious pattern” found right across North America and “more powerful than the tendency towards cultural diversity.”9 Indeed, what the evidence suggests is the former existence of “an ancient North American international religion … a common ethnoastronomy … and a common mythology.”

Ancient Egyptian notions of the soul can seem extremely complex at first glance. Indeed, according to the great authority on the subject, Sir E. A. Wallis Budge, formerly Keeper of Egyptian Antiquities at the British Museum, it’s not just a matter of one soul but of multiple souls—all of them separate from but in some way connected to the khat, or physical body—“that which is liable to decay.”

In Budge’s summary, these separate, nonphysical “souls”—perhaps “aspects of the soul” would be a better description—include notably:
The Ka, or “double,” that stays earthbound after death in the immediate vicinity of the corpse and the tomb.
The Ba, depicted as a bird or human-headed bird that can fly freely “between tomb and underworld.”
The Khaibit, or shadow.
The Khu, or “spiritual soul.”
The Sekhem, or “power.”
The Ren, or “name.”
The Sahu, or “spiritual body,” which formed the habitation of the soul.
The Ab, or heart, “regarded as the center of the spiritual and thinking life…. It typifies everything which the word ‘conscience’ signifies to us.” The heart, and what its owner has imprinted upon it by his or her choices during life, is the specific object of judgment in the Netherworld.

Here, too, we find at first a bewildering multiplicity.

The Quileute people of the US northwest coast believe that within every living human body there reside several souls that “look exactly like the living being and may be taken off or put on in exactly the manner as a snake sheds its skin.”18 These souls are an inner soul, called the “main, strong soul,” an outer soul, called the “outside shadow,” a life-soul, referred to as “the being whereby one lives,” and the “ghost” of the living person, “the thing whereby one grows.”19 Let’s note in passing that the Ancient Egyptian Book of the Dead declares in chapter 164: I have made for thee a skin, namely a divine soul.

As an exception to the general rule among Native American peoples, the Cherokee do not describe the Milky Way as the “Path of Souls” but refer to it, rather, as “Where the Dog Ran.”

Let’s note in passing that the High Priest of Heliopolis bore the title “Chief of the Astronomers” and is represented in tomb paintings and statuary wearing a mantle adorned with stars. It is therefore of interest that when ethnographers recorded the customs and beliefs of the Skidi Pawnee of Oklahoma in the nineteenth century, the Pawnee were reported to have shamans, raised to the rank of chiefs, who specialized in astronomy. In the archives of the Smithsonian Institution there is a photograph of one of these individuals, named His Chiefly Sun, and notably he is shown wearing a mantle adorned with stars.

Certainly the idea of architectural structures being used to create entrances to the otherworld was known throughout North America. The circular hole in the top of the Ojibway shaking tent, for example, was specifically meant to allow for “soul-flight travel to the Hole in the Sky and across the barrier to the spirit realm.”

Though they differ in the degree of engineering required, there is no difference in kind between the hole in the Ojibway tent and the star-shaft in the Great Pyramid—which likewise appears to have been intended to facilitate soul-travel to the sky across the barrier to the spirit realm. Similarly, although there is again a marked difference of degree, there is no difference in kind between the geometric, astronomically aligned structures of the Giza plateau and the geometric, astronomically aligned structures of the Mississippi Valley. All of them seem bound together by the single purpose of the triumph of the soul over death and by the means deployed to achieve that purpose.

I’M NOT SUGGESTING THAT THE religion of ancient Egypt was brought from there to ancient North America and I’m not suggesting that the religion of ancient North America was brought to ancient Egypt. I accept the scientific consensus that the Old World and the New World have been isolated from one another, with no significant genetic or cultural contacts, for more than 12,000 years.

In both cases we have a journey of the soul to a staging ground in the west, a “leap” to a portal in the constellation Orion, transition through that portal to the Milky Way, a journey along the Milky Way during which challenges and ordeals are faced, and a judgment at which the soul’s destiny is decided.

The similarities are too many and too obvious to be dismissed as mere “coincidences.”

In the realm of archaeology, E. A. Wallis Budge faced a comparable problem with similarities he had identified between the Mesopotamian deity Sin, a moon god, and the ancient Egyptian deity Thoth, also associated with the moon. The resemblances, in Budge’s view, are “too close to be accidental. It would be wrong to say that the Egyptians borrowed from the Sumerians or the Sumerians from the Egyptians, but it may be submitted that the literati of both peoples borrowed their theological systems from some common but exceedingly ancient source.”

Walter Emery, late Edwards Professor of Egyptology at the University of London, also looked into similarities between ancient Egypt and ancient Mesopotamia. He found it impossible to explain them as the result of the direct influence of one culture upon the other and concluded: The impression we get is of an indirect connection, and perhaps the existence of a third party, whose influence spread to both the Euphrates and the Nile….

From the end of the Ice Age until the time of Columbus, the remote common ancestor of the religions that would later blossom in the Nile and Mississippi River valleys must therefore be more than 12,000 years old. I suggest that this ancestral religion—perhaps system would be a better word—used astronomical and geometrical memes expressed in architectural projects as carriers through which it reproduced itself across cultures and down through the ages, and that it was a characteristic of the system that it could lie dormant for millennia and then mysteriously reappear in full flower.

Though it is not my purpose to argue this case here, the possibility that the system still hibernates in some form or another in the twenty-first century cannot be ruled out, nor the possibility that it might at some point be awakened again in a garb suited to its time. Indeed, might we not already be seeing the first intimations of this with the explosion of interest all around the world in ayahuasca as a teacher plant, and in the parallel growth in public exposure to the initiating geometries of ayahuasca-inspired art?

BECAUSE OF THE BURNING OF the library of Alexandria and the frenzied despoiling of the temples by fanatical Christian mobs in the fifth and sixth centuries, much of the legacy of wisdom that made ancient Egypt the “light of the world” has been lost.

The immense destruction, genocide, and near-total obliteration of indigenous cultures unleashed in North America during the European conquest was a matter of an entirely different order—a full-blown, fast-moving cultural cataclysm, as a result of which we’re left often with no record at all or with huge gaps in the record.

Central to the ancient Egyptian judgment scene described in the previous chapter, the concept of Maat enshrines notions of cosmic justice, harmony, and balance. Its association with the moon is appropriate since the moon indeed plays a key “balancing” or “stabilising” role for the earth.

The sun is also often figured as being carried aboard a boat and also features prominently in the Duat, blazing an indomitable path through its terrors each night, a symbol of hope and resurrection in whose company, if they are fortunate, the souls of some of the deceased might be permitted to ride.

THE ANCIENT EGYPTIANS SAW THEIR lives as their opportunity to prepare for the trials of the journey through the Duat that they would have to confront as souls after death. The stakes were high, with both eternal annihilation and immortality being possible outcomes of that journey. There was undoubtedly an ethical aspect to the Judgment, as we’ve seen, but something else was also required, some gnosis, some deep understanding, and very strangely it turns out to be the case that those who truly sought the prize of immortality—“the life of millions of years”—were called upon first to build on the ground perfect copies “of the hidden circle of the Duat in the body of Nut [the sky].”

Egyptologists already accept that the Milky Way and the constellation Orion on its west bank are key markers in the celestial geography of the Duat, and in 1996 Robert Bauval and I made the case in our book The Message of the Sphinx that the constellation Leo was very much part of the Duat as well. To cut a long story short, our argument, which we stand by today, is that the ideas expressed in the funerary texts were indeed manifested in architecture in Egypt in the form of the Great Pyramid, the leonine Sphinx, and the underground corridors and chambers beneath these monuments.

The complex was constructed, we believe, as a three-dimensional replica, model, or simulation of the intensely geometrical Fifth Division of the Duat, also known as the “Kingdom of Sokar” and always regarded as an especially hidden and secret place.46 Moreover, we suggest that what motivated the population to support this gigantic project was precisely the promise of thus obtaining that “magical protection,” that power to become “a spirit equipped for journeying,” that would ensure a successful afterlife passage through the Duat.

The sky is gigantic and the purpose of the architecture is to honor, connect with, and above all “resemble the sky.”

It is Orion that hosts the portal through which the soul must pass to reach the “Winding Waterway” that in turn leads the soul onward on its journey through the Land of the Dead.

Geometry is a foundational characteristic of the Land of the Dead, and the rectangular, square, circular, and elliptical enclosures are the typical forms of celestial “districts” through which the soul must pass on its afterlife journey.

Causeways and mounds are prominent features of the celestial Land of the Dead that it is the purpose of the architecture to replicate on earth.

The belief that if the sky, or some “hidden” or “secret” aspect of it, were NOT copied on the ground (and in some way explored, navigated, and known prior to death), then those souls who had failed to do this necessary work, and thus were not equipped with knowledge of “the secret representations,” would be “condemned to destruction.”

WHEN IT COMES TO MOTIVATIONAL techniques, as the Roman Catholic Church demonstrated throughout the Middle Ages, the prospect of eternal damnation can be very effective. I suggest that in ancient Egypt it was the equivalent prospect of “destruction” or “annihilation” of the soul, and the possibility of avoiding such a fate—as spelled out in the funerary texts—that motivated the construction of the sky-ground temples and pyramids of the Nile Valley. They were all, in a sense, gigantic books of the dead in stone and some—the Giza complex in particular—were undoubtedly seen as “actual gateways, or doorways, to the otherworld.”

Although widely separated in time and space, the ancient inhabitants of these two regions seem to have shared a core set of ideas about the afterlife destiny of the soul and seem, moreover, to have been largely in agreement not only that those ideas should be manifested in architecture, but also on many of the specific characteristics of that architecture, and on the purpose that the architecture was intended to serve.

Thus, while one reproduced Orion’s belt and the constellation of Leo and the other orchestrated complex architectural dances aligned to lunar and solar standstills, the fundamental objective of both was to open portals between sky and ground through which the souls of the dead could pass.

William Romain’s detailed studies of the Hopewell lead him to conclude that, in the minds of those who made them: the Newark Earthworks were a portal to the Otherworld that allowed for interdimensional movement of the soul during certain solar, lunar and stellar configurations.

He also argues that the “Great Hopewell Road,” an ancient causeway that once ran straight for more than 60 miles between Newark and High Bank (see chapter 20), “was the terrestrial equivalent of, or metaphor for the Milky Way Path of Souls providing a directional component for soul travel to the Realm of the Dead.”

Further, Romain joins George Lankford in linking Serpent Mound to Scorpius and in concluding that “Serpent Mound was a cognate for the Great Lowerworld Serpent which guarded the Realm of the Dead.”

Thus, while the Great Sphinx may be the terrestrial counterpart of the constellation Leo, its gaze also sacralizes the union of heaven and earth at sunrise on the equinoxes. And while Serpent Mound may indeed be the earthly twin of the constellation Scorpius, its open jaws and the oval earthwork between them also serve to unite ground and sky at sunset on the summer solstice.

The Great Jeweled Serpent, Lankford concludes, is represented as an adversary on the Path of Souls and is depicted very frequently in Moundville designs, where it is directly linked to other imagery associated with the afterlife journey. He also makes a strong case that Serpent Mound is a three-dimensional representation of the same supernatural entity and draws an interesting comparison with myths of the Cherokees describing the Uktena, “a great snake, as large around as a tree-trunk,” with “a bright blazing crest like a diamond on its forehead, and scales glittering like sparks of fire.”54 The same myths also tell us that the gaze of this serpent had the power to “daze” people so that they were stopped in their tracks and could not escape from it,55 and again there is a notable parallel here with the great serpent of the Book of the Dead, whose gaze plunges even the Sun into a “mighty sleep.”

The earliest mound sites we know of in North America may date back as far as 8,000 years. Beyond that the trail goes cold. But then why should we be surprised? The trail goes cold for a full 1,000 years between the end of the Watson Brake epoch and the beginning of Poverty Point, and it goes cold again several times thereafter, only to reappear reborn and renewed on the far side of each lacuna. The same stop-start process, however, also means that we can’t date the inception of the tradition to its oldest manifestations so far found.

As the late John Anthony West used to put it about the civilization of ancient Egypt, it was “a legacy, not a development.”

THIS TIME IT’S NOT THE funerary texts I’m referring to, but the Edfu Building Texts, so called because they are inscribed on the walls of the Temple of Horus at Edfu in Upper Egypt. These texts take us back to a very remote period called the “Early Primeval Age of the Gods”—and these gods, it transpires, were not originally Egyptian, but lived on a sacred island, the “Homeland of the Primeval Ones,” in the midst of a great ocean. Then, at some unspecified time in the past, an immense cataclysm shook the earth and a flood poured over this island, where “the earliest mansions of the gods” had been founded, destroying it utterly, submerging all its holy places, and killing most of its divine inhabitants. Some survived, however, and we are told that this remnant set sail in their ships (for the texts leave us in no doubt that these “gods” of the early primeval age were navigators) to “wander” the world. Their purpose in doing so was nothing less than to re-create and revive the essence of their lost homeland, to bring about, in short: The resurrection of the former world of the gods … The re-creation of a destroyed world.

The takeaway is that the texts invite us to consider the possibility that the survivors of a lost civilization, thought of as “gods” but manifestly human, set about “wandering” the world in the aftermath of an extinction-level global cataclysm.

Consider the Tukano origin myth, given in chapter 18. It tells how “Helmsman” and “Daughter of the Sun” brought the gifts of fire, horticulture, pottery-making, and other skills to the first humans to enter the Amazon while other “supernaturals” traveled over all the rivers, explored the remote hill ranges, identified the best places for settlement, and “prepared the land so that mortal human creatures might live on it.”

Returning to ancient Egypt and to the Edfu texts, we’re told that the survivors of the Island of the Primeval Ones: journeyed through the … lands of the primeval age…. In any place in which they settled they founded new sacred domains.

Among these texts were the Specifications of the Mounds of the Early Primeval Age, which literally “specified” the locations in the Nile Valley upon which every mound was to be situated, the character and appearance of each mound, and the understanding that those first, foundational mounds were to serve as the sites for all the temples and pyramids that would be built in Egypt in the future. Little wonder then that included among the company of the “gods” of Edfu were the Shebtiw, a group of deities charged with a specific responsibility for “creation,” the “Builder Gods” who accomplished “the actual work of building,” and the “Seven Sages” who, in addition to dispensing wisdom, as their name suggests, were much involved in the setting out of structures and in laying foundations.

My argument has long been that the Edfu Building Texts reflect real events surrounding a real cataclysm that unfolded between 12,800 and 11,600 years ago, a period known to paleoclimatologists as the Younger Dryas and that the Texts call the “Early Primeval Age.”

North America—“Turtle Island” in Native American tradition—is always, almost automatically, assumed to be a place to which culture was brought from elsewhere, but let’s shift the reference frame. What if North America itself was the Homeland of the Primeval Ones? What if the distinctive system of ideas involving the afterlife journey of the soul and the building of very specific types of structures thought to facilitate that journey weren’t brought to North America but originated there?

Consider the extinction-level cataclysm that the earth experienced 12,800 years ago. Although the entire globe was affected, all the evidence indicates that the epicenter was in North America. Its giant ice cap, 2 kilometers deep and extending in that epoch as far south as Minnesota, was massively destabilized, and the destruction that followed was near total across an immense area where the archaeological record was effectively swept clean.

This was the Younger Dryas, the interlude of cataclysmic global climate change coinciding with the Late Pleistocene Extinction Event, in which thirty-five genera of North American megafauna (each genus consisting of several species) were wiped out around 12,800 years ago. Sharing their fate were the Clovis people and their distinctive culture with its characteristic “fluted-point” weaponry.

As the discoverer and principal excavator of Murray Springs, however, Haynes deserves credit for drawing attention to a very curious aspect of the site—a distinct dark layer of soil draped “like shrink-wrap,” as Allen West puts it, over the top of the Clovis remains and of the extinct megafauna—including Eloise. Haynes has identified this “black mat” (his term) not only at Murray Springs but at dozens of other sites across North America,1 and was the first to acknowledge its clear and obvious association with the Late Pleistocene Extinction Event.

Haynes notes also that “The basal black mat contact marks a major climate change from the warm dry climate of the terminal Allerød to the glacially cold Younger Dryas.”

This deep freeze—the mysterious epoch now known as the Younger Dryas—lasted for approximately 1,200 years until 11,600 years ago, at which point the climate flipped again, global temperatures shot up rapidly, the remnant ice sheets melted and collapsed into the oceans, and the world became as warm as it is today.

The “Younger Dryas Impact Hypothesis” received its first formal airing—also in PNAS—in October 2007. The paper was coauthored by Allen West, Richard Firestone, James Kennett, and more than twenty other scientists and presents evidence that multiple fragments of a giant comet—a swarm of fragments—struck the earth with disastrous consequences around 12,800 years ago. The effects were global but the epicenter of the cataclysm was over the North American ice cap, which the impacts destabilized, triggering the Younger Dryas deep freeze and the megafaunal extinctions.

Scientists have therefore developed other measures, more subtle than looking for craters, to detect cosmic impacts in the geological record. Nanodiamonds, for example, are microscopic diamonds that form under rare conditions of great shock, pressure, and heat, and are recognized as being among the characteristic fingerprints—“proxies” in scientific language—of powerful impacts by comets or asteroids.9 Other proxies include meltglass (resembling trinitite), tiny carbon spherules that form when molten droplets cool rapidly in air, magnetic microspherules, charcoal, soot, platinum, carbon molecules containing the rare isotope helium-3, and magnetic grains with iridium.

I have a question for Allen. “Since the black mat was found draped directly on top of Eloise—like ‘shrink-wrap,’ you said—then presumably it must have begun to form very shortly after she was killed and butchered with most of her remains left lying on the spot?”

“What we see is that at the bottom of that black-mat layer, literally the first thing touching those bones, are spherules, iridium, platinum, and small pieces of melt-glass from the event. So it doesn’t mean the animal was alive when the event happened, but she had to have been alive very, very shortly, at most a few weeks, before it.”

“That’s based on modern data from elephant kills in Africa. The scavengers come in quickly and disarticulate the skeleton, and that didn’t happen with Eloise.”

“Okay,” he says. “It’s pure speculation, obviously, because we’ll never know for sure the exact sequence of events here 12,800 years ago, but based on the evidence it’s not unreasonable to envisage the hunters sitting around, cooking mammoth haunch over their campfire when all of a sudden the sky explodes …”

“But what we can be certain of was that this moment marked the end of their story, and the end of an epoch, really. There’s not a single Clovis point found anywhere in North America that’s above that black mat. They’re all in it or below it. And there’s not a single mammoth skeleton anywhere in North America that’s above it. A huge part of the die-off could have been as a direct result of the impacts themselves, but impacts and airbursts south of the ice cap, particularly as far south as New Mexico, would also have set off wildfires.”

What the evidence points to is not days or weeks but a 21-year period of utter devastation, horror, and cataclysm unfolding between 12,836 years ago and 12,815 years ago, with a peak around 12,822 years ago.

Published in the Proceedings of the National Academy of Sciences in August 2013, the paper bears the self-explanatory title “Large Pt Anomaly in the Greenland Ice Core Points to a Cataclysm at the Onset of Younger Dryas.” Platinum is, of course, an element found on earth, but analysis of the platinum in the ice core by Petaev and his colleagues reveals a composition quite unlike terrestrial platinum and leads the scientists to conclude that “an extraterrestrial source,” perhaps “a metal impactor with an unusual composition,” is the most likely explanation.15 They note also that the anomaly spans the 21-year interval—between 12,836 and 12,815 years ago—indicated by Allen.

The “impactor,” I would argue, was in fact multiple impactors, all of them fragments of a comet that had wandered in from the outer solar system and taken up a potentially deadly earth-crossing orbit.

“There’s something else, too, from new research we’ve been working on. In the ice core, at the exact same moment we see this big onset of platinum at the beginning of the 21 years, we also see a sudden rise in dust.”

“Which tells us that along with everything else that was going on at the time there were also very high winds blowing. There are certain proxies of that windiness that end up in the ice sheet. When it’s windier the winds will pick up continental dust, and, number one, it’s colder so there’s less plant cover, so when it gets windier and there are less plants to hold the sediment down, you get huge dust storms. We can see that buildup in the Greenland ice sheet. We see magnesium and calcium, a huge increase in them, and those are indicative of terrigenous dust, continental dust, and we see an increase in sodium and chlorine which are from sea salt—so the winds are so strong they pick up more sea salt and deposit it in Greenland.”

So we know that a cold-water flood poured into the Atlantic Ocean around 12,800 years ago on a scale sufficient to stop the Gulf Stream in its tracks; we know that glacial Lake Agassiz has been implicated in it; and we know that this “great gush of cold freshwater” has been connected to the plunge in global temperature—the “deep freeze”—that defines the Younger Dryas cold event.

What remains to be explained is why such a flood would have occurred at the onset of the Younger Dryas “deep freeze” around 12,800 years ago rather than, say, 800 or 1,000 years earlier at the height of the warm phase—known as the Bølling-Allerød interstadial—that immediately preceded the Younger Dryas.

The point is understated, but this is a very big deal. Two to four meters of global sea-level rise within “a few decades or less” of the onset of the Younger Dryas is an IMMENSE amount of water, a cataclysmic world flood by any standard.

There is also the evidence from Wolbach’s study that in the exact same period the planet suffered a spectacular episode of biomass burning and an associated “impact winter” that “caused warm interglacial temperatures to abruptly fall to cold, near-glacial levels within less than a year, possibly in as little as 3 months.”

What we are looking for, therefore, is an agent capable—simultaneously and almost instantaneously—of bringing about all of the following:

- a global flood
- wildfires across an area of 10 million km2
- 6 months of icy darkness followed by more than 1,000 years of glacially cold weather
- a stratum of soil across more than 50 million km2 dated to the Younger Dryas Boundary (YDB) and infused with a cocktail of nanodiamonds, high-temperature iron-rich spherules, glassy silica-rich spherules, meltglass, platinum, iridium, osmium, and other exotic materials
- a mass extinction of megafauna

Wolbach and her coauthors are forthright in their conclusion: Multiple lines of ice-core evidence appear synchronous, and this synchroneity of multiple events makes the YD interval one of the most unusual climate episodes in the entire Quaternary record…. A cosmic impact is the only known event capable of simultaneously producing the collective evidence.

In other words, the long-established and widely accepted evidence linking the onset of the Younger Dryas cold interval to a freshwater flood off the North American ice cap and consequent changes in oceanic circulation is fully accepted by Wolbach. What she and her coauthors add, however, is: an additional key element … suggesting that these climate-changing mechanisms did not occur randomly but rather were triggered by the YDB impact event. After shutdown of the ocean conveyor, the YD episode persisted … not because of continued airburst/impacts but because, once circulation stopped, feedback loops and inertia within the ocean system maintained the changed state of circulation until it reverted to its previous state.56

The Younger Dryas Heinrich Event was not triggered by normal climatic changes but by the impacts of comet fragments on the North American ice cap.

IMAGINE A WORLD WHERE GOOD, honest, hardworking, inquisitive scientists live in fear of ruining their careers, perhaps even of losing their jobs and incomes, if they investigate certain subjects that have been judged by a dominant elite to be “taboo.”

Science in the twenty-first century does NOT encourage scientists to take risks in their pursuit of “the facts”—particularly when those facts call into question long-established notions about the human past.

Around 500,000 peculiar elliptical ponds, depressions, and lakes with raised rims pock much of the US Atlantic seaboard from Delaware to Florida. Since it was in the Carolinas that scientists first noticed them in the late nineteenth century, they became known as Carolina Bays, and from quite early on there were theories that they had been created by an immense swarm of meteorites striking the earth.7 Several CRG members have explored the possibility that the Younger Dryas impacts might be connected to the mystery,8 but the majority of the group have since distanced themselves from such notions.

Their proposal, published as a conference paper, is that a cosmic impact during the Ice Age in Michigan’s Saginaw Bay (which was then solid land covered by deep glacial ice) would have produced ejecta and secondary impacts in a “butterfly-wing” pattern precisely over the Nebraska Rainwater Basins, where they would be oriented northeast to southwest, and the Carolina Bays, where they would be oriented northwest to southeast.15

Richard Firestone and other CRG scientists suspect that there may have been a total of eight impacts on the North American ice cap.

The great surface density of the bays indicates that they were created by a catastrophic saturation bombing with impacts of 13 KT to 3 MT that would have caused a mass extinction in an area with a radius of 1500 km from the extraterrestrial impact in Michigan. This paper has considered mainly the ice boulders ejected by an extraterrestrial impact on the Laurentide Ice Sheet during the Pleistocene, but the impact would also have ejected water and produced steam. Taking into consideration the thermodynamic properties of water, any liquid water ejected above the atmosphere would have transformed into a fog of ice crystals that would have blocked the light of the sun. Thus, the time of formation of the Carolina Bays and Nebraska Rainwater Basins must coincide with an extinction event in the eastern half of the United States and the onset of a period of global cooling. This combination of conditions is best met by the disappearance of the North American megafauna, the end of the Clovis culture and the onset of the Younger Dryas cooling event at 12,800 cal. BP. The report of a platinum anomaly typical of extraterrestrial impacts at the Younger Dryas Boundary supports this scenario.

AS WE’VE SEEN, ALLEN WEST and Richard Firestone propose that there may have been as many as eight significant impacts on the North American ice cap during the 21 years of the peak Younger Dryas bombardments.29

The truth of the matter is that there remains great uncertainty and confusion around exactly what happened in North America—and across the whole world—at the onset of the Younger Dryas. While that uncertainty persists, alleged “certainties” of almost any kind are inappropriate and it is wise to keep an open mind to all possibilities.

But at a deeper level what this whole exchange revealed to me was something disturbing about the way science works. I hadn’t quite grasped the role of fear before. But I could see it in action everywhere here: fear of being “noticed and monitored by colleagues,” fear of unwanted negative celebrity, fear of indignity, fear of loss of reputation, fear of loss of career—and not for committing some terrible crime but simply for exploring unorthodox possibilities and undertaking “somewhat controversial research” into what everyone agrees were extraordinary events 12,800 years ago.

Contrary to the mainstream, my broad conclusion is that an advanced global seafaring civilization existed during the Ice Age, that it mapped the earth as it looked then with stunning accuracy, and that it had solved the problem of longitude, which our own civilization failed to do until the invention of Harrison’s marine chronometer in the late eighteenth century. As masters of celestial navigation, as explorers, as geographers, and as cartographers, therefore, this lost civilization of 12,800 years ago was not outstripped by Western science until less than 300 years ago at the peak of the Age of Discovery.

Moreover, the Clovis phenomenon is, itself, an intriguing mystery. We’ve already seen that no archaeological background has ever been found to the beautiful and sophisticated fluted points used by these remarkably successful hunter-gatherers to spear mammoths like Eloise at Murray Springs. From the moment we meet them around 13,400 years ago to the moment of their disappearance from the record about 12,800 years ago, they’re equipped with their extremely effective signature “toolkit” of which the points are part. These Clovis tools and weapons appear suddenly and fully formed in archaeological deposits across huge expanses of North America with no evidence, anywhere, of experiments, developments, prototypes, or, indeed, of any intermediate stages in their evolution.

My guess is there’s a connection between Clovis and the lost civilization, not least because studies of ancient DNA show the Clovis genome to be much more closely related to Native South Americans than to Native North Americans (see part 3). Indeed, there’s a parallel between the rather sudden and inexplicable way that Australasian genes turn up in the Amazon basin and the equally sudden and inexplicable way that Clovis fluted-point technology turns up in North America.

Could both have the same cause?

The skeptics’ argument runs like this: if Clovis benefited from contact with a more advanced civilization, then we should find the skeletal remains of those more advanced people intermingled with the Clovis remains, and we do not—therefore, there was no advanced civilization. Similarly, if Clovis benefited from contact with the people of a more advanced civilization, then we should find at least some traces of their higher tech among the Clovis assemblages, and we do not—therefore, there was no advanced civilization.

I was therefore surprised to learn during the research for this book that apart from the incomplete skeleton of a single individual—the Anzick-1 child excavated in Montana and discussed in chapter 9—there are no human remains at all from the Clovis period.

More than 1,500 Clovis sites have been found. These sites have yielded more than 10,000 Clovis points12 and tens of thousands of other artifacts from the Clovis toolkit (40,000 at Topper alone, as we saw in chapter 6). Yet among all these archaeological riches, it bears repeating that the sum total of Clovis human remains found in 85 years of excavations is limited to the Anzick-1 partial skeleton.13

The only viable explanation is a remote common source behind them all—a lost civilization, in my view.

We have considered the question of huge volumes of meltwater released into the Arctic and Atlantic Oceans from the destabilized ice sheet and looked at the effects on global climate. But keep in mind that those enormous floods also devastated the rich North American mainland to the south, perhaps the best and most bounteous real estate then available anywhere.

This immense and extraordinary deluge, “possibly the largest flood in the history of the world,” swept away and utterly demolished everything that lay in its path. Jostling with icebergs, choked by whole forests ripped up by their roots, turbulent with mud and boulders swirling in the depths of the current, what the deluge left behind can still be seen in something of its raw form in the Channeled Scablands of the state of Washington today—a devastated blank slate (described at length in Magicians of the Gods) littered with 10,000-ton “glacial erratics,” immense fossilized waterfalls, and “current ripples” hundreds of feet long and dozens of feet high.3 If there were cities there, before the deluge, they would be gone.

New York State has its Finger Lakes. The lakes were long thought to have been carved by glaciers, but their geomorphology closely parallels that of the coulees of the Channeled Scablands, and some researchers now believe they were cut by glacial meltwater at extreme pressures—a process linked by sediment evidence to “the collapse of continental ice sheets.”

All in all, if North America is where a lost civilization of prehistoric antiquity vanished, then by far the most significant problem we face in investigating it is the way that the “crime scene” was systematically “wiped down” by the cataclysmic events at the onset of the Younger Dryas.

I drew attention in Heaven’s Mirror to a discovery by archaeologists Jose Fernandez and Robert Carmack establishing that the settlement core of the Maya city of Utatlan was designed “according to a celestial scheme reflected by the shape of the constellation of Orion.”

Fernandez was also able to prove that all of Utatlan’s major temples “were oriented to the heliacal setting points of stars in Orion,” and noted that the Milky Way, alongside which Orion stands, “was thought of as a celestial path connecting the firmament’s navel with the centre of the underworld.”11

THE EXTIRPATION OF VITAL EVIDENCE concerning the past of our species across huge swaths of the Americas was by no means limited to the effects of the Younger Dryas cataclysm, or to the subsequent much later cataclysms of militant Christianity and smallpox.

What’s tantalizing, however, is that the influence of the lost civilization declares itself repeatedly in the commonalities shared by supposedly unconnected cultures all around the ancient world. The deeper you dig, the more obvious it becomes that they did not get these shared features from one another but from a remote common ancestor of them all.

We’ve seen that the Americas were isolated during much of the Ice Age—a geological epoch that lasted, let us not forget, from around 2.6 million years ago until around 12,000 years ago.1 In this long geological epoch, however, there were several periods of temporary climate warming when the macro-continent of North, Central, and South America would have become accessible. Two of these periods of enhanced accessibility occurred within the known time frame of past human migrations and it is the most recent (the so-called Bølling-Allerød interstadial, dated from around 14,700 years ago to around 12,800 years ago2) that archaeologists focused their attention on for far too long in their attempts to reconstruct the true story of the peopling of the Americas.

What we should actually be looking at is the deglaciation event before that—between 140,000 and 120,000 years ago.

Deméré’s suggestion still remains unpalatable to some archaeologists, yet it satisfactorily explains the growing mass of evidence that the Americas were peopled many tens of millennia before the Bølling-Allerød interstadial (see chapters 4, 5, and 6). More than that, this hitherto unimagined possibility of a very old (rather than very young) human presence in the New World helps make sense of the complex genetic heritage of Native Americans—explored in chapters 7, 8, 9, and 10. Embedded in this evidence is the mind-dilating mystery of the strong Australasian DNA signal present among certain isolated tribes of the Amazon rainforest.

We have seen evidence that human settlement in the Amazon is extremely ancient, that great cities and large populations once flourished there, that ancient scientific knowledge of the properties of plants persists among Amazonian peoples to this day, that there was very early domestication of useful agricultural species, that the rainforest itself is an anthropocentric, cultivated, ordered “garden,” and that a “miraculous” man-made soil—terra preta—was developed in the Amazon in deep antiquity, bringing fertility to otherwise agriculturally unproductive lands and imbued with astonishing powers of self-renewal that modern scientists marvel at and do not yet fully understand.

Another discovery is the presence of gigantic geometrical earthworks and astronomically aligned stone circles in the Amazon.

Recall the words of ayahuasca shamans, who see geometric patterns as portals to other realms of existence—specifically to the afterlife realm or land of the dead. Indeed, the very name ayahuasca means “Vine of the Dead” or “Vine of Souls.”

The real importance of the Cerutti Mastodon Site is that it provides the first solid evidence—solid enough to make it into the pages of Nature—of a truly ancient human occupation of the New World. If humans were in North America 130,000 years ago (more than twice as long as the span of the known human presence in Europe), that gives them 117,000 years to have evolved a high civilization before the Younger Dryas cataclysm struck.

Thereafter, until the next episode of deglaciation (the Bølling-Allerød interstadial) in the 2,000 years immediately preceding the Younger Dryas, all scholars agree that the vast landmass of the Americas, straddling half the globe, was cut off from the rest of the world by the Atlantic and Pacific Oceans and by mountains of ice. Migrants from Asia, even when Beringia was accessible, could not get in. But for those humans who were already south of the ice cap 120,000 years ago, the Americas must have been a paradise, safe from incursions by any other peoples and blessed with an astonishing abundance and variety of natural resources.

Thus far (extrapolating from the belief systems of its descendants) I’ve suggested that its spirituality must have involved profound explorations of the mystery of death. I’ve suggested that accurate ancient maps depicting the earth as it looked during the Ice Age imply that it had developed a level of maritime technology at least as advanced as that possessed by European seafarers in the late eighteenth century. I’ve suggested that it had mastered sophisticated geometry and astronomy. I’ve also suggested that such a “lost” civilization, maturing in isolation for tens of thousands of years in North America, might have taken a very different path from our own and might have developed technologies that archaeologists would be unable to recognize because they operated on principles or manipulated forces unknown to modern science.

At the Great Pyramid, at Baalbek, and at Sacsayhuamán, as well as at numerous other mysterious sites (such as the almost unbelievable Kailasa Temple, hewn out of solid basalt at Ellora in the Indian state of Maharashtra), intriguing ancient traditions persist. These traditions speak of meditating sages, the use of certain plants, the focused attention of initiates, miraculously speedy workmanship, and special kinds of chanting or tones played on musical instruments in connection with the lifting, placing, softening, and moulding of megaliths. My guess, confronted by the global distribution of such narratives and by the stark reality of the sites themselves, is that we’re dealing with the reverberations of an ancient technology we don’t understand, operating on principles that are utterly unknown to us.

The science of the lost civilization was primarily focused upon what we now call psi capacities, deploying the enhanced and focused power of human consciousness to channel energies and to manipulate matter.

quantum entanglement

The advanced civilization I see evolving in North America during the Ice Age had transcended leverage and mechanical advantage and learned to manipulate matter and energy by deploying powers of consciousness that we have not yet begun to tap.

It is further evidence of a remote common source behind some widespread religious motifs that one of the most famous myths of the ancient Greeks—the tale of Orpheus and Eurydice—was also present, long before European contact, in the ancient pre-Columbian cultures of North America. Some details vary, as of course do the names of the central characters and the general setting, but the underlying structure remains the same13—(1) a wife or sweetheart (Eurydice) dies prematurely; (2) her husband or lover (Orpheus) follows her soul to the Underworld and persuades its ruler to let her return with him to the land of the living; (3) Eurydice’s release is agreed on condition that she walk behind Orpheus as they make the return journey from the Underworld and that under no circumstances should he set eyes on her until they reach the land of the living; (4) at the last moment, overcome with love, Orpheus cannot resist glancing over his shoulder at his wife and in that instant she is cast back into the Underworld that she can henceforth never leave.

So compellingly similar are the Native American and Greek versions that leading scholar of religions Åke Hultkrantz dedicated an immense monograph to the mystery, published in Stockholm in 1957, titled The North American Indian Orpheus Tradition. Meanwhile his contemporary, Canadian ethnographer Charles Marius Barbeau, proposed that the Greek and Native American stories must both be offshoots of some much older core narrative and concluded, “The worldwide diffusion from an unknown source of a tale so typically classical as Orpheus and Eurydice must have required millenniums.”

What I find equally interesting is that the foundations of the narrative clearly lie in the concepts of the afterlife journey of the soul and the duality of spirit and matter so central to the religious beliefs of ancient Egypt and the ancient Mississippi Valley.

In Tibetan Buddhism the afterlife realm is known as the Bardo—literally “the Between.” Just like the Ancient Egyptian Book of the Dead and the Mississippian oral and iconographic traditions, the purpose of the Tibetan Book of the Dead is to serve as a guidebook and instruction manual for the soul on its postmortem journey through this strange parallel dimension.

My sense is that the lost civilization, as might be expected with its proposed shamanic origins, was not much interested in material things. Like many other Native American cultures, its primary goals were not to do with the acquisition of status or wealth but instead were focused, through vision quests and right living, on the perfection of the soul.

In order to prepare its initiates thoroughly so that they might be “well equipped” for the ultimate journey of death—surely a matter of far greater significance than any material concerns—the direct exploration of parallel dimensions would, as noted earlier, almost certainly have been undertaken. Had this investigation been allowed to proceed uninterrupted it might by now have transcended space, time, and matter entirely, but 12,800 years ago a deadly mass of matter in the form of the Younger Dryas comet was flung at it and brought a pause to the great prehistoric quest.

A pause but not a halt—for if I’m right there were survivors who attempted, with varying degrees of success, to repromulgate the lost teachings, planting “sleeper cells” far and wide in hunter-gatherer cultures in the form of institutions and memes that could store and transmit knowledge and, when the time was right, activate a program of public works, rapid agricultural development, and enhanced spiritual inquiry.

THERE ARE LITERALLY THOUSANDS OF myths from every inhabited continent that speak of the existence of an advanced civilization in remote prehistory, of the lost golden age in which it flourished, and of the cataclysm that brought it to an end. A feature shared by many of them—the story of Atlantis, for example, or of Noah’s flood—is the notion that human beings, by their own arrogance, cruelty, and disrespect for the earth, had somehow brought the disaster down upon their own heads and accordingly were obliged by the gods to go back to basics and learn humility again.

An Ojibwa tradition seems relevant. It speaks of a comet that “burned up the earth” in the remote past and that is destined to return: The star with the long, wide tail is going to destroy the world some day when it comes low again. That’s the comet called Long-Tailed Heavenly Climbing Star. It came down here once, thousands of years ago. Just like the sun. It had radiation and burning heat in its tail …

There is a prophecy that the comet will destroy the earth again. But it’s a restoration. The greatest blessing this island [Turtle Island] will ever have. People don’t listen to their spiritual guidance today. There will be signs in the sun, moon and stars when that comet comes down again.19

Samuel Noah Kramer, History Begins at Sumer: Thirty-Nine Firsts in Man’s Recorded History (University of Pennsylvania Press, 1991).

The Hero With a Thousand Faces

Joseph Campbell
New World Library

I don't know why it took me so long to read this book. It is excellent. All the myths, all the legends, all the....stories follow a basic formula. Campbell shares the format and shows the connections between them all. Buddha, Jesus, Odinson....all the same format. All the same "Hero's Journey".


1 Myth and Dream

It would not be too much to say that myth is the secret opening through which the inexhaustible energies of the cosmos pour into human cultural manifestation

The field of folk psychology has been seeking to establish the psychological bases of language, myth, religion, art development, and moral codes.

The bold and truly epoch-making writings of the psychoanalysts are indispensable to the student of mythology; for, whatever may be thought of the detailed and sometimes contradictory interpretations of specific cases and problems, Freud, Jung, and their followers have demonstrated irrefutably that the logic, the heroes, and the deeds of myth survive into modern times

That of the tragicomic triangle of the nursery—the son against the father for the love of the mother. Apparently the most permanent of the dispositions of the human psyche are those that derive from the fact that, of all animals, we remain the longest at the mother breast. Human beings are born too soon; they are unfinished, unready as yet to meet the world. Consequently their whole defense from a universe of dangers is the mother, under whose protection the intra-uterine period is prolonged.

The unfortunate father is the first radical intrusion of another order of reality into the beatitude of this earthly restatement of the excellence of the situation within the womb; he, therefore, is experienced primarily as an enemy

The doctor is the modern master of the mythological realm, the knower of all the secret ways and words of potency. His role is precisely that of the Wise Old Man of the myths and fairy tales whose words assist the hero through the trials and terrors of the weird adventure.

The so-called rites of passage, which occupy such a prominent place in the life of a primitive society (ceremonials of birth, naming, puberty, marriage, burial, etc.), are distinguished by formal, and usually very severe, exercises of severance, whereby the mind is radically cut away from the attitudes, attachments, and life patterns of the stage being left behind.

It has always been the prime function of mythology and rite to supply the symbols that carry the human spirit forward, in counteraction to those other constant human fantasies that tend to tie it back.

The psychoanalyst has to come along, at last, to assert again the tried wisdom of the older, forward-looking teachings of the masked medicine dancers and the witch-doctor-circumcisers; whereupon we find, as in the dream of the serpent bite, that the ageless initiation symbolism is produced spontaneously by the patient himself at the moment of the release

The figure of the tyrant-monster is known to the mythologies, folk traditions, legends, and even nightmares, of the world; and his characteristics are everywhere essentially the same. He is the hoarder of the general benefit. He is the monster avid for the greedy rights of “my and mine.” The havoc wrought by him is described in mythology and fairy tale as being universal throughout his domain.

The inflated ego of the tyrant is a curse to himself and his world—no matter how his affairs may seem to prosper.

The hero is the man of self-achieved submission.

As Professor Arnold J. Toynbee indicates in his six-volume study of the laws of the rise and disintegration of civilizations,17 schism in the soul, schism in the body social, will not be resolved by any scheme of return to the good old days (archaism), or by programs guaranteed to render an ideal projected future (futurism), or even by the most realistic, hardheaded work to weld together again the deteriorating elements. Only birth can conquer death—the birth, not of the old thing again, but of something new. Within the soul, within the body social, there must be—if we are to experience long survival—a continuous “recurrence of birth” (palingenesia) to nullify the unremitting recurrences of death. There is nothing we can do, except be crucified—and resurrected; dismembered totally, and then reborn.

Theseus, the hero-slayer of the Minotaur, entered Crete from without, as the symbol and arm of the rising civilization of the Greeks. That was the new and living thing.

Note 16: T. S. Eliot, The Waste Land

Note 17: Arnold Toynbee, A Study of History

Dream is the personalized myth, myth the depersonalized dream; both myth and dream are symbolic in the same general way of the dynamics of the psyche

The hero, therefore, is the man or woman who has been able to battle past his personal and local historical limitations to the generally valid, normally human forms.

His second solemn task and deed therefore (as Toynbee declares and as all the mythologies of mankind indicate) is to return then to us, transfigured, and teach the lesson he has learned of life renewed.

Note 20: It must be noted against Professor Toynbee, however, that he seriously misrepresents the mythological scene when he advertises Christianity as the only religion teaching this second task. ALL religions teach it, as do ALL mythologies and folk traditions EVERYWHERE. Toynbee arrives at his misconstruction by way of a trite and incorrect interpretation of the Oriental ideas of Nirvana, Buddha, and Bodhisattva, then contrasting these ideals, as he misinterprets them, with a very sophisticated rereading of the Christian idea of the City of God.

…Perhaps some of us have to go through dark and devious ways before we can find the river of peace or the highroad to the soul’s destination.

Dante’s “dark wood, midway in the journey of our life,” and the sorrows of the pits of hell: Through me is the way into the woeful city, Through me is the way into eternal woe, Through me is the way among the Lost People

…variety of Pandora’s box—that divine gift of the gods to beautiful woman, filled with the seeds of all the troubles and blessings of existence, but also provided with the sustaining virtue, hope.

Alas, where is the guide, that fond virgin, Ariadne, to supply the simple clue that will give us courage to face the Minotaur, and the means then to find our way to freedom when the monster has been met and slain?

2. Tragedy and Comedy

Modern romance, like Greek tragedy, celebrates the mystery of dismemberment, which is life in time. The happy ending is justly scorned as a misrepresentation; for the world, as we know it, as we have seen it, yields but one ending: death, disintegration, dismemberment, and the crucifixion of our heart with the passing of the forms that we have loved.

In the Poetics of Aristotle, tragic katharsis (i.e., the “purification” or “purgation” of the emotions of the spectator of tragedy through his experience of pity and terror) corresponds to an earlier ritual katharsis (“a purification of the community from the taints and poisons of the past year, the old contagion of sin and death”), which was the function of the festival and mystery play of the dismembered bull-god, Dionysos

…the fairy tale of happiness ever after cannot be taken seriously; it belongs to the never-never land of childhood, which is protected from the realities that will become terribly known soon enough

The happy ending of the fairy tale, the myth, and the divine comedy of the soul, is to be read, not as a contradiction, but as a transcendence of the universal tragedy of man.

Tragedy is the shattering of the forms and of our attachment to the forms; comedy, the wild and careless, inexhaustible joy of life invincible. Thus the two are the terms of a single mythological theme and experience which includes them both and which they bound: the down-going and the up-coming (kathodos and anodos), which together constitute the totality of the revelation that is life, and which the individual must know and love if he is to be purged (katharsis=purgatorio) of the contagion of sin (disobedience to the divine will) and death (identification with the mortal form).

It is the business of mythology proper, and of the fairy tale, to reveal the specific dangers and techniques of the dark interior way from tragedy to comedy. Hence the incidents are fantastic and “unreal”: they represent psychological, not physical, triumphs.

…the point is that, before such-and-such could be done on earth, this other, more important, primary thing had to be brought to pass within the labyrinth that we all know and visit in our dreams.

3 The Hero and the God

THE standard path of the mythological adventure of the hero is a magnification of the formula represented in the rites of passage: separation—initiation—return: which might be named the nuclear unit of the monomyth. A hero ventures forth from the world of common day into a region of supernatural wonder: fabulous forces are there encountered and a decisive victory is won: the hero comes back from this mysterious adventure with the power to bestow boons on his fellow man.

A majestic representation of the difficulties of the hero-task, and of its sublime import when it is profoundly conceived and solemnly undertaken, is presented in the traditional legend of the Great Struggle of the Buddha

The Old Testament records a comparable deed in its legend of Moses,

The Lord gave to him the Tables of the Law and commanded Moses to return with these to Israel, the people of the Lord.

Note 37: This is the most important single moment in Oriental mythology, a counterpart of the Crucifixion of the West. The Buddha beneath the Tree of Enlightenment (the Bo Tree) and Christ on the Holy Rood (the Tree of Redemption) are analogous figures, incorporating an archetypal World Savior, World Tree motif, which is of immemorial antiquity. Many other variants of the theme will be found among the episodes to come. The Immovable Spot and Mount Calvary are images of the World Navel, or World Axis.

Note 28: The point is that Buddhahood, Enlightenment, cannot be communicated, but only the WAY to Enlightenment.

As we soon shall see, whether presented in the vast, almost oceanic images of the Orient, in the vigorous narratives of the Greeks, or in the majestic legends of the Bible, the adventure of the hero normally follows the pattern of the nuclear unit above described: a separation from the world, a penetration to some source of power, and a life-enhancing return. The whole of the Orient has been blessed by the boon brought back by Gautama Buddha—his wonderful teaching of the Good Law—just as the Occident has been by the Decalogue of Moses. The Greeks referred fire, the first support of all human culture, to the world-transcending deed of their Prometheus, and the Romans the founding of their world-supporting city to Aeneas, following his departure from fallen Troy and his visit to the eerie underworld of the dead. Everywhere, no matter what the sphere of interest (whether religious, political, or personal), the really creative acts are represented as those deriving from some sort of dying to the world; and what happens in the interval of the hero’s nonentity, so that he comes back as one reborn, made great and filled with creative power, mankind is also unanimous in declaring.

The first great stage, that of the separation or departure, will be shown in Part I, Chapter I, in five subsections: (1) “The Call to Adventure,” or the signs of the vocation of the hero; (2) “Refusal of the Call,” or the folly of the flight from the god; (3) “Supernatural Aid,” the unsuspected assistance that comes to one who has undertaken his proper adventure; (4) “The Crossing of the First Threshold;” and (5) “The Belly of the Whale,” or the passage into the realm of night. The stage of the trials and victories of initiation will appear in Chapter II in six subsections: (1) “The Road of Trials,” or the dangerous aspect of the gods; (2) “The Meeting with the Goddess” (Magna Mater), or the bliss of infancy regained; (3) “Woman as the Temptress,” the realization and agony of Oedipus; (4) “Atonement with the Father;” (5) “Apotheosis;” and (6) “The Ultimate Boon.”

The third of the following chapters will conclude the discussion of these prospects under six subheadings: (1) “Refusal of the Return,” or the world denied; (2) “The Magic Flight,” or the escape of Prometheus; (3) “Rescue from Without;” (4) “The Crossing of the Return Threshold,” or the return to the world of common day; (5) “Master of the Two Worlds;” and (6) “Freedom to Live,” the nature and function of the ultimate boon.

Tribal or local heroes, such as the emperor Huang Ti, Moses, or the Aztec Tezcatlipoca, commit their boons to a single folk; universal heroes—Mohammed, Jesus, Gautama Buddha—bring a message for the entire world.

Part II, “The Cosmogonic Cycle,” unrolls the great vision of the creation and destruction of the world which is vouchsafed as revelation to the successful hero. Chapter I, Emanations, treats of the coming of the forms of the universe out of the void. Chapter II, The Virgin Birth, is a review of the creative and redemptive roles of the female power, first on a cosmic scale as the Mother of the Universe, then again on the human plane as the Mother of the Hero. Chapter III, Transformations of the Hero, traces the course of the legendary history of the human race through its typical stages, the hero appearing on the scene in various forms according to the changing needs of the race. And Chapter IV, Dissolutions, tells of the foretold end, first of the hero, then of the manifested world.

The cosmogonic cycle is presented with astonishing consistency in the sacred writings of all the continents,43 and it gives to the adventure of the hero a new and interesting turn; for now it appears that the perilous journey was a labor not of attainment but of reattainment, not discovery but rediscovery. The godly powers sought and dangerously won are revealed to have been within the heart of the hero all the time.

The two—the hero and his ultimate god, the seeker and the found—are thus understood as the outside and inside of a single, self-mirrored mystery, which is identical with the mystery of the manifest world. The great deed of the supreme hero is to come to the knowledge of this unity in multiplicity and then to make it known.

1 The World Navel

THE effect of the successful adventure of the hero is the unlocking and release again of the flow of life into the body of the world. The miracle of this flow may be represented in physical terms as a circulation of food substance, dynamically as a streaming of energy, or spiritually as a manifestation of grace. Such varieties of image alternate easily, representing three degrees of condensation of the one life force. An abundant harvest is the sign of God’s grace; God’s grace is the food of the soul; the lightning bolt is the harbinger of fertilizing rain, and at the same time the manifestation of the released energy of God. Grace, food substance, energy: these pour into the living world, and wherever they fail, life decomposes into death.

The torrent pours from an invisible source, the point of entry being the center of the symbolic circle of the universe, the Immovable Spot of the Buddha legend,46 around which the world may be said to revolve. Beneath this spot is the earth-supporting head of the cosmic serpent, the dragon, symbolical of the waters of the abyss, which are the divine life-creative energy and substance of the demiurge, the world-generative aspect of immortal being.47 The tree of life, i.e., the universe itself, grows from this point. It is rooted in the supporting darkness; the golden sun bird perches on its peak; a spring, the inexhaustible well, bubbles at its foot.

Thus the World Navel is the symbol of the continuous creation

The dome of heaven rests on the quarters of the earth, sometimes supported by four caryatidal kings, dwarfs, giants, elephants, or turtles.

The hearth in the home, the altar in the temple, is the hub of the wheel of the earth, the womb of the Universal Mother whose fire is the fire of life.

The solar ray igniting the hearth symbolizes the communication of divine energy to the womb of the world—and is again the axis uniting and turning the two wheels.

Wherever a hero has been born, has wrought, or has passed back into the void, the place is marked and sanctified. A temple is erected there to signify and inspire the miracle of perfect centeredness; for this is the place of the breakthrough into abundance. Someone at this point discovered eternity. The site can serve, therefore, as a support for fruitful meditation. Such temples are designed, as a rule, to simulate the four directions of the world horizon, the shrine or altar at the center being symbolical of the Inexhaustible Point. The one who enters the temple compound and proceeds to the sanctuary is imitating the deed of the original hero. His aim is to rehearse the universal pattern as a means of evoking within himself the recollection of the life-centering, life-renewing form.

Because, finally, the All is everywhere, and anywhere may become the seat of power.

The World Navel, then, is ubiquitous. And since it is the source of all existence, it yields the world’s plenitude of both good and evil. Ugliness and beauty, sin and virtue, pleasure and pain, are equally its production.

Chapter 1. Departure

  1. The Call to Adventure

This is an example of one of the ways in which the adventure can begin. A blunder apparently the merest chance—reveals an unsuspected world, and the individual is drawn into a relationship with forces that are not rightly understood. As Freud has shown,2 blunders are not the merest chance. They are the result of suppressed desires and conflicts.

…it marks what has been termed “the awakening of the self.”

Typical of the circumstances of the call are the dark forest, the great tree, the babbling spring, and the loathly, underestimated appearance of the carrier of the power of destiny. We recognize in the scene the symbols of the World Navel.

Freud has suggested that all moments of anxiety reproduce the painful feelings of the first separation from the mother—the tightening of the breath, congestion of the blood, etc., of the crisis of birth.4 Conversely, all moments of separation and new birth produce anxiety.

The disgusting and rejected frog or dragon of the fairy tale brings up the sun ball in its mouth; for the frog, the serpent, the rejected one, is the representative of that unconscious deep (“so deep that the bottom cannot be seen”) wherein are hoarded all of the rejected, unadmitted, unrecognized, unknown, or undeveloped factors, laws, and elements of existence.

The herald or announcer of the adventure, therefore, is often dark, loathly, or terrifying, judged evil by the world; yet if one could follow the way would be opened through the walls of day into the dark where the jewels glow. Or the herald is a beast (as in the fairy tale), representative of the repressed instinctual fecundity within ourselves, or again a veiled mysterious figure—the unknown.

Whether dream or myth, in these adventures there is an atmosphere of irresistible fascination about the figure that appears suddenly as guide, marking a new period, a new stage, in the biography. That which has to be faced, and is somehow profoundly familiar to the unconscious—though unknown, surprising, and even frightening to the conscious personality—makes itself known; and what formerly was meaningful may become strangely emptied of value: like the world of the king’s child, with the sudden disappearance into the well of the golden ball. Thereafter, even though the hero returns for a while to his familiar occupations, they may be found unfruitful

This first stage of the mythological journey—which we have designated the “call to adventure”—signifies that destiny has summoned the hero and transferred his spiritual center of gravity from within the pale of his society to a zone unknown.

This fateful region of both treasure and danger may be variously represented: as a distant land, a forest, a kingdom underground, beneath the waves, or above the sky, a secret island, lofty mountaintop, or profound dream state; but it is always a place of strangely fluid and polymorphous beings, unimaginable torments, superhuman deeds, and impossible delight. The hero can go forth of his own volition to accomplish the adventure, as did Theseus when he arrived in his father’s city, Athens, and heard the horrible history of the Minotaur; or he may be carried or sent abroad by some benign or malignant agent, as was Odysseus, driven about the Mediterranean by the winds of the angered god, Poseidon. The adventure may begin as a mere blunder, as did that of the princess of the fairy tale; or still again, one may be only casually strolling, when some passing phenomenon catches the wandering eye and lures one away from the frequented paths of man. Examples might be multiplied, ad infinitum, from every corner of the world.

2. Refusal of the Call

Refusal of the summons converts the adventure into its negative. Walled in boredom, hard work, or “culture,” the subject loses the power of significant affirmative action and becomes a victim to be saved.

Whatever house he builds, it will be a house of death: a labyrinth of cyclopean walls to hide from him his Minotaur. All he can do is create new problems for himself and await the gradual approach of his disintegration.

The myths and folk tales of the whole world make clear that the refusal is essentially a refusal to give up what one takes to be one’s own interest.

One is harassed, both day and night, by the divine being that is the image of the living self within the locked labyrinth of one’s own disoriented psyche. The ways to the gates have all been lost: there is no exit. One can only cling, like Satan, furiously, to oneself and be in hell; or else break, and be annihilate at last, in God.

What they represent is an impotence to put off the infantile ego, with its sphere of emotional relationships and ideals. One is bound in by the walls of childhood; the father and mother stand as threshold guardians, and the timorous soul, fearful of some punishment, fails to make the passage through the door and come to birth in the world without.

Sometimes the predicament following an obstinate refusal of the call proves to be the occasion of a providential revelation of some unsuspected principle of release.

3. Supernatural Aid

FOR those who have not refused the call, the first encounter of the hero-journey is with a protective figure (often a little old crone or old man) who provides the adventurer with amulets against the dragon forces he is about to pass.

The helpful crone and fairy godmother is a familiar feature of European fairy lore; in Christian saints’ legends the role is commonly played by the Virgin. The Virgin by her intercession can win the mercy of the Father. Spider Woman with her web can control the movements of the Sun. The hero who has come under the protection of the Cosmic Mother cannot be harmed. The thread of Ariadne brought Theseus safely through the adventure of the labyrinth.

What such a figure represents is the benign, protecting power of destiny. The fantasy is a reassurance—a promise that the peace of Paradise, which was known first within the mother womb, is not to be lost; that it supports the present and stands in the future as well as in the past (is omega as well as alpha); that though omnipotence may seem to be endangered by the threshold passages and life awakenings, protective power is always and ever present within the sanctuary of the heart and even immanent within, or just behind, the unfamiliar features of the world. One has only to know and trust, and the ageless guardians will appear. Having responded to his own call, and continuing to follow courageously as the consequences unfold, the hero finds all the forces of the unconscious at his side.

Not infrequently, the supernatural helper is masculine in form. In fairy lore it may be some little fellow of the wood, some wizard, hermit, shepherd, or smith, who appears, to supply the amulets and advice that the hero will require. The higher mythologies develop the role in the great figure of the guide, the teacher, the ferryman, the conductor of souls to the afterworld.

Goethe presents the masculine guide in Faust as Mephistopheles—and not infrequently the dangerous aspect of the “mercurial” figure is stressed; for he is the lurer of the innocent soul into realms of trial.

Protective and dangerous, motherly and fatherly at the same time, this supernatural principle of guardianship and direction unites in itself all the ambiguities of the unconscious—thus signifying the support of our conscious personality by that other, larger system, but also the inscrutability of the guide that we are following, to the peril of all our rational ends.

The call, in fact, was the first announcement of the approach of this initiatory priest.

Note 33: The well is symbolical of the unconscious. Compare that of the fairy story of the Frog King.

4. The Crossing of the First Threshold

WITH the personifications of his destiny to guide and aid him, the hero goes forward in his adventure until he comes to the “threshold guardian” at the entrance to the zone of magnified power. Such custodians bound the world in the four directions—also up and down—standing for the limits of the hero’s present sphere, or life horizon. Beyond them is darkness, the unknown, and danger; just as beyond the parental watch is danger to the infant and beyond the protection of his society danger to the member of the tribe.

The folk mythologies populate with deceitful and dangerous presences every desert place outside the normal traffic of the village.

The regions of the unknown (desert, jungle, deep sea, alien land, etc.) are free fields for the projection of unconscious content. Incestuous libido and patricidal destrudo are thence reflected back against the individual and his society in forms suggesting threats of violence and fancied dangerous delight—not only as ogres but also as sirens of mysteriously seductive, nostalgic beauty.

The Arcadian god Pan is the best known Classical example of this dangerous presence dwelling just beyond the protected zone of the village boundary.

This is a dream that brings out the sense of the first, or protective, aspect of the threshold guardian. One had better not challenge the watcher of the established bounds. And yet—it is only by advancing beyond those bounds, provoking the destructive other aspect of the same power, that the individual passes, either alive or in death, into a new zone of experience

The “Wall of Paradise,” which conceals God from human sight, is described by Nicholas of Cusa as constituted of the “coincidence of opposites,” its gate being guarded by “the highest spirit of reason, who bars the way until he has been overcome.”53 The pairs of opposites (being and not being, life and death, beauty and ugliness, good and evil, and all the other polarities that bind the faculties to hope and fear, and link the organs of action to deeds of defense and acquisition) are the clashing rocks (Symplegades) that crush the traveler, but between which the heroes always pass. This is a motif known throughout the world.

5. The Belly of the Whale

THE idea that the passage of the magical threshold is a transit into a sphere of rebirth is symbolized in the worldwide womb image of the belly of the whale. The hero, instead of conquering or conciliating the power of the threshold, is swallowed into the unknown, and would appear to have died.

The little German girl, Red Ridinghood, was swallowed by a wolf. The Polynesian favorite, Maui, was swallowed by his great-great-grandmother, Hine-nui-te-po. And the whole Greek pantheon, with the sole exception of Zeus, was swallowed by its father, Kronos.

This popular motif gives emphasis to the lesson that the passage of the threshold is a form of self-annihilation. Its resemblance to the adventure of the Symplegades is obvious. But here, instead of passing outward, beyond the confines of the visible world, the hero goes inward, to be born again. The disappearance corresponds to the passing of a worshiper into a temple—where he is to be quickened by the recollection of who and what he is, namely dust and ashes unless immortal. The temple interior, the belly of the whale, and the heavenly land beyond, above, and below the confines of the world, are one and the same. That is why the approaches and entrances to temples are flanked and defended by colossal gargoyles: dragons, lions, devil-slayers with drawn swords, resentful dwarfs, winged bulls. These are the threshold guardians to ward away all incapable of encountering the higher silences within. They are preliminary embodiments of the dangerous aspect of the presence, corresponding to the mythological ogres that bound the conventional world, or to the two rows of teeth of the whale. They illustrate the fact that the devotee at the moment of entry into a temple undergoes a metamorphosis. His secular character remains without; he sheds it, as a snake its slough. Once inside he may be said to have died to time and returned to the World Womb, the World Navel, the Earthly Paradise. The mere fact that anyone can physically walk past the temple guardians does not invalidate their significance; for if the intruder is incapable of encompassing the sanctuary, then he has effectually remained without. Anyone unable to understand a god sees it as a devil and is thus defended from the approach. Allegorically, then, the passage into a temple and the hero-dive through the jaws of the whale are identical adventures, both denoting, in picture language, the life-centering, life-renewing act.

The hero whose attachment to ego is already annihilate passes back and forth across the horizons of the world, in and out of the dragon, as readily as a king through all the rooms of his house. And therein lies his power to save; for his passing and returning demonstrate that through all the contraries of phenomenality the Uncreate-Imperishable remains, and there is nothing to fear. And so it is that, throughout the world, men whose function it has been to make visible on earth the life-fructifying mystery of the slaying of the dragon have enacted upon their own bodies the great symbolic act, scattering their flesh, like the body of Osiris, for the renovation of the world.


1. The Road of Trials

ONCE having traversed the threshold, the hero moves in a dream landscape of curiously fluid, ambiguous forms, where he must survive a succession of trials.

The hero is covertly aided by the advice, amulets, and secret agents of the supernatural helper whom he met before his entrance into this region. Or it may be that he here discovers for the first time that there is a benign power everywhere supporting him in his superhuman passage.

…the “difficult tasks” motif is that of Psyche’s quest for her lost lover, Cupid.1 Here all the principal roles are reversed: instead of the lover trying to win his bride, it is the bride trying to win her lover; and instead of a cruel father withholding his daughter from the lover, it is the jealous mother, Venus, hiding her son, Cupid, from his bride.

Psyche’s voyage to the underworld is but one of innumerable such adventures undertaken by the heroes of fairy tale and myth. Among the most perilous are those of the shamans of the peoples of the farthest north (the Lapps, Siberians, Eskimo, and certain American Indian tribes), when they go to seek out and recover the lost or abducted souls of the sick.

“In every primitive tribe,” writes Dr. Géza Róheim, “we find the medicine man in the center of society and it is easy to show that the medicine man is either a neurotic or a psychotic or at least that his art is based on the same mechanisms as a neurosis or a psychosis. Human groups are actuated by their group ideals, and these are always based on the infantile situation.” “The infancy situation is modified or inverted by the process of maturation, again modified by the necessary adjustment to reality, yet it is there and supplies those unseen libidinal ties without which no human groups could exist.”6 The medicine men, therefore, are simply making both visible and public the systems of symbolic fantasy that are present in the psyche of every adult member of their society. “They are the leaders in this infantile game and the lightning conductors of common anxiety. They fight the demons so that others can hunt the prey and in general fight reality.”

And so it happens that if anyone—in whatever society—undertakes for himself the perilous journey into the darkness by descending, either intentionally or unintentionally, into the crooked lanes of his own spiritual labyrinth, he soon finds himself in a landscape of symbolical figures (any one of which may swallow him) which is no less marvelous than the wild Siberian world of the pudak and sacred mountains. In the vocabulary of the mystics, this is the second stage of the Way, that of the “purification of the self,” when the senses are “cleansed and humbled,” and the energies and interests “concentrated upon transcendental things;”8 or in a vocabulary of more modern turn: this is the process of dissolving, transcending, or transmuting the infantile images of our personal past.

There can be no question: the psychological dangers through which earlier generations were guided by the symbols and spiritual exercises of their mythological and religious inheritance, we today (in so far as we are unbelievers, or, if believers, in so far as our inherited beliefs fail to represent the real problems of contemporary life) must face alone, or, at best, with only tentative, impromptu, and not often very effective guidance. This is our problem as modern, “enlightened” individuals, for whom all gods and devils have been rationalized out of existence.

To hear and profit, however, one may have to submit somehow to purgation and surrender. And that is part of our problem: just how to do that.

The oldest recorded account of the passage through the gates of metamorphosis is the Sumerian myth of the goddess Inanna’s descent to the nether world.

From the “great above” she set her mind toward
the “great below,”
The goddess, from the “great above” she set her
mind toward the “great below,”
Inanna, from the “great above” she set her mind
toward the “great below.”
My lady abandoned heaven, abandoned earth,
To the nether world she descended,
Inanna abandoned heaven, abandoned earth,
To the nether world she descended,
Abandoned lordship, abandoned ladyship,
To the nether world she descended.

Inanna and Ereshkigal, the two sisters, light and dark respectively, together represent, according to the antique manner of symbolization, the one goddess in two aspects; and their confrontation epitomizes the whole sense of the difficult road of trials. The hero, whether god or goddess, man or woman, the figure in a myth or the dreamer of a dream, discovers and assimilates his opposite (his own unsuspected self) either by swallowing it or by being swallowed. One by one the resistances are broken. He must put aside his pride, his virtue, beauty, and life, and bow or submit to the absolutely intolerable. Then he finds that he and his opposite are not of differing species, but one flesh.

Can the ego put itself to death? For many-headed is this surrounding Hydra; one head cut off, two more appear—unless the right caustic is applied to the mutilated stump.

2. The Meeting with the Goddess

THE ultimate adventure, when all the barriers and ogres have been overcome, is commonly represented as a mystical marriage of the triumphant hero-soul with the Queen Goddess of the World. This is the crisis at the nadir, the zenith, or at the uttermost edge of the earth, at the central point of the cosmos, in the tabernacle of the temple, or within the darkness of the deepest chamber of the heart.

The Lady of the House of Sleep is a familiar figure in fairy tale and myth. We have already spoken of her, under the forms of Brynhild and little Briar-rose. She is the paragon of all paragons of beauty, the reply to all desire, the bliss-bestowing goal of every hero’s earthly and unearthly quest. She is mother, sister, mistress, bride. Whatever in the world has lured, whatever has seemed to promise joy, has been premonitory of her existence—in the deep of sleep, if not in the cities and forests of the world. For she is the incarnation of the promise of perfection; the soul’s assurance that, at the conclusion of its exile in a world of organized inadequacies, the bliss that once was known will be known again: the comforting, the nourishing, the “good” mother—young and beautiful—who was known to us, and even tasted, in the remotest past. Time sealed her away, yet she is dwelling still, like one who sleeps in timelessness, at the bottom of the timeless sea.

The remembered image is not only benign, however; for the “bad” mother too—(1) the absent, unattainable mother, against whom aggressive fantasies are directed, and from whom a counter-aggression is feared; (2) the hampering, forbidding, punishing mother; (3) the mother who would hold to herself the growing child trying to push away; and finally (4) the desired but forbidden mother (Oedipus complex) whose presence is a lure to dangerous desire (castration complex)—persists in the hidden land of the adult’s infant recollection and is sometimes even the greater force. She is at the root of such unattainable great goddess figures as that of the chaste and terrible Diana—whose absolute ruin of the young sportsman Actaeon illustrates what a blast of fear is contained in such symbols of the mind’s and body’s blocked desire.

The mythological figure of the Universal Mother imputes to the cosmos the feminine attributes of the first, nourishing and protecting presence. The fantasy is primarily spontaneous; for there exists a close and obvious correspondence between the attitude of the young child toward its mother and that of the adult toward the surrounding material world.

For she is the world creatrix, ever mother, ever virgin. She encompasses the encompassing, nourishes the nourishing, and is the life of everything that lives.

She is also the death of everything that dies. The whole round of existence is accomplished within her sway, from birth, through adolescence, maturity, and senescence, to the grave. She is the womb and the tomb: the sow that eats her farrow. Thus she unites the “good” and the “bad,” exhibiting the two modes of the remembered mother, not as personal only, but as universal. The devotee is expected to contemplate the two with equal equanimity. Through this exercise his spirit is purged of its infantile, inappropriate sentimentalities and resentments, and his mind opened to the inscrutable presence which exists, not primarily as “good” and “bad” with respect to his childlike human convenience, his weal and woe, but as the law and image of the nature of being.

Her name is Kali, the Black One; her title: The Ferry across the Ocean of Existence.

Woman, in the picture language of mythology, represents the totality of what can be known. The hero is the one who comes to know.

The meeting with the goddess (who is incarnate in every woman) is the final test of the talent of the hero to win the boon of love (charity: amor fati), which is life itself enjoyed as the encasement of eternity.

And when the adventurer, in this context, is not a youth but a maid, she is the one who, by her qualities, her beauty, or her yearning, is fit to become the consort of an immortal. Then the heavenly husband descends to her and conducts her to his bed—whether she will or no. And if she has shunned him, the scales fall from her eyes; if she has sought him, her desire finds its peace.

The Greek Orthodox and Roman Catholic churches celebrate the same mystery in the Feast of the Assumption: “The Virgin Mary is taken up into the bridal chamber of heaven, where the King of Kings sits on his starry throne.” “O Virgin most prudent, whither goest thou, bright as the morn? all beautiful and sweet art thou, O daughter of Zion, fair as the moon, elect as the sun.”

3. Woman as the Temptress

THE mystical marriage with the queen goddess of the world represents the hero’s total mastery of life; for the woman is life, the hero its knower and master. And the testings of the hero, which were preliminary to his ultimate experience and deed, were symbolical of those crises of realization by means of which his consciousness came to be amplified and made capable of enduring the full possession of the mother-destroyer, his inevitable bride. With that he knows that he and the father are one: he is in the father’s place.

The whole sense of the ubiquitous myth of the hero’s passage is that it shall serve as a general pattern for men and women, wherever they may stand along the scale. Therefore it is formulated in the broadest terms. The individual has only to discover his own position with reference to this general human formula, and let it then assist him past his restricting walls. Who and where are his ogres? Those are the reflections of the unsolved enigmas of his own humanity. What are his ideals? Those are the symptoms of his grasp of life.

The crux of the curious difficulty lies in the fact that our conscious views of what life ought to be seldom correspond to what life really is. Generally we refuse to admit within ourselves, or within our friends, the fullness of that pushing, self-protective, malodorous, carnivorous, lecherous fever which is the very nature of the organic cell. Rather, we tend to perfume, whitewash, and reinterpret; meanwhile imagining that all the flies in the ointment, all the hairs in the soup, are the faults of some unpleasant someone else.

4. Atonement with the Father

…the ogre aspect of the father.

In most mythologies, the images of mercy and grace are rendered as vividly as those of justice and wrath, so that a balance is maintained, and the heart is buoyed rather than scourged along its way. “Fear not!” says the hand gesture of the god Shiva, as he dances before his devotee the dance of the universal destruction.

For the ogre aspect of the father is a reflex of the victim’s own ego—derived from the sensational nursery scene that has been left behind, but projected before; and the fixating idolatry of that pedagogical nonthing is itself the fault that keeps one steeped in a sense of sin, sealing the potentially adult spirit from a better balanced, more realistic view of the father, and therewith of the world. Atonement (at-one-ment) consists in no more than the abandonment of that self-generated double monster—the dragon thought to be God (superego) and the dragon thought to be Sin (repressed id). But this requires an abandonment of the attachment to ego itself, and that is what is difficult. One must have a faith that the father is merciful, and then a reliance on that mercy. Therewith, the center of belief is transferred outside of the bedeviling god’s tight scaly ring, and the dreadful ogres dissolve.

It is in this ordeal that the hero may derive hope and assurance from the helpful female figure, by whose magic (pollen charms or power of intercession) he is protected through all the frightening experiences of the father’s ego-shattering initiation. For if it is impossible to trust the terrifying father-face, then one’s faith must be centered elsewhere (Spider Woman, Blessed Mother); and with that reliance for support, one endures the crisis—only to find, in the end, that the father and mother reflect each other, and are in essence the same.

The need for great care on the part of the father, admitting to his house only those who have been thoroughly tested, is illustrated by the unhappy exploit of the lad Phaëthon, described in a famous tale of the Greeks. Born of a virgin in Ethiopia and taunted by his playmates to search the question of his father, he set off across Persia and India to find the palace of the Sun—for his mother had told him that his father was Phoebus, the god who drove the solar chariot.

This tale of indulgent parenthood illustrates the antique idea that when the roles of life are assumed by the improperly initiated, chaos supervenes. When the child outgrows the popular idyl of the mother breast and turns to face the world of specialized adult action, it passes, spiritually, into the sphere of the father—who becomes, for his son, the sign of the future task, and for his daughter, of the future husband. Whether he knows it or not, and no matter what his position in society, the father is the initiating priest through whom the young being passes on into the larger world. And just as, formerly, the mother represented the “good” and “evil,” so now does he, but with this complication—that there is a new element of rivalry in the picture: the son against the father for the mastery of the universe, and the daughter against the mother to be the mastered world.

The traditional idea of initiation combines an introduction of the candidate into the techniques, duties, and prerogatives of his vocation with a radical readjustment of his emotional relationship to the parental images. The mystagogue (father or father-substitute) is to entrust the symbols of office only to a son who has been effectually purged of all inappropriate infantile cathexes—for whom the just, impersonal exercise of the powers will not be rendered impossible by unconscious (or perhaps even conscious and rationalized) motives of self-aggrandizement, personal preference, or resentment. Ideally, the invested one has been divested of his mere humanity and is representative of an impersonal cosmic force. He is the twice-born: he has become himself the father. And he is competent, consequently, now to enact himself the role of the initiator, the guide, the sun door, through whom one may pass from the infantile illusions of “good” and “evil” to an experience of the majesty of cosmic law, purged of hope and fear, and at peace in the understanding of the revelation of being.

…through the Christian church (in the mythology of the Fall and Redemption, Crucifixion and Resurrection, the “second birth” of baptism, the initiatory blow on the cheek at confirmation, the symbolical eating of the Flesh and drinking of the Blood) solemnly, and sometimes effectively, we are united to those immortal images of initiatory might, through the sacramental operation of which, man, since the beginning of his day on earth, has dispelled the terrors of his phenomenality and won through to the all-transfiguring vision of immortal being. “For if the blood of bulls and of goats, and the ashes of an heifer sprinkling the unclean, sanctifieth to the purifying of the flesh: how much more shall the blood of Christ, who through the eternal Spirit offered himself without spot to God, purge your conscience from dead works to serve the living God?”

But the most extraordinary and profoundly moving of the traits of Viracocha, this nobly conceived Peruvian rendition of the universal god, is the detail that is peculiarly his own, namely that of the tears. The living waters are the tears of God. Herewith the world-discrediting insight of the monk, “All life is sorrowful,” is combined with the world-begetting affirmative of the father: “Life must be!” In full awareness of the life anguish of the creatures of his hand, in full consciousness of the roaring wilderness of pains, the brain-splitting fires of the deluded, self-ravaging, lustful, angry universe of his creation, this divinity acquiesces in the deed of supplying life to life. To withhold the seminal waters would be to annihilate; yet to give them forth is to create this world that we know. For the essence of time is flux, dissolution of the momentarily existent; and the essence of life is time. In his mercy, in his love for the forms of time, this demiurgic man of men yields countenance to the sea of pangs; but in his full awareness of what he is doing, the seminal waters of the life that he gives are the tears of his eyes.

The paradox of creation, the coming of the forms of time out of eternity, is the germinal secret of the father. It can never be quite explained. Therefore, in every system of theology there is an umbilical point, an Achilles tendon which the finger of mother life has touched, and where the possibility of perfect knowledge has been impaired. The problem of the hero is to pierce himself (and therewith his world) precisely through that point; to shatter and annihilate that key knot of his limited existence.

The problem of the hero going to meet the father is to open his soul beyond terror to such a degree that he will be ripe to understand how the sickening and insane tragedies of this vast and ruthless cosmos are completely validated in the majesty of Being. The hero transcends life with its peculiar blind spot and for a moment rises to a glimpse of the source. He beholds the face of the father, understands—and the two are atoned.

5. Apotheosis

“When the envelopment of consciousness has been annihilated, then he becomes free of all fear, beyond the reach of change.”84 This is the release potential within us all, and which anyone can attain—through herohood; for, as we read: “All things are Buddha-things;”85 or again (and this is the other way of making the same statement): “All beings are without self.”

Time and eternity are two aspects of the same experience-whole, two planes of the same nondual ineffable; i.e., the jewel of eternity is in the lotus of birth and death: om mani padme hum. The first wonder to be noted here is the androgynous character of the Bodhisattva: masculine Avalokiteshvara, feminine Kwan Yin. Male-female gods are not uncommon in the world of myth. They emerge always with a certain mystery; for they conduct the mind beyond objective experience into a symbolic realm where duality is left behind.

And among the Greeks, not only Hermaphrodite (the child of Hermes and Aphrodite),88 but Eros too, the divinity of love (the first of the gods, according to Plato),89 were in sex both female and male.

“So God created man in his own image, in the image of God created he him; male and female created he them.”90 The question may arise in the mind as to the nature of the image of God; but the answer is already given in the text, and is clear enough. “When the Holy One, Blessed be He, created the first man, He created him androgynous.”91 The removal of the feminine into another form symbolizes the beginning of the fall from perfection into duality; and it was naturally followed by the discovery of the duality of good and evil, exile from the garden where God walks on earth, and thereupon the building of the wall of Paradise, constituted of the “coincidence of opposites,”92 by which Man (now man and woman) is cut off from not only the vision but even the recollection of the image of God.

This is the Biblical version of a myth known to many lands. It represents one of the basic ways of symbolizing the mystery of creation: the devolvement of eternity into time, the breaking of the one into the two and then the many, as well as the generation of new life through the reconjunction of the two. This image stands at the beginning of the cosmogonic cycle,93 and with equal propriety at the conclusion of the hero-task, at the moment when the wall of Paradise is dissolved, the divine form found and recollected, and wisdom regained.

The call of the Great Father Snake was alarming to the child; the mother was protection. But the father came. He was the guide and initiator into the mysteries of the unknown. As the original intruder into the paradise of the infant with its mother, the father is the archetypal enemy; hence, throughout life all enemies are symbolical (to the unconscious) of the father.

…the irresistible compulsion to make war: the impulse to destroy the father is continually transforming itself into public violence. The old men of the immediate community or race protect themselves from their growing sons by the psychological magic of their totem ceremonials. They enact the ogre father, and then reveal themselves to be the feeding mother too. A new and larger paradise is thus established. But this paradise does not include the traditional enemy tribes, or races, against whom aggression is still systematically projected.

The rest of the world meanwhile (that is to say, by far the greater portion of mankind) is left outside the sphere of his sympathy and protection because outside the sphere of the protection of his god. And there takes place, then, that dramatic divorce of the two principles of love and hate which the pages of history so bountifully illustrate. Instead of clearing his own heart the zealot tries to clear the world. The laws of the City of God are applied only to his in-group (tribe, church, nation, class, or what not) while the fire of a perpetual holy war is hurled (with good conscience, and indeed a sense of pious service) against whatever uncircumcised, barbarian, heathen, “native,” or alien people happens to occupy the position of neighbor.

Once we have broken free of the prejudices of our own provincially limited ecclesiastical, tribal, or national rendition of the world archetypes, it becomes possible to understand that the supreme initiation is not that of the local motherly fathers, who then project aggression onto the neighbors for their own defense.

The good news, which the World Redeemer brings and which so many have been glad to hear, zealous to preach, but reluctant, apparently, to demonstrate, is that God is love, that He can be, and is to be, loved, and that all without exception are his children.

The understanding of the final—and critical—implications of the world-redemptive words and symbols of the tradition of Christendom has been so disarranged, during the tumultuous centuries that have elapsed since St. Augustine’s declaration of the holy war of the Civitas Dei against the Civitas Diaboli, that the modern thinker wishing to know the meaning of a world religion (i.e., of a doctrine of universal love) must turn his mind to the other great (and much older) universal communion: that of the Buddha, where the primary word still is peace—peace to all beings.

If ye realize the Emptiness of All Things, Compassion
will arise within your hearts;
If ye lose all differentiation between yourselves and others, fit
to serve others ye will be;
And when in serving others ye shall win success, then shall ye
meet with me;
And finding me, ye shall attain to Buddhahood.

Peace is at the heart of all because Avalokiteshvara-Kwannon, the mighty Bodhisattva, Boundless Love, includes, regards, and dwells within (without exception) every sentient being.

“The Lord Who is Seen Within.” We are all reflexes of the image of the Bodhisattva. The sufferer within us is that divine being. We and that protecting father are one. This is the redeeming insight. That protecting father is every man we meet. And so it must be known that, though this ignorant, limited, self-defending, suffering body may regard itself as threatened by some other, the enemy, that one too is the God. The ogre breaks us, but the hero, the fit candidate, undergoes the initiation “like a man;” and behold, it was the father: we in Him and He in us. The dear, protecting mother of our body could not defend us from the Great Father Serpent; the mortal, tangible body that she gave us was delivered into his frightening power. But death was not the end. New life, new birth, new knowledge of existence (so that we live not in this physique only, but in all bodies, all physiques of the world, as the Bodhisattva) was given us. That father was himself the womb, the mother, of a second birth.

The childhood parent images and ideas of “good” and “evil” have been surpassed. We no longer desire and fear; we are what was desired and feared. All the gods, Bodhisattvas, and Buddhas have been subsumed in us, as in the halo of the mighty holder of the lotus of the world.

The method of the celebrated Buddhist Eightfold Path:
Right Belief, Right Intentions,
Right Speech, Right Actions,
Right Livelihood, Right Endeavoring,
Right Mindfulness, Right Concentration.

And the Christian reading of the meaning also is the same: Et Verbum caro factum est, i.e., “The Jewel is in the Lotus”: Om mani padme hum.

Note 134: “And the Word was made flesh”; verse of the Angelus, celebrating the conception of Jesus in Mary’s womb.

6. The Ultimate Boon

The motif (derived from an infantile fantasy) of the inexhaustible dish, symbolizing the perpetual life-giving, form-building powers of the universal source, is a fairy-tale counterpart of the mythological image of the cornucopian banquet of the gods.

The profession, for example, of the medicine man, this nucleus of all primitive societies, “originates … on the basis of the infantile body-destruction fantasies, by means of a series of defence mechanisms.”139 In Australia a basic conception is that the spirits have removed the intestines of the medicine man and substituted pebbles, quartz crystals, a quantity of rope, and sometimes also a little snake endowed with power.140 “The first formula is abreaction in fantasy (my inside has already been destroyed) followed by reaction-formation (my inside is not something corruptible and full of faeces, but incorruptible, full of quartz crystals).”

The supreme boon desired for the Indestructible Body is uninterrupted residence in the Paradise of the Milk that Never Fails

Soul and body food, heart’s ease, is the gift of “All Heal,” the nipple inexhaustible. Mt. Olympus rises to the heavens; gods and heroes banquet there on ambrosia (a-, not, mortal). In Wotan’s mountain hall, four hundred and thirty-two thousand heroes consume the undiminished flesh of Saehrimnir, the Cosmic Boar, washing it down with a milk that runs from the udders of the she-goat Heidrun: she feeds on the leaves of Yggdrasil, the World Ash. Within the fairy hills of Erin, the deathless Tuatha De Danaan consume the self-renewing pigs of Manannan, drinking copiously of Guibne’s ale. In Persia, the gods in the mountain garden on Mt. Hara Berezaiti drink immortal haoma, distilled from the Gaokerena Tree, the tree of life. The Japanese gods drink sake, the Polynesian ava, the Aztec gods drink the blood of men and maids. And the redeemed of Yahweh, in their roof garden, are served the inexhaustible, delicious flesh of the monsters Behemoth, Leviathan, and Ziz, while drinking the liquors of the four sweet rivers of paradise.

Humor is the touchstone of the truly mythological as distinct from the more literal-minded and sentimental theological mood. The gods as icons are not ends in themselves. Their entertaining myths transport the mind and spirit, not up to, but past them, into the yonder void; from which perspective the more heavily freighted theological dogmas then appear to have been only pedagogical lures: their function, to cart the unadroit intellect away from its concrete clutter of facts and events to a comparatively rarefied zone, where, as a final boon, all existence—whether heavenly, earthly, or infernal—may at last be seen transmuted into the semblance of a lightly passing, recurrent, mere childhood dream of bliss and fright.

The gods and goddesses then are to be understood as embodiments and custodians of the elixir of Imperishable Being but not themselves the Ultimate in its primary state. What the hero seeks through his intercourse with them is therefore not finally themselves, but their grace, i.e., the power of their sustaining substance. This miraculous energy-substance and this alone is the Imperishable; the names and forms of the deities who everywhere embody, dispense, and represent it come and go. This is the miraculous energy of the thunderbolts of Zeus, Yahweh, and the Supreme Buddha, the fertility of the rain of Viracocha, the virtue announced by the bell rung in the Mass at the consecration,157 and the light of the ultimate illumination of the saint and sage. Its guardians dare release it only to the duly proven.

The greatest tale of the elixir quest in the Mesopotamian, pre-Biblical tradition is that of Gilgamesh, a legendary king of the Sumerian city of Erech, who set forth to attain the watercress of immortality, the plant “Never Grow Old.”

Now the far land that they were approaching was the residence of Utnapishtim, the hero of the primordial deluge. Gilgamesh, on landing, had to listen to the patriarch’s long recitation of the story of the deluge.

The plant was growing at the bottom of the cosmic sea.

Gilgamesh bathed in a cool water-hole and lay down to rest. But while he slept, a serpent smelled the wonderful perfume of the plant, darted forth, and carried it away. Eating it, the snake immediately gained the power of sloughing its skin, and so renewed its youth. But Gilgamesh, when he awoke, sat down and wept, “and the tears ran down the wall of his nose.”

The research for physical immortality proceeds from a misunderstanding of the traditional teaching. On the contrary, the basic problem is: to enlarge the pupil of the eye, so that the body with its attendant personality will no longer obstruct the view. Immortality is then experienced as a present fact: “It is here! It is here!”

“All things are in process, rising and returning. Plants come to blossom, but only to return to the root. Returning to the root is like seeking tranquility. Seeking tranquility is like moving toward destiny. To move toward destiny is like eternity. To know eternity is enlightenment, and not to recognize eternity brings disorder and evil.”

The Japanese have a proverb: “The gods only laugh when men pray to them for wealth.” The boon bestowed on the worshiper is always scaled to his stature and to the nature of his dominant desire: the boon is simply a symbol of life energy stepped down to the requirements of a certain specific case. The irony, of course, lies in the fact that, whereas the hero who has won the favor of the god may beg for the boon of perfect illumination, what he generally seeks are longer years to live, weapons with which to slay his neighbor, or the health of his child.

The agony of breaking through personal limitations is the agony of spiritual growth. Art, literature, myth and cult, philosophy, and ascetic disciplines are instruments to help the individual past his limiting horizons into spheres of ever-expanding realization. As he crosses threshold after threshold, conquering dragon after dragon, the stature of the divinity that he summons to his highest wish increases, until it subsumes the cosmos. Finally, the mind breaks the bounding sphere of the cosmos to a realization transcending all experiences of form—all symbolizations, all divinities: a realization of the ineluctable void.

This is the highest and ultimate crucifixion, not only of the hero, but of his god as well. Here the Son and the Father alike are annihilated—as personality-masks over the unnamed.

…the universal force of a single inscrutable mystery: the power that constructs the atom and controls the orbits of the stars.

That font of life is the core of the individual, and within himself he will find it—if he can tear the coverings away. The pagan Germanic divinity Othin (Wotan) gave an eye to split the veil of light into the knowledge of this infinite dark, and then underwent for it the passion of a crucifixion:

I ween that I hung on the windy tree,
Hung there for nights full nine;
With the spear I was wounded, and offered I was
To Othin, myself to myself,
On the tree that none may ever know
What root beneath it runs.

The Buddha’s victory beneath the Bo Tree is the classic Oriental example of this deed. With the sword of his mind he pierced the bubble of the universe—and it shattered into nought. The whole world of natural experience, as well as the continents, heavens, and hells of traditional religious belief, exploded—together with their gods and demons. But the miracle of miracles was that though all exploded, all was nevertheless thereby renewed, revivified, and made glorious with the effulgence of true being. Indeed, the gods of the redeemed heavens raised their voices in harmonious acclaim of the man-hero who had penetrated beyond them to the void that was their life and source.


1. Refusal of the Return

WHEN the hero-quest has been accomplished, through penetration to the source, or through the grace of some male or female, human or animal, personification, the adventurer still must return with his life-transmuting trophy. The full round, the norm of the monomyth, requires that the hero shall now begin the labor of bringing the runes of wisdom, the Golden Fleece, or his sleeping princess, back into the kingdom of humanity, where the boon may redound to the renewing of the community, the nation, the planet, or the ten thousand worlds.

But the responsibility has been frequently refused. Even the Buddha, after his triumph, doubted whether the message of realization could be communicated, and saints are reported to have passed away while in the supernal ecstasy.

2. The Magic Flight

IF THE hero in his triumph wins the blessing of the goddess or the god and is then explicitly commissioned to return to the world with some elixir for the restoration of society, the final stage of his adventure is supported by all the powers of his supernatural patron. On the other hand, if the trophy has been attained against the opposition of its guardian, or if the hero’s wish to return to the world has been resented by the gods or demons, then the last stage of the mythological round becomes a lively, often comical, pursuit. This flight may be complicated by marvels of magical obstruction and evasion.

The flight is a favorite episode of the folk tale, where it is developed under many lively forms.

A popular variety of the magic flight is that in which objects are left behind to speak for the fugitive and thus delay pursuit.

Another well-known variety of the magic flight is one in which a number of delaying obstacles are tossed behind by the wildly fleeing hero.

One of the most shocking of the obstacle flights is that of the Greek hero, Jason. He had set forth to win the Golden Fleece. Putting to sea in the magnificent Argo with a great company of warriors, he had sailed in the direction of the Black Sea, and, though delayed by many fabulous dangers, arrived, at last, miles beyond the Bosporus, at the city and palace of King Aeëtes. Behind the palace was the grove and tree of the dragon-guarded prize.

Now the daughter of the king, Medea, conceived an overpowering passion for the illustrious foreign visitor and, when her father imposed an impossible task as the price of the Golden Fleece, compounded charms that enabled him to succeed.

Then Jason snatched the prize, Medea ran with him, and the Argo put to sea. But the king was soon in swift pursuit. And when Medea perceived that his sails were cutting down their lead, she persuaded Jason to kill Apsyrtos, her younger brother whom she had carried off, and toss the pieces of the dismembered body into the sea. This forced King Aeëtes, her father, to put about, rescue the fragments, and go ashore to give them decent burial. Meanwhile the Argo ran with the wind and passed from his ken.

It is always some little fault, some slight yet critical symptom of human frailty, that makes impossible the open interrelationship between the worlds; so that one is tempted to believe, almost, that if the small, marring accident could be avoided, all would be well.

The myths of failure touch us with the tragedy of life, but those of success only with their own incredibility. And yet, if the monomyth is to fulfill its promise, not human failure or superhuman success but human success is what we shall have to be shown. That is the problem of the crisis of the threshold of the return. We shall first consider it in the superhuman symbols and then seek the practical teaching for historic man.

3. Rescue from Without

THE hero may have to be brought back from his supernatural adventure by assistance from without. That is to say, the world may have to come and get him.

And yet, in so far as one is alive, life will call. Society is jealous of those who remain away from it, and will come knocking at the door. If the hero—like Muchukunda—is unwilling, the disturber suffers an ugly shock; but on the other hand, if the summoned one is only delayed—sealed in by the beatitude of the state of perfect being (which resembles death)—an apparent rescue is effected, and the adventurer returns.

The mirror, the sword, and the tree, we recognize. The mirror, reflecting the goddess and drawing her forth from the august repose of her divine non-manifestation, is symbolic of the world, the field of the reflected image. Therein divinity is pleased to regard its own glory, and this pleasure is itself inducement to the act of manifestation or “creation.” The sword is the counterpart of the thunderbolt. The tree is the World Axis in its wish-fulfilling, fruitful aspect—the same as that displayed in Christian homes at the season of the winter solstice, which is the moment of the rebirth or return of the sun, a joyous custom inherited from the Germanic paganism that has given to the modern German language its feminine Sonne. The dance of Uzume and the uproar of the gods belong to carnival: the world left topsy-turvy by the withdrawal of the supreme divinity, but joyous for the coming renewal. And the shimenawa, the august rope of straw that was stretched behind the goddess when she reappeared, symbolizes the graciousness of the miracle of the light’s return. This shimenawa is one of the most conspicuous, important, and silently eloquent, of the traditional symbols of the folk religion of Japan. Hung above the entrances of the temples, festooned along the streets at the New Year festival, it denotes the renovation of the world at the threshold of the return. If the Christian cross is the most telling symbol of the mythological passage into the abyss of death, the shimenawa is the simplest sign of the resurrection. The two represent the mystery of the boundary between the worlds—the existent nonexistent line.

Amaterasu is an Oriental sister of the great Inanna, the supreme goddess of the ancient Sumerian cuneiform temple-tablets, whose descent we have already followed into the lower world. Inanna, Ishtar, Astarte, Aphrodite, Venus: those were the names she bore in the successive culture periods of the Occidental development—associated, not with the sun, but with the planet that carries her name, and at the same time with the moon, the heavens, and the fruitful earth. In Egypt she became the goddess of the Dog Star, Sirius, whose annual reappearance in the sky announced the earth-fructifying flood season of the river Nile.

…the rescue from without. They show in the final stages of the adventure the continued operation of the supernatural assisting force that has been attending the elect through the whole course of his ordeal. His consciousness having succumbed, the unconscious nevertheless supplies its own balances, and he is born back into the world from which he came. Instead of holding to and saving his ego, as in the pattern of the magic flight, he loses it, and yet, through grace, it is returned.

This brings us to the final crisis of the round, to which the whole miraculous excursion has been but a prelude—that, namely, of the paradoxical, supremely difficult threshold-crossing of the hero’s return from the mystic realm into the land of common day. Whether rescued from without, driven from within, or gently carried along by the guiding divinities, he has yet to re-enter with his boon the long-forgotten atmosphere where men who are fractions imagine themselves to be complete. He has yet to confront society with his ego-shattering, life-redeeming elixir, and take the return blow of reasonable queries, hard resentment, and good people at a loss to comprehend.

4. The Crossing of the Return Threshold

THE two worlds, the divine and the human, can be pictured only as distinct from each other—different as life and death, as day and night. The hero adventures out of the land we know into darkness; there he accomplishes his adventure, or again is simply lost to us, imprisoned, or in danger; and his return is described as a coming back out of that yonder zone. Nevertheless—and here is a great key to the understanding of myth and symbol—the two kingdoms are actually one. The realm of the gods is a forgotten dimension of the world we know. And the exploration of that dimension, either willingly or unwillingly, is the whole sense of the deed of the hero.

But the hero-soul goes boldly in—and discovers the hags converted into goddesses and the dragons into the watchdogs of the gods.

The boon brought from the transcendent deep becomes quickly rationalized into nonentity, and the need becomes great for another hero to refresh the word.

How teach again, however, what has been taught correctly and incorrectly learned a thousand thousand times, throughout the millenniums of mankind’s prudent folly? That is the hero’s ultimate difficult task.

The first problem of the returning hero is to accept as real, after an experience of the soul-satisfying vision of fulfillment, the passing joys and sorrows, banalities and noisy obscenities of life.

The story of Rip van Winkle is an example of the delicate case of the returning hero.

In deep sleep, declare the Hindus, the self is unified and blissful; therefore deep sleep is called the cognitional state.

The equating of a single year in Paradise to one hundred of earthly existence is a motif well known to myth. The full round of one hundred signifies totality. Similarly, the three hundred and sixty degrees of the circle signify totality; accordingly the Hindu Puranas represent one year of the gods as equal to three hundred and sixty of men.

Sir James George Frazer explains in the following graphic way the fact that over the whole earth the divine personage may not touch the ground with his foot. “Apparently holiness, magical virtue, taboo, or whatever we may call that mysterious quality which is supposed to pervade sacred or tabooed persons, is conceived by the primitive philosopher as a physical substance or fluid, with which the sacred man is charged just as a Leyden jar is charged with electricity; and exactly as the electricity in the jar can be discharged by contact with a good conductor, so the holiness or magical virtue in the man can be discharged and drained away by contact with the earth, which on this theory serves as an excellent conductor for the magical fluid. Hence in order to preserve the charge from running to waste, the sacred or tabooed personage must be carefully prevented from touching the ground; in electrical language he must be insulated.”

The wife is insulated, more or less, by her ring.

And the myths—for example, the myths assembled by Ovid in his great compendium, the Metamorphoses—recount again and again the shocking transformations that take place when the insulation between a highly concentrated power center and the lower power field of the surrounding world is, without proper precautions, suddenly taken away. According to the fairy lore of the Celts and Germans, a gnome or elf caught abroad by the sunrise is turned immediately into a stick or a stone.

The returning hero, to complete his adventure, must survive the impact of the world. Rip van Winkle never knew what he had experienced; his return was a joke.

The encounter and separation, for all its wildness, is typical of the sufferings of love. For when a heart insists on its destiny, resisting the general blandishment, then the agony is great; so too the danger. Forces, however, will have been set in motion beyond the reckoning of the senses. Sequences of events from the corners of the world will draw gradually together, and miracles of coincidence bring the inevitable to pass.

Not everyone has a destiny: only the hero who has plunged to touch it, and has come up again—with a ring.

5. Master of the Two Worlds

Here is the whole myth in a moment: Jesus the guide, the way, the vision, and the companion of the return. The disciples are his initiates, not themselves masters of the mystery, yet introduced to the full experience of the paradox of the two worlds in one. Peter was so frightened he babbled.28 Flesh had dissolved before their eyes to reveal the Word. They fell upon their faces, and when they arose the door again had closed.

We do not particularly care whether Rip van Winkle, Kamar al-Zaman, or Jesus Christ ever actually lived. Their stories are what concern us: and these stories are so widely distributed over the world—attached to various heroes in various lands—that the question of whether this or that local carrier of the universal theme may or may not have been a historical, living man can be of only secondary moment. The stressing of this historical element will lead to confusion; it will simply obfuscate the picture message.

What, then, is the tenor of the image of the transfiguration? That is the question we have to ask. But in order that it may be confronted on universal grounds, rather than sectarian, we had better review one further example, equally celebrated, of the archetypal event.

The disciple has been blessed with a vision transcending the scope of normal human destiny, and amounting to a glimpse of the essential nature of the cosmos. Not his personal fate, but the fate of mankind, of life as a whole, the atom and all the solar systems, has been opened to him; and this in terms befitting his human understanding, that is to say, in terms of an anthropomorphic vision: the Cosmic Man. An identical initiation might have been effected by means of the equally valid image of the Cosmic Horse, the Cosmic Eagle, the Cosmic Tree, or the Cosmic Praying-Mantis.

The Cosmic Man whom he beheld was an aristocrat, like himself, and a Hindu. Correspondingly, in Palestine the Cosmic Man appeared as a Jew, in ancient Germany as a German; among the Basuto he is a Negro, in Japan Japanese.

Note 31: Om. The Cosmic Tree is a well known mythological figure (viz., Yggdrasil, the World Ash, of the Eddas). The Mantis plays a major role in the mythology of the Bushmen of South Africa.

Symbols are only the vehicles of communication; they must not be mistaken for the final term, the tenor, of their reference. No matter how attractive or impressive they may seem, they remain but convenient means, accommodated to the understanding.

The problem of the theologian is to keep his symbol translucent, so that it may not block out the very light it is supposed to convey. “For then alone do we know God truly,” writes Saint Thomas Aquinas, “when we believe that He is far above all that man can possibly think of God.”

The next thing to observe is that the transfiguration of Jesus was witnessed by devotees who had extinguished their personal wills, men who had long since liquidated “life,” “personal fate,” “destiny,” by complete self-abnegation in the Master. “Neither by the Vedas, nor by penances, nor by alms-giving, nor yet by sacrifice, am I to be seen in the form in which you have just now beheld Me,” Krishna declared, after he had resumed his familiar shape; “but only by devotion to Me may I be known in this form, realized truly, and entered into. He who does My work and regards Me as the Supreme Goal, who is devoted to Me and without hatred for any creature—he comes to Me.”35 A corresponding formulation by Jesus makes the point more succinctly: “Whosoever will lose his life for my sake shall find it.”

The meaning is very clear; it is the meaning of all religious practice. The individual, through prolonged psychological disciplines, gives up completely all attachment to his personal limitations, idiosyncrasies, hopes and fears, no longer resists the self-annihilation that is prerequisite to rebirth in the realization of truth, and so becomes ripe, at last, for the great at-one-ment. His personal ambitions being totally dissolved, he no longer tries to live but willingly relaxes to whatever may come to pass in him; he becomes, that is to say, an anonymity. The Law lives in him with his unreserved consent.

6. Freedom to Live

The battlefield is symbolic of the field of life, where every creature lives on the death of another. A realization of the inevitable guilt of life may so sicken the heart that, like Hamlet or like Arjuna, one may refuse to go on with it. On the other hand, like most of the rest of us, one may invent a false, finally unjustified, image of oneself as an exceptional phenomenon in the world, not guilty as others are, but justified in one’s inevitable sinning because one represents the good. Such self-righteousness leads to a misunderstanding, not only of oneself but of the nature of both man and the cosmos. The goal of the myth is to dispel the need for such life ignorance by effecting a reconciliation of the individual consciousness with the universal will.

Man in the world of action loses his centering in the principle of eternity if he is anxious for the outcome of his deeds, but resting them and their fruits on the knees of the Living God he is released by them, as by a sacrifice, from the bondages of the sea of death.


Campbell’s diagram of the Hero’s Journey

Here is a link to an excellent website that has alternate examples of the diagram:

A more detailed diagram from the above-mentioned site.

The mythological hero, setting forth from his common-day hut or castle, is lured, carried away, or else voluntarily proceeds, to the threshold of adventure. There he encounters a shadow presence that guards the passage. The hero may defeat or conciliate this power and go alive into the kingdom of the dark (brother-battle, dragon-battle; offering, charm), or be slain by the opponent and descend in death (dismemberment, crucifixion). Beyond the threshold, then, the hero journeys through a world of unfamiliar yet strangely intimate forces, some of which severely threaten him (tests), some of which give magical aid (helpers). When he arrives at the nadir of the mythological round, he undergoes a supreme ordeal and gains his reward. The triumph may be represented as the hero’s sexual union with the goddess-mother of the world (sacred marriage), his recognition by the father creator (father atonement), his own divinization (apotheosis), or again—if the powers have remained unfriendly to him—his theft of the boon he came to gain (bride-theft, fire-theft); intrinsically it is an expansion of consciousness and therewith of being (illumination, transfiguration, freedom). The final work is that of the return. If the powers have blessed the hero, he now sets forth under their protection (emissary); if not, he flees and is pursued (transformation flight, obstacle flight). At the return threshold the transcendental powers must remain behind; the hero re-emerges from the kingdom of dread (return, resurrection). The boon that he brings restores the world (elixir).

The archetype of the hero in the belly of the whale is widely known. The principal deed of the adventurer is usually to make fire with his fire sticks in the interior of the monster, thus bringing about the whale’s death and his own release. Fire making in this manner is symbolic of the sex act. The two sticks—socket-stick and spindle—are known respectively as the female and the male; the flame is the newly generated life. The hero making fire in the whale is a variant of the sacred marriage.

And in modern progressive Christianity the Christ—Incarnation of the Logos and Redeemer of the World—is primarily a historical personage, a harmless country wise man of the semi-oriental past, who preached a benign doctrine of “do as you would be done by,” yet was executed as a criminal. His death is read as a splendid lesson in integrity and fortitude.

Wherever the poetry of myth is interpreted as biography, history, or science, it is killed. The living images become only remote facts of a distant time or sky. Furthermore, it is never difficult to demonstrate that as science and history mythology is absurd. When a civilization begins to reinterpret its mythology in this way, the life goes out of it, temples become museums, and the link between the two perspectives is dissolved. Such a blight has certainly descended on the Bible and on a great part of the Christian cult.


1. From Psychology to Metaphysics

…there can be little doubt, either that myths are of the nature of dream, or that dreams are symptomatic of the dynamics of the psyche.

With their discovery that the patterns and logic of fairy tale and myth correspond to those of dream, the long discredited chimeras of archaic man have returned dramatically to the foreground of modern consciousness. According to this view it appears that through the wonder tales—which pretend to describe the lives of the legendary heroes, the powers of the divinities of nature, the spirits of the dead, and the totem ancestors of the group—symbolic expression is given to the unconscious desires, fears, and tensions that underlie the conscious patterns of human behavior. Mythology, in other words, is psychology misread as biography, history, and cosmology.

And so, to grasp the full value of the mythological figures that have come down to us, we must understand that they are not only symptoms of the unconscious (as indeed are all human thoughts and acts) but also controlled and intended statements of certain spiritual principles, which have remained as constant throughout the course of human history as the form and nervous structure of the human physique itself. Briefly formulated, the universal doctrine teaches that all the visible structures of the world—all things and beings—are the effects of a ubiquitous power out of which they rise, which supports and fills them during the period of their manifestation, and back into which they must ultimately dissolve. This is the power known to science as energy, to the Melanesians as mana, to the Sioux Indians as wakonda, the Hindus as shakti, and the Christians as the power of God. Its manifestation in the psyche is termed, by the psychoanalysts, libido. And its manifestation in the cosmos is the structure and flux of the universe itself.

God and the gods are only convenient means—themselves of the nature of the world of names and forms, though eloquent of, and ultimately conducive to, the ineffable. They are mere symbols to move and awaken the mind, and to call it past themselves.

Correspondingly, the key to open the door the other way is the same equation in reverse: the unconscious = the metaphysical realm. “For,” as Jesus states it, “behold, the kingdom of God is within you.”

Redemption consists in the return to superconsciousness and therewith the dissolution of the world.

The hero is the one who, while still alive, knows and represents the claims of the superconsciousness which throughout creation is more or less unconscious. The adventure of the hero represents the moment in his life when he achieved illumination—the nuclear moment when, while still alive, he found and opened the road to the light beyond the dark walls of our living death.

Perhaps the most eloquent possible symbol of this mystery is that of the god crucified, the god offered, “himself to himself.” Read in one direction, the meaning is the passage of the phenomenal hero into superconsciousness: the body with its five senses—like that of Prince Five-weapons stuck to Sticky-hair—is left hanging to the cross of the knowledge of life and death, pinned in five places (the two hands, the two feet, and the head crowned with thorns). But also, God has descended voluntarily and taken upon himself this phenomenal agony. God assumes the life of man and man releases the God within himself at the mid-point of the cross-arms of the same “coincidence of opposites,” the same sun door through which God descends and Man ascends—each as the other’s food.

2. The Universal Round

As THE consciousness of the individual rests on a sea of night into which it descends in slumber and out of which it mysteriously wakes, so, in the imagery of myth, the universe is precipitated out of, and reposes upon, a timelessness back into which it again dissolves. And as the mental and physical health of the individual depends on an orderly flow of vital forces into the field of waking day from the unconscious dark, so again in myth, the continuance of the cosmic order is assured only by a controlled flow of power from the source. The gods are symbolic personifications of the laws governing this flow. The gods come into existence with the dawn of the world and dissolve with the twilight. They are not eternal in the sense that the night is eternal.

A basic conception of Oriental philosophy is understood to be rendered in this picture-form. Whether the myth was originally an illustration of the philosophical formula, or the latter a distillation out of the myth, it is today impossible to say. Certainly the myth goes back to remote ages, but so too does philosophy. Who is to know what thoughts lay in the minds of the old sages who developed and treasured the myth and handed it on? Very often, during the analysis and penetration of the secrets of archaic symbol, one can only feel that our generally accepted notion of the history of philosophy is founded on a completely false assumption, namely that abstract and metaphysical thought begins where it first appears in our extant records.

The philosophical formula illustrated by the cosmogonic cycle is that of the circulation of consciousness through the three planes of being. The first plane is that of waking experience: cognitive of the hard, gross facts of an outer universe, illuminated by the light of the sun, and common to all. The second plane is that of dream experience: cognitive of the fluid, subtle forms of a private interior world, self-luminous and of one substance with the dreamer. The third plane is that of deep sleep: dreamless, profoundly blissful. In the first are encountered the instructive experiences of life; in the second these are digested, assimilated to the inner forces of the dreamer; while in the third all is enjoyed and known unconsciously, in “the space within the heart,” the room of the inner controller, the source and end of all.

The cosmogonic cycle is to be understood as the passage of universal consciousness from the deep sleep zone of the unmanifest, through dream, to the full day of waking; then back again through dream to the timeless dark.

3. Out of the Void—Space

SAINT THOMAS AQUINAS declares: “The name of being wise is reserved to him alone whose consideration is about the end of the universe, which end is also the beginning of the universe.” The basic principle of all mythology is this of the beginning in the end. Creation myths are pervaded with a sense of the doom that is continually recalling all created shapes to the imperishable out of which they first emerged.

Mythology is defeated when the mind rests solemnly with its favorite or traditional images, defending them as though they themselves were the message that they communicate. These images are to be regarded as no more than shadows from the unfathomable reach beyond, where the eye goeth not, speech goeth not, nor the mind, nor even piety. Like the trivialities of dream, those of myth are big with meaning.

From the void beyond all voids unfold the world-sustaining emanations, plantlike, mysterious.

4. Within Space—Life

The image of the cosmic egg is known to many mythologies; it appears in the Greek Orphic, Egyptian, Finnish, Buddhistic, and Japanese. “In the beginning this world was merely nonbeing,” we read in a sacred work of the Hindus; “It was existent. It developed. It turned into an egg. It lay for the period of a year. It was split asunder.”

“Space is boundless by re-entrant form not by great extension. That which is is a shell floating in the infinitude of that which is not.” This succinct formulation by a modern physicist, illustrating the world picture as he saw it in 1928,32 gives precisely the sense of the mythological cosmic egg. Furthermore, the evolution of life, described by our modern science of biology, is the theme of the early stages of the cosmogonic cycle. Finally, the world destruction, which the physicists tell us must come with the exhaustion of our sun and ultimate running down of the whole cosmos.

Not uncommonly the cosmic egg bursts to disclose, swelling from within, an awesome figure in human form.

This cabalistic text is a commentary to the scene in Genesis where Adam gives forth Eve. A similar conception appears in Plato’s Symposium. According to this mysticism of sexual love, the ultimate experience of love is a realization that beneath the illusion of two-ness dwells identity: “each is both.” This realization can expand into a discovery that beneath the multitudinous individualities of the whole surrounding universe—human, animal, vegetable, even mineral—dwells identity; whereupon the love experience becomes cosmic, and the beloved who first opened the vision is magnified as the mirror of creation.

5. The Breaking of the One into the Manifold

In mythology, wherever the Unmoved Mover, the Mighty Living One, holds the center of attention, there is a miraculous spontaneity about the shaping of the universe. The elements condense and move into play of their own accord, or at the Creator’s slightest word; the portions of the self-shattering cosmic egg go to their stations without aid.

As known to the Greeks, this story is rendered by Hesiod in his account of the separation of Ouranos (Father Heaven) from Gaia (Mother Earth). According to this variant, the Titan Kronos castrated his father with a sickle and pushed him up out of the way.

Again the image comes to us from the ancient cuneiform texts of the Sumerians, dating from the third and fourth millenniums B.C. First was the primeval ocean; the primeval ocean generated the cosmic mountain, which consisted of heaven and earth united; An (the Heaven Father) and Ki (the Earth Mother) produced Enlil (the Air God), who presently separated An from Ki and then himself united with his mother to beget mankind.

…the Icelandic Eddas, and in the Babylonian Tablets of Creation. The final insult here is given in the characterization of the demiurgic presence of the abyss as “evil,” “dark,” “obscene.” The bright young warrior-sons, now disdaining the generative source, the personage of the seed-state of deep sleep, summarily slay it, hack it, slice it into lengths, and carpenter it into the structure of the world. This is the pattern for victory of all our later slayings of the dragon, the beginning of the agelong history of the deeds of the hero.

In the Babylonian version the hero is Marduk, the sun-god; the victim is Tiamat—terrifying, dragonlike, attended by swarms of demons—a female personification of the original abyss itself: chaos as the mother of the gods, but now the menace of the world.

The myths never tire of illustrating the point that conflict in the created world is not what it seems. Tiamat, though slain and dismembered, was not thereby undone.

Herein lies the basic paradox of myth: the paradox of the dual focus. Just as at the opening of the cosmogonic cycle it was possible to say “God is not involved,” but at the same time “God is creator-preserver-destroyer,” so now at this critical juncture, where the One breaks into the many, destiny “happens,” but at the same time “is brought about.” From the perspective of the source, the world is a majestic harmony of forms pouring into being, exploding, and dissolving. But what the swiftly passing creatures experience is a terrible cacophony of battle cries and pain. The myths do not deny this agony (the crucifixion); they reveal within, behind, and around it essential peace (the heavenly rose).

The shift of perspective from the repose of the central Cause to the turbulation of the peripheral effects is represented in the Fall of Adam and Eve in the Garden of Eden. They ate of the forbidden fruit, “And the eyes of them both were opened.” The bliss of Paradise was closed to them and they beheld the created field from the other side of a transforming veil. Henceforth they should experience the inevitable as the hard to gain.

6. Folk Stories of Creation

THE simplicity of the origin stories of the undeveloped folk mythologies stands in contrast to the profoundly suggestive myths of the cosmogonic cycle.

Through the blank wall of timelessness there breaks and enters a shadowy creator-figure to shape the world of forms. His day is dreamlike in its duration, fluidity, and ambient power.

A clown figure working in continuous opposition to the well-wishing creator very often appears in myth and folk tale, as accounting for the ills and difficulties of existence this side of the veil.

Note 52: A broad distinction can be made between the mythologies of the truly primitive (fishing, hunting, root-digging, and berry-picking) peoples and those of the civilizations that came into being following the development of the arts of agriculture, dairying, and herding, ca. 6000 B.C. Most of what we call primitive, however, is actually colonial, i.e. diffused from some high culture center and adapted to the needs of a simpler society. It is in order to avoid the misleading term, “primitive,” that I am calling the undeveloped or degenerate traditions “folk mythologies.” The term is adequate for the purposes of the present elementary comparative study of the universal forms, though it would certainly not serve for a strict historical analysis.

Universal too is the casting of the antagonist, the representative of evil, in the role of the clown. Devils—both the lusty thickheads and the sharp, clever deceivers—are always clowns.


1. Mother Universe

THE world-generating spirit of the father passes into the manifold of earthly experience through a transforming medium—the mother of the world. She is a personification of the primal element named in the second verse of Genesis, where we read that “the spirit of God moved upon the face of the waters.” In the Hindu myth, she is the female figure through whom the Self begot all creatures. More abstractly understood, she is the world-bounding frame: “space, time, and causality”—the shell of the cosmic egg.

More abstractly still, she is the lure that moved the Self-brooding Absolute to the act of creation.

And she is virgin, because her spouse is the Invisible Unknown.

2. Matrix of Destiny

THE universal goddess makes her appearance to men under a multitude of guises; for the effects of creation are multitudinous, complex, and of mutually contradictory kind when experienced from the viewpoint of the created world. The mother of life is at the same time the mother of death; she is masked in the ugly demonesses of famine and disease.

The Sumero-Babylonian astral mythology identified the aspects of the cosmic female with the phases of the planet Venus. As morning star she was the virgin, as evening star the harlot, as lady of the night sky the consort of the moon; and when extinguished under the blaze of the sun she was the hag of hell. Wherever the Mesopotamian influence extended, the traits of the goddess were touched by the light of this fluctuating star.

3. Womb of Redemption

Men’s perspectives become flat, comprehending only the light-reflecting, tangible surfaces of existence.

The people yearn for some personality who, in a world of twisted bodies and souls, will represent again the lines of the incarnate image.

In an inconspicuous village the maid is born who will maintain herself undefiled of the fashionable errors of her generation: a miniature in the midst of men of the cosmic woman who was the bride of the wind. Her womb, remaining fallow as the primordial abyss, summons to itself by its very readiness the original power that fertilized the void.

The story is recounted everywhere; and with such striking uniformity of the main contours, that the early Christian missionaries were forced to think that the devil himself must be throwing up mockeries of their teaching wherever they set their hand.

4. Folk Stories of Virgin Motherhood

THE Buddha descended from heaven to his mother’s womb in the shape of a milk-white elephant.

Any leaf accidentally swallowed, any nut, or even the breath of a breeze, may be enough to fertilize the ready womb. The procreating power is everywhere. And according to the whim or destiny of the hour, either a hero-savior or a world-annihilating demon may be conceived—one can never know.


1. The Primordial Hero and the Human

The cosmogonic cycle is now to be carried forward, therefore, not by the gods, who have become invisible, but by the heroes, more or less human in character, through whom the world destiny is realized. This is the line where creation myths begin to give place to legend—as in the Book of Genesis, following the expulsion from the garden.

Such serpent kings and minotaurs tell of a past when the emperor was the carrier of a special world-creating, world-sustaining power, very much greater than that represented in the normal human physique. In those times was accomplished the heavy titan-work, the massive establishment of the foundations of our human civilization.

2. Childhood of the Human Hero

…the tendency has always been to endow the hero with extraordinary powers from the moment of birth, or even the moment of conception. The whole hero-life is shown to have been a pageant of marvels with the great central adventure as its culmination.

This accords with the view that herohood is predestined, rather than simply achieved, and opens the problem of the relationship of biography to character.

In Part I, “The Adventure of the Hero,” we regarded the redemptive deed from the first standpoint, which may be called the psychological. We now must describe it from the second, where it becomes a symbol of the same metaphysical mystery that it was the deed of the hero himself to rediscover and bring to view.

Stated in the terms already formulated, the hero’s first task is to experience consciously the antecedent stages of the cosmogonic cycle; to break back through the epochs of emanation. His second, then, is to return from that abyss to the plane of contemporary life, there to serve as a human transformer of demiurgic potentials.

The deeds of the hero in the second part of his personal cycle will be proportionate to the depth of his descent during the first.

If the deeds of an actual historical figure proclaim him to have been a hero, the builders of his legend will invent for him appropriate adventures in depth. These will be pictured as journeys into miraculous realms, and are to be interpreted as symbolic, on the one hand, of descents into the night-sea of the psyche, and on the other, of the realms or aspects of man’s destiny that are made manifest in the respective lives.

King Sargon of Agade (c. 2550 B.C.) was born of a lowly mother. His father was unknown. Set adrift in a basket of bulrushes on the waters of the Euphrates, he was discovered by Akki the husbandman, whom he was brought up to serve as gardener. The goddess Ishtar favored the youth. Thus he became, at last, king and emperor, renowned as the living god.

Pope Gregory the Great (A.D. 540?–604) was born of noble twins who at the instigation of the devil had committed incest. His penitent mother set him to sea in a little casket. He was found and fostered by fishermen, and at the age of six was sent to a cloister to be educated as a priest.

Charlemagne (742-814) was persecuted as a child by his elder brothers, and took flight to Saracen Spain. There, under the name of Mainet, he rendered signal services to the king. He converted the king’s daughter to the Christian faith, and the two secretly arranged to marry. After further deeds, the royal youth returned to France, where he overthrew his former persecutors and triumphantly assumed the crown. Then he ruled a hundred years, surrounded by a zodiac of twelve peers.

Each of these biographies exhibits the variously rationalized theme of the infant exile and return.

This is a prominent feature in all legend, folk tale, and myth. Usually an effort is made to give it some semblance of physical plausibility. However, when the hero in question is a great patriarch, wizard, prophet, or incarnation, the wonders are permitted to develop beyond all bounds.

The folk tales commonly support or supplant this theme of the exile with that of the despised one, or the handicapped: the abused youngest son or daughter, the orphan, stepchild, ugly duckling, or the squire of low degree.

In sum: the child of destiny has to face a long period of obscurity. This is a time of extreme danger, impediment, or disgrace. He is thrown inward to his own depths or outward to the unknown; either way, what he touches is a darkness unexplored. And this is a zone of unsuspected presences, benign as well as malignant: an angel appears, a helpful animal, a fisherman, a hunter, crone, or peasant. Fostered in the animal school, or, like Siegfried, below ground among the gnomes that nourish the roots of the tree of life, or again, alone in some little room (the story has been told a thousand ways), the young world-apprentice learns the lesson of the seed powers, which reside just beyond the sphere of the measured and the named.

The myths agree that an extraordinary capacity is required to face and survive such experience. The infancies abound in anecdotes of precocious strength, cleverness, and wisdom.

Jesus confounded the wise men. The baby Buddha had been left one day beneath the shade of a tree; his nurses suddenly noted that the shadow had not moved all afternoon and that the child was sitting fixed in a yogic trance. The feats of the beloved Hindu savior, Krishna, during his infant exile among the cowherds of Gokula and Brindaban, constitute a lively cycle.

The conclusion of the childhood cycle is the return or recognition of the hero, when, after the long period of obscurity, his true character is revealed. This event may precipitate a considerable crisis; for it amounts to an emergence of powers hitherto excluded from human life. Earlier patterns break to fragments or dissolve; disaster greets the eye. Yet after a moment of apparent havoc, the creative value of the new factor comes to view, and the world takes shape again in unsuspected glory. This theme of crucifixion-resurrection can be illustrated either on the body of the hero himself, or in his effects upon his world. The first alternative we find in the Pueblo story of the water jar.

3. The Hero as Warrior

THE place of the hero’s birth, or the remote land of exile from which he returns to perform his adult deeds among men, is the mid-point or navel of the world. Just as ripples go out from an underwater spring, so the forms of the universe expand in circles from this source.

From the umbilical spot the hero departs to realize his destiny. His adult deeds pour creative power into the world.

For the mythological hero is the champion not of things become but of things becoming; the dragon to be slain by him is precisely the monster of the status quo: Holdfast, the keeper of the past. From obscurity the hero emerges, but the enemy is great and conspicuous in the seat of power; he is enemy, dragon, tyrant, because he turns to his own advantage the authority of his position. He is Holdfast not because he keeps the past but because he keeps.

The tyrant is proud, and therein resides his doom. He is proud because he thinks of his strength as his own; thus he is in the clown role, as a mistaker of shadow for substance; it is his destiny to be tricked. The mythological hero, reappearing from the darkness that is the source of the shapes of the day, brings a knowledge of the secret of the tyrant’s doom. With a gesture as simple as the pressing of a button, he annihilates the impressive configuration. The hero-deed is a continuous shattering of the crystallizations of the moment. The cycle rolls: mythology focuses on the growing-point. Transformation, fluidity, not stubborn ponderosity, is the characteristic of the living God.

The great figure of the moment exists only to be broken, cut into chunks, and scattered abroad. Briefly: the ogre-tyrant is the champion of the prodigious fact, the hero the champion of creative life.

The world period of the hero in human form begins only when villages and cities have expanded over the land. Many monsters remaining from primeval times still lurk in the outlying regions, and through malice or desperation these set themselves against the human community. They have to be cleared away. Furthermore, tyrants of human breed, usurping to themselves the goods of their neighbors, arise, and are the cause of widespread misery. These have to be suppressed. The elementary deeds of the hero are those of the clearing of the field.

The warrior-kings of antiquity regarded their work in the spirit of the monster-slayer. This formula, indeed, of the shining hero going against the dragon has been the great device of self-justification for all crusades.

4. The Hero as Lover

THE hegemony wrested from the enemy, the freedom won from the malice of the monster, the life energy released from the toils of the tyrant Holdfast—is symbolized as a woman. She is the maiden of the innumerable dragon slayings, the bride abducted from the jealous father, the virgin rescued from the unholy lover. She is the “other portion” of the hero himself—for “each is both”: if his stature is that of world monarch she is the world, and if he is a warrior she is fame. She is the image of his destiny which he is to release from the prison of enveloping circumstance. But where he is ignorant of his destiny, or deluded by false considerations, no effort on his part will overcome the obstacles.

5. The Hero as Emperor and as Tyrant

THE hero of action is the agent of the cycle, continuing into the living moment the impulse that first moved the world. Because our eyes are closed to the paradox of the double focus, we regard the deed as accomplished amid danger and great pain by a vigorous arm, whereas from the other perspective it is, like the archetypal dragon-slaying of Tiamat by Marduk, only a bringing to pass of the inevitable.

The supreme hero, however, is not the one who merely continues the dynamics of the cosmogonic round, but he who reopens the eye—so that through all the comings and goings, delights and agonies of the world panorama, the One Presence will be seen again. This requires a deeper wisdom than the other, and results in a pattern not of action but of significant representation. The symbol of the first is the virtuous sword, of the second, the scepter of dominion, or the book of the law. The characteristic adventure of the first is the winning of the bride—the bride is life. The adventure of the second is the going to the father—the father is the invisible unknown.

Adventures of the second type fit directly into the patterns of religious iconography.

Even in a simple folk tale a depth is suddenly sounded when the son of the virgin one day asks of his mother: “Who is my father?” The question touches the problem of man and the invisible. The familiar myth-motifs of the atonement inevitably follow.

Where the goal of the hero’s effort is the discovery of the unknown father, the basic symbolism remains that of the tests and the self-revealing way. In the above example the test is reduced to the persistent questions and a frightening look.

The hero blessed by the father returns to represent the father among men. As teacher (Moses) or as emperor (Huang Ti), his word is law. Since he is now centered in the source, he makes visible the repose and harmony of the central place. He is a reflection of the World Axis from which the concentric circles spread—the World Mountain, the World Tree—he is the perfect microcosmic mirror of the macrocosm. To see him is to perceive the meaning of existence. From his presence boons go out; his word is the wind of life.

6. The Hero as World Redeemer

Two degrees of initiation are to be distinguished in the mansion of the father. From the first the son returns as emissary, but from the second, with the knowledge that “I and the father are one.” Heroes of this second, highest illumination are the world redeemers, the so-called incarnations, in the highest sense. Their myths open out to cosmic proportions. Their words carry an authority beyond anything pronounced by the heroes of the scepter and the book.

The work of the incarnation is to refute by his presence the pretensions of the tyrant ogre. The latter has occluded the source of grace with the shadow of his limited personality; the incarnation, utterly free of such ego consciousness, is a direct manifestation of the law.

The legends of the redeemer describe the period of desolation as caused by a moral fault on the part of man (Adam in the garden, Jemshid on the throne). Yet from the standpoint of the cosmogonic cycle, a regular alternation of fair and foul is characteristic of the spectacle of time. Just as in the history of the universe, so also in that of nations: emanation leads to dissolution, youth to age, birth to death, form-creative vitality to the dead weight of inertia. Life surges, precipitating forms, and then ebbs, leaving jetsam behind. The golden age, the reign of the world emperor, alternates, in the pulse of every moment of life, with the waste land, the reign of the tyrant. The god who is the creator becomes the destroyer in the end.

Stated in direct terms: the work of the hero is to slay the tenacious aspect of the father (dragon, tester, ogre king) and release from its ban the vital energies that will feed the universe. “This can be done either in accordance with the Father’s will or against his will; he [the Father] may ‘choose death for his children’s sake,’ or it may be that the Gods impose the passion upon him, making him their sacrificial victim. These are not contradictory doctrines, but different ways of telling one and the same story; in reality, Slayer and Dragon, sacrificer and victim, are of one mind behind the scenes, where there is no polarity of contraries, but mortal enemies on the stage, where the everlasting war of the Gods and the Titans is displayed.”

The hero of yesterday becomes the tyrant of tomorrow, unless he crucifies himself today.

To protect the unprepared, mythology veils such ultimate revelations under half-obscuring guises, while yet insisting on the gradually instructive form. The savior figure who eliminates the tyrant father and then himself assumes the crown is (like Oedipus) stepping into his sire’s stead. To soften the harsh patricide, the legend represents the father as some cruel uncle or usurping Nimrod. Nevertheless, the half-hidden fact remains. Once it is glimpsed, the entire spectacle buckles: the son slays the father, but the son and the father are one. The enigmatical figures dissolve back into the primal chaos. This is the wisdom of the end (and rebeginning) of the world.

7. The Hero as Saint

…the saint or ascetic, the world-renouncer.

King Oedipus came to know that the woman he had married was his mother, the man he had slain his father; he plucked his eyes out and wandered in penance over the earth. The Freudians declare that each of us is slaying his father, marrying his mother, all the time—only unconsciously: the roundabout symbolic ways of doing this and the rationalizations of the consequent compulsive activity constitute our individual lives and common civilization.

The tree has now become the cross: the White Youth sucking milk has become the Crucified swallowing gall. Corruption crawls where before was the blossom of spring. Yet beyond this threshold of the cross—for the cross is a way (the sun door), not an end—is beatitude in God.

8. Departure of the Hero

THE last act in the biography of the hero is that of the death or departure. Here the whole sense of the life is epitomized. Needless to say, the hero would be no hero if death held for him any terror; the first condition is reconciliation with the grave.

The life-eager hero can resist death, and postpone his fate for a certain time.


1. End of the Microcosm

THE mighty hero of extraordinary powers—able to lift Mount Govardhan on a finger, and to fill himself with the terrible glory of the universe—is each of us: not the physical self visible in the mirror, but the king within. Krishna declares: “I am the Self, seated in the hearts of all creatures. I am the beginning, the middle, and the end of all beings.” This, precisely, is the sense of the prayers for the dead, at the moment of personal dissolution: that the individual should now return to his pristine knowledge of the world-creative divinity who during life was reflected within his heart.

But, as in the death of the Buddha, the power to make a full transit back through the epochs of emanation depends on the character of the man when he was alive.

The myths tell of a dangerous journey of the soul, with obstacles to be passed.

Dante’s Divina Commedia is an exhaustive review of the stages: “Inferno,” the misery of the spirit bound to the prides and actions of the flesh; “Purgatorio,” the process of transmuting fleshly into spiritual experience; “Paradiso,” the degrees of spiritual realization.

2. End of the Macrocosm

One of the strongest representations appears in the Poetic Edda of the ancient Vikings. Othin (Wotan), the chief of the gods, has asked to know what will be the doom of himself and his pantheon, and the “Wise Woman,” a personification of the World Mother herself, Destiny articulate, lets him hear:

Brothers shall fight and fell each other,
And sisters’ sons shall kinship stain;
Hard is it on earth, with mighty whoredom;
Ax-time, sword-time, shields are sundered,
Wind-time, wolf-time, ere the world falls;
Nor ever shall men each other spare.

Note 11: From the Poetic Edda, “Voluspa”, translated by Bellows and the Prose Edda, “Gylfaginning”, translated by Brodeur.

Fenris-Wolf shall run free, and advance with lower jaw against the earth, upper against the heavens (“he would gape yet more if there were room for it”)

Othin shall advance against the wolf, Thor against the serpent, Tyr against the dog—the worst monster of all—and Freyr against Surt, the man of flame.

“And as Jesus sat upon the mount of Olives, the disciples came unto him privately, saying, Tell us when shall these things be? and what shall be the sign of thy coming, and of the end of the world?

And Jesus answered and said unto them, Take heed that no man deceive you. For many shall come in my name, saying, I am Christ; and shall deceive many. And ye shall hear of wars and rumors of wars: see that ye be not troubled: for all these things must come to pass, but the end is not yet. For nation shall rise against nation, and kingdom against kingdom: and there shall be famines and pestilences, and earthquakes, in divers places.”


3. The Shapeshifter

Mythology has been interpreted by the modern intellect as a primitive, fumbling effort to explain the world of nature (Frazer); as a production of poetical fantasy from prehistoric times, misunderstood by succeeding ages (Müller); as a repository of allegorical instruction, to shape the individual to his group (Durkheim); as a group dream, symptomatic of archetypal urges within the depths of the human psyche (Jung); as the traditional vehicle of man’s profoundest metaphysical insights (Coomaraswamy); and as God’s Revelation to His children (the Church). Mythology is all of these. The various judgments are determined by the viewpoints of the judges. For when scrutinized in terms not of what it is but of how it functions, of how it has served mankind in the past, of how it may serve today, mythology shows itself to be as amenable as life itself to the obsessions and requirements of the individual, the race, the age.

4. The Function of Myth, Cult, and Meditation

Rites of initiation and installation, then, teach the lesson of the essential oneness of the individual and the group; seasonal festivals open a larger horizon. As the individual is an organ of society, so is the tribe or city—so is humanity entire—only a phase of the mighty organism of the cosmos.

It has been customary to describe the seasonal festivals of so-called native peoples as efforts to control nature. This is a misrepresentation. There is much of the will to control in every act of man, and particularly in those magical ceremonies that are thought to bring rain clouds, cure sickness, or stay the flood; nevertheless, the dominant motive in all truly religious (as opposed to black-magical) ceremonial is that of submission to the inevitables of destiny—and in the seasonal festivals this motive is particularly apparent.

No tribal rite has yet been recorded which attempts to keep winter from descending; on the contrary: the rites all prepare the community to endure, together with the rest of nature, the season of the terrible cold. And in the spring, the rites do not seek to compel nature to pour forth immediately corn, beans, and squash for the lean community; on the contrary: the rites dedicate the whole people to the work of nature’s season. The wonderful cycle of the year, with its hardships and periods of joy, is celebrated, and delineated, and represented as continued in the life-round of the human group.

But there is another way—in diametric opposition to that of social duty and the popular cult. From the standpoint of the way of duty, anyone in exile from the community is a nothing.

The aim is not to see, but to realize that one is, that essence; then one is free to wander as that essence in the world. Furthermore: the world too is of that essence. The essence of oneself and the essence of the world: these two are one. Hence separateness, withdrawal, is no longer necessary. Wherever the hero may wander, whatever he may do, he is ever in the presence of his own essence—for he has the perfected eye to see. There is no separateness. Thus, just as the way of social participation may lead in the end to a realization of the All in the individual, so that of exile brings the hero to the Self in all.

5. The Hero Today

…for the democratic ideal of the self-determining individual, the invention of the power-driven machine, and the development of the scientific method of research, have so transformed human life that the long-inherited, timeless universe of symbols has collapsed.

In the fateful, epoch-announcing words of Nietzsche’s Zarathustra: “Dead are all the gods.” One knows the tale; it has been told a thousand ways.

The dream-web of myth fell away; the mind opened to full waking consciousness; and modern man emerged from ancient ignorance, like a butterfly from its cocoon, or like the sun at dawn from the womb of mother night.

The social unit is not a carrier of religious content, but an economic-political organization. Its ideals are not those of the hieratic pantomime, making visible on earth the forms of heaven, but of the secular state, in hard and unremitting competition for material supremacy and resources. Isolated societies, dream-bounded within a mythologically charged horizon, no longer exist except as areas to be exploited. And within the progressive societies themselves, every last vestige of the ancient human heritage of ritual, morality, and art is in full decay.

…today no meaning is in the group—none in the world: all is in the individual. But there the meaning is absolutely unconscious. One does not know toward what one moves. One does not know by what one is propelled. The lines of communication between the conscious and the unconscious zones of the human psyche have all been cut, and we have been split in two.

Its parody-rituals of the parade ground serve the ends of Holdfast, the tyrant dragon, not the God in whom self-interest is annihilate.

Nor can the great world religions, as at present understood, meet the requirement. For they have become associated with the causes of the factions, as instruments of propaganda and self-congratulation.

The universal triumph of the secular state has thrown all religious organizations into such a definitely secondary, and finally ineffectual, position that religious pantomime is hardly more today than a sanctimonious exercise for Sunday morning, whereas business ethics and patriotism stand for the remainder of the week. Such a monkey-holiness is not what the functioning world requires; rather, a transmutation of the whole social order is necessary, so that through every detail and act of secular life the vitalizing image of the universal god man who is actually immanent and effective in all of us may be somehow made known to consciousness.

Man is that alien presence with whom the forces of egoism must come to terms, through whom the ego is to be crucified and resurrected, and in whose image society is to be reformed. Man, understood however not as “I” but as “Thou”: for the ideals and temporal institutions of no tribe, race, continent, social class, or century, can be the measure of the inexhaustible and multifariously wonderful divine existence that is the life in all of us.

It is not society that is to guide and save the creative hero, but precisely the reverse.

Enlightenment Now: The Case for Reason, Science, Humanism, and Progress Steven Pinker


Feeling like everything has gone to $%#!? Worried about the future? This is excellent medicine. Pinker takes an analytical approach using data to show that quality of life, wealth, safety, peace, knowledge, and happiness are on the up across the globe!

I would read Thinking, Fast and Slow first. It will help with understanding various biases.

There are a LOT of notes. I might try to trim them down by removing some of the stuff that only I would note. There are lots of history and economic history bits.

“Why should I live?”

In the very act of asking that question, you are seeking reasons for your convictions, and so you are committed to reason as the means to discover and justify what is important to you. And there are so many reasons to live! As a sentient being, you have the potential to flourish. You can refine your faculty of reason itself by learning and debating. You can seek explanations of the natural world through science, and insight into the human condition through the arts and humanities. You can make the most of your capacity for pleasure and satisfaction, which allowed your ancestors to thrive and thereby allowed you to exist. You can appreciate the beauty and richness of the natural and cultural world. As the heir to billions of years of life perpetuating itself, you can perpetuate life in turn. You have been endowed with a sense of sympathy—the ability to like, love, respect, help, and show kindness—and you can enjoy the gift of mutual benevolence with friends, family, and colleagues. And because reason tells you that none of this is particular to you, you have the responsibility to provide to others what you expect for yourself. You can foster the welfare of other sentient beings by enhancing life, health, knowledge, freedom, abundance, safety, beauty, and peace. History shows that when we sympathize with others and apply our ingenuity to improving the human condition, we can make progress in doing so, and you can help to continue that progress.

The Enlightenment principle that we can apply reason and sympathy to enhance human flourishing may seem obvious, trite, old-fashioned. I wrote this book because I have come to realize that it is not. More than ever, the ideals of reason, science, humanism, and progress need a wholehearted defense.

We ignore the achievements of the Enlightenment at our peril.

The ideals of the Enlightenment are products of human reason, but they always struggle with other strands of human nature: loyalty to tribe, deference to authority, magical thinking, the blaming of misfortune on evildoers.

Harder to find is a positive vision that sees the world’s problems against a background of progress that it seeks to build upon by solving those problems in their turn.

“The West is shy of its values—it doesn’t speak up for classical liberalism,”

The Islamic State, which “knows exactly what it stands for,”

Friedrich Hayek observed, “If old truths are to retain their hold on men’s minds, they must be restated in the language and concepts of successive generations”

What is enlightenment? In a 1784 essay with that question as its title, Immanuel Kant answered that it consists of “humankind’s emergence from its self-incurred immaturity,” its “lazy and cowardly” submission to the “dogmas and formulas” of religious or political authority.1 Enlightenment’s motto, he proclaimed, is “Dare to understand!”

David Deutsch’s defense of enlightenment, The Beginning of Infinity.

All failures—all evils—are due to insufficient knowledge.

It is a mistake to confuse hard problems with problems unlikely to be solved.

The thinkers of the Enlightenment sought a new understanding of the human condition. The era was a cornucopia of ideas, some of them contradictory, but four themes tie them together: reason, science, humanism, and progress.

If there’s anything the Enlightenment thinkers had in common, it was an insistence that we energetically apply the standard of reason to understanding our world, and not fall back on generators of delusion like faith, dogma, revelation, authority, charisma, mysticism, divination, visions, gut feelings, or the hermeneutic parsing of sacred texts.

Others were pantheists, who used “God” as a synonym for the laws of nature.

They insisted that it was only by calling out the common sources of folly that we could hope to overcome them. The deliberate application of reason was necessary precisely because our common habits of thought are not particularly reasonable.

That leads to the second ideal, science, the refining of reason to understand the world.

To the Enlightenment thinkers the escape from ignorance and superstition showed how mistaken our conventional wisdom could be, and how the methods of science—skepticism, fallibilism, open debate, and empirical testing—are a paradigm of how to achieve reliable knowledge.

The need for a “science of man” was a theme that tied together Enlightenment thinkers who disagreed about much else, including Montesquieu, Hume, Smith, Kant, Nicolas de Condorcet, Denis Diderot, Jean-Baptiste d’Alembert, Jean-Jacques Rousseau, and Giambattista Vico.

They were cognitive neuroscientists, who tried to explain thought, emotion, and psychopathology in terms of physical mechanisms of the brain. They were evolutionary psychologists, who sought to characterize life in a state of nature and to identify the animal instincts that are “infused into our bosoms.” They were social psychologists, who wrote of the moral sentiments that draw us together, the selfish passions that divide us, and the foibles of shortsightedness that confound our best-laid plans. And they were cultural anthropologists, who mined the accounts of travelers and explorers for data both on human universals and on the diversity of customs and mores across the world’s cultures.

The idea of a universal human nature brings us to a third theme, humanism. The thinkers of the Age of Reason and the Enlightenment saw an urgent need for a secular foundation for morality, because they were haunted by a historical memory of centuries of religious carnage: the Crusades, the Inquisition, witch hunts, the European wars of religion. They laid that foundation in what we now call humanism, which privileges the well-being of individual men, women, and children over the glory of the tribe, race, nation, or religion.

We are endowed with the sentiment of sympathy, which they also called benevolence, pity, and commiseration. Given that we are equipped with the capacity to sympathize with others, nothing can prevent the circle of sympathy from expanding from the family and tribe to embrace all of humankind, particularly as reason goads us into realizing that there can be nothing uniquely deserving about ourselves or any of the groups to which we belong.

A humanistic sensibility impelled the Enlightenment thinkers to condemn not just religious violence but also the secular cruelties of their age, including slavery,

The Enlightenment is sometimes called the Humanitarian Revolution, because it led to the abolition of barbaric practices that had been commonplace across civilizations for millennia.

With our understanding of the world advanced by science and our circle of sympathy expanded through reason and cosmopolitanism, humanity could make intellectual and moral progress.

Government is not a divine fiat to reign, a synonym for “society,” or an avatar of the national, religious, or racial soul. It is a human invention, tacitly agreed to in a social contract, designed to enhance the welfare of citizens by coordinating their behavior and discouraging selfish acts that may be tempting to every individual but leave everyone worse off. As the most famous product of the Enlightenment, the Declaration of Independence, put it, in order to secure the right to life, liberty, and the pursuit of happiness, governments are instituted among people, deriving their just powers from the consent of the governed.

The Enlightenment also saw the first rational analysis of prosperity.

Specialization works only in a market that allows the specialists to exchange their goods and services, and Smith explained that economic activity was a form of mutually beneficial cooperation (a positive-sum game, in today’s lingo): each gets back something that is more valuable to him than what he gives up. Through voluntary exchange, people benefit others by benefiting themselves;

He only said that in a market, whatever tendency people have to care for their families and themselves can work to the good of all.

“If the tailor goes to war against the baker, he must henceforth bake his own bread.”

doux commerce, gentle commerce.

Another Enlightenment ideal, peace.

Together with international commerce, he recommended representative republics (what we would call democracies), mutual transparency, norms against conquest and internal interference, freedom of travel and immigration, and a federation of states that would adjudicate disputes between them.

The first keystone in understanding the human condition is the concept of entropy or disorder, which emerged from 19th-century physics and was defined in its current form by the physicist Ludwig Boltzmann.1 The Second Law of Thermodynamics states that in an isolated system (one that is not interacting with its environment), entropy never decreases.

It follows that any perturbation of the system, whether it is a random jiggling of its parts or a whack from the outside, will, by the laws of probability, nudge the system toward disorder or uselessness—not because nature strives for disorder, but because there are so many more ways of being disorderly than of being orderly.
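The "many more ways of being disorderly than orderly" point can be made concrete with a toy count of my own (not from the book): treat 100 coins as a system, and compare the number of microstates (particular head/tail sequences) behind an "orderly" macrostate versus a "disordered" one.

```python
from math import comb

# Toy illustration of Boltzmann's counting argument: with 100 coins,
# how many distinct sequences realize each macrostate?
n = 100
ordered = comb(n, 0)          # exactly one sequence is all heads
disordered = comb(n, n // 2)  # sequences with a 50/50 head-tail split

print(ordered)     # 1
print(disordered)  # about 1.0e29 distinct sequences
```

A random jiggle is therefore overwhelmingly likely to land the system in a disordered macrostate, not because anything "strives" for disorder, but by sheer counting.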

Law of Entropy.

Life and happiness depend on an infinitesimal sliver of orderly arrangements of matter amid the astronomical number of possibilities.

The Law of Entropy is widely acknowledged in everyday life in sayings such as “Things fall apart,” “Rust never sleeps,” “Shit happens,” “Whatever can go wrong will go wrong,”

“The Second Law of Thermodynamics Is the First Law of Psychology.”4 Why the awe for the Second Law? From an Olympian vantage point, it defines the fate of the universe and the ultimate purpose of life, mind, and human striving: to deploy energy and knowledge to fight back the tide of entropy and carve out refuges of beneficial order.

in 1859, it was reasonable to think they were the handiwork of a divine designer—one of the reasons, I suspect, that so many Enlightenment thinkers were deists rather than outright atheists. Darwin and Wallace made the designer unnecessary. Once self-organizing processes of physics and chemistry gave rise to a configuration of matter that could replicate itself, the copies would make copies, which would make copies of the copies, and so on, in an exponential explosion.

Organisms are open systems: they capture energy from the sun, food, or ocean vents to carve out temporary pockets of order in their bodies and nests while they dump heat and waste into the environment, increasing disorder in the world as a whole.

Nature is a war, and much of what captures our attention in the natural world is an arms race.

the third keystone, information.8 Information may be thought of as a reduction in entropy—as the ingredient that distinguishes an orderly, structured system from the vast set of random, useless ones.
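One way to see "information as a reduction in entropy" numerically is Shannon's measure, sketched here with toy distributions of my own choosing (the book does not give this formula):

```python
from math import log2

def entropy_bits(probs):
    """Shannon entropy of a probability distribution, in bits.

    A spread-out, unpredictable distribution has high entropy; a
    structured, predictable one has low entropy, so acquiring
    structure (information) shows up as a drop in this number.
    """
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))  # 1.0 (a fair coin flip carries 1 bit)
print(entropy_bits([0.25] * 4))  # 2.0 (four equally likely outcomes)
```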

The information contained in a pattern depends on how coarsely or finely grained our view of the world is.

Information is what gets accumulated in a genome in the course of evolution. The sequence of bases in a DNA molecule correlates with the sequence of amino acids in the proteins that make up the organism’s body, and they got that sequence by structuring the organism’s ancestors—reducing their entropy—into the improbable configurations that allowed them to capture energy and grow and reproduce.

Energy channeled by knowledge is the elixir with which we stave off entropy, and advances in energy capture are advances in human destiny. The invention of farming around ten thousand years ago multiplied the availability of calories from cultivated plants and domesticated animals, freed a portion of the population from the demands of hunting and gathering, and eventually gave them the luxury of writing, thinking, and accumulating their ideas. Around 500 BCE, in what the philosopher Karl Jaspers called the Axial Age, several widely separated cultures pivoted from systems of ritual and sacrifice that merely warded off misfortune to systems of philosophical and religious belief that promoted selflessness and promised spiritual transcendence.

(Confucius, Buddha, Pythagoras, Aeschylus, and the last of the Hebrew prophets walked the earth at the same time.)

The Axial Age was when agricultural and economic advances provided a burst of energy: upwards of 20,000 calories per person per day in food, fodder, fuel, and raw materials. This surge allowed the civilizations to afford larger cities, a scholarly and priestly class, and a reorientation of their priorities from short-term survival to long-term harmony. As Bertolt Brecht put it millennia later: Grub first, then ethics.19

And the next leap in human welfare—the end of extreme poverty and spread of abundance, with all its moral benefits—will depend on technological advances that provide energy at an acceptable economic and environmental cost to the entire world

The first piece of wisdom they offer is that misfortune may be no one’s fault. A major breakthrough of the Scientific Revolution—perhaps its biggest breakthrough—was to refute the intuition that the universe is saturated with purpose.

Galileo, Newton, and Laplace replaced this cosmic morality play with a clockwork universe in which events are caused by conditions in the present, not goals for the future.

Not only does the universe not care about our desires, but in the natural course of events it will appear to thwart them, because there are so many more ways for things to go wrong than for them to go right.

Awareness of the indifference of the universe was deepened still further by an understanding of evolution.

As Adam Smith pointed out, what needs to be explained is wealth. Yet even today, when few people believe that accidents or diseases have perpetrators, discussions of poverty consist mostly of arguments about whom to blame.

Another implication of the Law of Entropy is that a complex system like an organism can easily be disabled, because its functioning depends on so many improbable conditions being satisfied at once.

So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment.

And the second decade of the 21st century saw the rise of populist movements that blatantly repudiate the ideals of the Enlightenment.1 They are tribalist rather than cosmopolitan, authoritarian rather than democratic, contemptuous of experts rather than respectful of knowledge, and nostalgic for an idyllic past rather than hopeful for a better future.

The disdain for reason, science, humanism, and progress has a long pedigree in elite intellectual and artistic culture.

The Enlightenment was swiftly followed by a counter-Enlightenment, and the West has been divided ever since.

The Romantic movement pushed back particularly hard against Enlightenment ideals. Rousseau, Johann Herder, Friedrich Schelling, and others denied that reason could be separated from emotion, that individuals could be considered apart from their culture, that people should provide reasons for their acts, that values applied across times and places, and that peace and prosperity were desirable ends. A human is a part of an organic whole—a culture, race, nation, religion, spirit, or historical force—and people should creatively channel the transcendent unity of which they are a part. Heroic struggle, not the solving of problems, is the greatest good, and violence is inherent to nature and cannot be stifled without draining life of its vitality. “There are but three groups worthy of respect,” wrote Charles Baudelaire, “the priest, the warrior, and the poet. To know, to kill, and to create.”

The most obvious is religious faith.

Religions also commonly clash with humanism whenever they elevate some moral good above the well-being of humans, such as accepting a divine savior, ratifying a sacred narrative, enforcing rituals and taboos, proselytizing other people to do the same, and punishing or demonizing those who don’t.

A second counter-Enlightenment idea is that people are the expendable cells of a superorganism—a clan, tribe, ethnic group, religion, race, class, or nation—and that the supreme good is the glory of this collectivity rather than the well-being of the people who make it up. An obvious example is nationalism, in which the superorganism is the nation-state, namely an ethnic group with a government.

Nationalism should not be confused with civic values, public spirit, social responsibility, or cultural pride.

It’s quite another thing when a person is forced to make the supreme sacrifice for the benefit of a charismatic leader, a square of cloth, or colors on a map.

Religion and nationalism are signature causes of political conservatism, and continue to affect the fate of billions of people in the countries under their influence.

Left-wing and right-wing political ideologies have themselves become secular religions, providing people with a community of like-minded brethren, a catechism of sacred beliefs, a well-populated demonology, and a beatific confidence in the righteousness of their cause.

Political ideology undermines reason and science.7 It scrambles people’s judgment, inflames a primitive tribal mindset, and distracts them from a sounder understanding of how to improve the world. Our greatest enemies are ultimately not our political adversaries but entropy, evolution (in the form of pestilence and the flaws in human nature), and most of all ignorance—a shortfall of knowledge of how best to solve our problems.

For almost two centuries, a diverse array of writers has proclaimed that modern civilization, far from enjoying progress, is in steady decline and on the verge of collapse.

Declinism bemoans our Promethean dabbling with technology.9 By wresting fire from the gods, we have only given our species the means to end its own existence, if not by poisoning our environment then by loosing nuclear weapons, nanotechnology, cyberterror, bioterror, artificial intelligence, and other existential threats upon the world

Another variety of declinism agonizes about the opposite problem—not that modernity has made life too harsh and dangerous, but that it has made it too pleasant and safe. According to these critics, health, peace, and prosperity are bourgeois diversions from what truly matters in life.

In the twilight of a decadent, degenerate civilization, true liberation is to be found not in sterile rationality or effete humanism but in an authentic, heroic, holistic, organic, sacred, vital being-in-itself and will to power.

Friedrich Nietzsche, who coined the term will to power, recommends the aristocratic violence of the “blond Teuton beasts” and the samurai, Vikings, and Homeric heroes: “hard, cold, terrible, without feelings and without conscience, crushing everything, and bespattering everything with blood.”

The historical pessimists dread the downfall but lament that we are powerless to stop it. The cultural pessimists welcome it with a “ghoulish schadenfreude.” Modernity is so bankrupt, they say, that it cannot be improved, only transcended.

A final alternative to Enlightenment humanism condemns its embrace of science. Following C. P. Snow, we can call it the Second Culture.

Many intellectuals and critics today express a disdain for science as anything but a fix for mundane problems. They write as if the consumption of elite art is the ultimate moral good.

Intellectual magazines regularly denounce “scientism,” the intrusion of science into the territory of the humanities such as politics and the arts.

Science is commonly blamed for racism, imperialism, world wars, and the Holocaust.

Intellectuals hate progress.

It’s the idea of progress that rankles the chattering class—the Enlightenment belief that by understanding the world we can improve the human condition.

A modern optimist believes that the world can be much, much better than it is today. Voltaire was satirizing not the Enlightenment hope for progress but its opposite, the religious rationalization for suffering called theodicy, according to which God had no choice but to allow epidemics and massacres because a world without them is metaphysically impossible.

In The Idea of Decline in Western History, Arthur Herman shows that prophets of doom are the all-stars of the liberal arts curriculum, including Nietzsche, Arthur Schopenhauer, Martin Heidegger, Theodor Adorno, Walter Benjamin, Herbert Marcuse, Jean-Paul Sartre, Frantz Fanon, Michel Foucault, Edward Said, Cornel West, and a chorus of eco-pessimists.

Psychologists have long known that people tend to see their own lives through rose-colored glasses: they think they’re less likely than the average person to become the victim of a divorce, layoff, accident, illness, or crime. But change the question from the people’s lives to their society, and they transform from Pollyanna to Eeyore.

Public opinion researchers call it the Optimism Gap.

The news, far from being a “first draft of history,” is closer to play-by-play sports commentary.

The nature of news is likely to distort people’s view of the world because of a mental bug that the psychologists Amos Tversky and Daniel Kahneman called the Availability heuristic: people estimate the probability of an event or the frequency of a kind of thing by the ease with which instances come to mind.

Availability errors are a common source of folly in human reasoning.

Vacationers stay out of the water after they have read about a shark attack or if they have just seen Jaws.12

How can we soundly appraise the state of the world?

The answer is to count. How many people are victims of violence as a proportion of the number of people alive? How many are sick, how many starving, how many poor, how many oppressed, how many illiterate, how many unhappy? And are those numbers going up or down? A quantitative mindset, despite its nerdy aura, is in fact the morally enlightened one, because it treats every human life as having equal value rather than privileging the people who are closest to us or most photogenic.

Resistance to the idea of progress runs deeper than statistical fallacies.

Many people lack the conceptual tools to ascertain whether progress has taken place or not; the very idea that things can get better just doesn’t compute.

A decline is not the same thing as a disappearance. (The statement “x > y” is different from the statement “y = 0.”) Something can decrease a lot without vanishing altogether. That means that the level of violence today is completely irrelevant to the question of whether violence has declined over the course of history.

The only way to answer that question is to compare the level of violence now with the level of violence in the past. And whenever you look at the level of violence in the past, you find a lot of it, even if it isn’t as fresh in memory as the morning’s headlines.

No, the psychological roots of progressophobia run deeper. The deepest is a bias that has been summarized in the slogan “Bad is stronger than good.”21 The idea can be captured in a set of thought experiments suggested by Tversky.

The psychological literature confirms that people dread losses more than they look forward to gains, that they dwell on setbacks more than they savor good fortune, and that they are more stung by criticism than they are heartened by praise. (As a psycholinguist I am compelled to add that the English language has far more words for negative emotions than for positive ones.)

One exception to the Negativity bias is found in autobiographical memory. Though we tend to remember bad events as well as we remember good ones, the negative coloring of the misfortunes fades with time, particularly the ones that happened to us.24 We are wired for nostalgia: in human memory, time heals most wounds.

The cure for the Availability bias is quantitative thinking,

Trump was the beneficiary of a belief—near universal in American journalism—that “serious news” can essentially be defined as “what’s going wrong.” . . . For decades, journalism’s steady focus on problems and seemingly incurable pathologies was preparing the soil that allowed Trump’s seeds of discontent and despair to take root. . . . One consequence is that many Americans today have difficulty imagining, valuing or even believing in the promise of incremental system change, which leads to a greater appetite for revolutionary, smash-the-machine change.

Journalism shifted during the Vietnam and Watergate eras from glorifying leaders to checking their power, with an overshoot toward indiscriminate cynicism in which everything about America's civic actors invites an aggressive takedown.

Sentiment mining assesses the emotional tone of a text by tallying the number and contexts of words with positive and negative connotations, like good, nice, terrible, and horrific.
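The tallying that sentiment mining does can be sketched in a few lines. The tiny word lists below are my own placeholders, not a real research lexicon:

```python
# Minimal sentiment-mining sketch: score text by counting words with
# positive vs. negative connotations. Illustrative lexicon only.
POSITIVE = {"good", "nice", "improve", "progress", "peace"}
NEGATIVE = {"terrible", "horrific", "crisis", "decline", "war"}

def sentiment_score(text: str) -> int:
    """Positive-word count minus negative-word count; >0 reads upbeat, <0 morose."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("a terrible crisis and a horrific war"))  # -4
print(sentiment_score("good progress toward peace"))            # 3
```

Real systems (the studies Pinker cites included) use large curated lexicons and account for context, but the core operation is this tally.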

Putting aside the wiggles and waves that reflect the crises of the day, we see that the impression that the news has become more negative over time is real. The New York Times got steadily more morose from the early 1960s to the early 1970s, lightened up a bit (but just a bit) in the 1980s and 1990s, and then sank into a progressively worse mood in the first decade of the new century.

And here is a shocker: The world has made spectacular progress in every single measure of human well-being. Here is a second shocker: Almost no one knows about it.

In the mid-18th century, life expectancy in Europe and the Americas was around 35, where it had been parked for the 225 previous years for which we have data.3 Life expectancy for the world as a whole was 29.

The life expectancy of hunter-gatherers is around 32.5, and it probably decreased among the peoples who first took up farming because of their starchy diet and the diseases they caught from their livestock and each other.

It returned to the low 30s by the Bronze Age, where it stayed put for thousands of years, with small fluctuations across centuries and regions.4 This period in human history may be called the Malthusian Era, when any advance in agriculture or health was quickly canceled by the resulting bulge in population, though “era” is an odd term for 99.9 percent of our species’ existence.

Progress is an outcome not of magic but of problem-solving.

Problems are inevitable, and at times particular sectors of humanity have suffered terrible setbacks.

Average life spans are stretched the most by decreases in infant and child mortality, both because children are fragile and because the death of a child brings down the average more than the death of a 60-year-old.
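The arithmetic behind this is worth making explicit with toy numbers of my own: a cohort can average 35 even if no one actually dies anywhere near 35.

```python
# Why child mortality dominates average life span (toy cohort):
# half die at age 1, half at age 70.
cohort = [1] * 5 + [70] * 5

average = sum(cohort) / len(cohort)
print(average)  # 35.5 even though nobody dies in midlife
```

So a historical "life expectancy of 35" mostly reflects how many infants died, which is exactly the question the notes turn to next.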

Are we really living longer, or are we just surviving infancy in greater numbers?

So do those of us who survive the ordeals of childbirth and childhood today live any longer than the survivors of earlier eras? Yes, much longer.

No matter how old you are, you have more years ahead of you than people of your age did in earlier decades and centuries.

The economist Steven Radelet has pointed out that “the improvements in health among the global poor in the last few decades are so large and widespread that they rank among the greatest achievements in human history. Rarely has the basic well-being of so many people around the world improved so substantially, so quickly. Yet few people are even aware that it is happening.”13

In his 2005 bestseller The Singularity Is Near, the inventor Ray Kurzweil forecasts that those of us who make it to 2045 will live forever, thanks to advances in genetics, nanotechnology (such as nanobots that will course through our bloodstream and repair our bodies from the inside), and artificial intelligence, which will not just figure out how to do all this but recursively improve its own intelligence without limit.

Lacking the gift of prophecy, no one can say whether scientists will ever find a cure for mortality. But evolution and entropy make it unlikely. Senescence is baked into our genome at every level of organization, because natural selection favors genes that make us vigorous when we are young over those that make us live as long as possible.

Peter Hoffman points out, “Life pits biology against physics in mortal combat.”

“Income—although important both in and of itself and as a component of wellbeing . . .—is not the ultimate cause of wellbeing.”16 The fruits of science are not just high-tech pharmaceuticals such as vaccines, antibiotics, antiretrovirals, and deworming pills. They also comprise ideas—ideas that may be cheap to implement and obvious in retrospect, but which save millions of lives. Examples include boiling, filtering, or adding bleach to water; washing hands;

The historian Fernand Braudel has documented that premodern Europe suffered from famines every few decades.

Many of those who were not starving were too weak to work, which locked them into poverty.

As the comedian Chris Rock observed, “This is the first society in history where the poor people are fat.”

hardship everywhere before the 19th century, rapid improvement in Europe and the United States over the next two centuries, and, in recent decades, the developing world catching up.

Fortunately, the numbers reflect an increase in the availability of calories throughout the range, including the bottom.

Figure 7-2 shows the proportion of children who are stunted in a representative sample of countries which have data for the longest spans of time.

We see that in just two decades the rate of stunting has been cut in half.

Not only has chronic undernourishment been in decline, but so have catastrophic famines—the crises that kill people in large numbers and cause widespread wasting (the condition of being two standard deviations below one’s expected weight)

Figure 7-4 shows the number of deaths in major famines in each decade for the past 150 years, scaled by world population at the time.

The link from crop failure to famine has been broken. Most recent drought- or flood-triggered food crises have been adequately met by a combination of local and international humanitarian response.

In 1798 Thomas Malthus explained that the frequent famines of his era were unavoidable and would only get worse, because “population, when unchecked, increases in a geometrical ratio. Subsistence increases only in an arithmetic ratio. A slight acquaintance with numbers will show the immensity of the first power in comparison with the second.” The implication was that efforts to feed the hungry would only lead to more misery, because they would breed more children who were doomed to hunger in their turn.
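Malthus's "slight acquaintance with numbers" can be reproduced directly. The doubling rate and the fixed increment below are arbitrary units of my own, chosen just to show the shapes of the two curves:

```python
# Geometric (population) vs. arithmetic (subsistence) growth,
# per Malthus's 1798 argument. Units and rates are illustrative.
generations = range(7)
population = [2**t for t in generations]      # doubles each generation
food       = [1 + 2 * t for t in generations] # fixed increment each generation

print(population)  # [1, 2, 4, 8, 16, 32, 64]
print(food)        # [1, 3, 5, 7, 9, 11, 13]
```

Within a few steps the geometric series swamps the arithmetic one, which is the "immensity of the first power" he meant; the next paragraphs explain why both of his premises turned out to be wrong.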

Where did Malthus’s math go wrong? Looking at the first of his curves, we already saw that population growth needn’t increase in a geometric ratio indefinitely, because when people get richer and more of their babies survive, they have fewer babies (see also figure 10-1). Conversely, famines don’t reduce population growth for long. They disproportionately kill children and the elderly, and when conditions improve, the survivors quickly replenish the population.13 As Hans Rosling put it, “You can’t stop population growth by letting poor children die.”14

Looking at the second curve, we discover that the food supply can grow geometrically when knowledge is applied to increase the amount of food that can be coaxed out of a patch of land. Since the birth of agriculture ten thousand years ago, humans have been genetically engineering plants and animals by selectively breeding the ones that had the most calories and fewest toxins and that were the easiest to plant and harvest.

Clever farmers also tinkered with irrigation, plows, and organic fertilizers, but Malthus always had the last word.

The moral imperative was explained to Gulliver by the King of Brobdingnag: “Whoever makes two ears of corn, or two blades of grass to grow where only one grew before, deserves better of humanity, and does more essential service to his country than the whole race of politicians put together.”

British Agricultural Revolution.16 Crop rotation and improvements to plows and seed drills were followed by mechanization, with fossil fuels replacing human and animal muscle.

But the truly gargantuan boost would come from chemistry. The N in SPONCH, the acronym taught to schoolchildren for the chemical elements that make up the bulk of our bodies, stands for nitrogen, a major ingredient of protein, DNA, chlorophyll, and the energy carrier ATP. Nitrogen atoms are plentiful in the air but bound in pairs (hence the chemical formula N2), which are hard to split apart so that plants can use them.

Fertilizer on an industrial scale,

Over the past century, grain yields per hectare have swooped upward while real prices have plunged.

In the United States in 1901, an hour’s wages could buy around three quarts of milk; a century later, the same wages would buy sixteen quarts. The amount of every other foodstuff that can be bought with an hour of labor has multiplied as well: from a pound of butter to five pounds, a dozen eggs to twelve dozen, two pounds of pork chops to five pounds, and nine pounds of flour to forty-nine pounds.
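The purchasing-power multiples in this passage can be tabulated directly from the figures given (a quick sketch; the pairs below are the 1901 and circa-2001 quantities an hour's wages would buy):

```python
# How many units of each foodstuff an hour's U.S. wages bought,
# 1901 versus a century later (figures from the text).
food_1901_vs_2001 = {
    "milk (quarts)":     (3, 16),
    "butter (lbs)":      (1, 5),
    "eggs (dozens)":     (1, 12),
    "pork chops (lbs)":  (2, 5),
    "flour (lbs)":       (9, 49),
}
for item, (then, now) in food_1901_vs_2001.items():
    print(item, round(now / then, 1))  # multiple of 1901 purchasing power
```

The multiples range from about 2.5x (pork chops) to 12x (eggs), all pointing the same way: an hour of labor buys several times more food than it did a century earlier.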

In addition to beating back hunger, the ability to grow more food from less land has been, on the whole, good for the planet. Despite their bucolic charm, farms are biological deserts which sprawl over the landscape at the expense of forests and grasslands. Now that farms have receded in some parts of the world, temperate forests have been bouncing back,

High-tech agriculture, the critics said, consumes fossil fuels and groundwater, uses herbicides and pesticides, disrupts traditional subsistence agriculture, is biologically unnatural, and generates profits for corporations. Given that it saved a billion lives and helped consign major famines to the dustbin of history, this seems to me like a reasonable price to pay. More important, the price need not be with us forever. The beauty of scientific progress is that it never locks us into a technology but can develop new ones with fewer problems than the old ones (a dynamic we will return to here).

There is no such thing as a genetically unmodified crop. Yet traditional environmentalist groups, with what the ecology writer Stewart Brand has called their “customary indifference to starvation,” have prosecuted a fanatical crusade to keep transgenic crops from people—not just from whole-food gourmets in rich countries but from poor farmers in developing ones.

“Poverty has no causes,” wrote the economist Peter Bauer.

History is written not so much by the victors as by the affluent, the sliver of humanity with the leisure and education to write about it.

Norberg, drawing on Braudel, offers vignettes of this era of misery, when the definition of poverty was simple: “if you could afford to buy bread to survive another day, you were not poor.”

Economists speak of a “lump fallacy” or “physical fallacy” in which a finite amount of wealth has existed since the beginning of time, like a lode of gold, and people have been fighting over how to divide it up ever since.4 Among the brainchildren of the Enlightenment is the realization that wealth is created.5 It is created primarily by knowledge and cooperation: networks of people arrange matter into improbable but useful configurations and combine the fruits of their ingenuity and labor. The corollary, just as radical, is that we can figure out how to make more of it.

The endurance of poverty and the transition to modern affluence can be shown in a simple but stunning graph. It plots, for the past two thousand years, a standard measure of wealth creation, the Gross World Product, measured in 2011 international dollars.

The story of the growth of prosperity in human history depicted in figure 8-1 is close to: nothing . . . nothing . . . nothing . . . (repeat for a few thousand years) . . . boom! A millennium after the year 1 CE, the world was barely richer than it was at the time of Jesus.

Starting in the 19th century, the increments turned into leaps and bounds. Between 1820 and 1900, the world’s income tripled. It tripled again in a bit more than fifty years. It took only twenty-five years for it to triple again, and another thirty-three years to triple yet another time. The Gross World Product today has grown almost a hundredfold since the Industrial Revolution was in place in 1820, and almost two hundredfold from the start of the Enlightenment in the 18th century.
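The compounding here checks out: four successive triplings multiply income by 3 to the 4th power, consistent with "almost a hundredfold" growth since 1820.

```python
# Four successive triplings of world income since 1820
# (1820-1900, then ~50 years, ~25 years, ~33 years), compounded.
growth = 3 ** 4
print(growth)  # 81, i.e. "almost a hundredfold"
```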

Indeed, the Gross World Product is a gross underestimate of the expansion of prosperity.

Adam Smith called it the paradox of value: when an important good becomes plentiful, it costs far less than what people are willing to pay for it. The difference is called consumer surplus, and the explosion of this surplus over time is impossible to tabulate.

This is what the economic historian Joel Mokyr calls “the enlightened economy.”8 The machines and factories of the Industrial Revolution, the productive farms of the Agricultural Revolution, and the water pipes of the Public Health Revolution could deliver more clothes, tools, vehicles, books, furniture, calories, clean water, and other things that people want than the craftsmen and farmers of a century before.

“After 1750 the epistemic base of technology slowly began to expand. Not only did new products and techniques emerge; it became better understood why and how the old ones worked, and thus they could be refined, debugged, improved, combined with others in novel ways and adapted to new uses.”

One was the development of institutions that lubricated the exchange of goods, services, and ideas—the dynamic singled out by Adam Smith as the generator of wealth. The economists Douglass North, John Wallis, and Barry Weingast argue that the most natural way for states to function, both in history and in many parts of the world today, is for elites to agree not to plunder and kill each other, in exchange for which they are awarded a fief, franchise, charter, monopoly, turf, or patronage network that allows them to control some sector of the economy and live off the rents (in the economist’s sense of income extracted from exclusive access to a resource).

The third innovation, after science and institutions, was a change in values: an endorsement of what the economic historian Deirdre McCloskey calls bourgeois virtue.12 Aristocratic, religious, and martial cultures have always looked down on commerce as tawdry and venal. But in 18th-century England and the Netherlands, commerce came to be seen as moral and uplifting. Voltaire and other Enlightenment philosophes valorized the spirit of commerce for its ability to dissolve sectarian hatreds:

The Enlightenment thus translated the ultimate question ‘How can I be saved?’ into the pragmatic ‘How can I be happy?’—thereby heralding a new praxis of personal and social adjustment.”

In 1905 the sociologist Max Weber proposed that capitalism depended on a “Protestant ethic” (a hypothesis with the intriguing prediction that Jews should fare poorly in capitalist societies, particularly in business and finance). In any case the Catholic countries of Europe soon zoomed out of poverty too, and a succession of other escapes shown in figure 8-2 have put the lie to various theories explaining why Buddhism, Confucianism, Hinduism, or generic “Asian” or “Latin” values were incompatible with dynamic market economies.

Starting in the late 20th century, poor countries have been escaping from poverty in their turn. The Great Escape is becoming the Great Convergence.

Extreme poverty is being eradicated, and the world is becoming middle class.

In 1800, at the dawn of the Industrial Revolution, most people everywhere were poor. The average income was equivalent to that in the poorest countries in Africa today (about $500 a year in international dollars), and almost 95 percent of the world lived in what counts today as “extreme poverty” (less than $1.90 a day). By 1975, Europe and its offshoots had completed the Great Escape, leaving the rest of the world behind, with one-tenth their income, in the lower hump of a camel-shaped curve.20 In the 21st century the camel has become a dromedary, with a single hump shifted to the right and a much lower tail on the left: the world has become richer and more equal.

In two hundred years the rate of extreme poverty in the world has tanked from 90 percent to 10, with almost half that decline occurring in the last thirty-five years.

Also, an increase in the number of people who can withstand the grind of entropy and the struggle of evolution is a testimonial to the sheer magnitude of the benevolent powers of science, markets, good government, and other modern institutions.

“In 1976,” Radelet writes, “Mao single-handedly and dramatically changed the direction of global poverty with one simple act: he died.”

The death of Mao Zedong is emblematic of three of the major causes of the Great Convergence.

The first is the decline of communism (together with intrusive socialism). For reasons we have seen, market economies can generate wealth prodigiously while totalitarian planned economies impose scarcity, stagnation, and often famine.

A shift from collectivization, centralized control, government monopolies, and suffocating permit bureaucracies (what in India was called “the license raj”) to open economies took place on a number of fronts beginning in the 1980s. They included Deng Xiaoping’s embrace of capitalism in China, the collapse of the Soviet Union and its domination of Eastern Europe, and the liberalization of the economies of India, Brazil, Vietnam, and other countries.

It’s important to add that the market economies which blossomed in the more fortunate parts of the developing world were not the laissez-faire anarchies of right-wing fantasies and left-wing nightmares. To varying degrees, their governments invested in education, public health, infrastructure, and agricultural and job training, together with social insurance and poverty-reduction programs.35

Radelet’s second explanation of the Great Convergence is leadership.

During the decades of stagnation from the 1970s to the early 1990s, many other developing countries were commandeered by psychopathic strongmen with ideological, religious, tribal, paranoid, or self-aggrandizing agendas rather than a mandate to enhance the well-being of their citizens.

The 1990s and 2000s saw a spread of democracy (chapter 14) and the rise of levelheaded, humanistic leaders—not just national statesmen like Nelson Mandela, Corazon Aquino, and Ellen Johnson Sirleaf but local religious and civil-society leaders acting to improve the lives of their compatriots.38

A third cause was the end of the Cold War. It not only pulled the rug out from under a number of tinpot dictators but snuffed out many of the civil wars that had racked developing countries since they attained independence in the 1960s.

A fourth cause is globalization, in particular the explosion in trade made possible by container ships and jet airplanes and by the liberalization of tariffs and other barriers to investment and trade. Classical economics and common sense agree that a larger trading network should make everyone, on average, better off.

Radelet observes that “while working on the factory floor is often referred to as sweatshop labor, it is often better than the granddaddy of all sweatshops: working in the fields as an agricultural day laborer.”

Over the course of a generation, slums, barrios, and favelas can morph into suburbs, and the working class can become middle class.47

Progress consists of unbundling the features of a social process as much as we can to maximize the human benefits while minimizing the harms.

The last, and in many analyses the most important, contributor to the Great Convergence is science and technology.49 Life is getting cheaper, in a good way. Thanks to advances in know-how, an hour of labor can buy more food, health, education, clothing, building materials, and small necessities and luxuries than it used to. Not only can people eat cheaper food and take cheaper medicines, but children can wear cheap plastic sandals instead of going barefoot, and adults can hang out together getting their hair done or watching a soccer game using cheap solar panels and appliances.

Today about half the adults in the world own a smartphone, and there are as many subscriptions as people. In parts of the world without roads, landlines, postal service, newspapers, or banks, mobile phones are more than a way to share gossip and cat photos; they are a major generator of wealth. They allow people to transfer money, order supplies, track the weather and markets, find day labor, get advice on health and farming practices, even obtain a primary education.

Quality of life.

Health, longevity, and education are so much more affordable than they used to be.

Everyone is living longer regardless of income.55 In the richest country two centuries ago (the Netherlands), life expectancy was just forty, and in no country was it above forty-five.

Today, life expectancy in the poorest country in the world (the Central African Republic) is fifty-four, and in no country is it below forty-five.56

GDP per capita correlates with longevity, health, and nutrition.57 Less obviously, it correlates with higher ethical values like peace, freedom, human rights, and tolerance.

Between 2009 and 2016, the proportion of articles in the New York Times containing the word inequality soared tenfold, reaching 1 in 73.1

The Great Recession began in 2007.

In the United States, the share of income going to the richest one percent grew from 8 percent in 1980 to 18 percent in 2015, while the share going to the richest tenth of one percent grew from 2 percent to 8 percent.4

I need a chapter on the topic because so many people have been swept up in the dystopian rhetoric and see inequality as a sign that modernity has failed to improve the human condition. As we will see, this is wrong, and for many reasons.

Income inequality is not a fundamental component of well-being.

The point is made with greater nuance by the philosopher Harry Frankfurt in his 2015 book On Inequality.5 Frankfurt argues that inequality itself is not morally objectionable; what is objectionable is poverty. If a person lives a long, healthy, pleasurable, and stimulating life, then how much money the Joneses earn, how big their house is, and how many cars they drive are morally irrelevant. Frankfurt writes, “From the point of view of morality, it is not important that everyone should have the same. What is morally important is that each should have enough.”

Lump fallacy—the mindset in which wealth is a finite resource,

Since the Industrial Revolution, it has expanded exponentially. That means that when the rich get richer, the poor can get richer, too.

“The poorer half of the population are as poor today as they were in the past, with barely 5 percent of total wealth in 2010, just as in 1910.”8 But total wealth today is vastly greater than it was in 1910, so if the poorer half own the same proportion, they are far richer, not “as poor.”

Among the world’s billionaires is J. K. Rowling, author of the Harry Potter novels, which have sold more than 400 million copies and have been adapted into a series of films seen by a similar number of people.10 Suppose that a billion people have handed over $10 each for the pleasure of a Harry Potter paperback or movie ticket, with a tenth of the proceeds going to Rowling. She has become a billionaire, increasing inequality, but she has made people better off, not worse off (which is not to say that every rich person has made people better off).

Her wealth arose as a by-product of the voluntary decisions of billions of book buyers and moviegoers.

When the rich get too rich, everyone else feels poor, so inequality lowers well-being even if everyone gets richer. This is an old idea in social psychology, variously called the theory of social comparison, reference groups, status anxiety, or relative deprivation.

We will see in chapter 18 that richer people and people in richer countries are (on average) happier than poorer people and people in poorer countries.

In their well-known book The Spirit Level, the epidemiologists Richard Wilkinson and Kate Pickett claim that countries with greater income inequality also have higher rates of homicide, imprisonment, teen pregnancy, infant mortality, physical and mental illness, social distrust, obesity, and substance abuse.14

The Spirit Level theory has been called “the left’s new theory of everything,” and it is as problematic as any other theory that leaps from a tangle of correlations to a single-cause explanation. For one thing, it’s not obvious that people

Wilkinson and Pickett’s sample was restricted to developed countries, but even within that sample the correlations are evanescent, coming and going with choices about which countries to include.

Kelley and Evans held constant the major factors that are known to affect happiness, including GDP per capita, age, sex, education, marital status, and religious attendance, and found that the theory that inequality causes unhappiness “comes to shipwreck on the rock of the facts.”

The authors suggest that whatever envy, status anxiety, or relative deprivation people may feel in poor, unequal countries is swamped by hope. Inequality is seen as a harbinger of opportunity, a sign that education and other routes to upward mobility might pay off for them and their children.

People are content with economic inequality as long as they feel that the country is meritocratic, and they get angry when they feel it isn’t. Narratives about the causes of inequality loom larger in people’s minds than the existence of inequality. That creates an opening for politicians to rouse the rabble by singling out cheaters who take more than their fair share: welfare queens, immigrants, foreign countries, bankers, or the rich, sometimes identified with ethnic minorities.18

Investment in research and infrastructure to escape economic stagnation, regulation of the finance sector to reduce instability, broader access to education and job training to facilitate economic mobility, electoral transparency and finance reform to eliminate illicit influence, and so on.

Economic inequality, then, is not itself a dimension of human well-being, and it should not be confused with unfairness or with poverty. Let’s now turn from the moral significance of inequality to the question of why it has changed over time.

The simplest narrative of the history of inequality is that it comes with modernity.

Inequality, in this story, started at zero, and as wealth increased over time, inequality grew with it. But the story is not quite right.

The image of forager egalitarianism is misleading. For one thing, the hunter-gatherer bands that are still around for us to study are not representative of an ancestral way of life, because they have been pushed into marginal lands and lead nomadic lives that make the accumulation of wealth impossible, if for no other reason than that it would be a nuisance to carry around. But sedentary hunter-gatherers, such as the natives of the Pacific Northwest, which is flush with salmon, berries, and fur-bearing animals, were florid inegalitarians, and developed a hereditary nobility who kept slaves, hoarded luxuries, and flaunted their wealth in gaudy potlatches.

They are less likely to share plant foods, since gathering is a matter of effort, and indiscriminate sharing would allow free-riding.

What happens when a society starts to generate substantial wealth? An increase in absolute inequality (the difference between the richest and poorest) is almost a mathematical necessity.

Some people are bound to take greater advantage of the new opportunities than others, whether by luck, skill, or effort, and they will reap disproportionate rewards.
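The near-necessity of rising absolute inequality can be seen with toy numbers: if every income grows by the same factor, relative inequality is unchanged while the absolute gap widens proportionally. The incomes and growth factor below are illustrative, not from the text.

```python
# If every income grows by the same factor, the ratio between
# rich and poor is unchanged, but the absolute gap widens.
poor, rich = 1_000, 10_000
factor = 10  # society-wide growth, applied uniformly

gap_before = rich - poor                     # 9,000
gap_after = rich * factor - poor * factor    # 90,000
ratio_before = rich / poor                   # 10.0
ratio_after = (rich * factor) / (poor * factor)  # still 10.0
print(gap_before, gap_after, ratio_before, ratio_after)
```

Uniform growth is the most egalitarian case; since some people capture more of the new opportunities than others, the real-world gap grows even faster.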

As the Industrial Revolution gathered steam, European countries made a Great Escape from universal poverty, leaving the other countries behind.

What’s significant about the decline in inequality is that it’s a decline in poverty.

But then, starting around 1980, inequality bounced into a decidedly un-Kuznetsian rise.

The rise and fall in inequality in the 19th century reflects Kuznets’s expanding economy, which gradually pulls more people into urban, skilled, and thus higher-paying occupations. But the 20th-century plunge—which has been called the Great Leveling or the Great Compression—had more sudden causes. The plunge overlaps the two world wars, and that is no coincidence: major wars often level the income distribution.

The historian Walter Scheidel identifies “Four Horsemen of Leveling”: mass-mobilization warfare, transformative revolution, state collapse, and lethal pandemics.

The four horsemen reduce inequality by killing large numbers of workers, driving up the wages of those who survive.

But modernity has brought a more benign way to reduce inequality. As we have seen, a market economy is the best poverty-reduction program we know of for an entire country.

(Another way of putting it is that a market economy maximizes the average, but we also care about the variance and the range.) As the circle of sympathy in a country expands to encompass the poor (and as people want to insure themselves should they ever become poor), they increasingly allocate a portion of their pooled resources—that is, government funds—to alleviating that poverty.

The net result is “redistribution,” but that is something of a misnomer, because the goal is to raise the bottom, not lower the top, even if in practice the top is lowered.

Figure 9-4 shows that social spending took off in the middle decades of the 20th century (in the United States, with the New Deal in the 1930s; in other developed countries, with the rise of the welfare state after World War II). Social spending now takes up a median of 22 percent of GDP across developed countries.31

The explosion in social spending has redefined the mission of government: from warring and policing to also nurturing.32 Governments underwent this transformation for several reasons. Social spending inoculates citizens against the appeal of communism and fascism. Some of the benefits, like universal education and public health, are public goods that accrue to everyone, not just the direct beneficiaries.

Social spending is designed to help people who have less money, with the bill footed by people who have more money. This is the principle known as redistribution, the welfare state, social democracy, or socialism (misleadingly, because free-market capitalism is compatible with any amount of social spending).

The United States is famously resistant to anything smacking of redistribution. Yet it allocates 19 percent of its GDP to social services, and despite the best efforts of conservatives and libertarians the spending has continued to grow. The most recent expansions are a prescription drug benefit introduced by George W. Bush and the eponymous health insurance plan known as Obamacare introduced by his successor.

Many Americans are forced to pay for health, retirement, and disability benefits through their employers rather than the government. When this privately administered social spending is added to the public portion, the United States vaults from twenty-fourth into second place among the thirty-five OECD countries, just behind France.34

Social spending, like everything, has downsides. As with all insurance, it can create a “moral hazard” in which the insured slack off or take foolish risks, counting on the insurer to bail them out if they fail.

The rise of inequality in wealthy nations that began around 1980. This is the development that inspired the claim that life has gotten worse for everyone but the richest.

A “second industrial revolution” driven by electronic technologies replayed the Kuznets rise by creating a demand for highly skilled professionals, who pulled away from the less educated at the same time that the jobs requiring less education were eliminated by automation. Globalization allowed workers in China, India, and elsewhere to underbid their American competitors in a worldwide labor market, and the domestic companies that failed to take advantage of these offshoring opportunities were outcompeted on price.

Declining inequality worldwide and rising inequality within rich countries can be combined into a single graph which pleasingly takes the shape of an elephant (figure 9-5).

The cliché about globalization is that it creates winners and losers, and the elephant curve displays them as peaks and valleys. It reveals that the winners include most of humanity. The elephant’s bulk (its body and head), which includes about seven-tenths of the world’s population, consists of the “emerging global middle class,” mainly in Asia. Over this period they saw cumulative gains of 40 to 60 percent in their real incomes. The nostrils at the tip of the trunk consist of the world’s richest one percent, who also saw their incomes soar.

Globalization’s “losers”: the lower middle classes of the rich world, who gained less than 10 percent. These are the focus of the new concern about inequality: the “hollowed-out middle class,” the Trump supporters, the people globalization left behind.

The rich certainly have prospered more than anyone else, perhaps more than they should have, but the claim about everyone else is not accurate, for a number of reasons.

Most obviously, it’s false for the world as a whole: the majority of the human race has become much better off. The two-humped camel has become a one-humped dromedary.

Extreme poverty has plummeted and may disappear; and both international and global inequality coefficients are in decline. Now, it’s true that the world’s poor have gotten richer in part at the expense of the American lower middle class, and if I were an American politician I would not publicly say that the tradeoff was worth it. But as citizens of the world considering humanity as a whole, we have to say that the tradeoff is worth it.

Today’s discussions of inequality often compare the present era unfavorably with a golden age of well-paying, dignified, blue-collar jobs that have been made obsolete by automation and globalization.

What’s relevant to well-being is how much people earn, not how high they rank.

Stephen Rose divided the American population into classes using fixed milestones rather than quantiles. “Poor” was defined as an income of $0–$30,000 (in 2014 dollars) for a family of three, “lower middle class” as $30,000–$50,000, and so on.46 The study found that in absolute terms, Americans have been moving on up. Between 1979 and 2014, the percentage of poor Americans dropped from 24 to 20,

Upper middle class ($100,000–$350,000),

The middle class is being hollowed out in part because so many Americans are becoming affluent. Inequality undoubtedly increased—the rich got richer faster than the poor and middle class got richer—but everyone (on average) got richer.

A third reason that rising inequality has not made the lower classes worse off is that low incomes have been mitigated by social transfers. For all its individualist ideology, the United States has a lot of redistribution. The income tax is still graduated, and low incomes are buffered by a “hidden welfare state” that includes unemployment insurance, Social Security, Medicare, Medicaid, Temporary Assistance for Needy Families, food stamps, and the Earned Income Tax Credit, a kind of negative income tax in which the government boosts the income of low earners. Put them together and America becomes far less unequal.

The United States has not gone as far as countries like Germany and Finland,

Some kind of welfare state may be found in all developed countries, and it reduces inequality even when it is hidden.50

The sociologist Christopher Jencks has calculated that when the benefits from the hidden welfare state are added up, and the cost of living is estimated in a way that takes into account the improving quality and falling price of consumer goods, the poverty rate has fallen in the past fifty years by more than three-quarters, and in 2013 stood at 4.8 percent.

The progress stagnated around the time of the Great Recession, but it picked up in 2015 and 2016 (not shown in the graph), when middle-class income reached a record high and the poverty rate showed its largest drop since 1999.54

The unsheltered homeless fell in number between 2007 and 2015 by almost a third, despite the Great Recession.55

Income is just a means to an end: a way of paying for things that people need, want, and like, or as economists gracelessly call it, consumption. When poverty is defined in terms of what people consume rather than what they earn, we find that the American poverty rate has declined by 90 percent since 1960, from 30 percent of the population to just 3 percent. The two forces that have famously increased inequality in income have at the same time decreased inequality in what matters.

Together, technology and globalization have transformed what it means to be a poor person, at least in developed countries.

The poor used to be called the have-nots. In 2011, more than 95 percent of American households below the poverty line had electricity, running water, flush toilets, a refrigerator, a stove, and a color TV.58 (A century and a half before, the Rothschilds, Astors, and Vanderbilts had none of these things.)

The rich have gotten richer, but their lives haven’t gotten that much better. Warren Buffett may have more air conditioners than most people, or better ones, but by historical standards the fact that a majority of poor Americans even have an air conditioner is astonishing.

Though disposable income has increased, the pace of the increase is slow, and the resulting lack of consumer demand may be dragging down the economy as a whole.62 The hardships faced by one sector of the population—middle-aged, less-educated, non-urban white Americans—are real and tragic, manifested in higher rates of drug overdose (chapter 12) and suicide

Truck drivers, for example, make up the most common occupation in a majority of states, and self-driving vehicles may send them the way of scriveners, wheelwrights, and switchboard operators. Education, a major driver of economic mobility, is not keeping up with the demands of modern economies: tertiary education has soared in cost (defying the inexpensification of almost every other good), and in poor American neighborhoods, primary and secondary education are unconscionably substandard. Many parts of the American tax system are regressive, and money buys too much political influence.

Rather than tilting at inequality per se, it may be more constructive to target the specific problems lumped with it.65 An obvious priority is to boost the rate of economic growth, since it would increase everyone’s slice of the pie and provide more pie to redistribute.

The next step in the historic trend toward greater social spending may be a universal basic income (or its close relative, a negative income tax).

Despite its socialist aroma, the idea has been championed by economists (such as Milton Friedman), politicians (such as Richard Nixon), and states (such as Alaska) that are associated with the political right, and today analysts across the political spectrum are toying with it.

It could rationalize the kludgy patchwork of the hidden welfare state, and it could turn the slow-motion disaster of robots replacing workers into a horn of plenty. Many of the jobs that robots will take over are jobs that people don’t particularly enjoy, and the dividend in productivity, safety, and leisure could be a boon to humanity as long as it is widely shared.

Inequality is not the same as poverty, and it is not a fundamental dimension of human flourishing. In comparisons of well-being across countries, it pales in importance next to overall wealth. An increase in inequality is not necessarily bad: as societies escape from universal poverty, they are bound to become more unequal, and the uneven surge may be repeated when a society discovers new sources of wealth.


The key idea is that environmental problems, like other problems, are solvable, given the right knowledge.

Beginning in the 1960s, the environmental movement grew out of scientific knowledge (from ecology, public health, and earth and atmospheric sciences) and a Romantic reverence for nature.

In this chapter I will present a newer conception of environmentalism which shares the goal of protecting the air and water, species, and ecosystems but is grounded in Enlightenment optimism rather than Romantic declinism.

This newer environmentalism goes by several names: Ecomodernism, Ecopragmatism, Earth Optimism, Enlightenment Environmentalism, or Humanistic Environmentalism.3

Ecomodernism begins with the realization that some degree of pollution is an inescapable consequence of the Second Law of Thermodynamics. When people use energy to create a zone of structure in their bodies and homes, they must increase entropy elsewhere in the environment in the form of waste, pollution, and other forms of disorder.

When native peoples first set foot in an ecosystem, they typically hunted large animals to extinction, and often burned and cleared vast swaths of forest.

When humans took up farming, they became more disruptive still.

A second realization of the ecomodernist movement is that industrialization has been good for humanity.8 It has fed billions, doubled life spans, slashed extreme poverty, and, by replacing muscle with machinery, made it easier to end slavery, emancipate women, and educate children (chapters 7, 15, and 17). It has allowed people to read at night, live where they want, stay warm in winter, see the world, and multiply human contact. Any costs in pollution and habitat loss have to be weighed against these gifts.

The third premise is that the tradeoff that pits human well-being against environmental damage can be renegotiated by technology. How to enjoy more calories, lumens, BTUs, bits, and miles with less pollution and land is itself a technological problem, and one that the world is increasingly solving.

Figure 10-1 shows that the world population growth rate peaked at 2.1 percent a year in 1962, fell to 1.2 percent by 2010, and will probably fall to less than 0.5 percent by 2050 and be close to zero around 2070, when the population is projected to level off and then decline.

The other scare from the 1960s was that the world would run out of resources. But resources just refuse to run out. The 1980s came and went without the famines that were supposed to starve tens of millions of Americans and billions of people worldwide. Then the year 1992 passed and, contrary to projections from the 1972 bestseller The Limits to Growth and similar philippics, the world did not exhaust its aluminum, copper, chromium, gold, nickel, tin, tungsten, or zinc.

From the 1970s to the early 2000s newsmagazines periodically illustrated cover stories on the world’s oil supply with a gas gauge pointing to Empty. In 2013 The Atlantic ran a cover story about the fracking revolution entitled “We Will Never Run Out of Oil.”

And the Rare Earths War? In reality, when China squeezed its exports in 2010 (not because of shortages but as a geopolitical and mercantilist weapon), other countries started extracting rare earths from their own mines, recycling them from industrial waste, and re-engineering products so they no longer needed them.15

Instead, as the most easily extracted supply of a resource becomes scarcer, its price rises, encouraging people to conserve it, get at the less accessible deposits, or find cheaper and more plentiful substitutes.

In reality, societies have always abandoned a resource for a better one long before the old one was exhausted.

In The Big Ratchet: How Humanity Thrives in the Face of Natural Crisis, the geographer Ruth DeFries describes the sequence as “ratchet-hatchet-pivot.” People discover a way of growing more food, and the population ratchets upward. The method fails to keep up with the demand or develops unpleasant side effects, and the hatchet falls. People then pivot to a new method.

Figure 10-3 shows that since 1970, when the Environmental Protection Agency was established, the United States has slashed its emissions of five air pollutants by almost two-thirds. Over the same period, the population grew by more than 40 percent, and those people drove twice as many miles and became two and a half times richer. Energy use has leveled off, and even carbon dioxide emissions have turned a corner, a point to which we will return.

They mainly reflect gains in efficiency and emission control.

Though tropical forests are still, alarmingly, being cut down, between the middle of the 20th century and the turn of the 21st the rate fell by two-thirds (figure 10-4).24 Deforestation of the world’s largest tropical forest, the Amazon, peaked in 1995, and from 2004 to 2013 the rate fell by four-fifths.25

Thanks to habitat protection and targeted conservation efforts, many beloved species have been pulled from the brink of extinction, including albatrosses, condors, manatees, oryxes, pandas, rhinoceroses, Tasmanian devils, and tigers; according to the ecologist Stuart Pimm, the rate of bird extinctions has been reduced by 75 percent.31 Though many species remain in precarious straits, a number of ecologists and paleontologists believe that the claim that humans are causing a mass extinction like the Permian and Cretaceous is hyperbolic.

One key is to decouple productivity from resources: to get more human benefit from less matter and energy. This puts a premium on density.36 As agriculture becomes more intensive by growing crops that are bred or engineered to produce more protein, calories, and fiber with less land, water, and fertilizer, farmland is spared, and it can morph back to natural habitats. (Ecomodernists point out that organic farming, which needs far more land to produce a kilogram of food, is neither green nor sustainable.)

All these processes are helped along by another friend of the Earth, dematerialization. Progress in technology allows us to do more with less.

Digital technology is also dematerializing the world by enabling the sharing economy, so that cars, tools, and bedrooms needn’t be made in huge numbers that sit around unused most of the time.

Hipsterization leads them to distinguish themselves by their tastes in beer, coffee, and music.

Just as we must not accept the narrative that humanity inexorably despoils every part of the environment, we must not accept the narrative that every part of the environment will rebound under our current practices.

If the emission of greenhouse gases continues, the Earth’s average temperature will rise to at least 1.5°C (2.7°F) above the preindustrial level by the end of the 21st century, and perhaps to 4°C (7.2°F) above that level or more. That will cause more frequent and more severe heat waves, more floods in wet regions, more droughts in dry regions, heavier storms, more severe hurricanes, lower crop yields in warm regions, the extinction of more species, the loss of coral reefs (because the oceans will be both warmer and more acidic), and an average rise in sea level of between 0.7 and 1.2 meters (2 and 4 feet) from both the melting of land ice and the expansion of seawater. (Sea level has already risen almost eight inches since 1870, and the rate of the rise appears to be accelerating.) Low-lying areas would be flooded, island nations would disappear beneath the waves, large stretches of farmland would no longer be arable, and millions of people would be displaced. The effects could get still worse in the 22nd century and beyond, and in theory could trigger upheavals such as a diversion of the Gulf Stream (which would turn Europe into Siberia) or a collapse of Antarctic ice sheets.

A recent survey found that exactly four out of 69,406 authors of peer-reviewed articles in the scientific literature rejected the hypothesis of anthropogenic global warming, and that “the peer-reviewed literature contains no convincing evidence against [the hypothesis].”

Nonetheless, a movement within the American political right, heavily underwritten by fossil fuel interests, has prosecuted a fanatical and mendacious campaign to deny that greenhouse gases are warming the planet.47

The problem is that carbon emissions are a classic public goods game, also known as a Tragedy of the Commons. People benefit from everyone else’s sacrifices and suffer from their own, so everyone has an incentive to be a free rider and let everyone else make the sacrifice, and everyone suffers. A standard remedy for public goods dilemmas is a coercive authority that can punish free riders. But any government with the totalitarian power to abolish artistic pottery is unlikely to restrict that power to maximizing the common good. One can, alternatively, daydream
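The free-rider logic of a public goods game can be made concrete with a toy payoff calculation. The numbers below are hypothetical, chosen only to satisfy the public-goods structure: each contribution costs its maker more than it returns to them individually, yet returns more than its cost to the group as a whole.

```python
# Toy public-goods game: contributing costs the contributor 1 unit but
# yields `benefit` units to every player, including the contributor.
# With benefit = 0.4 and four players, contributing is socially beneficial
# (0.4 * 4 > 1) yet individually irrational (0.4 < 1): the free-rider incentive.
def payoff(my_contribution, others_contributions, benefit=0.4):
    total = my_contribution + sum(others_contributions)
    return benefit * total - my_contribution

others = [1, 1, 1]  # the three other players all contribute
print(round(payoff(1, others), 2))  # cooperate:  0.4 * 4 - 1 = 0.6
print(round(payoff(0, others), 2))  # free-ride:  0.4 * 3 - 0 = 1.2
```

Free-riding beats cooperating no matter what the others do, yet if everyone free-rides, everyone earns 0, worse than the 0.6 each would get under universal cooperation. That is the tragedy.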

Most important, the sacrifice needed to bring carbon emissions down by half and then to zero is far greater than forgoing jewelry: it would require forgoing electricity, heating, cement, steel, paper, travel, and affordable food and clothing.

Escaping from poverty requires abundant energy.

Economic progress is an imperative in rich and poor countries alike precisely because it will be needed to adapt to the climate change that does occur. Thanks in good part to prosperity, humanity has been getting healthier (chapters 5 and 6), better fed (chapter 7), more peaceful (chapter 11), and better protected from natural hazards and disasters (chapter 12). These advances have made humanity more resilient to natural and human-made threats: disease outbreaks don’t become pandemics, crop failures in one region are alleviated by surpluses in another, local skirmishes are defused before they erupt into war, populations are better protected against storms, floods, and droughts.

The enlightened response to climate change is to figure out how to get the most energy with the least emission of greenhouse gases. There is, to be sure, a tragic view

Ausubel notes that the modern world has been progressively decarbonizing.

Annual CO2 emissions may have leveled off for the time being at around 36 billion tons, but that’s still a lot of CO2 added to the atmosphere every year, and there is no sign of the precipitous plunge we would need to stave off the harmful outcomes. Decarbonization therefore needs to be helped along with pushes from policy and technology, an idea called deep decarbonization.73

A second key to deep decarbonization brings up an inconvenient truth for the traditional Green movement: nuclear power is the world’s most abundant and scalable carbon-free energy source.

Nuclear energy, in contrast, represents the ultimate in density,

It’s often said that with climate change, those who know the most are the most frightened, but with nuclear power, those who know the most are the least frightened.

“The French have two kinds of reactors and hundreds of kinds of cheese, whereas in the United States the figures are reversed.”89

The benefits of advanced nuclear energy are incalculable.

An energy source that is cheaper, denser, and cleaner than fossil fuels would sell itself, requiring no herculean political will or international cooperation.92 It would not just mitigate climate change but furnish manifold other gifts. People in the developing world could skip the middle rungs in the energy ladder, bringing their standard of living up to that of the West without choking on coal smoke. Affordable desalination of seawater, an energy-ravenous process, could irrigate farms, supply drinking water, and, by reducing the need for both surface water and hydro power, allow dams to be dismantled, restoring the flow of rivers to lakes and seas and revivifying entire ecosystems.

The last of these is critical for a simple reason. Even if greenhouse gas emissions are halved by 2050 and zeroed by 2075, the world would still be on course for risky warming, because the CO2 already emitted will remain in the atmosphere for a very long time. It’s not enough to stop thickening the greenhouse; at some point we have to dismantle it.

The obvious way to remove CO2 from the air, then, is to recruit as many carbon-hungry plants as we can to help us. We can do this by encouraging the transition from deforestation to reforestation and afforestation (planting new forests), by reversing tillage and wetland destruction, and by restoring coastal and marine habitats.

Will any of this happen? The obstacles are unnerving; they include the world’s growing thirst for energy, the convenience of fossil fuels with their vast infrastructure, the denial of the problem by energy corporations and the political right, the hostility to technological solutions from traditional Greens and the climate justice left, and the tragedy of the carbon commons.

Despite a half-century of panic, humanity is not on an irrevocable path to ecological suicide.


In The Better Angels of Our Nature I showed that, as of the first decade of the 21st century, every objective measure of violence had been in decline.

For most of human history, war was the natural pastime of governments, peace a mere respite between wars.2

(Great powers are the handful of states and empires that can project force beyond their borders, that treat each other as peers, and that collectively control a majority of the world’s military resources.)

It’s not just the great powers that have stopped fighting each other. War in the classic sense of an armed conflict between the uniformed armies of two nation-states appears to be obsolescent.

The world’s wars are now concentrated almost exclusively in a zone stretching from Nigeria to Pakistan, an area containing less than a sixth of the world’s population. Those wars are civil wars, which the Uppsala Conflict Data Program (UCDP) defines as an armed conflict between a government and an organized force that verifiably kills at least a thousand soldiers and civilians a year.

The recent uptick is driven mainly by conflicts that have a radical Islamist group on one side (eight of the eleven in 2015, ten of the twelve in 2016); without them, there would have been no increase in the number of wars at all. Perhaps not coincidentally, two of the wars in 2014 and 2015 were fueled by another counter-Enlightenment ideology, Russian nationalism, which drove separatist forces, backed by Vladimir Putin, to battle the government of Ukraine in two of its provinces.

The worst of the ongoing wars is in Syria,

“Wars begin in the minds of men.” And indeed we find that the turn away from war consists in more than just a reduction in wars and war deaths; it also may be seen in nations’ preparations for war. The prevalence of conscription, the size of armed forces, and the level of global military spending as a percentage of GDP have all decreased in recent decades.

Kant’s famous essay “Perpetual Peace.”19

As we saw in chapter 1, many Enlightenment thinkers advanced the theory of gentle commerce, according to which international trade should make war less appealing. Sure enough, trade as a proportion of GDP shot up in the postwar era, and quantitative analyses have confirmed that trading countries are less likely to go to war, holding all else constant.21

Another brainchild of the Enlightenment is the theory that democratic government serves as a brake on glory-drunk leaders who would drag their countries into pointless wars. Starting in the 1970s, and accelerating

Democratic Peace theory, according to which pairs of countries that are more democratic are less likely to confront each other in militarized disputes.22

Yet the biggest single change in the international order is an idea we seldom appreciate today: war is illegal.

That cannot happen today: the world’s nations have committed themselves to not waging war except in self-defense or with the approval of the United Nations Security Council. States are immortal, borders are grandfathered in, and any country that indulges in a war of conquest can expect opprobrium, not acquiescence, from the rest.

War “enlarges the mind of a people and raises their character,” wrote Alexis de Tocqueville. It is “life itself,” said Émile Zola; “the foundation of all the arts . . . [and] the high virtues and faculties of man,” wrote John Ruskin.

Romantic militarism sometimes merged with romantic nationalism, which exalted the language, culture, homeland, and racial makeup of an ethnic group—the ethos of blood and soil—and held that a nation could fulfill its destiny only as an ethnically cleansed sovereign state.

But perhaps the biggest impetus to romantic militarism was declinism, the revulsion among intellectuals at the thought that ordinary people seemed to be enjoying their lives in peace and prosperity.34 Cultural pessimism became particularly entrenched in Germany through the influence of Schopenhauer, Nietzsche, Jacob Burckhardt, Georg Simmel, and Oswald Spengler, author in 1918–23 of The Decline of the West. (We will return to these ideas in chapter 23.) To this day, historians of World War I puzzle over why England and Germany, countries with a lot in common—Western, Christian, industrialized, affluent—would choose to hold a pointless bloodbath. The reasons are many and tangled, but insofar as they involve ideology, Germans before World War I “saw themselves as outside European or Western civilization,” as Arthur Herman points out.35 In particular, they thought they were bravely resisting the creep of a liberal, democratic, commercial culture that had been sapping the vitality of the West since the Enlightenment, with the complicity of Britain and the United States. Only from the ashes of a redemptive cataclysm, many thought, could a new heroic order arise.

Worldwide, injuries account for about a tenth of all deaths, outnumbering the victims of AIDS, malaria, and tuberculosis combined, and are responsible for 11 percent of the years lost to death and disability.

Though lethal injuries are a major scourge of human life, bringing the numbers down is not a sexy cause. The inventor of the highway guard rail did not get a Nobel Prize, nor are humanitarian awards given to designers of clearer prescription drug labels.

More people are killed in homicides than wars.

But in a sweeping historical development that the German sociologist Norbert Elias called the Civilizing Process, Western Europeans, starting in the 14th century, began to resolve their disputes in less violent ways.6 Elias credited the change to the emergence of centralized kingdoms out of the medieval patchwork of baronies and duchies, so that the endemic feuding, brigandage, and warlording were tamed by a “king’s peace.” Then, in the 19th century, criminal justice systems were further professionalized by municipal police forces and a more deliberative court system.

People became enmeshed in networks of commercial and occupational obligations laid out in legal and bureaucratic rules. Their norms for everyday conduct shifted from a macho culture of honor, in which affronts had to be answered with violence, to a gentlemanly culture of dignity, in which status was won by displays of propriety and self-control.

(Homicide rates are the most reliable indicator of violent crime across different times and places because a corpse is always hard to overlook, and rates of homicide correlate with rates of other violent crimes like robbery, assault, and rape.)

Violent crime is a solvable problem.

Half of the world’s homicides are committed in just twenty-three countries containing about a tenth of humanity, and a quarter are committed in just four: Brazil (25.2), Colombia (25.9), Mexico (12.9), and Venezuela. (The world’s two murder zones—northern Latin America and southern sub-Saharan Africa—are distinct from its war zones, which stretch from Nigeria through the Middle East into Pakistan.) The lopsidedness continues down the fractal scale. Within a country, most of the homicides cluster in a few cities, such as Caracas (120 per 100,000) and San Pedro Sula in Honduras (187). Within cities, the homicides cluster in a few neighborhoods; within neighborhoods, they cluster in a few blocks; and within blocks, many are carried out by a few individuals.17 In my hometown of Boston, 70 percent of the shootings take place in 5 percent of the city, and half the shootings are perpetrated by one percent of the youths.18

High rates of homicide can be brought down quickly.

Combine the cockeyed distribution of violent crime with the proven possibility that high rates of violent crime can be brought down quickly, and the math is straightforward: a 50 percent reduction in thirty years is not just practicable but almost conservative.
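The “straightforward math” can be checked directly: halving a rate over thirty years requires only a modest compound annual decline. A minimal sketch (the computation is illustrative, not the author’s):

```python
# Constant annual rate of decline needed to halve a homicide rate in 30 years.
# Solve (1 - r)**30 = 0.5 for r.
years = 30
remaining = 0.5  # fraction of the starting rate left after `years`
annual_decline = 1 - remaining ** (1 / years)
print(f"{annual_decline:.1%} per year")  # prints "2.3% per year"
```

A sustained decline of roughly 2.3 percent a year, well within the range that cities like Bogotá and New York have actually achieved, is all the target demands.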

This “Hobbesian trap,” as it is sometimes called, can easily set off cycles of feuding and vendetta: you have to be at least as violent as your adversaries lest you become their doormat. The largest category of homicide, and the one that varies the most across times and places, consists of confrontations between loosely acquainted young men over turf, reputation, or revenge. A disinterested third party with a monopoly on the legitimate use of force—that is, a state with a police force and judiciary—can nip this cycle in the bud. Not only does it disincentivize aggressors by the threat of punishment, but it reassures everyone else that the aggressors are disincentivized and thereby relieves them of the need for belligerent self-defense.

Here is Eisner’s one-sentence summary of how to halve the homicide rate within three decades: “An effective rule of law, based on legitimate law enforcement, victim protection, swift and fair adjudication, moderate punishment, and humane prisons is critical to sustainable reductions in lethal violence.”32 The adjectives effective, legitimate, swift, fair, moderate, and humane differentiate his advice from the get-tough-on-crime rhetoric favored by right-wing politicians.

Together with the presence of law enforcement, the legitimacy of the regime appears to matter, because people not only respect legitimate authority themselves but factor in the degree to which they expect their potential adversaries to respect it.

Thomas Abt and Christopher Winship

They concluded that the single most effective tactic for reducing violent crime is focused deterrence. A “laser-like focus” must first be directed on the neighborhoods where crime is rampant or even just starting to creep up, with the “hot spots” identified by data gathered in real time. It must be further beamed at the individuals and gangs who are picking on victims or roaring for a fight. And it must deliver a simple and concrete message about the behavior that is expected of them, like “Stop shooting and we will help you, keep shooting and we will put you in prison.” Getting the message through, and then enforcing it, depends on the cooperation of other members of the community—the store owners, preachers, coaches, probation officers, and relatives.

Also provably effective is cognitive behavioral therapy.

It is a set of protocols designed to override the habits of thought and behavior that lead to criminal acts.

Therapies that teach strategies of self-control. Troublemakers also have narcissistic and sociopathic thought patterns, such as that they are always in the right, that they are entitled to universal deference, that disagreements are personal insults, and that other people have no feelings or interests.

Together with anarchy, impulsiveness, and opportunity, a major trigger of criminal violence is contraband.

Violent crime exploded in the United States when alcohol was prohibited in the 1920s and when crack cocaine became popular in the late 1980s, and it is rampant in Latin American and Caribbean countries in which cocaine, heroin, and marijuana are trafficked today. Drug-fueled violence remains an unsolved international problem.

“Aggressive drug enforcement yields little anti-drug benefits and generally increases violence,” while “drug courts and treatment have a long history of effectiveness.”

Neither the right-to-carry laws favored by the right nor the bans and restrictions favored by the left have been shown to make much difference, though there is much we don’t know, and there are political and practical impediments to finding out more.39

In 1965 a young lawyer named Ralph Nader published Unsafe at Any Speed, a j’accuse of the industry for neglecting safety in automotive design. Soon after, the National Highway Traffic Safety Administration was established and legislation was passed requiring new cars to be equipped with a number of safety features. Yet the graph shows that steeper reductions came before the activism and the legislation, and the auto industry was sometimes ahead of its customers and regulators.

In 1980 Mothers Against Drunk Driving was formed, and it lobbied for higher drinking ages, lower legal blood alcohol levels, and the stigmatization of drunk driving, which popular culture had treated as a source of comedy (as in the movies North by Northwest and Arthur).

The Brooklyn Dodgers, before they moved to Los Angeles, had been named after the city’s pedestrians, famous for their skill at darting out of the way of hurtling streetcars.

When robotic cars are ubiquitous, they could save more than a million lives a year, becoming one of the greatest gifts to human life since the invention of antibiotics.

After car crashes, the likeliest cause of accidental death is falls, followed by drownings and fires, then poisonings.

Figure 12-6 shows an apparent exception to the conquest of accidents: the category called “Poison (solid or liquid).” The steep rise starting in the 1990s is anomalous in a society that is increasingly latched,

Then I realized that the category of accidental poisonings includes drug overdoses.

In 2013, 98 percent of the “Poison” deaths were from drugs (92 percent) or alcohol (6 percent), and almost all the others were from gases and vapors (mostly carbon monoxide). Household and occupational hazards like solvents, detergents, insecticides, and lighter fluid were responsible for less than a half of one percent of the poisoning deaths, and would scrape the bottom of figure 12-6

The curve begins to rise in the psychedelic 1960s, jerks up again during the crack cocaine epidemic of the 1980s, and blasts off during the far graver epidemic of opioid addiction in the 21st century. Starting in the 1990s, doctors overprescribed synthetic opioid painkillers like oxycodone, hydrocodone, and fentanyl, which are not just addictive but gateway drugs to heroin.

A sign that the measures might be effective is that the number of overdoses of prescription opioids (though not of illicit heroin and fentanyl) peaked in 2010 and may be starting to come down.56

The peak age of poisoning deaths in 2011 was around fifty, up from the low forties in 2003, the late thirties in 1993, the early thirties in 1983, and the early twenties in 1973.57 Do the subtractions and you find that in every decade it’s the members of the generation born between 1953 and 1963 who are drugging themselves to death. Despite perennial panic about teenagers, today’s kids are, relatively speaking, all right, or at least better. According to a major longitudinal study of teenagers called Monitoring the Future, high schoolers’ use of alcohol, cigarettes, and drugs (other than marijuana and vaping) has dropped to the lowest levels since the survey began in 1976.58
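The subtractions can be laid out explicitly. The representative ages below are point readings of the approximate figures cited in the text (“early twenties” in 1973, “early thirties” in 1983, and so on), so the recovered birth years are illustrative:

```python
# Peak age of U.S. poisoning deaths by year, read from the figures in the text.
peak_ages = {1973: 22, 1983: 31, 1993: 38, 2003: 42, 2011: 50}

# Subtract age from year to recover the birth year of each decade's peak cohort.
birth_years = {year: year - age for year, age in peak_ages.items()}
print(birth_years)  # every value falls in roughly the same 1951-1961 window
```

However the approximate ages are read, the birth years cluster in a single narrow window: the same cohort keeps reappearing at the peak, a decade older each time.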

Humanity’s conquest of everyday danger is a peculiarly unappreciated form of progress.

Just as people tend not to see accidents as atrocities (at least when they are not the victims), they don’t see gains in safety as moral triumphs, if they are aware of them at all. Yet the sparing of millions of lives, and the reduction of infirmity, disfigurement, and suffering on a massive scale, deserve our gratitude and demand an explanation. That is true even of murder, the most moralized of acts, whose rate has plummeted for reasons that defy standard narratives.


It’s because terrorism, as it is now defined, is largely a phenomenon of war, and wars no longer take place in the United States or Western Europe.

A majority of the world’s terrorist deaths take place in zones of civil war (including 8,831 in Iraq, 6,208 in Afghanistan, 5,288 in Nigeria, 3,916 in Syria, 1,606 in Pakistan, and 689 in Libya), and many of these are double-counted as war deaths, because “terrorism” during a civil war is simply a war crime—a deliberate attack on civilians—committed by a group other than the government.

About twice as many Americans have been killed since 1990 by right-wing extremists as by Islamist terror groups.

Modern terrorism is a by-product of the vast reach of the media.

Terrorists kill innocent people, especially in circumstances in which readers of the news can imagine themselves. News media gobble the bait and give the atrocities saturation coverage. The availability heuristic kicks in, and people become stricken with a fear that is unrelated to the level of danger.

The legal scholar Adam Lankford has analyzed the motives of the overlapping categories of suicide terrorists, rampage shooters, and hate crime killers, including both the self-radicalized lone wolves and the bomb fodder recruited by terrorist masterminds.14 The killers tend to be loners and losers, many with untreated mental illness, who are consumed with resentment and fantasize about revenge and recognition. Some fused their bitterness with Islamist ideology, others with a nebulous cause such as “starting a race war” or “a revolution against the federal government, taxes, and anti-gun laws.” Killing a lot of people offered them the chance to be a somebody, even if only in the anticipation, and going out in a blaze of glory meant that they didn’t have to deal with the irksome aftermath of being a mass murderer.

The historian Yuval Harari notes that terrorism is the opposite of military action, which tries to damage the enemy’s ability to retaliate and prevail.16

From their position of weakness, Harari notes, what terrorists seek to accomplish is not damage but theater.

Harari points out that in the Middle Ages, every sector of society retained a private militia—aristocrats, guilds, towns, even churches and monasteries—and they secured their interests by force: “If in 1150 a few Muslim extremists had murdered a handful of civilians in Jerusalem, demanding that the Crusaders leave the Holy Land, the reaction would have been ridicule rather than terror. If you wanted to be taken seriously, you should have at least gained control of a fortified castle or two.”

The sociologist Eric Madfis has recommended a policy for rampage shootings of “Don’t Name Them, Don’t Show Them, but Report Everything Else,” based on a policy for juvenile shooters already in effect in Canada and on other strategies of calculated media self-restraint.


humanity has tried to steer a course between the violence of anarchy and the violence of tyranny.

Early governments pacified the people they ruled, reducing internecine violence, but imposed a reign of terror that included slavery, harems, human sacrifice, summary executions, and the torture and mutilation of dissidents and deviants.

Chaos is deadlier than tyranny. More of these multicides result from the breakdown of authority than from its exercise.

One can think of democracy as a form of government that threads the needle, exerting just enough force to prevent people from preying on each other without preying on the people itself.

Democracy is a major contributor to human flourishing. But that is not the only reason to value it: democracies also have higher rates of economic growth, fewer wars and genocides, healthier and better-educated citizens, and virtually no famines.4 If the world has become more democratic over time, that is progress.

The political scientist Samuel Huntington organized the history of democratization into three waves.5 The first swelled in the 19th century, when that great Enlightenment experiment, American constitutional democracy with its checks on government power, seemed to be working.

With the defeat of fascism in World War II, a second wave gathered force as colonies gained independence from their European overlords, pushing the number of recognized democracies up to thirty-six by 1962.

The West German chancellor Willy Brandt lamented that “Western Europe has only 20 or 30 more years of democracy left in it; after that it will slide, engineless and rudderless, under the surrounding sea of dictatorship.”

Military and fascist governments fell in southern Europe (Greece and Portugal in 1974, Spain in 1975), Latin America (including Argentina in 1983, Brazil in 1985, and Chile in 1990), and Asia (including Taiwan and the Philippines around 1986, South Korea around 1987, and Indonesia in 1998). The Berlin Wall was torn down in 1989.

In 1989 the political scientist Francis Fukuyama published a famous essay in which he proposed that liberal democracy represented “the end of history,” not because nothing would ever happen again but because the world was coming to a consensus over the humanly best form of governance and no longer had to fight over it.8

Alternatives to democracy arose, such as theocracy in the Muslim world and authoritarian capitalism in China. Democracies themselves appeared to be backsliding into authoritarianism, with populist victories in Poland and Hungary and power grabs by Recep Erdogan in Turkey and Vladimir Putin in Russia (the return of the sultan and the czar).

After swelling in the 1990s, this third wave spilled into the 21st century in a rainbow of “color revolutions” including Croatia (2000), Serbia (2000), Georgia (2003), Ukraine (2004), and Kyrgyzstan (2005), bringing the total at the start of the Obama presidency in 2009 to 87.14

As of 2015, the most recent year in the dataset, the total stood at 103.

It is true that stable, top-shelf democracy is likelier to be found in countries that are richer and more highly educated.17 But governments that are more democratic than not are a motley collection: they are entrenched in most of Latin America, in floridly multiethnic India, in Muslim Malaysia, Indonesia, Niger, and Kosovo, in fourteen countries in sub-Saharan Africa (including Namibia, Senegal, and Benin), and in poor countries elsewhere such as Nepal, Timor-Leste, and most of the Caribbean.18

Political scientists are repeatedly astonished by the shallowness and incoherence of people’s political beliefs, and by the tenuous connection of their preferences to their votes and to the behavior of their representatives.21 Most voters are ignorant not just of current policy options but of basic facts, such as what the major branches of government are, who the United States fought in World War II, and which countries have used nuclear weapons. Their opinions flip depending on how a question is worded: they say that the government spends too much on “welfare” but too little on “assistance to the poor,” and that it should “use military force” but not “go to war.” When they do formulate a preference, they commonly vote for a candidate with the opposite one. But it hardly matters, because once in office politicians vote the positions of their party regardless of the opinions of their constituents.

Many political scientists have concluded that most people correctly recognize that their votes are astronomically unlikely to affect the outcome of an election, and so they prioritize work, family, and leisure over educating themselves about politics and calibrating their votes. They use the franchise as a form of self-expression: they vote for candidates who they think are like them and stand for their kind of people.

Also, autocrats can learn to use elections to their advantage. The latest fashion in dictatorship has been called the competitive, electoral, kleptocratic, statist, or patronal authoritarian regime.22 (Putin’s Russia is the prototype.) The incumbents use the formidable resources of the state to harass the opposition, set up fake opposition parties, use state-controlled media to spread congenial narratives, manipulate electoral rules, tilt voter registration, and jigger the elections themselves. (Patronal authoritarians, for all that, are not invulnerable—the color revolutions sent several of them packing.)

In his 1945 book The Open Society and Its Enemies, the philosopher Karl Popper argued that democracy should be understood not as the answer to the question “Who should rule?” (namely, “The People”), but as a solution to the problem of how to dismiss bad leadership without bloodshed.

Steven Levitsky and Lucan Way point out, “State failure brings violence and instability; it almost never brings democratization.”27

The freedom to complain rests on an assurance that the government won’t punish or silence the complainer. The front line in democratization, then, is constraining the government from abusing its monopoly on force to brutalize its uppity citizens.

Has the rise in democracy brought a rise in human rights, or are dictators just using elections and other democratic trappings to cover their abuses with a smiley-face?

The abolition of capital punishment has gone global (figure 14-3), and today the death penalty is on death row.

We are seeing a moral principle—Life is sacred, so killing is onerous—become distributed across a wide range of actors and institutions that have to cooperate to make the death penalty possible. As these actors and institutions implement the principle more consistently and thoroughly, they inexorably push the country away from the impulse to avenge a life with a life.


First Lady Michelle Obama in a speech at the Democratic National Convention in 2016: “I wake up every morning in a house that was built by slaves, and I watch my daughters, two beautiful, intelligent black young women, playing with their dogs on the White House lawn.”

A string of highly publicized killings by American police officers of unarmed African American suspects, some of them caught on smartphone videos, has led to a sense that the country is suffering an epidemic of racist attacks by police on black men. Media coverage of athletes who have assaulted their wives or girlfriends, and of episodes of rape on college campuses, has suggested to many that we are undergoing a surge of violence against women.

The data suggest that the number of police shootings has decreased, not increased, in recent decades (even as the ones that do occur are captured on video), and three independent analyses have found that a black suspect is no more likely than a white suspect to be killed by the police.6 (American police shoot too many people, but it’s not primarily a racial issue.)

The Pew Research Center has probed Americans’ opinions on race, gender, and sexual orientation over the past quarter century, and has reported that these attitudes have undergone a “fundamental shift” toward tolerance and respect of rights, with formerly widespread prejudices sinking into oblivion.

Other surveys show the same shifts.8 Not only has the American population become more liberal, but each generational cohort is more liberal than the one born before it.

Millennials (those born after 1980), who are even less prejudiced than the national average, tell us which way the country is going.10

These trends could reflect a decline in prejudice or simply a decline in the social acceptability of prejudice, with fewer people willing to confess their disreputable attitudes to a pollster.

And contrary to the fear that the rise of Trump reflects (or emboldens) prejudice, the curves continue their decline through his period of notoriety in 2015–2016 and inauguration in early 2017.

Stephens-Davidowitz has pointed out to me that these curves probably underestimate the decline in prejudice because of a shift in who’s Googling.

Stephens-Davidowitz confirmed that bigoted searches tended to come from regions with older and less-educated populations. Compared with the country as a whole, retirement communities are seven times as likely to search for “nigger jokes” and thirty times as likely to search for “fag jokes.”

These threads confirmed that racists may be a dwindling breed: someone who searches for “nigger” is likely to search for other topics that appeal to senior citizens, such as “social security” and “Frank Sinatra.”

Private prejudice is declining with time and declining with youth, which means that we can expect it to decline still further as aging bigots cede the stage to less prejudiced cohorts.

Until they do, these older and less-educated people (mainly white men) may not respect the benign taboos on racism, sexism, and homophobia that have become second nature to the mainstream, and may even dismiss them as “political correctness.”

Trump’s success, like that of right-wing populists in other Western countries, is better understood as the mobilization of an aggrieved and shrinking demographic in a polarized political landscape than as the sudden reversal of a century-long movement toward equal rights.

Hate crimes against Asian, Jewish, and white targets have declined as well. And despite claims that Islamophobia has become rampant in America, hate crimes targeting Muslims have shown little change other than a one-time rise following 9/11 and upticks following other Islamist terror attacks, such as the ones in Paris and San Bernardino in 2015.20

Women’s status, too, is ascendant.

Violence against women is best measured by victimization surveys, because they circumvent the problem of underreporting to the police; these instruments show that rates of rape and violence against wives and girlfriends have been sinking for decades and are now at a quarter or less of their peaks in the past

No form of progress is inevitable, but the historical erosion of racism, sexism, and homophobia is more than a change in fashion.

Also, as people are forced to justify the way they treat other people, rather than dominating them out of instinctive, religious, or historical inertia, any justification for prejudicial treatment will crumble under scrutiny.

In his book Freedom Rising, the political scientist Christian Welzel (building on a collaboration with Ron Inglehart, Pippa Norris, and others) has proposed that the process of modernization has stimulated the rise of “emancipative values.”36 As societies shift from agrarian to industrial to informational, their citizens become less anxious about fending off enemies and other existential threats and more eager to express their ideals and to pursue opportunities in life. This shifts their values toward greater freedom for themselves and others. The transition is consistent with the psychologist Abraham Maslow’s theory of a hierarchy of needs from survival and safety to belonging, esteem, and self-actualization (and with Brecht’s “Grub first, then ethics”). People begin to prioritize freedom over security, diversity over uniformity, autonomy over authority, creativity over discipline, and individuality over conformity. Emancipative values may also be called liberal values, in the classical sense related to “liberty” and “liberation” (rather than the sense of political leftism).

The graph displays a historical trend that is seldom appreciated in the hurly-burly of political debate: for all the talk about right-wing backlashes and angry white men, the values of Western countries have been getting steadily more liberal (which, as we will see, is one of the reasons those men are so angry).

A critical discovery displayed in the graph is that the liberalization does not reflect a growing bulge of liberal young people who will backslide into conservatism as they get older.

The liberalization trends shown in figure 15-6 come from the Prius-driving, chai-sipping, kale-eating populations of post-industrial Western countries.

What is surprising, though, is that in every part of the world, people have become more liberal. A lot more liberal:

We’ve already seen that children the world over have become better off: they are less likely to enter the world motherless, die before their fifth birthday, or grow up stunted for lack of food.

Starting with influential treatises by John Locke in 1693 and Jean-Jacques Rousseau in 1762, childhood was reconceptualized.50 A carefree youth was now considered a human birthright. Play was an essential form of learning, and the early years of life shaped the adult and determined the future of society.


Homo sapiens, “knowing man,” is the species that uses information to resist the rot of entropy and the burdens of evolution.

In social science, correlation is not causation. Do better-educated countries get richer, or can richer countries afford more education? One way to cut the knot is to take advantage of the fact that a cause must precede its effect.

Better education today makes a country more democratic and peaceful tomorrow.

Better-educated girls grow up to have fewer babies, and so are less likely to beget youth bulges with their surfeit of troublemaking young men.9 And better-educated countries are richer, and as we saw in chapters 11 and 14, richer countries tend to be more peaceful and democratic.

So much changes when you get an education! You unlearn dangerous superstitions, such as that leaders rule by divine right, or that people who don’t look like you are less than human.

Studies of the effects of education confirm that educated people really are more enlightened. They are less racist, sexist, xenophobic, homophobic, and authoritarian.10 They place a higher value on imagination, independence, and free speech.11 They are more likely to vote, volunteer, express political views, and belong to civic associations such as unions, political parties, and religious and community organizations.12 They are also likelier to trust their fellow citizens—a prime ingredient of the precious elixir called social capital which gives people the confidence to contract, invest, and obey the law without fearing that they are chumps who will be shafted by everyone else.13

Intelligence Quotient (IQ) scores have been rising for more than a century, in every part of the world, at a rate of about three IQ points (a fifth of a standard deviation) per decade.

Also, it beggars belief to think that an average person of 1910, if he or she had entered a time machine and materialized today, would be borderline retarded by our standards, while if Joe and Jane Average made the reverse journey, they would outsmart 98 percent of the befrocked and bewhiskered Edwardians who greeted them as they emerged.

It’s no paradox that a heritable trait can be boosted by changes in the environment. That’s what happened with height, a trait that also is highly heritable and has increased over the decades, and for some of the same reasons: better nutrition and less disease.

Does the Flynn effect matter in the real world? Almost certainly. A high IQ is not just a number that you can brag about in a bar or that gets you into Mensa; it is a tailwind in life.38 People with high scores on intelligence tests get better jobs, perform better in their jobs, enjoy better health and longer lives, are less likely to get into trouble with the law, and have a greater number of noteworthy accomplishments like starting companies, earning patents, and creating respected works of art—all holding socioeconomic status constant.

Still, there have been some signs of a smarter populace, such as the fact that the world’s top-ranked chess and bridge players have been getting younger.


There is the worry that all that extra healthy life span and income may not have increased human flourishing after all if they just consign people to a rat race of frenzied careerism, hollow consumption, mindless entertainment, and soul-deadening anomie.

Cultural criticism can be a thinly disguised snobbery that shades into misanthropy.

In practice, “consumerism” often means “consumption by the other guy,” since the elites who condemn it tend themselves to be conspicuous consumers of exorbitant luxuries like hardcover books, good food and wine, live artistic performances, overseas travel, and Ivy-class education for their children.

In Development as Freedom, Amartya Sen sidesteps this trap by proposing that the ultimate goal of development is to enable people to make choices: strawberries and cream for those who want them. The philosopher Martha Nussbaum has taken the idea a step further and laid out a set of “fundamental capabilities” that all people should be given the opportunity to exercise.3 One can think of them as the justifiable sources of satisfaction and fulfillment that human nature makes available to us. Her list begins with capabilities that, as we have seen, the modern world increasingly allows people to realize: longevity, health, safety, literacy, knowledge, free expression, and political participation. It goes on to include aesthetic experience, recreation and play, enjoyment of nature, emotional attachments, social affiliations, and opportunities to reflect on and engage in one’s own conception of the good life.

Life is getting better even beyond the standard economists’ metrics like longevity and wealth.

As Morgan Housel notes, “We constantly worry about the looming ‘retirement funding crisis’ in America without realizing that the entire concept of retirement is unique to the last five decades.

Think of it this way: The average American now retires at age 62. One hundred years ago, the average American died at age 51.”

Today an average American worker with five years on the job receives 22 days of paid time off a year (compared with 16 days in 1970), and that is miserly by the standards of Western Europe.

In 1919, an average American wage earner had to work 1,800 hours to pay for a refrigerator; in 2014, he or she had to work fewer than 24 hours (and the new fridge was frost-free and came with an icemaker).

Hans Rosling suggests, the washing machine deserves to be called the greatest invention of the Industrial Revolution.

Time is not the only life-enriching resource granted to us by technology. Another is light. Light is so empowering that it serves as the metaphor of choice for a superior intellectual and spiritual state: enlightenment.

The economist William Nordhaus has cited the plunging price (and hence the soaring availability) of this universally treasured resource as an emblem of progress.

Adam Smith pointed out, “The real price of every thing . . . is the toil and trouble of acquiring it.”

The technology expert Kevin Kelly has proposed that “over time, if a technology persists long enough, its costs begin to approach (but never reach) zero.”

What are people doing with that extra time and money?

With the rise of two-career couples, overscheduled kids, and digital devices, there is a widespread belief (and recurring media panic) that families are caught in a time crunch that’s killing the family dinner.

But the new tugs and distractions have to be weighed against the 24 extra hours that modernity has granted to breadwinners every week and the 42 extra hours it has granted to homemakers.

In 2015, men reported 42 hours of leisure per week, around 10 more than their counterparts did fifty years earlier, and women reported 36 hours, more than 6 hours more than women did fifty years earlier.

And at the end of the day, the family dinner is alive and well. Several studies and polls agree that the number of dinners families have together changed little from 1960 through 2014, despite the iPhones, PlayStations, and Facebook accounts.

Indeed, over the course of the 20th century, typical American parents spent more time, not less, with their children.

Today, almost half of the world’s population has Internet access, and three-quarters have access to a mobile phone.

The late 19th-century American diet consisted mainly of pork and starch.29 Before refrigeration and motorized transport, most fruits and vegetables would have spoiled before they reached a consumer, so farmers grew nonperishables like turnips, beans, and potatoes.

There can be no question of which was the greatest era for culture; the answer has to be today, until it is superseded by tomorrow.


According to the theory of the hedonic treadmill, people adapt to changes in their fortunes, like eyes adapting to light or darkness, and quickly return to a genetically determined baseline.4 According to the theory of social comparison (or reference groups, status anxiety, or relative deprivation, which we examined in chapter 9), people’s happiness is determined by how well they think they are doing relative to their compatriots, so as the country as a whole gets richer, no one feels happier—indeed, if their country becomes more unequal, then even if they get richer they may feel worse.

Some intellectuals are incredulous, even offended, that happiness has become a subject for economists rather than just poets, essayists, and philosophers. But the approaches are not opposed. Social scientists often begin their studies of happiness with ideas that were first conceived by artists and philosophers, and they can pose questions about historical and global patterns that cannot be answered by solitary reflection, no matter how insightful.

Freedom or autonomy: the availability of options to lead a good life (positive freedom) and the absence of coercion that prevents a person from choosing among them (negative freedom).

Happiness has two sides, an experiential or emotional side, and an evaluative or cognitive side.13 The experiential component consists of a balance between positive emotions like elation, joy, pride, and delight, and negative emotions like worry, anger, and sadness.

The ultimate measure of happiness would consist of a lifetime integral or weighted sum of how happy people are feeling and how long they feel that way.
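Schematically (my notation, not the book’s), that measure could be written as:

```latex
% W = lifetime well-being; h(t) = momentary balance of positive over
% negative affect at time t; T = length of life.
W \;=\; \int_{0}^{T} h(t)\, dt
% or, with discrete mood reports h_i weighted by their durations w_i:
W \;=\; \sum_{i} w_i\, h_i
```

In practice researchers approximate the integral with the discrete sum, sampling moods at random moments and weighting by how long each state lasts.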

The evaluative or cognitive side consists of people’s evaluations of how they are living their lives. People can be asked to reflect on how satisfied they feel “these days” or “as a whole” or “taking all things together,” or to render the almost philosophical judgment of where they stand on a ten-rung ladder ranging from “the worst possible life for you” to “the best possible life for you.”

Social scientists have become resigned to the fact that happiness, satisfaction, and best-versus-worst-possible life are blurred in people’s minds and that it’s often easiest just to average them together.14

And this brings us to the final dimension of a good life, meaning and purpose. This is the quality that, together with happiness, goes into Aristotle’s ideal of eudaemonia or “good spirit.”16

Roy Baumeister and his colleagues probed for what makes people feel their lives are meaningful. The respondents separately rated how happy and how meaningful their lives were, and they answered a long list of questions about their thoughts, activities, and circumstances. The results suggest that many of the things that make people happy also make their lives meaningful, such as being connected to others, feeling productive, and not being alone or bored.

People who lead meaningful lives may enjoy none of these boons. Happy people live in the present; those with meaningful lives have a narrative about their past and a plan for the future. Those with happy but meaningless lives are takers and beneficiaries; those with meaningful but unhappy lives are givers and benefactors.

Meaning is about expressing rather than satisfying the self: it is enhanced by activities that define the person and build a reputation.

The most immediate is the absence of a cross-national Easterlin paradox: the cloud of arrows is stretched along a diagonal, which indicates that the richer the country, the happier its people.

Most strikingly, the slopes of the arrows are similar to each other, and identical to the slope for the swarm of arrows as a whole (the dashed gray line lurking behind the swarm). That means that a raise for an individual relative to that person’s compatriots adds as much to his or her happiness as the same increase for their country across the board.

Happiness, of course, depends on much more than income.

Bowling Alone.

Though people have reallocated their time because families are smaller, more people are single, and more women work, Americans today spend as much time with relatives, have the same median number of friends and see them about as often, report as much emotional support, and remain as satisfied with the number and quality of their friendships as their counterparts in the decade of Gerald Ford and Happy Days. Users of the Internet and social media have more contact with friends (though a bit less face-to-face contact), and they feel that the electronic ties have enriched their relationships.

Social media users care too much, not too little, about other people, and they empathize with them over their troubles rather than envying them their successes.

The standard formula for sowing panic: Here’s an anecdote, therefore it’s a trend, therefore it’s a crisis.

But just because social life looks different today from the way it looked in the 1950s, it does not mean that humans, that quintessentially social species, have become any less social.

One of psychology’s best-kept secrets is that cognitive behavior therapy is demonstrably effective (often more effective than drugs) in treating many forms of distress, including depression, anxiety, panic attacks, PTSD, insomnia, and the symptoms of schizophrenia.

Everything is amazing. Are we really so unhappy? Mostly we are not. Developed countries are actually pretty happy, a majority of all countries have gotten happier, and as long as countries get richer they should get happier still. The dire warnings about plagues of loneliness, suicide, depression, and anxiety don’t survive fact-checking.

A modicum of anxiety may be the price we pay for the uncertainty of freedom. It is another word for the vigilance, deliberation, and heart-searching that freedom demands. It’s not entirely surprising that as women gained in autonomy relative to men they also slipped in happiness. In earlier times, women’s list of responsibilities rarely extended beyond the domestic sphere. Today young women increasingly say that their life goals include career, family, marriage, money, recreation, friendship, experience, correcting social inequities, being a leader in their community, and making a contribution to society.83 That’s a lot of things to worry about, and a lot of ways to be frustrated: Woman plans and God laughs.

As people become better educated and increasingly skeptical of received authority, they may become unsatisfied with traditional religious verities and feel unmoored in a morally indifferent cosmos.


In The Progress Paradox, the journalist Gregg Easterbrook suggests that a major reason that Americans are not happier, despite their rising objective fortunes, is “collapse anxiety”: the fear that civilization may implode and there’s nothing anyone can do about it.

Remember the Y2K bug?12 In the 1990s, as the turn of the millennium drew near, computer scientists began to warn the world of an impending catastrophe.

When 12:00 A.M. on January 1, 2000, arrived and the digits rolled over, a program would think it was 1900 and would crash or go haywire (presumably because it would divide some number by the difference between what it thought was the current year and the year 1900, namely zero, though why a program would do this was never made clear).
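The failure mode can be sketched in a few lines of Python. This is an illustrative reconstruction, not actual legacy code (real systems were mostly COBOL, and `full_year` is a name I made up); the point is only how two stored digits break date arithmetic.

```python
# Hypothetical sketch of the two-digit-year bug: legacy programs stored
# the year as two digits, implicitly prefixed with "19".
def full_year(two_digit_year: int) -> int:
    return 1900 + two_digit_year

# On January 1, 2000, the stored digits roll over to 00, so the
# program believes the current year is 1900 again.
elapsed = full_year(0) - 1900      # years "since 1900": zero
try:
    rate = 100 / elapsed           # any per-elapsed-year calculation...
except ZeroDivisionError:
    print("crash")                 # ...fails in the way the text describes
```

A person born in 1960 (“60”) likewise comes out as minus 60 years old in 2000, which is why ages, interest accruals, and expiry checks were all suspect.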

A hundred billion dollars was spent worldwide on reprogramming software for Y2K Readiness, a challenge that was likened to replacing every bolt in every bridge in the world.

A typical mammalian species lasts around a million years, and it’s hard to insist that Homo sapiens will be an exception.

Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world?

The second fallacy is to think of intelligence as a boundless continuum of potency, a miraculous elixir with the power to solve any problem, attain any goal.

Knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm faster and faster.

The real world gets in the way of many digital apocalypses. When HAL gets uppity, Dave disables it with a screwdriver, leaving it pathetically singing “A Bicycle Built for Two” to itself.

If we gave an AI the goal of maintaining the water level behind a dam, it might flood a town, not caring about the people who drowned. If we gave it the goal of making paper clips, it might turn all the matter in the reachable universe into paper clips, including our possessions and bodies.

Artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety (chapter 12). As the AI expert Stuart Russell puts it, “No one in civil engineering talks about ‘building bridges that don’t fall down.’ They just call it ‘building bridges.’”

In 2002 Martin Rees publicly offered the bet that “by 2020, bioterror or bioerror will lead to one million casualties in a single event.”35

The question I’ll consider is whether the grim facts should lead any reasonable person to conclude that humanity is screwed.

The key is not to fall for the Availability bias and assume that if we can imagine something terrible, it is bound to happen. The real danger depends on the numbers: the proportion of people who want to cause mayhem or mass murder, the proportion of that genocidal sliver with the competence to concoct an effective cyber or biological weapon, the sliver of that sliver whose schemes will actually succeed, and the sliver of the sliver of the sliver that accomplishes a civilization-ending cataclysm rather than a nuisance, a blow, or even a disaster, after which life goes on.
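The arithmetic behind this argument can be made concrete. The fractions below are hypothetical, chosen by me purely for illustration; the logic is that each successive “sliver” multiplies the overall probability down.

```python
import math

# Hypothetical fractions (illustrative only, not the book's figures):
want_mayhem = 1e-4   # share of people who want to cause mass murder
competent   = 1e-2   # of those, share able to build an effective weapon
succeeds    = 1e-1   # of those, share whose scheme actually works
cataclysmic = 1e-3   # of those, share causing civilization-ending harm

# The slivers multiply: the joint probability is the product.
p = want_mayhem * competent * succeeds * cataclysmic
assert math.isclose(p, 1e-10)   # roughly one in ten billion
```

Under these made-up numbers, a pool of millions of malcontents still yields a vanishingly small chance of the worst-case outcome, which is the structure of the argument in the text.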

Such attacks could take place in every city in the world many times a day, but in fact take place somewhere or other every few years (leading the security expert Bruce Schneier to ask, “Where are all the terrorist attacks?”).

Far from being criminal masterminds, most terrorists are bumbling schlemiels.

Serious threats to the integrity of a country’s infrastructure are likely to require the resources of a state.50 Software hacking is not enough; the hacker needs detailed knowledge about the physical construction of the systems he hopes to sabotage.

State-based cyber-sabotage escalates the malevolence from terrorism to a kind of warfare, where the constraints of international relations, such as norms, treaties, sanctions, retaliation, and military deterrence, inhibit aggressive attacks, as they do in conventional “kinetic” warfare.

But disaster sociology (yes, there is such a field) has shown that people are highly resilient in the face of catastrophe.53 Far from looting, panicking, or sinking into paralysis, they spontaneously cooperate to restore order and improvise networks for distributing goods and services.

It may be more than just luck that the world so far has seen just one successful bioterror attack (the 1984 tainting of salad with salmonella in an Oregon town by the Rajneeshee religious cult, which killed no one) and one spree killing (the 2001 anthrax mailings, which killed five).60


Prognosticators are biased toward scaring people.

As early as 1945, the theologian Reinhold Niebuhr observed, “Ultimate perils, however great, have a less lively influence upon the human imagination than immediate resentments and frictions, however small by comparison.”

As we saw with climate change, people may be likelier to acknowledge a problem when they have reason to think it is solvable than when they are terrified into numbness and helplessness.

The most obvious is to whittle down the size of the arsenal. The process is well under way. Few people are aware of how dramatically the world has been dismantling nuclear weapons. Figure 19-1 shows that the United States has reduced its inventory by 85 percent from its 1967 peak, and now has fewer nuclear warheads than at any time since 1956.113 Russia, for its part, has reduced its arsenal by 89 percent from its Soviet-era peak. (Probably even fewer people realize that about 10 percent of electricity in the United States comes from dismantled nuclear warheads, mostly Soviet.)114 In 2010 both countries signed the New START treaty, committing themselves to further reductions.


The poor may not always be with us. The world is about a hundred times wealthier today than it was two centuries ago, and the prosperity is becoming more evenly distributed across the world’s countries and people. The proportion of humanity living in extreme poverty has fallen from almost 90 percent to less than 10 percent, and within the lifetimes of most of the readers of this book it could approach zero.

The world is giving peace a chance.

The proportion of people killed annually in wars is less than a quarter of what it was in the 1980s, a seventh of what it was in the early 1970s, an eighteenth of what it was in the early 1950s, and half a percent of what it was during World War II.

People are getting not just healthier, richer, and safer but freer. Two centuries ago a handful of countries, embracing one percent of the world’s people, were democratic; today, two-thirds of the world’s countries, embracing two-thirds of its people, are.

As people are getting healthier, richer, safer, and freer, they are also becoming more literate, knowledgeable, and smarter. Early in the 19th century, 12 percent of the world could read and write; today 83 percent can.

As societies have become healthier, wealthier, freer, happier, and better educated, they have set their sights on the most pressing global challenges. They have emitted fewer pollutants, cleared fewer forests, spilled less oil, set aside more preserves, extinguished fewer species, saved the ozone layer, and peaked in their consumption of oil, farmland, timber, paper, cars, coal, and perhaps even carbon.

For all their differences, the world's nations came to a historic agreement on climate change, as they did in previous years on nuclear testing, proliferation, security, and disarmament. Nuclear weapons, since the extraordinary circumstances of the closing days of World War II, have not been used in the seventy-two years they have existed. Nuclear terrorism, in defiance of forty years of expert predictions, has never happened. The world's nuclear stockpiles have been reduced by 85 percent, with more reductions to come; testing has ceased (except by the tiny rogue regime in Pyongyang), and proliferation has frozen. The world's two most pressing problems, then, though not yet solved, are solvable: practicable long-term agendas have been laid out for eliminating nuclear weapons and for mitigating climate change.

For all the bleeding headlines, for all the crises, collapses, scandals, plagues, epidemics, and existential threats, these are accomplishments to savor. The Enlightenment is working: for two and a half centuries, people have used knowledge to enhance human flourishing. Scientists have exposed the workings of matter, life, and mind. Inventors have harnessed the laws of nature to defy entropy, and entrepreneurs have made their innovations affordable. Lawmakers have made people better off by discouraging acts that are individually beneficial but collectively harmful. Diplomats have done the same with nations. Scholars have perpetuated the treasury of knowledge and augmented the power of reason. Artists have expanded the circle of sympathy. Activists have pressured the powerful to overturn repressive measures, and their fellow citizens to change repressive norms. All these efforts have been channeled into institutions that have allowed us to circumvent the flaws of human nature and empower our better angels.

At the same time . . .

Seven hundred million people in the world today live in extreme poverty. In the regions where they are concentrated, life expectancy is less than 60, and almost a quarter of the people are undernourished. Almost a million children die of pneumonia every year, half a million from diarrhea or malaria, and hundreds of thousands from measles and AIDS. A dozen wars are raging in the world, including one in which more than 250,000 people have died, and in 2015 at least ten thousand people were slaughtered in genocides. More than two billion people, almost a third of humanity, are oppressed in autocratic states. Almost a fifth of the world's people lack a basic education; almost a sixth are illiterate.

Progress is not utopia, and there is room, indeed an imperative, for us to strive to continue that progress.

How reasonable is the hope for continuing progress?

The Scientific Revolution and the Enlightenment set in motion the process of using knowledge to improve the human condition.

Solutions create new problems, which take time to solve in their turn. But when we stand back from these blips and setbacks, we see that the indicators of human progress are cumulative: none is cyclical, with gains reliably canceled by losses.3

The technological advances that have propelled this progress should only gather speed. Stein’s Law continues to obey Davies’s Corollary (Things that can’t go on forever can go on much longer than you think), and genomics, synthetic biology, neuroscience, artificial intelligence, materials science, data science, and evidence-based policy analysis are flourishing.

So too with moral progress. History tells us that barbaric customs can not only be reduced but essentially abolished, lingering at most in a few benighted backwaters.

If economies stop growing, things could get ugly.

As the entrepreneur Peter Thiel lamented, “We wanted flying cars; instead we got 140 characters.”

Whatever its causes, economic stagnation is at the root of many other problems and poses a significant challenge for 21st-century policymakers.

The second decade of the 21st century has seen the rise of a counter-Enlightenment movement called populism (more accurately, authoritarian populism).24 Populism calls for the direct sovereignty of a country's "people" (usually an ethnic group, sometimes a class), embodied in a strong leader who directly channels their authentic virtue and experience.

By focusing on the tribe rather than the individual, it has no place for the protection of minority rights or the promotion of human welfare worldwide.

Populism comes in left-wing and right-wing varieties, which share a folk theory of economics as zero-sum competition: between economic classes in the case of the left, between nations or ethnic groups in the case of the right.

Populism looks backward to an age in which the nation was ethnically homogeneous, orthodox cultural and religious values prevailed, and economies were powered by farming and manufacturing, which produced tangible goods for local consumption and for export.

Nothing captures the tribalistic and backward-looking spirit of populism better than Trump’s campaign slogan: Make America Great Again.

Trump’s authoritarian instincts are subjecting the institutions of American democracy to a stress test, but so far they have pushed back on a number of fronts. Cabinet secretaries have publicly repudiated various quips, tweets, and stink bombs; courts have struck down unconstitutional measures; senators and congressmen have defected from his party to vote down destructive legislation; Justice Department and Congressional committees are investigating the administration’s ties to Russia; an FBI chief has publicly called out Trump’s attempt to intimidate him (raising talk about impeachment for obstruction of justice); and his own staff, appalled at what they see, regularly leak compromising facts to the press—all in the first six months of the administration.

Globalization in particular is a tide that is impossible for any ruler to order back.

The new president, Emmanuel Macron, proclaimed that Europe was “waiting for us to defend the spirit of the Enlightenment, threatened in so many places.”

In the American election, voters in the two lowest income brackets voted for Clinton 52–42, as did those who identified “the economy” as the most important issue. A majority of voters in the four highest income brackets voted for Trump, and Trump voters singled out “immigration” and “terrorism,” not “the economy,” as the most important issues.34

One post-election analysis was headlined “Education, Not Income, Predicted Who Would Vote for Trump.”35 Why should education have mattered so much? Two uninteresting explanations are that the highly educated happen to affiliate with a liberal political tribe, and that education may be a better long-term predictor of economic security than current income. A more interesting explanation is that education exposes people in young adulthood to other races and cultures in a way that makes it harder to demonize them. Most interesting of all is the likelihood that education, when it does what it is supposed to do, instills a respect for vetted fact and reasoned argument, and so inoculates people against conspiracy theories, reasoning by anecdote, and emotional demagoguery.

Silver found that the regional map of Trump support did not overlap particularly well with the maps of unemployment, religion, gun ownership, or the proportion of immigrants. But it did align with the map of Google searches for the word nigger, which Seth Stephens-Davidowitz has shown is a reliable indicator of racism (chapter 15).36 This doesn’t mean that most Trump supporters are racists. But overt racism shades into resentment and distrust, and the overlap suggests that the regions of the country that gave Trump his Electoral College victory are those with the most resistance to the decades-long process of integration and the promotion of minority interests (particularly racial preferences, which they see as reverse discrimination against them).

Populist voters are older, more religious, more rural, less educated, and more likely to be male and members of the ethnic majority. They embrace authoritarian values, place themselves on the right of the political spectrum, and dislike immigration and global and national governance.39 Brexit voters, too, were older, more rural, and less educated than those who voted to remain: 66 percent of high school graduates voted to leave, but only 29 percent of degree holders did.40

Populism is an old man’s movement.

This raises the possibility that as the Silent Generation and older Baby Boomers shuffle off this mortal coil, they will take authoritarian populism with them.

Since populist movements have achieved an influence beyond their numbers, fixing electoral irregularities such as gerrymandering and forms of disproportionate representation which overweight rural areas (such as the US Electoral College) would help. So would journalistic coverage that tied candidates’ reputations to their record of accuracy and coherence rather than to trivial gaffes and scandals.

I believe that the media and intelligentsia were complicit in populists’ depiction of modern Western nations as so unjust and dysfunctional that nothing short of a radical lurch could improve them.

“I’d rather see the empire burn to the ground under Trump, opening up at least the possibility of radical change, than cruise on autopilot under Clinton,” flamed a left-wing advocate of “the politics of arson.”50

People have a tremendous amount to lose when charismatic authoritarians responding to a “crisis” trample over democratic norms and institutions and command their countries by the force of their personalities.

Such is the nature of progress. Pulling us forward are ingenuity, sympathy, and benign institutions. Pushing us back are the darker sides of human nature and the Second Law of Thermodynamics. Kevin Kelly explains how this dialectic can nonetheless result in forward motion: Ever since the Enlightenment and the invention of science, we’ve managed to create a tiny bit more than we’ve destroyed each year. But that few percent positive difference is compounded over decades into what we might call civilization. . . . [Progress] is a self-cloaking action seen only in retrospect. Which is why I tell people that my great optimism of the future is rooted in history.53

Kelly offers “protopia,” the pro- from progress and process. Others have suggested “pessimistic hopefulness,” “opti-realism,” and “radical incrementalism.”54 My favorite comes from Hans Rosling, who, when asked whether he was an optimist, replied, “I am not an optimist. I’m a very serious possibilist.”55

As Karl Marx put it, “The ruling ideas of each age have ever been the ideas of its ruling class.”


“One can’t criticize something with nothing”:

To begin with, no Enlightenment thinker ever claimed that humans were consistently rational.

What they argued was that we ought to be rational, by learning to repress the fallacies and dogmas that so readily seduce us, and that we can be rational, collectively if not individually, by implementing institutions and adhering to norms that constrain our faculties, including free speech, logical analysis, and empirical testing. And if you disagree, then why should we accept your claim that humans are incapable of rationality?

But real evolutionary psychology treats humans differently: not as two-legged antelopes but as the species that outsmarts antelopes. We are a cognitive species that depends on explanations of the world. Since the world is the way it is regardless of what people believe about it, there is a strong selection pressure for an ability to develop explanations that are true.7

The standard explanation of the madness of crowds is ignorance: a mediocre education system has left the populace scientifically illiterate, at the mercy of their cognitive biases, and thus defenseless against airhead celebrities, cable-news gladiators, and other corruptions from popular culture. The standard solution is better schooling and more outreach to the public by scientists on television, social media, and popular Web sites. As an outreaching scientist I’ve always found this theory appealing, but I’ve come to realize it’s wrong, or at best a small part of the problem.

Kahan concludes that we are all actors in a Tragedy of the Belief Commons: what’s rational for every individual to believe (based on esteem) can be irrational for the society as a whole to act upon (based on reality).17

What’s going on is that these people are sharing blue lies. A white lie is told for the benefit of the hearer; a blue lie is told for the benefit of an in-group (originally, fellow police officers).19 While some of the conspiracy theorists may be genuinely misinformed, most express these beliefs for the purpose of performance rather than truth: they are trying to antagonize liberals and display solidarity with their blood brothers.

Another paradox of rationality is that expertise, brainpower, and conscious reasoning do not, by themselves, guarantee that thinkers will approach the truth. On the contrary, they can be weapons for ever-more-ingenious rationalization. As Benjamin Franklin observed, “So convenient a thing is it to be a rational creature, since it enables us to find or make a reason for everything one has a mind to do.”

Engagement with politics is like sports fandom in another way: people seek and consume news to enhance the fan experience, not to make their opinions more accurate.25 That explains another of Kahan’s findings: the better informed a person is about climate change, the more polarized his or her opinion.

So we can’t blame human irrationality on our lizard brains: it was the sophisticated respondents who were most blinded by their politics. As two other magazines summarized the results: “Science Confirms: Politics Wrecks Your Ability to Do Math” and “How Politics Makes Us Stupid.”29

Of the two forms of politicization that are subverting reason today, the political is far more dangerous than the academic, for an obvious reason.

In 21st-century America, the control of Congress by a Republican Party that became synonymous with the extreme right has been pernicious, because it is so convinced of the righteousness of its cause and the evil of its rivals that it has undermined the institutions of democracy to get what it wants. The corruptions include gerrymandering, imposing voting restrictions designed to disenfranchise Democratic voters, encouraging unregulated donations from moneyed interests, blocking Supreme Court nominations until their party controls the presidency, shutting down the government when their maximal demands aren’t met, and unconditionally supporting Donald Trump over their own objections to his flagrantly antidemocratic impulses.71 Whatever differences in policy or philosophy divide the parties, the mechanisms of democratic deliberation should be sacrosanct. Their erosion, disproportionately by the right, has led many people, including a growing share of young Americans, to see democratic government as inherently dysfunctional and to become cynical about democracy itself.72

What can be done to improve standards of reasoning? Persuasion by facts and logic, the most direct strategy, is not always futile.

When people are first confronted with information that contradicts a staked-out position, they become even more committed to it, as we’d expect from the theories of identity-protective cognition, motivated reasoning, and cognitive dissonance reduction. Feeling their identity threatened, belief holders double down and muster more ammunition to fend off the challenge. But since another part of the human mind keeps a person in touch with reality, as the counterevidence piles up the dissonance can mount until it becomes too much to bear and the opinion topples over, a phenomenon called the affective tipping point.80 The tipping point depends on the balance between how badly the opinion holder’s reputation would be damaged by relinquishing the opinion and whether the counterevidence is so blatant and public as to be common knowledge: a naked emperor, an elephant in the room.81

The reasons are familiar to education researchers.84 Any curriculum will be pedagogically ineffective if it consists of a lecturer yammering in front of a blackboard, or a textbook that students highlight with a yellow marker. People understand concepts only when they are forced to think them through, to discuss them with others, and to use them to solve problems.

(My own suggestion that all students should learn about cognitive biases fell deadborn from my lips.)

Effective training in critical thinking and cognitive debiasing may not be enough to cure identity-protective cognition, in which people cling to whatever opinion enhances the glory of their tribe and their status within it.

Experiments have shown that the right rules can avert the Tragedy of the Belief Commons and force people to dissociate their reasoning from their identities.88 One technique was discovered long ago by rabbis: they forced yeshiva students to switch sides in a Talmudic debate and argue the opposite position. Another is to have people try to reach a consensus in a small discussion group; this forces them to defend their opinions to their groupmates, and the truth usually wins.

Most of us are deluded about our degree of understanding of the world, a bias called the Illusion of Explanatory Depth.

Perhaps most important, people are less biased when they have skin in the game and have to live with the consequences of their opinions.

Experiments have shown that when people hear about a new policy, such as welfare reform, they will like it if it is proposed by their own party and hate it if it is proposed by the other—all the while convinced that they are reacting to it on its objective merits.

However long it takes, we must not let the existence of cognitive and emotional biases or the spasms of irrationality in the political arena discourage us from the Enlightenment ideal of relentlessly pursuing reason and truth. If we can identify ways in which humans are irrational, we must know what rationality is. Since there’s nothing special about us, our fellows must have at least some capacity for rationality as well. And it’s in the very nature of rationality that reasoners can always step back, consider their own shortcomings, and reason out ways to work around them.


Gravity is the curvature of space-time, and life depends on a molecule that carries information, directs metabolism, and replicates itself.

But the scorn for scientific consensus has widened into a broadband know-nothingness.

Positivism depends on the reductionist belief that the entire universe, including all human conduct, can be explained with reference to precisely measurable, deterministic physical processes. . . . Positivist assumptions provided the epistemological foundations for Social Darwinism and pop-evolutionary notions of progress, as well as for scientific racism and imperialism. These tendencies coalesced in eugenics, the doctrine that human well-being could be improved and eventually perfected through the selective breeding of the “fit” and the sterilization or elimination of the “unfit.”

An endorsement of scientific thinking must first of all be distinguished from any belief that members of the occupational guild called “science” are particularly wise or noble. The culture of science is based on the opposite belief. Its signature practices, including open debate, peer review, and double-blind methods, are designed to circumvent the sins to which scientists, being human, are vulnerable. As Richard Feynman put it, the first principle of science is “that you must not fool yourself—and you are the easiest person to fool.”

The lifeblood of science is the cycle of conjecture and refutation: proposing a hypothesis and then seeing whether it survives attempts to falsify it.

The fallacy (putting aside the apocryphal history) is a failure to recognize that what science allows is an increasing confidence in a hypothesis as the evidence accumulates, not a claim to infallibility on the first try.

As Wieseltier puts it, “It is not for science to say whether science belongs in morality and politics and art. Those are philosophical matters, and science is not philosophy.”

Today most philosophers (at least in the analytic or Anglo-American tradition) subscribe to naturalism, the position that “reality is exhausted by nature, containing nothing ‘supernatural,’ and that the scientific method should be used to investigate all areas of reality, including the ‘human spirit.’”17 Science, in the modern conception, is of a piece with philosophy and with reason itself.

The world is intelligible.

In making sense of our world, there should be few occasions on which we are forced to concede, “It just is” or “It’s magic” or “Because I said so.”

Many people are willing to credit science with giving us handy drugs and gadgets and even with explaining how physical stuff works. But they draw the line at what truly matters to us as human beings: the deep questions about who we are, where we came from, and how we define the meaning and purpose of our lives. That is the traditional territory of religion, and its defenders tend to be the most excitable critics of scientism. They are apt to endorse the partition plan proposed by the paleontologist and science writer Stephen Jay Gould in his book Rocks of Ages, according to which the proper concerns of science and religion belong to “non-overlapping magisteria.” Science gets the empirical universe; religion gets the questions of morality, meaning, and value.

The moral worldview of any scientifically literate person—one who is not blinkered by fundamentalism—requires a clean break from religious conceptions of meaning and value.

To begin with, the findings of science imply that the belief systems of all the world’s traditional religions and cultures—their theories of the genesis of the world, life, humans, and societies—are factually mistaken. We know, but our ancestors did not, that humans belong to a single species of African primate that developed agriculture, government, and writing late in its history. We know that our species is a tiny twig of a genealogical tree that embraces all living things and that emerged from prebiotic chemicals almost four billion years ago. We know that we live on a planet that revolves around one of a hundred billion stars in our galaxy, which is one of a hundred billion galaxies in a 13.8-billion-year-old universe, possibly one of a vast number of universes. We know that our intuitions about space, time, matter, and causation are incommensurable with the nature of reality on scales that are very large and very small. We know that the laws governing the physical world (including accidents, disease, and other misfortunes) have no goals that pertain to human well-being. There is no such thing as fate, providence, karma, spells, curses, augury, divine retribution, or answered prayers—though the discrepancy between the laws of probability and the workings of cognition may explain why people believe there are. And we know that we did not always know these things, that the beloved convictions of every time and culture may be decisively falsified, doubtless including many we hold today.

What happens to those who are taught that science is just another narrative like religion and myth, that it lurches from revolution to revolution without making progress, and that it is a rationalization of racism, sexism, and genocide?

Ultimately the greatest payoff of instilling an appreciation of science is for everyone to think more scientifically.

Three-quarters of the nonviolent resistance movements succeeded, compared with only a third of the violent ones.50 Gandhi and King were right, but without data, you would never know it.


The goal of maximizing human flourishing—life, health, happiness, freedom, knowledge, love, richness of experience—may be called humanism.

It is humanism that identifies what we should try to achieve with our knowledge. It provides the ought that supplements the is. It distinguishes true progress from mere mastery.

Some Eastern religions, including Confucianism and varieties of Buddhism, always grounded their ethics in human welfare rather than divine dictates.

First, any Moral Philosophy student who stayed awake through week 2 of the syllabus can also rattle off the problems with deontological ethics. If lying is intrinsically wrong, must we answer truthfully when the Gestapo demand to know the whereabouts of Anne Frank?

If a terrorist has hidden a ticking nuclear bomb that would annihilate millions, is it immoral to waterboard him into revealing its location? And given the absence of a thundering voice from the heavens, who gets to pull principles out of the air and pronounce that certain acts are inherently immoral even if they hurt no one?

A viable moral philosophy for a cosmopolitan world cannot be constructed from layers of intricate argumentation or rest on deep metaphysical or religious convictions. It must draw on simple, transparent principles that everyone can understand and agree upon. The ideal of human flourishing—that it’s good for people to lead long, healthy, happy, rich, and stimulating lives—is just such a principle, since it is based on nothing more (and nothing less) than our common humanity.

Our universe can be specified by a few numbers, including the strengths of the forces of nature (gravity, electromagnetism, and the nuclear forces), the number of macroscopic dimensions of space-time (four), and the density of dark energy (the source of the acceleration of the expansion of the universe). In Just Six Numbers, Martin Rees enumerates them on one hand and a finger; the exact tally depends on which version of physical theory one invokes and on whether one counts the constants themselves or ratios between them. If any of these constants were off by a minuscule iota, then matter would fly apart or collapse upon itself, and stars, galaxies, and planets, to say nothing of terrestrial life and Homo sapiens, could never have formed.

If the factual tenets of religion can no longer be taken seriously, and its ethical tenets depend entirely on whether they can be justified by secular morality, what about its claims to wisdom on the great questions of existence? A favorite talking point of faitheists is that only religion can speak to the deepest yearnings of the human heart. Science will never be adequate to address the great existential questions of life, death, love, loneliness, loss, honor, cosmic justice, and metaphysical hope.

To begin with, the alternative to “religion” as a source of meaning is not “science.” No one ever suggested that we look to ichthyology or nephrology for enlightenment on how to live, but rather to the entire fabric of human knowledge, reason, and humanistic values, of which science is a part. It’s true that the fabric contains important strands that originated in religion, such as the language and allegories of the Bible and the writings of sages, scholars, and rabbis. But today it is dominated by secular content, including debates on ethics originating in Greek and Enlightenment philosophy, and renderings of love, loss, and loneliness in the works of Shakespeare, the Romantic poets, the 19th-century novelists, and other great artists and essayists. Judged by universal standards, many of the religious contributions to life’s great questions turn out to be not deep and timeless but shallow and archaic, such as a conception of “justice” that includes punishing blasphemers, or a conception of “love” that adjures a woman to obey her husband.

A “spirituality” that sees cosmic meaning in the whims of fortune is not wise but foolish. The first step toward wisdom is the realization that the laws of the universe don’t care about you. The next is the realization that this does not imply that life is meaningless, because people care about you, and vice versa. You care about yourself, and you have a responsibility to respect the laws of the universe that keep you alive, so you don’t squander your existence. Your loved ones care about you, and you have a responsibility not to orphan your children, widow your spouse, and shatter your parents. And anyone with a humanistic sensibility cares about you, not in the sense of feeling your pain—human empathy is too feeble to spread itself across billions of strangers—but in the sense of realizing that your existence is cosmically no less important than theirs, and that we all have a responsibility to use the laws of the universe to enhance the conditions in which we all can flourish.

It would not be fanciful to say that over the course of the 20th century the global rate of atheism increased by a factor of 500, and that it has doubled again so far in the 21st. An additional 23 percent of the world’s population identify themselves as “not a religious person,” leaving 59 percent of the world as “religious,” down from close to 100 percent a century before.

According to the Secularization Thesis, irreligion is a natural consequence of affluence and education.66 Recent studies confirm that wealthier and better-educated countries tend to be less religious.

Why is the world losing its religion? There are several reasons.80 The Communist governments of the 20th century outlawed or discouraged religion, and when they liberalized, their citizenries were slow to reacquire the taste. Some of the alienation is part of a decline in trust in all institutions from its high-water mark in the 1960s.81 Some of it is carried by the global current toward emancipative values (chapter 15) such as women’s rights, reproductive freedom, and tolerance of homosexuality.82 Also, as people’s lives become more secure thanks to affluence, medical care, and social insurance, they no longer pray to God to save them from ruin: countries with stronger safety nets are less religious, holding other factors constant.83 But the most obvious reason may be reason itself: when people become more intellectually curious and scientifically literate, they stop believing in miracles.

No discussion of global progress can ignore the Islamic world, which by a number of objective measures appears to be sitting out the progress enjoyed by the rest. Muslim-majority countries score poorly on measures of health, education, freedom, happiness, and democracy, holding wealth constant.90 All of the wars raging in 2016 took place in Muslim-majority countries or involved Islamist groups, and those groups were responsible for the vast majority of terrorist attacks.

Still others were exacerbated by clumsy Western interventions in the Middle East, including the dismemberment of the Ottoman Empire, support of the anti-Soviet mujahedin in Afghanistan, and the invasion of Iraq.

But part of the resistance to the tide of progress can be attributed to religious belief. The problem begins with the fact that many of the precepts of Islamic doctrine, taken literally, are floridly antihumanistic. The Quran contains scores of passages that express hatred of infidels, the reality of martyrdom, and the sacredness of armed jihad.

Of course many of the passages in the Bible are floridly antihumanistic too. One needn’t debate which is worse; what matters is how literally the adherents take them.

“Self-identifying as a Muslim, regardless of the particular branch of Islam, seems to be almost synonymous with being strongly religious.”94

Between 50 and 93 percent believe that the Quran “should be read literally, word by word,” and “overwhelming percentages of Muslims in many countries want Islamic law (sharia) to be the official law of the land.”

All these troubling patterns were once true of Christendom, but starting with the Enlightenment, the West initiated a process (still ongoing) of separating the church from the state, carving out a space for secular civil society, and grounding its institutions in a universal humanistic ethics. In most Muslim-majority countries, that process is barely under way.

Making things worse is a reactionary ideology that became influential through the writings of the Egyptian author Sayyid Qutb (1906–1966), a member of the Muslim Brotherhood and the inspiration for Al Qaeda and other Islamist movements.100 The ideology looks back to the glory days of the Prophet, the first caliphs, and classical Arab civilization, and laments subsequent centuries of humiliation at the hands of Crusaders, horse tribes, European colonizers, and, most recently, insidious secular modernizers.

One might conclude that while the West enjoys the peace, prosperity, education, and happiness of post-Enlightenment societies, Muslims will never accept this shallow hedonism, and that it’s only understandable they should cling to a system of medieval beliefs and customs forever. But such fatalism is unwarranted.

Tunisia, Bangladesh, Malaysia, and Indonesia have made long strides toward liberal democracy (chapter 14). In many Islamic countries, attitudes toward women and minorities are improving (chapter 15)—slowly, but more detectably among women, the young, and the educated.

Let me turn to the second enemy of humanism, the ideology behind resurgent authoritarianism, nationalism, populism, reactionary thinking, even fascism. As with theistic morality, the ideology claims intellectual merit, affinity with human nature, and historical inevitability. All three claims, we shall see, are mistaken.

If one were looking for a thinker who represented the opposite of humanism (indeed, of pretty much every argument in this book), one couldn’t do better than the German philologist Friedrich Nietzsche (1844–1900).109 Earlier in the chapter I fretted about how humanistic morality could deal with a callous, egoistic, megalomaniacal sociopath. Nietzsche argued that it’s good to be a callous, egoistic, megalomaniacal sociopath. Not good for everyone, of course, but that doesn’t matter: the lives of the mass of humanity (the “botched and the bungled,” the “chattering dwarves,” the “flea-beetles”) count for nothing. What is worthy in life is for a superman (Übermensch, literally “overman”) to transcend good and evil, exert a will to power, and achieve heroic glory. Only through such heroism can the potential of the species be realized and humankind lifted to a higher plane of being.

In Nietzsche’s telling, Western civilization has gone steadily downhill since the heyday of Homeric Greeks, Aryan warriors, helmeted Vikings, and other manly men. It has been especially corrupted by the “slave morality” of Christianity, the worship of reason by the Enlightenment, and the liberal movements of the 19th century that sought social reform and shared prosperity. Such effete sentimentality led only to decadence and degeneration.

Man shall be trained for war and woman for the recreation of the warrior. All else is folly. . . . Thou goest to woman? Do not forget thy whip.

A declaration of war on the masses by higher men is needed. . . . A doctrine is needed powerful enough to work as a breeding agent: strengthening the strong, paralyzing and destructive for the world-weary. The annihilation of the humbug called “morality.” . . . The annihilation of the decaying races. . . . Dominion over the earth as a means of producing a higher type.

Most obviously, Nietzsche helped inspire the romantic militarism that led to the First World War and the fascism that led to the Second. Though Nietzsche himself was neither a German nationalist nor an anti-Semite, it’s no coincidence that these quotations leap off the page as quintessential Nazism: Nietzsche posthumously became the Nazis’ court philosopher. (In his first year as chancellor, Hitler made a pilgrimage to the Nietzsche Archive, presided over by Elisabeth Förster-Nietzsche, the philosopher’s sister and literary executor, who tirelessly encouraged the connection.) The link to Italian Fascism is even more direct: Benito Mussolini wrote in 1921 that “the moment relativism linked up with Nietzsche, and with his Will to Power, was when Italian Fascism became, as it still is, the most magnificent creation of an individual and a national Will to Power.”

The connections between Nietzsche’s ideas and the megadeath movements of the 20th century are obvious enough: a glorification of violence and power, an eagerness to raze the institutions of liberal democracy, a contempt for most of humanity, and a stone-hearted indifference to human life.

As Bertrand Russell pointed out in A History of Western Philosophy, they “might be stated more simply and honestly in the one sentence: ‘I wish I had lived in the Athens of Pericles or the Florence of the Medici.’” The ideas fail the first test of morality.

Though she later tried to conceal it, Ayn Rand’s celebration of selfishness, her deification of the heroic capitalist, and her disdain for the general welfare had Nietzsche written all over them.113

Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.”

Nietzsche became a godfather to all the intellectual movements of the 20th century that were hostile to science and objectivity, including Existentialism, Critical Theory, Poststructuralism, Deconstructionism, and Postmodernism.

A surprising number of 20th-century intellectuals and artists have gushed over totalitarian dictators, a syndrome that the intellectual historian Mark Lilla calls tyrannophilia.115 Some tyrannophiles were Marxists, working on the time-honored principle “He may be an SOB, but he’s our SOB.”

Another source of tyrannophilia is professional narcissism. Intellectuals and artists may feel unappreciated in liberal democracies, which allow their citizens to tend to their own needs in markets and civic organizations. Dictators implement theories from the top down, assigning a role to intellectuals that they feel is commensurate with their worth. But tyrannophilia is also fed by a Nietzschean disdain for the common man, who annoyingly prefers schlock to fine art and culture, and by an admiration of the superman who transcends the messy compromises of democracy and heroically implements a vision of the good society.

And Trump has been closely advised by two men, Stephen Bannon and Michael Anton, who are reputed to be widely read and who consider themselves serious intellectuals. Anyone who wants to go beyond personality in understanding authoritarian populism must appreciate the two ideologies behind it, both militantly opposed to Enlightenment humanism and each influenced, in different ways, by Nietzsche. One is fascist, the other reactionary—not in the common left-wing sense of “anyone who is more conservative than me,” but in their original, technical senses.118

The early fascist intellectuals, including Julius Evola (1898–1974) and Charles Maurras (1868–1952), have been rediscovered by neo-Nazi parties in Europe and by Bannon and the alt-right movement in the United States, all of whom acknowledge the influence of Nietzsche.

In the fascist vision, a multicultural, multiethnic society can never work, because its people will feel rootless and alienated and its culture will be flattened to the lowest common denominator. For a nation to subordinate its interests to international agreements is to forfeit its birthright to greatness and become a chump in the global competition of all against all. And since a nation is an organic whole, its greatness can be embodied in the greatness of its leader, who voices the soul of the people directly, unencumbered by the millstone of an administrative state.

The first theocons were 1960s radicals who redirected their revolutionary fervor from the hard left to the hard right. They advocate nothing less than a rethinking of the Enlightenment roots of the American political order. The recognition of a right to life, liberty, and the pursuit of happiness, and the mandate of government to secure these rights, are, they believe, too tepid for a morally viable society. That impoverished vision has only led to anomie, hedonism, and rampant immorality, including illegitimacy, pornography, failing schools, welfare dependency, and abortion. Society should aim higher than this stunted individualism, and promote conformity to more rigorous moral standards from an authority larger than ourselves. The obvious source of these standards is traditional Christianity.

Theocons hold that the erosion of the church’s authority during the Enlightenment left Western civilization without a solid moral foundation, and a further undermining during the 1960s left it teetering on the brink.

Lilla points out an irony in theoconservatism. While it has been inflamed by radical Islamism (which the theocons think will soon start World War III), the movements are similar in their reactionary mindset, with its horror of modernity and progress.124 Both believe that at some time in the past there was a happy, well-ordered state where a virtuous people knew their place. Then alien secular forces subverted this harmony and brought on decadence and degeneration. Only a heroic vanguard with memories of the old ways can restore the society to its golden age.

First, the claim that humans have an innate imperative to identify with a nation-state (with the implication that cosmopolitanism goes against human nature) is bad evolutionary psychology. Like the supposed innate imperative to belong to a religion, it confuses a vulnerability with a need. People undoubtedly feel solidarity with their tribe, but whatever intuition of “tribe” we are born with cannot be a nation-state, which is a historical artifact of the 1648 Treaties of Westphalia. (Nor could it be a race, since our evolutionary ancestors seldom met a person of another race.) In reality, the cognitive category of a tribe, in-group, or coalition is abstract and multidimensional.126 People see themselves as belonging to many overlapping tribes: their clan, hometown, native country, adopted country, religion, ethnic group, alma mater, fraternity or sorority, political party, employer, service organization, sports team, even brand of camera equipment. (If you want to see tribalism at its fiercest, check out a “Nikon vs. Canon” Internet discussion group.)

It’s true that political salesmen can market a mythology and iconography that entice people into privileging a religion, ethnicity, or nation as their fundamental identity. With the right package of indoctrination and coercion, they can even turn them into cannon fodder.127 That does not mean that nationalism is a human drive. Nothing in human nature prevents a person from being a proud Frenchman, European, and citizen of the world, all at the same time.128

Vibrant cultures sit in vast catchment areas in which people and innovations flow from far and wide. This explains why Eurasia, rather than Australia, Africa, or the Americas, was the first continent to give birth to expansive civilizations (as documented by Sowell in his Culture trilogy and Jared Diamond in Guns, Germs, and Steel).129 It explains why the fountains of culture have always been trading cities on major crossroads and waterways.

Between 1803 and 1945, the world tried an international order based on nation-states heroically struggling for greatness. It didn’t turn out so well.

After 1945 the world’s leaders said, “Well, let’s not do that again,” and began to downplay nationalism in favor of universal human rights, international laws, and transnational organizations. The result, as we saw in chapter 11, has been seventy years of peace and prosperity in Europe and, increasingly, the rest of the world.

The European elections and self-destructive flailing of the Trump administration in 2017 suggest that the world may have reached Peak Populism, and as we saw in chapter 20, the movement is on a demographic road to nowhere.

Still, the appeal of regressive ideas is perennial, and the case for reason, science, humanism, and progress always has to be made.

Remember your math: an anecdote is not a trend. Remember your history: the fact that something is bad today doesn’t mean it was better in the past. Remember your philosophy: one cannot reason that there’s no such thing as reason, or that something is true or good because God said it is. And remember your psychology: much of what we know isn’t so, especially when our comrades know it too.

Keep some perspective. Not every problem is a Crisis, Plague, Epidemic, or Existential Threat, and not every change is the End of This, the Death of That, or the Dawn of a Post-Something Era. Don’t confuse pessimism with profundity: problems are inevitable, but problems are solvable, and diagnosing every setback as a symptom of a sick society is a cheap grab for gravitas. Finally, drop the Nietzsche.