Humanity is at a crucial moment: our technologies are advanced enough to have created existential risks of our own making, and our civilization is secure enough to contemplate a future long enough for even natural risks to pose true threats. This moment can be called the Precipice, the name of a 2020 book by Toby Ord.
I enjoyed it; I came to it through organizing I’ve done around longtermism, and it’s another buzzy book on the topic.
Below I share my notes for future reference.
My notes:
- 200k years of Homo sapiens so far, but Earth could be habitable for hundreds of millions of years, and we could become multi-planetary. The point is we may have lots of time.
- The gap between our power and our wisdom widens
- Effective altruism: giving 10% of earnings, and now extending those goals to the far longer term
- Vasily Aleksandrovich Arkhipov, Soviet naval officer known for averting a nuclear attack during the Cuban Missile Crisis by refusing to authorize the launch of a nuclear torpedo from his submarine
- Today 1 out of 10 people live in extreme poverty; before the Industrial Revolution, 19 of 20 did (inflation adjusted)
- “In ecological terms it is not a human that is remarkable but humanity”
- The average mammalian species lasts 1m years; Homo erectus lasted 2m; Homo sapiens is at 200k
- Earth will remain habitable for a billion years
- Nuclear war, climate change, biotechnology, and AI are among the big threats facing us now
- Carl Sagan: “Many of the dangers we face indeed arise from science and technology — but, more fundamentally, because we have become powerful without becoming commensurately wise. The world-altering powers that technology has delivered into our hands now require a degree of consideration and foresight that has never before been asked of us”
- In the 20th century we ran a 1 in 100 chance of “human extinction, or the unrecoverable collapse of civilization,” but the author puts that at 1 in 6 for the 21st century: Russian roulette
- He says we are at the Precipice (a named period, like the Enlightenment)
- Nick Bostrom: existential catastrophes come in many kinds (extinction or failed continuation, which includes unrecoverable collapse and unrecoverable dystopia)
- Europe survived as much as 50% population loss in the worst of the Black Death (beginning ~1348), so we think unrecoverable collapse requires even greater loss than that
- Extinction-level events aren’t discussed soberly and seriously; they’re consigned to action films, because they’re just so hard to grapple with
- Derek Parfit: comparing a nuclear war that kills 99% of humans with one that kills 100% shows the latter is far worse than the extra 1% suggests, since it kills all chance of our species lasting
- Almost all humans who will ever live have not yet been born. If we kill ourselves, Jonathan Schell says it’d be a case of “infant mortality”
- Caring about people across geographic distance is a sign of moral progress; caring across temporal distance should follow, and that is longtermism
- Discount rates that trained economists use to weight cost-benefit analyses do not work over the long horizons of human well-being (see the worked arithmetic after this list). “A standard discount rate of 5% per year applied to our future would imply that if you can save one person from a headache in 1 million years’ time or 1 billion people from torture in 2 million years’ time, you should save the one from a headache.”
- Population ethics
- The Epicurean argument that once you die you aren’t there to experience the bad is flawed: it misses that having less good (life) can be as bad as having more bad (pain)
- Maybe 10k generations have existed so far; perhaps 40k more would take us another 1m years
- Pliny the Younger: “There will come a time when our descendants will be amazed that we did not know things that are so plain to them.”
- The gifts from our parents are paid forward, and likewise the gifts of past generations must be paid forward too: our duty to future generations was created by our ancestors
- Why do we care about avoiding existential risk? The present (lots of pain), the future (lost possible lives), our past (debt to ancestors), our character (honoring our culture), or cosmic significance (maybe we are the only moral agents in the universe) (p. 56)
- Markets don’t supply public goods well, and avoiding existential risk is an intergenerational global public good
- Biases keep us disengaged: the availability heuristic (we only fear threats we can remember happening), acute compassion only for what we can see (hence the power of storytelling), and scope neglect (a million dead doesn’t feel different from a billion dead)
- Nuclear ethics was taken seriously until the Cold War ended; John Leslie’s 1996 book The End of the World restarted extinction discussions and influenced Nick Bostrom, who influenced this author
- Natural risks, anthropogenic risks and future risks
- The Alvarez hypothesis and Shoemaker’s work on impact hazards gave rise to the tracking of asteroids (and many comets); we’ve found what we think are 95% of them, and none clearly threaten to hit us in the next hundred years (roughly a 1 in 6,000 chance per century that one at least 1km across hits). It took 12 years from discovery of the risk to government action, and 28 years until nearly all were tracked; asteroids are now mostly covered, so he recommends shifting some tracking effort to comets
- Stellar explosions, supervolcanoes, and other natural risks are all relatively low
- We face a thousand times more risk from anthropogenic sources than natural ones (though asteroids are a big enough threat that they ought not be ignored)
- Arnold Toynbee wrote “the human race’s prospects of survival were considerably better when we were defenseless against tigers than they are today, when we have become defenseless against ourselves.”
- The Precipice begins with the Trinity atomic test, since there were still open questions about whether the bomb could ignite the whole atmosphere (hence Enrico Fermi’s bets and jokes)
- No elected official was told of Oppenheimer’s meetings at Berkeley, back when the fear was that Hitler might hold the world to nuclear ransom. The physicists were right that the Trinity test wouldn’t ignite the atmosphere. But the other major calculation of this kind, for the 1954 Castle Bravo test, they got wrong: it was far more powerful than expected
- 15 days after Hiroshima, America started planning for nuclear war with the Soviets
- Nuclear bomb threats: the local effects of blast and fire; radioactive fallout; and, perhaps worst (and last to be understood), nuclear winter, from columns of smoke that reach the stratosphere and spread the world over
- Feedback effects of climate change are terrible but unlikely to truly threaten humanity existentially
- Paul Ehrlich got his “population bomb” wrong; today we add a fixed number, not a fixed proportion, to the population each year: the bomb is over
- Norman Borlaug, who won the Nobel Peace Prize for breeding new high-yield wheat, “may be responsible for saving more lives than anyone else in history”
- The author thinks declining population isn’t a problem because government policies to address it are clear and popular (but they haven’t always been effective, I add!!)
- In 1933, Ernest Rutherford said harnessing atomic energy was “moonshine,” and the next day Leo Szilard conceived of the nuclear chain reaction; Wilbur Wright declared heavier-than-air flight was 50 years away two years before he achieved it
- 90% of Indigenous Americans and 10% of world population died in the Columbian exchange; maybe 5–15% of the world died in the Black Death period, including as much as half of Europe; the Plague of Justinian (541) and the 1918 Spanish flu each killed about 3% of world population, possibly more than the world wars combined
- Information hazards in biotechnology: the genomes of smallpox and the 1918 flu have been published
- Unilateralist’s curse: when any one actor can share information unilaterally, the most risk-tolerant outlier decides for everyone, and sharing can’t be undone, so a single outlier can cause bad outcomes
- AI research began with the 1956 Dartmouth workshop
- By the 1980s, researchers realized calculus and chess were easier for AI than recognizing a cat or picking up an egg
- Deep learning approaches were a breakthrough
- Stuart Russell and Oren Etzioni sit on opposite sides of the debate over speculative AI risk
- Puerto Rico conference in 2015 and Asilomar AI principles in 2017
- Biotechnology, AGI, and nanotechnology (being able to build any material at the atomic level) are future risks
- Nick Bostrom notes unforeseen risks likely await us, like discovering something as destructive as the atomic bomb but easier to build
- Alignment and value lock-in for AI; dystopian lock-in too (e.g., an AI-enforced religion)
- Keep options open, which includes maintaining competing systems
- His probabilities include the assumption that we respond to crisis
- The Future of Humanity Institute outlines the stages an extinction-level event must pass through (see screenshot)
- Risks can be anti-correlated, correlated, or independent
- Great-power war isn’t a cause in itself but amplifies any of these risks
- The Global Burden of Disease study introduced the “risk factor,” as in how smoking contributes to disease
- Reduce risk most efficiently by focusing on the risks that are most important, tractable, and neglected
- Owen Cotton-Barratt of the Future of Humanity Institute defines those three terms so that Cost-Effectiveness = Importance × Tractability × Neglectedness (see the sketch after this list)
- That framework suits a national or global portfolio of risk; locally, individuals and groups do best using their comparative advantage, focusing on what they can teach others
- Another heuristic is to prioritize risks that are soon, sudden, and sharp (no warning shot, unlike the warning we’d expect from a power-law-distributed event like asteroid strikes)
- Nick Beckstead: distinguish between targeted and broad interventions
- “Early action is higher leverage, but more easily wasted. It has more power, but less accuracy.”
- We spend a lot on broad interventions (education), so those aren’t neglected. He suggests raising targeted risk spending by a factor of 100, which would merely match what we spend on ice cream
- “It is not one’s neighbors who are the enemy, but misery, ignorance, and the cold indifference of natural law” (Isaac Asimov)
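To make the discount-rate note above concrete, here is a minimal sketch of the arithmetic (my own illustration, not from the book); it works in log10 because the raw discounted values underflow ordinary floats:

```python
import math

r = 0.05  # a standard 5% annual discount rate

# Present value of each option, in log10 (the raw values underflow floats):
headache = math.log10(1) - 1_000_000 * math.log10(1 + r)    # 1 headache averted, 1m years out
torture = math.log10(1e9) - 2_000_000 * math.log10(1 + r)   # 1e9 tortures averted, 2m years out

print(f"headache option: ~10^{headache:.0f}")  # ~10^-21189
print(f"torture option:  ~10^{torture:.0f}")   # ~10^-42370
# Discounting scores the single headache ~10^21180 times higher,
# which is the absurdity Ord is pointing at.
```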
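And here is a minimal sketch of the Importance × Tractability × Neglectedness heuristic from the Cotton-Barratt note; the unit definitions and the two sample problems are my own invented illustration, not figures from the book:

```python
# Cost-Effectiveness = Importance x Tractability x Neglectedness.
# The unit choices below are one way to make the product meaningful end to end:
def cost_effectiveness(importance: float, tractability: float,
                       neglectedness: float) -> float:
    """importance:    value of solving the entire problem
    tractability:  fraction of the problem solved per doubling of resources
    neglectedness: doublings of resources bought per marginal dollar"""
    return importance * tractability * neglectedness

# A huge but crowded problem vs. a smaller, neglected one (invented numbers):
crowded = cost_effectiveness(importance=100, tractability=0.01, neglectedness=0.001)
neglected = cost_effectiveness(importance=10, tractability=0.01, neglectedness=0.1)
print(f"{crowded:g} vs {neglected:g}")  # 0.001 vs 0.01: the neglected problem wins per dollar
```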
What would a highly coordinated and aligned strategy look like?
- Reaching existential security
- The Long Reflection (working out the best kind of future for humanity, a long project of positive moral philosophy we are not positioned to undertake now)
- Achieving our potential
Notes continued:
- “Patient, prudent and coordinated”
- Our past was brief enough to evade the natural threats, and our power too limited to produce anthropogenic ones
- Becoming a multi-planetary species primarily addresses natural risks, but many anthropogenic risks would be correlated across planets (e.g., AGI)
- Humanity heavily relies on trial and error, but these are challenges and time scales that don’t allow for missing even once; they demand a higher level of thinking
- The Black Death and the Cuban Missile Crisis are near misses, and the length of human existence so far gives us baseline probabilities
- Do 195 countries mean 195 different chances at good outcomes, or at bad ones?
- The UNESCO Declaration on the Responsibilities of the Present Generations Towards Future Generations
- Democracy struggles to incorporate future generations; should there be an ombudsman to represent their interests?
- “Our current predicament stems from the rapid growth of humanity’s power outstripping the slow and unsteady growth of our wisdom”
- The Montreal Protocol on the ozone layer was a success
- State risks and transition risks: move quickly through state risks to avoid their cumulative toll, but move carefully through transition risks. Most anthropogenic risks are transition risks (e.g., AGI), as opposed to living unprotected against the natural threat of asteroids (a state risk)
- The Open Philanthropy Project is funding long-term work on existential risk
- A career is 80k hours devoted to something: what’s ours? Donations also help (the philanthropist Katharine McCormick largely funded the contraceptive pill; Norman Borlaug’s Green Revolution was largely philanthropy-funded)
- In 100k years our carbon footprint could be gone; in 10m years full biodiversity would return, akin to the recoveries after other mass extinction events
- Is our future project to extend the life of the Earth, managing carbon and shading the sun to keep the oceans alive longer? And then to bring our biosphere elsewhere?
- Species typically last 1–10m years, but why can’t we outlast that, given how unusual we are?
- This book wants to be serious, nothing Hollywood about it
- Think of interstellar travel not as an ocean-liner crossing but as island hopping, like Polynesian voyagers 1k years ago: starting from our sun and moving star to star over thousands of years
- The stages: acceleration, surviving the voyage, deceleration, and establishing a base
- The observable universe includes regions accelerating away from us that we can still see; that expansion changes what we will ever be able to see or reach
- Go slow
- Frank Ramsey, who died at just 26: “I don’t feel the least humble before the vastness of the heavens. The stars may be large, but they cannot think or love; and these are qualities which impress me far more than size does. I take no credit for weighing nearly seventeen stone. My picture of the world is drawn in perspective, and not to scale. The foreground is occupied by human beings and the stars are all as small as threepenny bits.”
- We no longer fear hunger and many diseases, but we face new risks instead
- How much would transplanting our biosphere elsewhere create new species?