Artificial intelligence will mark a new epoch in human history. The Enlightenment was defined by the age of reason, in which a shared process ensured humans could develop new, tested knowledge. Increasingly, though, algorithmic learning is advancing so rapidly that no human entirely understands the recommendations that AI makes. This will be seen as an entirely new age.
So argues The Age of AI: And Our Human Future, a 2021 book written by former Secretary of State Henry Kissinger, former Google CEO Eric Schmidt, and AI researcher Daniel Huttenlocher. Kissinger first argued that big idea in a 2018 article for The Atlantic entitled "How the Enlightenment Ends."
The book is neither dystopian nor breathlessly optimistic. It is full of neither rich stories nor colorful visions. It is a clear-eyed book directed toward policymakers and business leaders, outlining its authors' view of current research and of where AI is heading.
I recommend reading the book. Find my notes below:
- Understand a simple view of global knowledge: from the Greek and Roman classics, to the medieval period, to the Enlightenment and the age of reason, to Romanticism, and then to modernity. AI will be something new.
- Kant: human reason never grasps the “thing in itself”; our understanding shapes what we can perceive
- Bohr: instruments are part of observations
- Wittgenstein wrote of “family resemblances,” of general themes across phenomena.
- The Enlightenment concept of a knowable world, learned step by step, could give way in the age of AI, when we accept results without knowing why something happens the way it does
- “When information is contextualized, it becomes knowledge. When knowledge compels convictions, it becomes wisdom.” (52) “The digital world has little patience for wisdom; its values are shaped by approbation, not introspection.”
- Alan Turing's test (treating external results as the only definition of intelligence) and John McCarthy's definition of AI became benchmarks
- AI today is imprecise (it doesn't need exact inputs); dynamic and emergent (it can find new solutions); and capable of “learning” (57)
- After the “AI winter” of research in the 1990s, advances in machine learning and neural networks shifted the field from the Platonic idea of definition (a cat is this and that) to the Wittgensteinian sense of family resemblances (here are a lot of cats; find others) (61)
- The authors point to AlphaZero and the AI-discovered drug halicin as crucial early examples of AI development
- ML comes in three forms: supervised learning, unsupervised learning, and reinforcement learning (a sketch of all three is below)
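A minimal sketch of the three forms, with scikit-learn covering the first two and a hand-rolled epsilon-greedy bandit standing in for reinforcement learning. The data and payout numbers are invented for illustration; none of this code comes from the book:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                    # toy feature matrix

# Supervised: learn a mapping from examples that come with labels.
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # labels we supply
classifier = LogisticRegression().fit(X, y)

# Unsupervised: find structure in the data with no labels at all.
clusters = KMeans(n_clusters=2).fit_predict(X)

# Reinforcement: learn from trial-and-error rewards (epsilon-greedy bandit).
true_payouts = [0.3, 0.7]                        # hidden reward probabilities
estimates, pulls = [0.0, 0.0], [0, 0]
for _ in range(500):
    arm = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(estimates))
    reward = float(rng.random() < true_payouts[arm])
    pulls[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # running average

print(classifier.score(X, y), clusters[:5], [round(e, 2) for e in estimates])
```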
- In 2015, neural networks applied to language translation radically accelerated improvements in the field (modeling sequential dependencies between words was a major part of that)
- Google’s BERT is a bidirectional transformer: it reads text not only left to right but in both directions, so it can understand the relations between phrases (a quick demo follows)
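You can see the bidirectional, fill-in-the-blank behavior directly with the Hugging Face transformers library; the bert-base-uncased checkpoint here is the standard public model, not something the book itself references:

```python
# pip install transformers torch
from transformers import pipeline

# BERT was pretrained to predict a masked word using context from BOTH
# sides of the blank, which is what "bidirectional" means in practice.
fill = pipeline("fill-mask", model="bert-base-uncased")

for guess in fill("The chess player moved her [MASK] two squares forward."):
    print(f"{guess['token_str']:>10}  {guess['score']:.3f}")
```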
- Moving beyond the parallel-corpora technique: models can now learn from all sorts of media in different languages (not necessarily already translated), seeking patterns across languages rather than relying on existing examples of translation
- Google's DeepMind cut the energy used to cool its data centers by 40% without anyone fully understanding how it got there
- AI learning could be similar to Alexander Fleming and penicillin: he didn't at first know how penicillin did what it did; we had to go figure that out afterward
- AI learning is shallow, so unexpected common-sense mistakes happen at important rare moments (e.g., autonomous vehicles). Humans are better at exceptional moments; AI is presently better at the rote and routine, though that may change
- Keeping the learning and deployment phases distinct allows a kind of control that was missing from the infamous 2016 Microsoft Tay chatbot, which developed hate speech because it was continuously learning (83); the sketch below shows the contrast
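A sketch of why the separation matters, using a toy scikit-learn spam filter of my own invention (not an example from the book):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Learning phase: train once on a vetted corpus, then freeze the model.
train_texts = ["free money now", "meeting at noon", "win a prize", "lunch tomorrow"]
train_labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy data)
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(train_texts, train_labels)

# Deployment phase: the frozen model only predicts. User input never updates
# its parameters, so adversarial users can't teach it bad behavior the way
# Tay's continuous learning allowed.
print(model.predict(["free prize money"]))  # inference only; no further .fit()
```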
- At present AI has three constraints: (1) the rules of the code (e.g., the rules of chess); (2) its objective function (the goal of winning chess, not some other outcome); and (3) recognizing the inputs it is given (e.g., how to know what a pawn is) (84); the toy example below makes the three concrete
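Here is a toy tic-tac-toe fragment of my own devising to illustrate the three constraints (the book uses chess as its example, not this code):

```python
BOARD = list("X.O.X.O..")  # a 3x3 board flattened into 9 cells; '.' = empty

# (1) Rules of the code: which moves are legal at all.
def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

# (2) Objective function: what outcome the system optimizes for
#     (here, three in a row; checking rows only, for brevity).
def has_won(board, player):
    return any(board[i:i + 3] == [player] * 3 for i in (0, 3, 6))

# (3) Recognizing inputs: translating raw symbols into a representation
#     the system can act on (how it "knows what a pawn is").
ENCODING = {"X": 1, "O": -1, ".": 0}
features = [ENCODING[cell] for cell in BOARD]

print(legal_moves(BOARD), has_won(BOARD, "X"), features)
```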
- How likely is artificial general intelligence? There is debate over whether we will be able to simply combine narrow AIs into a general one or whether the disciplines won't fit together
- Digital network platforms are an entirely new phenomenon, and their scale allows AI to be deployed (e.g., Google switched in 2015 from a human-built algorithm to AI-powered search)
- The positive network effect of digital platforms is far rarer in the analog world: stock markets were an early example, and communication technologies like the telephone were others. But typically more usage is not a benefit to an individual user
- If an AI like GPT-3 were used to deploy fake accounts spreading hate speech to sow prejudice, likely only another AI could combat it at scale (114). This sounds like nuclear weapons technology.
- The transnational scale and the speed of network platforms mean domestic government intervention feels too slow and limited; but what alternatives do we have to self-policing?
- The U.S. and China are both weighing antitrust action domestically while seeking global preeminence in these AI technologies (122); both are aided by continental-scale, single-language markets, plus universities and talent of the kind Europe also has
- Network platforms will face questions and regulatory hurdles around the world as they develop into transnational players
- Nuclear weapons created a paradox: the most advanced technology is one nobody wants to use. Will AI create a similar stalemate? But AI is far more hidden, secret, and potentially anonymous (e.g., the Stuxnet attack on Iran has never been clearly attributed to anyone)
- Deepfakes as a strategic form of disinformation
- As Time reminded readers in a story about a comedy AI: robots are meant to take the jobs that are dirty, dangerous, and dull
- A theme: AI should aid people, not replace them, but that will get complicated as AI makes predictions and offers insights beyond our range of understanding
- Lethal autonomous weapons can have a human “on the loop” (passively monitoring) or “in the loop” (required to sign off on certain steps) (165); the sketch below contrasts the two
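A toy contrast between the two oversight modes; the scenario and function names are my own illustration, not the book's:

```python
APPROVED = {"scan sector"}  # pretend sign-offs already collected from a human

def run_in_the_loop(actions):
    """'In the loop': every step needs explicit human sign-off before it runs."""
    for action in actions:
        status = "executing" if action in APPROVED else "blocked (no sign-off)"
        print(f"{status}: {action}")

def run_on_the_loop(actions):
    """'On the loop': the system acts autonomously; a human passively monitors
    the log and can only intervene after the fact."""
    for action in actions:
        print(f"executing: {action}  (logged for human review)")

run_in_the_loop(["scan sector", "track target"])
run_on_the_loop(["scan sector", "track target"])
```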
- Governments will have to come together for mutual restraint
- Regulation of a technology has tended to be aided by limits in at least one of three factors:
  - civilian or government use (nuclear reactors and rifles are both dual use)
  - complexity (rifles are simple; nuclear weapons are not)
  - scale of damage (nuclear is widespread; rifles are not)
- AI is the first technology that escapes all three limits: it is dual use; it is simple to disseminate, so governments can't easily control it; and it can create widespread destruction (e.g., if a terrorist group secured a powerful algorithm) (166)
- The U.S. has drawn a distinction between AI-enabled weapons and AI weapons (which can be lethal without human intervention), and argues only the former should be used
- “to the two traditional ways by which people have known the world, faith and reason, AI adds a third.” (178)
- How will this change human identity?
- How do we feel about an AI making recommendations on bonuses and promotions? Will we create roadblocks like we might for weapons?
- “explanations supply meaning and permit purpose; the public recognition and explicit application of moral principles supplied justice.” But AI won't always offer clearly understood applications (what about the concept of deliberative justice?)
- AlphaFold's protein research (like AI chess players) is an example of an AI that does things radically differently, with human researchers then attempting to explain its results so they can influence how humans approach the field. It's an entirely new research tool. Can we keep up?
- “Order without legitimacy is mere force,” so we will need to maintain human oversight to keep trust in the system
- “in the age of AI, then, human reason will find itself both augmented and diminished.” (208)
- We will be given deeper analysis and also fed more personalized experiences
- “Created by humans, AI should be overseen by humans.” (215)
- Kant: “Human reason… is burdened with questions which it cannot dismiss… but which it also cannot answer” (226)