[Image: Ethan Mollick headshot and Co-Intelligence book cover]

Living and working with AI

Artificial intelligence is a marvel that generates two unnecessarily extreme reactions: either it will solve all our problems, or it will kill us all.

“If we focus solely on the risks or benefits of building superintelligent machines, it robs us of our abilities to consider the more likely second and third scenarios, a world where AI is ubiquitous, but very much in human control.” That’s from the 2024 book by Wharton professor Ethan Mollick titled Co-Intelligence: Living and Working with AI. As he writes: “Rather than being worried about one giant AI Apocalypse, we need to worry about the many small catastrophes that AI can bring.”

It’s a great read capturing the moment we are in right now. I recommend it. Below are my notes for my future reference.

My notes:

  • Ben Franklin played chess against the Mechanical Turk in France, and Edgar Allan Poe wrote about seeing it in Richmond, Virginia
  • Alignment from Nick Bostrom’s paperclip metaphor: there’s no reason AI should follow our ethics
  • Artificial superintelligence (a different way of saying AGI)
  • John von Neumann’s famous description of the technological singularity: “human affairs, as we know them, could not continue”
  • Eliezer Yudkowsky: ultimate AI critic
  • Japan’s Copyright Act (2018): AI does not violate copyright because it is not an author
  • “Even if pre-training is legal, it may not be ethical.”
  • RLHF (Reinforcement Learning from Human Feedback), with low-paid workers rating ugly content
  • CMU’s Daniil Boiko and Robert MacKnight showed AI-run labs, the “Coscientist”
  • Author’s four principles:
    • Always invite AI to the table;
    • Be the human in the loop;
    • Treat AI like a person (but tell it what kind of person it is); and
    • Assume this is the worst AI you will ever use.
  • The author’s “jagged frontier” of AI (developments will vary by industry and function)
  • Innovation is expensive for organizations but cheap for individuals
  • Turing’s 1950 paper on the imitation game
  • ELIZA and the illusion of intelligence: “The Eliza Effect is the tendency to falsely attribute human thought processes and emotions to AI, and believe an AI is more intelligent than it actually is”
  • “Eugene Goostman,” a chatbot posing as a teenage boy, was created in 2001 and may have been the first to pass a Turing test
  • Microsoft’s Tay and its very public and rapid descent into vile madness in 2016
  • Theory of mind: Kevin Roose’s 2023 story about his conversation with Bing’s AI chatbot
  • AI researchers: 14 signs of sentience, indicators of a conscious state
  • Replika: AI companions, including recreations of deceased partners
  • Data scientist Colin Fraser: when asked for a random number between 1 and 100, ChatGPT says 42 about 10% of the time, probably because Douglas Adams’s Hitchhiker’s Guide to the Galaxy made it a meme (a sketch to reproduce this appears after these notes)
  • Hallucinations are declining with each release, but there’s debate over whether they can ever reach zero; the paradox is that what makes LLMs so creative is also what makes them bad at what other software is so good at: following fact-based rules
  • We might have thought AI would first automate repetitive tasks, but transformer technology changed that; see the meme that we wanted AI to do the dishes for us so we could write poetry, not the other way around
  • “The issue is that we often mistake novelty for originality.”
  • Alternative Uses Test, which AI does well on: Jennifer Haase and Paul Hanel showed GPT-4 outperformed all but 9.4% of humans, and we can expect it to get better
  • Remote Associates Test, from the 1960s, used to test creative ability
  • Equal odds rule: innovative people create more and better ideas (not correlated with intelligence; it’s a different skill). Developed in 1977 by psychologist Dean Simonton, the rule states that the number of successful ideas a person produces is proportional to the total number of ideas they generate; the more work a person produces, the more likely they are to create something meaningful.
  • Shakked Noy and Whitney Zhang’s 2023 paper: ChatGPT saved time on professional writing tasks
  • Humanities training could become more valuable because you know what information is worth pulling from (Will that general knowledge of the human experience replace software engineers?)
  • “The Button”: we will use AI to generate text in more places, not just in “mere ceremony” but in previously useful slow tasks, like writing performance reviews and letters of recommendation
  • Ed Felten: of 1,016 jobs evaluated, just 36 had no overlap with near-term AI capabilities
  • Fabrizio Dell’Acqua showed how AI made some recruiters lazier, so they missed good applicants
  • “Just Me Tasks”: tasks you keep for yourself, either because for now they have to be done by a human, or because you think they should be for ethical, moral, or cultural reasons
  • Cyborgs (integrated AI use) and centaurs (strict lines between what is done by AI and what is not)
  • AI boosts lowest performers the most 
  • Amara’s Law, from Roy Amara in the 1960s: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
  • Bloom in 1984: the 2 Sigma Problem (students tutored one-on-one performed two standard deviations better than students in a conventional classroom)
  • Even before AI, 20k people in Kenya earned a living writing essays full time
  • Sarah J. Banks: in the mid-1970s, calculators caused an education and homework crisis too
  • Chain-of-thought prompting (ask the model to reason step by step before answering)
  • Ask for help: “I want you to help me to do XYZ, what do you need from me to succeed?” (illustrative sketches of both prompting patterns appear after these notes)
  • “The biggest danger to our educational system posed by AI is not its destruction of homework, but rather its undermining of the hidden system of apprenticeship that comes after formal education”
  • Matthew Beane: medical apprenticeships are already distorted by surgical robots; junior surgeons resort to “shadow learning,” like watching YouTube videos
  • “The path to expertise requires a grounding in facts”
  • Anders Ericsson: it’s not 10,000 hours of practice but the type of practice that matters
  • Will AI help as a coach?
  • Software programmers at the 75th and 25th percentiles can show a 27x gap in output
  • Author’s study of the video game industry: the quality of middle managers accounted for a fifth of success, more than the entire senior management team
  • In his study of BCG consultants, a 22% gap between low and high performers closed to 4%
  • “Humans, walking and talking bags of water and trace chemicals that we are, have managed to convince well-organized sand to pretend to think like us”
  • “We have the paradox of our golden age of science. More research is being published by more scientists than ever, but the result is actually slowing progress. With too much to read and absorb, papers in more crowded fields are citing new work less and canonizing highly cited articles more.” AI can help
  • Will AI develop slowly or exponentially?
  • His four scenarios: growth stops; growth slows; growth stays fast; or we truly reach AGI
  • “If we focus solely on the risks or benefits of building superintelligent machines, it robs us of our abilities to consider the more likely second and third scenarios, a world where AI is ubiquitous, but very much in human control. And in those worlds we get to make choices about what AI means. Rather than being worried about one giant AI Apocalypse, we need to worry about the many small catastrophes that AI can bring.”
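
Two of these notes are concrete enough to try yourself. First, Fraser’s random-number observation: below is a minimal sketch (my own, not Fraser’s code) that asks a model for a “random” number repeatedly and tallies the answers. It assumes the official openai Python SDK, an OPENAI_API_KEY in the environment, and a placeholder model name.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

counts = Counter()
for _ in range(100):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[{
            "role": "user",
            "content": "Pick a random number between 1 and 100. "
                       "Reply with the number only.",
        }],
        temperature=1.0,
    )
    counts[(resp.choices[0].message.content or "").strip()] += 1

# With 100 calls, each count doubles as a percentage. A truly uniform
# picker would land near 1% per number; Fraser's observation is that
# meme numbers like 42 show up far more often.
for number, n in counts.most_common(5):
    print(f"{number}: {n}%")
```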
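
Second, the two prompting patterns from the notes. The wordings below are illustrative sketches of mine, not quotes from the book:

```python
# Chain-of-thought: ask the model to reason step by step before answering,
# so intermediate steps get committed to text before the final answer.
cot_prompt = (
    "A store sells pens for $3 each or 2 for $5. "
    "What is the cheapest way to buy 7 pens? "
    "Work through this step by step, then state the final price."
)

# Ask-for-help: state the goal and let the model request missing context,
# following the "what do you need from me to succeed?" framing above.
help_prompt = (
    "I want you to help me write a letter of recommendation for a "
    "former student. What do you need from me to succeed?"
)
```

Chain-of-thought tends to help on multi-step problems; the ask-for-help pattern flips the interaction so the model surfaces the context it is missing before drafting anything.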
