Do Androids Dream of Electric Sheep?

Philip K. Dick’s 1968 science fiction classic “Do Androids Dream of Electric Sheep?” was initially set in 1992; later editions updated the setting to 2021. And so we have now lived through the novel.

Perhaps best known for inspiring the 1982 Harrison Ford movie Blade Runner, the novel received mixed reviews on release but has since developed a cult following. Dick (1928-1982) is remembered not so much as a great writer as a great thinker (Minority Report and Total Recall were also inspired by his stories), and that has felt truer still amid a new wave of artificial intelligence hype.

The title plays off a subplot of the book in which the humans who remain on Earth (after nuclear fallout) covet the status symbol of owning a living animal, as opposed to an artificial one. So the question is whether androids (the increasingly human-passing machines the main character is hunting) would likewise dream of electric ones. Its big theme: What defines humanity, especially as machines recreate more and more of the skills we identify with it? I enjoyed the book, and below I share notes for my own future reference.


Living and working with AI

Artificial intelligence is a marvel that generates two unnecessarily extreme reactions: it will solve all our problems, or it will kill us all.

“If we focus solely on the risks or benefits of building superintelligent machines, it robs us of our abilities to consider the more likely second and third scenarios: a world where AI is ubiquitous, but very much in human control.” That’s from the 2024 book by Wharton professor Ethan Mollick titled Co-Intelligence: Living and Working with AI. As he writes: “Rather than being worried about one giant AI apocalypse, we need to worry about the many small catastrophes that AI can bring.”

It’s a great read capturing the moment we are in right now. I recommend it. Below are my notes for future reference.


The Singularity is Nearer

This generation of artificial intelligence bots has passed the famed Turing test, as we once knew it.

Experts may quibble with the rules, and we’ll continue to move the goalposts. Now, though, generative AI needs to play dumb to trick humans: it responds too fast, writes too convincingly and has too comprehensive a knowledge to pass for any person.

That’s from the new book by futurist Ray Kurzweil, “The Singularity is Nearer: When We Merge with AI.” It’s a follow-up to his 2005 book “The Singularity is Near.”

He’s among the best-known and longest-running champions of the kind of digital superintelligence called the singularity, which he says is coming; he estimates by 2045, and he has a bet with a friend that by 2029, AI will pass an even more rigorous Turing test he helped establish. His book is a wild romp of optimism and confidence. Anyone digging into the conversation will appreciate it. I recommend it.

Below I share notes for my future reference.


Quantum Supremacy by Michio Kaku

Newtonian physics works for most of our everyday experiences. But for the biggest systems we encounter, we need Einstein’s theories of relativity to make sense of spacetime.

Neither theory, nor our own intuitive understanding of the world, works at the smallest scale we know. This is the quantum level, where electrons can be in two places at the same time, exhibit correlations that seem to act instantaneously across any distance, and effectively explore every path between two points at once.

As Danish physicist and Nobel laureate Niels Bohr (1885-1962) wrote: “Anyone who is not shocked by quantum theory does not understand it.”
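The idea of something being “in two places at once” until observed can be made concrete with a toy model. This sketch is my own illustration, not from Kaku’s book: a single qubit is represented as two amplitudes, and measurement picks an outcome with probability equal to the squared amplitude.

```python
import math
import random

# Toy model of a single qubit: a state is a pair of amplitudes
# (alpha, beta) with |alpha|^2 + |beta|^2 = 1. "Two places at once"
# means both amplitudes are nonzero until measurement forces a choice.

def hadamard(state):
    """Apply a Hadamard gate, turning a basis state into an equal superposition."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def measure(state, rng=random):
    """Collapse the state: return 0 or 1 with probability |amplitude|^2."""
    alpha, _ = state
    return 0 if rng.random() < abs(alpha) ** 2 else 1

state = hadamard((1, 0))   # start in |0>, move to equal superposition
p0 = abs(state[0]) ** 2    # probability of measuring 0
print(round(p0, 3))        # 0.5: a fair coin until you look
```

Before measurement the state genuinely carries both possibilities; only the act of measuring produces a definite 0 or 1, which is the intuition-breaking behavior Bohr was pointing at.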

And much like we didn’t understand all the ramifications of the atomic age before we developed nuclear weapons, governments and companies are busy investing in the military and commercial implications of the potentially radical advancement in quantum computing.

That’s the thinking from prominent physicist and science communicator Michio Kaku in his 2023 book “Quantum Supremacy: How the Quantum Computer Revolution Will Change Everything.”

The book is by no means exhaustive, but I picked it up as a primer on a technology my work overlaps with. Below I share my notes for future reference.


How will the world end?

Given our odds, it’s a lot more likely the human species will go extinct long before Earth itself is destroyed. Funny that we don’t give that chilling nuance more thought.

The 2019 book “End Times” by journalist Bryan Walsh discusses various potential catastrophes that could threaten humanity’s survival. One of the main points made in the book is that humans have a tendency to underestimate the likelihood and consequences of catastrophic events, and that we should be more proactive in addressing potential threats to our survival. Fittingly, the book was published before the COVID-19 pandemic was identified.

The book covers a range of topics, including the risk of a nuclear war or environmental disaster, the possibility of an asteroid impact, the threat of pandemics and epidemics, and the long-term consequences of climate change. It also explores the psychological and economic factors that influence our ability to address these issues, such as the “arithmetic of compassion” and the social discount rate.

I found the book a mix of big-picture thinking and practical evaluation, a thought-provoking reminder of the fragility of human civilization and the importance of being prepared for potential disasters.

Below I share my notes from reading the book.


The Age of AI

Artificial intelligence will be considered a new epoch in human history. The Enlightenment was defined by the age of reason, in which a shared process ensured humans could develop new and tested knowledge. Increasingly, though, algorithmic learning is advancing so rapidly that no human entirely understands the recommendations that AI makes. This will be seen as an entirely new age.

So argues The Age of AI: And Our Human Future, a 2021 book written by former Secretary of State Henry Kissinger, former Google CEO Eric Schmidt and AI researcher Daniel Huttenlocher. That big idea was first argued in an article Kissinger wrote for The Atlantic in 2018 titled “How the Enlightenment Ends.”

The book is neither dystopian nor breathlessly optimistic. It is not full of rich stories or colorful visions. It is a clear-eyed book directed toward policymakers and business leaders, outlining its authors’ view of current research and of where AI is heading.

I collected notes from the book below. I recommend reading it.


J.C.R. Licklider and his Dream Machine of personal computing

We interact with computers to help us think.

Both in the transactional sense that these machines can help us solve math problems or search across a vast array of indexed information, and in the deeper sense that we can pattern our own behaviors around how a computer solves a problem. This wasn’t inevitable.

Before the invention of the keyboard, computer mouse and graphical interface, and certainly before the government-funded creation of the internet, computers were seen, charitably, as oversized and expensive calculators. Today they may seem like an appliance as valuable to our quality of life as an indoor toilet or a heating system. It took vision to make that change.

The people (yes, especially one particular man) behind that vision are the focus of The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal, a 2001 book by science journalist M. Mitchell Waldrop. The book tells the story of J.C.R. Licklider (1915-1990) and his role in the development of the modern personal computer. Licklider, a psychologist and computer scientist, was one of the pioneers of the concept of “interactive computing,” which envisioned a future in which computers would be accessible and easy to use for individuals, rather than just large institutions.


Homo Deus: notes on Yuval Noah Harari’s 2017 book

Advancements in artificial intelligence could bring about a world in which humans are secondary to self-learning algorithms.

That’s one of the big themes in the 2017 book Homo Deus, historian and popular intellectual Yuval Noah Harari’s follow-up to his 2014 book Sapiens. Even more than his first, Homo Deus has been criticized for its sweeping generalizations and loose treatment of science. Still, Harari is one of the chief architects of a kind of techno-pessimism, so I find his approach helpful to follow.

He’s a great storyteller, and beyond any debunked science, he engages with concepts I found interesting. I’m sharing notes here for myself. The book is worth reading if only to grasp a view of the treacherous waters some fear are coming due to technological advancement.


Technical.ly cited in ‘Weapons of Math Destruction’

The importance of artificial intelligence, and of the algorithms that power it, is still understated.

That’s among the big themes of Weapons of Math Destruction, an important book published last year and written by data scientist Cathy O’Neil.

Proudly, Technical.ly made a small contribution: this story of ours was cited in the book, having informed the later reporting. There have been a few other examples of that sort of thing that I haven’t captured. Just kind of fun to see.