Notes: The Future of Intelligence on the Sam Harris Podcast

Here are my notes from The Future of Intelligence, a Conversation with Max Tegmark on the Sam Harris Podcast.

You can listen to it here:

My notes and thoughts:

  • We always focus on the downsides of superintelligent AI. There are, however, upsides: superintelligence could help solve some of the biggest problems of our time, such as safety, medical issues, and justice.
  • Containment is both a technical and a moral issue, and much more difficult than currently given credit for. Given the ways we would have to construct it, we likely can’t just “unplug” it.
  • Tegmark defines these three stages of life:
    • Life 1.0: Both hardware and software determined by evolution. (Flagella)
    • Life 2.0: Hardware determined by evolution; software can be learned. (Humans)
    • Life 3.0: Both hardware and software can be changed at will. (AI machines)
  • Wide vs. narrow intelligence: Humans have wide intelligence. We are generally good at a lot of different tasks and can learn a lot implicitly. Computers (so far) have narrow intelligence. They can calculate and do programmed tasks much better than us, but they will completely fail when they need to account for unwritten constraints, as when someone says, “take me to the airport as fast as possible.”
  • The moment the top narrow intelligences get knit together and reach the minimum threshold of general intelligence, the result will likely surpass human intelligence.
  • What makes us intelligent is the pattern in which the hardware is arranged. Not the building blocks themselves.
  • The software isn’t aware of the hardware. Our bodies are completely different from when we were young, but we feel like the same person.
  • The question of consciousness is key: whether there is a subjective experience at all depends on it.
  • We probably already have the hardware for human-level general intelligence; what we are missing is the software. It is unlikely to use the same architecture as the human brain, though it may be broadly similar. (Planes are much simpler than birds.)
  • AI Safety research needs to go hand-in-hand with AI research. How do we make computers unhackable? How do we contain it in development? How do we ensure system stability?
  • A further issue to overcome: getting computers to explain how a decision was made in an understandable way, instead of just dumping a stack trace.
  • Tegmark counsels his own kids to go into fields that computers are bad at: fields where people pay a premium for the work to be done by humans.
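
The point above about explanations vs. stack traces can be sketched with a toy rule-based classifier. This is a hypothetical illustration, not anything from the podcast; the loan-approval scenario, field names, and thresholds are all invented:

```python
# Toy decision procedure that records which rules fired, so it can report a
# human-readable rationale for its decision rather than an opaque trace.
# All rules, thresholds, and field names are invented for illustration.

def decide(applicant):
    reasons = []
    approved = True
    if applicant["income"] < 30_000:
        approved = False
        reasons.append("income below the 30,000 threshold")
    if applicant["defaults"] > 0:
        approved = False
        reasons.append(f"{applicant['defaults']} prior default(s) on record")
    if approved:
        return True, "approved: all checks passed"
    return False, "denied: " + "; ".join(reasons)

ok, explanation = decide({"income": 25_000, "defaults": 1})
print(explanation)
# → denied: income below the 30,000 threshold; 1 prior default(s) on record
```

Real systems (neural networks especially) make this far harder, since the “rules” are implicit in millions of learned weights; the sketch only shows the kind of output an explainable system would need to produce.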