Content provided by The 80,000 Hours Podcast and the 80,000 Hours team. All podcast content including episodes, graphics, and podcast descriptions is uploaded and provided directly by The 80,000 Hours Podcast and the 80,000 Hours team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.

#92 – Brian Christian on the alignment problem

2:55:46
Brian Christian is a bestselling author with a particular knack for accurately communicating difficult or technical ideas from both mathematics and computer science.

Listeners loved our episode about his book Algorithms to Live By — so when the team read his new book, The Alignment Problem, and found it to be an insightful and comprehensive review of the state of the research into making advanced AI useful and reliably safe, getting him back on the show was a no-brainer.

Brian has so much of substance to say that this episode will likely interest people who know a lot about AI as well as those who know a little, and people who are nervous about where AI is going as well as those who aren't nervous at all.

Links to learn more, summary and full transcript.

Here’s a tease of 10 Hollywood-worthy stories from the episode:

The Riddle of Dopamine: The development of reinforcement learning solves a long-standing mystery of how humans are able to learn from their experience.
ALVINN: A student teaches a military vehicle to drive between Pittsburgh and Lake Erie, without intervention, in the early 1990s, using a computer with a tenth the processing capacity of an Apple Watch.
Couch Potato: An agent trained to be curious is stopped in its quest to navigate a maze by a paralysing TV screen.
Pitts & McCulloch: A homeless teenager and his foster father figure invent the idea of the neural net.
Tree Senility: Agents become so good at living in trees to escape predators that they forget how to leave, starve, and die.
The Danish Bicycle: A reinforcement learning agent figures out that it can better achieve its goal by riding in circles as quickly as possible rather than by reaching its purported destination.
Montezuma's Revenge: By 2015 a reinforcement learner can play 60 different Atari games — the majority impossibly well — but can’t score a single point on one game humans find tediously simple.
Curious Pong: Two novelty-seeking agents, forced to play Pong against one another, create increasingly extreme rallies.
AlphaGo Zero: A computer program becomes superhuman at Chess and Go in under a day by attempting to imitate itself.
Robot Gymnasts: Over the course of an hour, humans teach robots to do perfect backflips just by telling them which of two random actions looks more like a backflip.

We also cover:

• How reinforcement learning actually works, and some of its key achievements and failures
• How a lack of curiosity can cause AIs to fail to be able to do basic things
• The pitfalls of getting AI to imitate how we ourselves behave
• The benefits of getting AI to infer what we must be trying to achieve
• Why it’s good for agents to be uncertain about what they're doing
• Why Brian isn’t that worried about explicit deception
• The interviewees Brian most agrees with, and most disagrees with
• Developments since Brian finished the manuscript
• The effective altruism and AI safety communities
• And much more

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

263 episodes