Content provided by Ben Chugg and Vaden Masrani. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Ben Chugg and Vaden Masrani or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.

#49 - AGI: Could The End Be Nigh? (With Rosie Campbell)

1:24:53
 
 

Manage episode 358668352 series 3418237

When big bearded men wearing fedoras begin yelling at you that the end is nigh and superintelligence is about to kill us all, what should you do? Vaden says don't panic, and Ben is simply awestruck by the ability to grow a beard in the first place.

To help us think through the potential risks and rewards of ever more impressive machine learning models, we invited Rosie Campbell on the podcast. Rosie is on the safety team at OpenAI and, while she's more worried about the existential risks of AI than we are, she's just as keen on some debate over a bottle of wine.

We discuss:

  • Whether machine learning poses an existential threat
  • How concerned we should be about existing AI
  • Whether deep learning can get us to artificial general intelligence (AGI)
  • Whether AI safety is simply quality assurance
  • How we can test whether an AI system is creative

Contact us

Prove you're creative by inventing the next big thing and then send it to us at incrementspodcast@gmail.com

Special Guest: Rosie Campbell.

Support Increments


76 episodes
