Content provided by TWIML and Sam Charrington. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by TWIML and Sam Charrington or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

AI Robustness and Safety with Dario Amodei - TWiML Talk #75

36:43
Manage episode 209978249 series 2355587
This show is part of a series that I'm really excited about, in part because I've been working to bring it to you for quite a while now. The series samples the interesting work being done at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman, and others. In this episode I'm joined by Dario Amodei, Team Lead for Safety Research at OpenAI. While in San Francisco a few months ago, I spent some time at the OpenAI office, where I sat down with Dario to chat about the work happening at OpenAI around AI safety. Dario and I dive into the two areas of AI safety that he and his team are focused on: robustness and alignment. We also touch on his research with the Google DeepMind team, the OpenAI Universe tool, and how human interactions can be incorporated into reinforcement learning models. This was a great conversation, and along with the other shows in this series, this is a nerd alert show! To find the notes for this show, visit twimlai.com/talk/75. For more info on this series, visit twimlai.com/openai.
721 episodes

