
Content provided by Alex Kantrowitz. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Alex Kantrowitz or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

What the Ex-OpenAI Safety Employees Are Worried About — With William Saunders and Lawrence Lessig

48:13
 

William Saunders is a former member of OpenAI's Superalignment team. Lawrence Lessig is a professor of Law and Leadership at Harvard Law School. The two come on to discuss what's troubling ex-OpenAI safety team members. We discuss whether Saunders' former team saw something secret and damning inside OpenAI, or whether it was a general cultural issue. Then, we talk about the 'Right to Warn,' a policy that would give AI insiders the right to share concerning developments with third parties without fear of reprisal. Tune in for a revealing look into the eye of a storm brewing in the AI community.

----

You can subscribe to Big Technology Premium for 25% off at https://bit.ly/bigtechnology

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com


323 episodes

