AI Safety with Shazeda Ahmed

57:06
 
Content provided by Ellie Anderson, Ph.D., and David Peña-Guzmán, Ph.D. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Ellie Anderson, Ph.D., and David Peña-Guzmán, Ph.D., or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.

Welcome your robot overlords! In episode 101 of Overthink, Ellie and David speak with Dr. Shazeda Ahmed, a specialist in AI safety, to dive into the philosophy guiding artificial intelligence. With the rise of LLMs like ChatGPT, the lofty utilitarian principles of Effective Altruism have taken the tech world by storm. Many who work on AI safety and ethics worry about the dangers of AI, from how automation might put entire categories of workers out of a job to how future forms of AI might pose a catastrophic “existential risk” for humanity as a whole. Yet optimistic CEOs portray AI as the beginning of an easy, technology-assisted utopia. Who is right about AI: the doomers or the utopians? Whose voices are part of the conversation in the first place? Is AI risk talk spearheaded by well-meaning experts or by billionaire investors? And can philosophy guide discussions about AI toward the right thing to do?

Check out the episode's extended cut here!


Nick Bostrom, Superintelligence
Adrian Daub, What Tech Calls Thinking
Virginia Eubanks, Automating Inequality
Mollie Gleiberman, “Effective Altruism and the strategic ambiguity of ‘doing good’”
Matthew Jones and Chris Wiggins, How Data Happened
William MacAskill, What We Owe the Future
Toby Ord, The Precipice
Inioluwa Deborah Raji et al., “The Fallacy of AI Functionality”
Inioluwa Deborah Raji and Roel Dobbe, “Concrete Problems in AI Safety, Revisited”
Peter Singer, Animal Liberation
Amia Srinivasan, “Stop the Robot Apocalypse”

Patreon | patreon.com/overthinkpodcast

Website | overthinkpodcast.com

Instagram & Twitter | @overthink_pod

Email | Dearoverthink@gmail.com

YouTube | Overthink podcast

Support the Show.
