Is AI an existential risk? A discussion | Ryan Kidd, James Fodor | EAGxAustralia 2023

56:07
 
Content provided by Aaron Bergman. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Aaron Bergman or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

Ryan is Co-Director of the ML Alignment Theory Scholars Program, a Board Member and Co-Founder of the London Initiative for Safe AI, and a Manifund Regrantor. Previously, he completed a PhD in Physics at the University of Queensland and ran UQ's Effective Altruism student group for ~3 years. Ryan's ethics are largely preference utilitarian and cosmopolitan; he is deeply concerned about near-term x-risk and safeguarding the long-term future.

James Fodor is a PhD student in the Decision, Risk and Financial Sciences Program. He completed graduate studies in physics and economics at the University of Melbourne, and a master's degree in neuroscience at the Australian National University. He has also worked as a research assistant in structural biology at Monash University. Outside of research, James has a keen interest in science, philosophy, and critical thinking. He is passionate about Effective Altruism, including causes such as global poverty and animal welfare.
