
Content provided by Kanjun Qiu. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Kanjun Qiu or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Episode 10: Dylan Hadfield-Menell, UC Berkeley/MIT, on the value alignment problem in AI

1:31:32
 

Dylan Hadfield-Menell (Google Scholar) (Website) recently finished his PhD at UC Berkeley and is starting as an assistant professor at MIT. He works on designing AI algorithms that pursue the intended goals of their users, designers, and society at large. This is known as the value alignment problem.

Highlights from our conversation:

👨‍👩‍👧‍👦 How to align AI to human values

📉 Consequences of misaligned AI → bias and misdirected optimization

📱 Better AI recommender systems

