Content provided by BlueDot Impact. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by BlueDot Impact or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Imitative Generalisation (AKA ‘Learning the Prior’)

18:14
This post explains a simplified version of Paul Christiano’s mechanism introduced here (referred to there as ‘Learning the Prior’), and why a mechanism like this potentially addresses some of the safety problems with naïve approaches. First, we’ll go through a simple example in a familiar domain, then explain the problems with the example. Then I’ll discuss the open questions for making Imitative Generalisation actually work, and the connection with the Microscope AI idea. A more detailed explanation of exactly what the training objective is (with diagrams), and the correspondence with Bayesian inference, are in the appendix.

Source:

https://www.alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1

Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.

---

A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.

Chapters

1. Imitative Generalisation (AKA ‘Learning the Prior’) (00:00:00)

2. TL;DR (00:00:11)

3. Goals of this post (00:02:22)

4. Example: using IG to avoid overfitting in image classification. (00:03:02)

5. Key difficulties for IG (00:10:11)

6. Relationship with Microscope AI (00:14:45)

80 episodes
