2. Connor Leahy on GPT3, EleutherAI and AI Alignment

1:28:46
 
In the first part of the podcast, we chat about how to speed up GPT-3 training, how Connor updated on recent announcements of large language models, why GPT-3 is AGI for some specific definitions of AGI [1], the obstacles to plugging planning into GPT-N, and why the brain might approximate something like backprop. We end this first chat with Solomonoff priors [2] (sketched briefly after the footnotes below), adversarial attacks such as Pascal's Mugging [3], and whether direct work on AI Alignment is currently tractable. In the second part, we chat about his current projects at EleutherAI [4][5], multipolar scenarios, and reasons to work on technical AI Alignment research.
[1] https://youtu.be/HrV19SjKUss?t=4785
[2] https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference
[3] https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities
[4] https://www.eleuther.ai/
[5] https://discord.gg/j65dEVp5
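As background for the Solomonoff priors discussion, here is the standard textbook definition (matching the Wikipedia reference in [2]); this is a sketch for context, not a formula from the episode itself. For a universal prefix Turing machine U, the Solomonoff prior of a string x is

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

where the sum ranges over programs p whose output begins with x and |p| is the length of p in bits, so shorter (simpler) programs dominate the prior. Pascal's Mugging [3] then exploits naive expected-utility reasoning under such broad priors: a claimed payoff can grow much faster than its probability shrinks, so the product stays large (e.g. probability 10^-30 against a payoff of 10^40 utils still yields an expected value of 10^10).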
