Content provided by LessWrong. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
“AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work” by Rohin Shah, Seb Farquhar, Anca Dragan
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
We wanted to share a recap of our recent outputs with the AF community. Below, we fill in some details about what we have been working on, what motivated us to do it, and how we thought about its importance. We hope that this will help people build off things we have done and see how their work fits with ours.
Who are we?
We’re the main team at Google DeepMind working on technical approaches to existential risk from AI systems. Since our last post, we’ve evolved into the AGI Safety & Alignment team, which we think of as AGI Alignment (with subteams like mechanistic interpretability, scalable oversight, etc.), and Frontier Safety (working on the Frontier Safety Framework, including developing and running dangerous capability evaluations). We’ve also been growing since our last post: by 39% last year [...]
---
Outline:
(00:32) Who are we?
(01:32) What have we been up to?
(02:16) Frontier Safety
(02:38) FSF
(04:05) Dangerous Capability Evaluations
(05:12) Mechanistic Interpretability
(08:54) Amplified Oversight
(09:23) Theoretical Work on Debate
(10:32) Empirical Work on Debate
(11:37) Causal Alignment
(12:47) Emerging Topics
(14:57) Highlights from Our Collaborations
(17:07) What are we planning next?
---
First published:
August 20th, 2024
Source:
https://www.lesswrong.com/posts/79BPxvSsjzBkiSyTq/agi-safety-and-alignment-at-google-deepmind-a-summary-of
---
Narrated by TYPE III AUDIO.