AF - UC Berkeley course on LLMs and ML Safety by Dan H

Content provided by The Nonlinear Fund. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: UC Berkeley course on LLMs and ML Safety, published by Dan H on July 9, 2024 on The AI Alignment Forum.

The UC Berkeley course I co-taught now has lecture videos available: https://www.youtube.com/playlist?list=PLJ66BAXN6D8H_gRQJGjmbnS5qCWoxJNfe

Course site: Understanding LLMs: Foundations and Safety

Unrelatedly, a more conceptual AI safety course has its content available at https://www.aisafetybook.com/

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.