Percy Liang on the Center for Research on Foundation Models
Realizing that foundation models are a big deal, scaling, why Percy founded CRFM, Stanford's position in the field, benchmarking, privacy, and CRFM's first and next 30 years.
Transcript: https://web.stanford.edu/class/cs224u/podcast/liang/
- Percy's website
- Percy on Twitter
- CRFM
- On the opportunities and risks of foundation models
- ELMo: Deep contextualized word representations
- BERT: Pre-training of deep bidirectional Transformers for language understanding
- Sam Bowman
- GPT-2
- Adversarial examples for evaluating reading comprehension systems
- System 1 and System 2
- The Unreasonable Effectiveness of Data
- Chinchilla: Training Compute-Optimal Large Language Models
- GitHub Copilot
- LaMDA: Language models for dialog applications
- AI Test Kitchen
- DALL-E 2
- Richard Socher on the CS224U podcast
- you.com
- Chris Ré
- Fei-Fei Li
- Chris Manning
- HAI
- Rob Reich
- Erik Brynjolfsson
- Dan Ho
- Russ Altman
- Jeff Hancock
- The time is now to develop community norms for the release of foundation models
- Twitter Spaces event
- Best practices for deploying language models
- Model Cards for model reporting
- Datasheets for datasets
- Strathern's law