BI 163 Ellie Pavlick: The Mind of a Language Model

1:21:34
 
Content provided by Paul Middlebrooks. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Paul Middlebrooks or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Support the show to get full episodes and join the Discord community.

Check out my free video series about what's missing in AI and Neuroscience

Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies a wide range of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work: what kinds of representations they form in order to produce the language they do. So we discuss how she's going about studying these models. For example, she probes them to see whether something symbol-like might be implemented inside them, even though they are deep learning neural networks, which aren't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding - that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.

0:00 - Intro
2:34 - Will LLMs make us dumb?
9:01 - Evolution of language
17:10 - Changing views on language
22:39 - Semantics, grounding, meaning
37:40 - LLMs, humans, and prediction
41:19 - How to evaluate LLMs
51:08 - Structure, semantics, and symbols in models
1:00:08 - Dimensionality
1:02:08 - Limitations of LLMs
1:07:47 - What do linguists think?
1:14:23 - What is language for?
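
For readers curious what the "probing" mentioned in the description typically looks like in practice, here is a minimal sketch of one common version of the idea: train a small linear classifier on a language model's hidden states and check whether a linguistic property can be read out from them. The model name, the choice of layer, and the toy noun/verb labels below are illustrative assumptions for the sketch, not details from the episode or from Ellie's lab.

```python
# Minimal probing sketch: is a toy part-of-speech distinction linearly
# decodable from a pretrained model's hidden states?
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

# Toy labeled data (hypothetical): single words tagged as noun (0) or verb (1).
words = ["dog", "run", "table", "eat", "river", "sing"]
labels = [0, 1, 0, 1, 0, 1]

features = []
with torch.no_grad():
    for w in words:
        inputs = tokenizer(w, return_tensors="pt")
        outputs = model(**inputs)
        # Hidden state of the first wordpiece (position 1, after [CLS])
        # at an arbitrary middle layer (layer 6 of 12).
        hidden = outputs.hidden_states[6][0, 1, :]
        features.append(hidden.numpy())

# If the linear probe separates nouns from verbs, that property is at least
# linearly encoded in this layer's representations; a probe's failure is
# weaker evidence in the other direction.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("probe training accuracy:", probe.score(features, labels))
```

A real probing study would use held-out data, control tasks, and many layers; this is only meant to make the general approach concrete.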
