Content provided by Sentience Institute. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Sentience Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Raphaël Millière on large language models

1:49:27
 
Ultimately, if you want more human-like systems that exhibit more human-like intelligence, you would want them to actually learn like humans do, by interacting with the world. So interactive learning, not just passive learning. You want something more active, where the model actually tests out hypotheses and learns from the feedback it gets from the world about those hypotheses, the way children do; it should learn all the time. If you observe young babies and toddlers, they are constantly experimenting. They're like little scientists: you see babies grabbing their feet, testing whether that's part of their body or not, and very quickly learning all these things. Language models don't do that. They don't explore in this way. They don't have the capacity for interaction in this way.

  • Raphaël Millière

How do large language models work? What are the dangers of overclaiming and underclaiming the capabilities of large language models? What are some of the most important cognitive capacities to understand for large language models? Are large language models showing sparks of artificial general intelligence? Do language models really understand language?

Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society and a Lecturer in the Philosophy Department at Columbia University. He completed his DPhil (PhD) in philosophy at the University of Oxford, where he focused on self-consciousness. His interests lie primarily in the philosophy of artificial intelligence and cognitive science. He is particularly interested in assessing the capacities and limitations of deep artificial neural networks and establishing fair and meaningful comparisons with human cognition in various domains, including language understanding, reasoning, and planning.

Topics discussed in the episode:

  • Introduction (0:00)
  • How Raphaël came to work on AI (1:25)
  • How do large language models work? (5:50)
  • Deflationary and inflationary claims about large language models (19:25)
  • The dangers of overclaiming and underclaiming (25:20)
  • Summary of cognitive capacities large language models might have (33:20)
  • Intelligence (38:10)
  • Artificial general intelligence (53:30)
  • Consciousness and sentience (1:06:10)
  • Theory of mind (1:18:09)
  • Compositionality (1:24:15)
  • Language understanding and referential grounding (1:30:45)
  • Which cognitive capacities are most useful to understand for various purposes? (1:41:10)
  • Conclusion (1:47:23)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show

23 episodes
