Zaid Zada, Princeton University – Brains and Machines Navigate a Common Language Space for Communication

 

On this Student Spotlight: Understanding language is key to being human…or a chatbot.

Zaid Zada, Ph.D. candidate at Princeton University, examines language and language models.

Zaid is a Ph.D. candidate at Princeton University studying how the brain processes language, how multiple brains synchronize to share information with each other, and what language models can teach us about the brain’s linguistic capabilities.

Brains and Machines Navigate a Common Language Space for Communication

https://academicminute.org/wp-content/uploads/2024/10/10-09-24-Princeton-Brains-and-Machines-Navigate-a-Common-Language-Space-for-Communication.mp3

As you’re listening to me speak, our brains are becoming more and more synchronized. This is the power of language. It allows me to transform the neural activity that represents my intended message into words to send to you. Then, when you hear the words, your brain recreates the message by mirroring the same neural activity as mine. We call this kind of synchrony “speaker–listener coupling.” In some sense, this is not very surprising: to understand each other, we have to agree on what words mean in the context in which they’re used. We experience this subjectively when we say we’re “on the same wavelength.”
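The transcript doesn’t define the measure, but speaker–listener coupling is commonly quantified as a correlation between the two brains’ activity computed across a range of time lags. The sketch below is a minimal, hypothetical illustration of that idea, using synthetic signals in place of real neural recordings.

```python
# Minimal sketch of speaker-listener coupling as a lagged correlation.
# Hypothetical inputs: two 1-D arrays of neural activity (one channel
# each from speaker and listener), sampled on a common clock.
import numpy as np

def lagged_coupling(speaker, listener, max_lag):
    """Correlate the speaker's activity with the listener's at each lag
    in [-max_lag, max_lag]. Positive lags mean the listener's activity
    trails the speaker's, as expected when speech carries the message."""
    lags = np.arange(-max_lag, max_lag + 1)
    coupling = []
    for lag in lags:
        if lag >= 0:
            s, l = speaker[:len(speaker) - lag], listener[lag:]
        else:
            s, l = speaker[-lag:], listener[:lag]
        coupling.append(np.corrcoef(s, l)[0, 1])
    return lags, np.array(coupling)

# Toy usage: the listener's signal is a noisy, delayed copy of the
# speaker's, so coupling should peak at the imposed delay.
rng = np.random.default_rng(0)
speaker = rng.standard_normal(1000)
listener = np.roll(speaker, 25) + 0.5 * rng.standard_normal(1000)
lags, r = lagged_coupling(speaker, listener, max_lag=50)
print("peak coupling at lag", lags[np.argmax(r)])  # ≈ 25 samples
```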

Our research delves deeper into this phenomenon. In our study, we collected a unique dataset of high-quality brain recordings from people engaged in natural conversations. Beyond confirming speaker–listener coupling, we found that a shared “neural code” for language underlies it: the way our brains represent words is similar across people. What’s more, we found that large language models exhibit a neural code for language similar to our brains’.

Large language models are at the core of the latest AI technologies such as ChatGPT. It’s become clear that they learn enough about language to converse meaningfully with us. Our study found that large language models encode linguistic information similarly to the human brain. This similarity allowed us to track how a thought in the speaker’s brain is encoded into words and transferred, word by word, to the listener’s brain. We found that linguistic content emerges in the speaker’s brain before a word is articulated, and that the same linguistic content rapidly reemerges in the listener’s brain after the word is heard. This discovery highlights the shared neural code that we, and machines, use to communicate.
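The transcript doesn’t spell out the analysis, but a standard way to test whether a language model and a brain share a code is a linear “encoding model”: fit a regression from the model’s per-word embeddings to the neural activity recorded around each word, then check whether it predicts activity for held-out words. The sketch below is a hypothetical illustration of that approach with synthetic stand-ins for both the embeddings and the recordings; in a real analysis these would come from a large language model and from the conversation recordings.

```python
# Encoding-model sketch: can word embeddings linearly predict the
# brain activity evoked by each word? All data here are synthetic
# stand-ins for contextual LLM embeddings and per-word recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, emb_dim, n_channels = 800, 64, 32

# Hypothetical stand-ins: one embedding row and one row of neural
# activity (averaged around word onset) per spoken word.
embeddings = rng.standard_normal((n_words, emb_dim))
true_map = rng.standard_normal((emb_dim, n_channels))
neural = embeddings @ true_map + 2.0 * rng.standard_normal((n_words, n_channels))

X_tr, X_te, y_tr, y_te = train_test_split(
    embeddings, neural, test_size=0.25, random_state=0
)

# Fit one regularized linear map from embedding space to all channels.
model = Ridge(alpha=10.0).fit(X_tr, y_tr)

# Score: correlation between predicted and actual activity per channel
# on held-out words. Above-chance correlations are the evidence that
# the model's code for words overlaps with the brain's.
pred = model.predict(X_te)
r = [np.corrcoef(pred[:, c], y_te[:, c])[0, 1] for c in range(n_channels)]
print(f"mean held-out correlation: {np.mean(r):.2f}")
```

On real data, where the predictive window falls relative to word onset is what would let one see linguistic content emerge in the speaker’s brain before articulation and reemerge in the listener’s brain after it.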

Read More:
[Neuron] – A shared model-based linguistic space for transmitting our thoughts from brain to brain in natural conversations
[The Conversation] – AIs encode language like brains do − opening a window on human conversations

The post Zaid Zada, Princeton University – Brains and Machines Navigate a Common Language Space for Communication appeared first on The Academic Minute.

