Content provided by GV (Google Ventures). All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by GV (Google Ventures) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
S4E6: MIT’s James DiCarlo on Reverse-Engineering Human Sight with AI

45:00
 

Season 4 of our Theory and Practice podcast investigates the powerful new world of AI applications and what it means to be human in the age of human-like artificial intelligence. Episode 6 explores what happens when AI is explicitly used to understand humans.

In this episode, we're joined by James DiCarlo, the Peter de Florez Professor of Neuroscience at Massachusetts Institute of Technology and Director of the MIT Quest for Intelligence. Trained in biomedical engineering and medicine, Professor DiCarlo brings a technical mindset to understanding the machine-like processes in human brains. His focus is on the machinery that enables us to see.

"Anything that our brain achieves is because there's a machine in there. It's not magic; there's some kind of machine running. So that means there is some machine that could emulate what we do. And our job is to figure out the details of that machine. So the problem is someday tractable. It's just a question of when."

Professor DiCarlo unpacks how well convolutional neural networks (CNNs), a form of deep learning, mimic the human brain. These networks excel at finding patterns in images in order to recognize objects. A key difference in humans is that our visual system feeds information into many areas of the brain and receives feedback from them. Professor DiCarlo argues that CNNs help him and his team understand how our brains gather vast amounts of data from a limited field of vision in a millisecond glimpse.

Alex and Anthony also discuss the potential clinical applications of machine learning — from using an ECG to estimate a person's biological age to assessing a person's cardiovascular health from retinal images.


36 episodes

