Content provided by CBC. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by CBC or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Did Google make conscious AI?

26:24
Earlier this week, Blake Lemoine, an engineer who works for Google’s Responsible AI department, went public with his belief that Google’s LaMDA chatbot is sentient. LaMDA, or Language Model for Dialogue Applications, is an artificial intelligence program that mimics speech by predicting which words are most likely to follow the prompts it is given. While some experts believe that conscious AI will be possible in the future, many in the field think that Lemoine is mistaken — and that the conversation he has stirred up about sentience distracts from the immediate and pressing ethical questions surrounding Google’s control over this technology and the ease with which people can be fooled by it. Today on Front Burner, Gary Marcus, cognitive scientist and author of Rebooting AI, discusses LaMDA, the trouble with testing for consciousness in AI, and what we should really be thinking about when it comes to AI’s ever-expanding role in our day-to-day lives.
1615 episodes

Front Burner

1,629 subscribers


