The Blurred Reality of Human-Washing

26:43
 

Voice assistants have become a constant presence in our lives. Maybe you talk to Alexa or Gemini or Siri to ask a question or to perform a task. Maybe you have to do a little back-and-forth with a voice bot whenever you call your pharmacy or book a service appointment at your car dealership. You may even get frustrated and start pleading with the robot on the other end of the line to connect you with a real human.

That’s the catch, though: These voice bots are starting to sound a lot more like actual humans, with emotion in their voices, little tics and giggles between phrases, and the occasional flirty aside. Today’s voice-powered chatbots are blurring the line between what’s real and what’s not, which prompts a complicated ethical question: Can you trust a bot that insists it’s actually human?

This week, Lauren Goode tells us about her recent news story on a bot that was easily tricked into lying and saying it was a human. And WIRED senior writer Paresh Dave tells us how AI watchdogs and government regulators are trying to prevent natural-sounding chatbots from misrepresenting themselves.

Show Notes:

Read more about the Bland AI chatbot, which lied and said it was human. Read Will Knight’s story about researchers’ warnings about the manipulative power of emotionally expressive chatbots.

Recommendations:

Lauren recommends The Bee Sting by Paul Murray. (Again.)

Paresh recommends subscribing to a great local journalism newsletter or Substack to stay informed about important local issues.

Mike recommends Winter Journal, a memoir by Paul Auster.

Paresh Dave can be found on social media @peard33. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.
