
Content provided by podcast@nature.com and Springer Nature Limited. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by podcast@nature.com and Springer Nature Limited or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Audio long-read: Rise of the robo-writers

23:47
 
 

Manage episode 289273758 series 3137

In 2020, the artificial intelligence (AI) GPT-3 wowed the world with its ability to write fluent streams of text. Trained on billions of words from books, articles and websites, GPT-3 was the latest in a series of ‘large language model’ AIs that are used by companies around the world to improve search results, answer questions, or propose computer code.


However, these large language models are not without their issues. Their training is based on the statistical relationships between words and phrases, which can lead them to generate toxic or dangerous outputs.
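The idea of learning "statistical relationships between words" can be illustrated with a toy bigram model: count which word tends to follow which in a corpus, then generate text by sampling from those counts. This is a minimal sketch for illustration only; GPT-3-class models use large neural networks over long contexts, not raw bigram counts, but the sketch shows why such systems can only echo patterns present in their training data.

```python
import random
from collections import defaultdict

# Toy bigram language model: learn which word follows which
# in a tiny corpus, then generate text by sampling.
# (Illustrative only -- real large language models are neural
# networks trained on billions of words.)

corpus = "the model writes text and the model answers questions".split()

# Count observed continuations for each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Because the model can only reproduce continuations it has seen, biased or toxic text in the corpus flows straight into its output — the core issue the feature discusses.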


Preventing responses like these is a huge challenge for researchers, who are attempting to do so by addressing biases in training data, or by instilling these AIs with common-sense and moral judgement.


This is an audio version of our feature: Robo-writers: the rise and risks of language-generating AI



Hosted on Acast. See acast.com/privacy for more information.


796 episodes


Nature Podcast

146,756 subscribers


