“Model collapse” shows AI doesn’t have the human touch, writer says
AI chatbots have gotten pretty good at generating text that looks like it was written by a real person. That's because they're trained on words and sentences that actual humans wrote, scraped from blogs and news websites. But research now shows that when you feed AI-generated text back into the models to train a new chatbot, after a while, its output degrades until it stops making sense. It's a phenomenon AI researchers are calling "model collapse." Marketplace's Lily Jamali spoke with Clive Thompson, author of "Coders" and a contributing writer for The New York Times Magazine and Wired, about what could be a growing problem as more AI-generated content lands on the web.
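The episode doesn't get into the math, but a toy simulation can give a feel for the idea. The sketch below is an illustration of my own, not the research Thompson discusses: it fits a very simple "model" (a Gaussian) to data, generates new data from that fit, retrains on the synthetic data, and repeats. With a finite sample each round, the estimated spread tends to drift downward, so later generations lose the tails of the original, human-made distribution.

```python
# Toy sketch of "model collapse" (illustrative only, not the cited research).
# Fit a Gaussian to data, sample new data from the fit, refit, and repeat.
# With finite samples, the estimated spread tends to shrink over generations,
# so the later "models" forget the rare, tail-end content of the original data.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: stand-in for "human-written" data.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(20):
    mu, sigma = data.mean(), data.std()              # "train" on current data
    print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    data = rng.normal(loc=mu, scale=sigma, size=50)   # next set is model output
```

Running it typically shows the standard deviation wandering lower as the generations go on, a small-scale analogue of chatbots training on chatbot output.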