
AI is Creating a ‘Hive Mind' — Scientists Just Proved It


The big AI conference NeurIPS is under way in San Diego this week, and nearly 6,000 papers presented there will set the technical, intellectual, and ethical course for AI for the year.

NeurIPS is a strange pseudo-academic gathering, where researchers from universities present their findings alongside employees of Apple and Nvidia, part of the public-private revolving door of the tech industry. Sometimes they’re the same person: increasingly, academic researchers are allowed to also hold a job at a big company. I can’t blame them for taking opportunities where they arise—I’m sure I would, in their position—but it’s particularly bothersome to me as a journalist, because it limits their ability to speak publicly.

The papers cover robotics, alignment, and how to deliver kitty cat pictures more efficiently, but one paper in particular—awarded a top prize at the conference—grabbed me by the throat.

A coalition from Stanford, the Allen Institute, Carnegie Mellon, and the University of Washington presented “Artificial Hive Mind: The Open-Ended Homogeneity of Language Models (and Beyond),” which shows that the average large language model converges toward a narrow set of responses when asked big, brainstormy, open-ended questions. Worse, different models tend to produce similar answers, meaning that when you switch from ChatGPT to Gemini or Claude for a “new perspective,” you’re not getting one. I’ve warned for years that AI could shrink our menu of choices while making us believe we have more of them. This paper shows just how real that risk is. Today I walk through the NeurIPS landscape, the other trends emerging at the conference, and why “creative assistance” may actually be the crushing of creativity in disguise. Yay!
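
For the curious, here is a minimal sketch (not the paper’s actual methodology) of how you could test the homogeneity claim yourself: collect answers to one open-ended prompt from several chatbots, then score how alike those answers are using sentence embeddings. The sample responses below are hypothetical stand-ins, and the choice of embedding model is my own assumption.

```python
# A minimal sketch (not the paper's method): score how alike several
# chatbots' answers to one open-ended prompt are, using the mean
# pairwise cosine similarity of sentence embeddings.
# Assumes: pip install sentence-transformers scikit-learn numpy
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical responses to the same brainstormy prompt,
# e.g. "Pitch me an unusual business idea."
responses = [
    "A subscription service that delivers rare houseplants monthly.",
    "A monthly subscription box for rare and exotic houseplants.",
    "A service mailing subscribers a surprise rare plant each month.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode(responses)

# Values near 1.0 mean the "different" models are effectively
# giving the same answer.
sims = cosine_similarity(vectors)
pairwise = sims[np.triu_indices_from(sims, k=1)]
print(f"Mean pairwise similarity: {pairwise.mean():.2f}")
```

Run across many prompts and many models, a consistently high score is exactly the “hive mind” effect the paper describes.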
