Jill Nephew - Using ChatGPT is Like Eating Plastic for Your Cognition

1:29:36
 

In this conversation I speak with Jill Nephew, a former AI black box algorithm engineer with extensive experience developing software architectures, who holds a highly heterodox perspective on the risks associated with LLM AIs. We explore Jill's argument that using LLMs like ChatGPT or Bard is like eating plastic for your cognitive agency and natural intelligence, how AIs could cause the rise of new 'supercults', and how another world is possible, if only we learn to ask the right questions.

If you enjoy this podcast and want to support it, please consider becoming a Patreon supporter.

  • [3:52] The critical difference between cognition and thinking
  • [9:49] Why is using LLMs like eating plastic for our cognition?
  • [16:04] What LLMs represent in the context of the meta-crisis
  • [24:51] How LLMs signal trustworthiness and use randomness to confuse us and unground our cognition
  • [36:00] What we can expect to see as LLMs introduce more plastic into our cognition
  • [38:29] What ways of interacting with LLMs might be safe?
  • [47:52] What are cults and how do they function in relationship to our cognition?
  • [53:29] The possibility of an AI 'supercult'
  • [55:27] The most dangerous thing we do to each other
  • [59:48] The deep meaningfulness and richness of grounded cognition, going beyond trauma healing, beyond the monkey mind
  • [1:07:17] Technology to reclaim natural intelligence, the practice of inquiry, the difference between 'good' inquiry and 'bad' inquiry
  • [1:12:31] The rigorous engineering behind good inquiry forms
  • [1:13:29] The power of inquiry, the feeling of insight, how to achieve a quiet mind
  • [1:18:29] Jill's advice for how to respond to the acceleration of planetary destruction

Jill's Conversation with Layman Pascal on the Integral Stage

Inqwire, the software Jill has developed to help people reclaim their natural intelligence.

Support Daniel on Patreon
