Content provided by Matt Cartwright and Jimmy Rhodes. All podcast content, including episodes, graphics, and descriptions, is uploaded and provided directly by Matt Cartwright and Jimmy Rhodes or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

EMERGENCY EPISODE: Safe SuperIntelligence and Claude 3.5 Sonnet

31:05
 

Manage episode 425846781 series 3564150

Send us a Text Message.

Just a few days after we recorded our podcast relaunch and Matt's 'Call to Arms' on mitigating the risks of AI development, Ilya Sutskever emerged from the wilderness. He announced, with very few details, the founding of Safe Superintelligence, with the implicit mission of building superintelligent AI aligned with human values to prevent catastrophic outcomes for humanity.

In our first ever EMERGENCY EPISODE we dissect this potentially seismic disruption in the AI landscape. After a failed coup against Sam Altman that ultimately led to his own exit from OpenAI, Sutskever is now on a mission to create ASI that prioritizes humanity's welfare. We navigate the complexities of this new direction and reflect on the ethical conundrums that lie ahead, especially in contrast to OpenAI's transformation from a non-profit to a for-profit juggernaut. Is Ilya the new Effective Altruism hero, or just the latest AI expert to decide to do 'The Worst Thing Possible'?
But that's not all: meet Claude 3.5 Sonnet, Anthropic's latest large language model, which is setting unprecedented benchmarks. From improved coding capabilities to multimodal functionality, Claude 3.5 Sonnet is a game-changer for both developers and casual users. We'll also explore the UK's pioneering steps in AI safety through the UK Artificial Intelligence Safety Institute (AISI), and dive into a provocative paper by University of Glasgow scholars that questions the factual reliability of models like ChatGPT. This episode promises a riveting discussion on the future of AI, safety, and the pursuit of truth in a digital age.
Hicks et al. (2024), ChatGPT is Bullshit, Ethics and Information Technology (springer.com)
Introducing Claude 3.5 Sonnet; Pascal Biese, Has OpenAI Lost Its Edge?, LLM Watch


Chapters

1. Welcome to Preparing for AI (00:00:00)

2. Safe SuperIntelligence (00:00:44)

3. New best LLM: Claude 3.5 Sonnet (00:12:47)

4. ChatGPT is Bullshit (Academic Paper!) (00:24:32)

5. Hard v Soft Bullshit (Outro Track) (00:29:07)

24 episodes
