OpenAI's GPT-5 and Drama, Apple’s Black Box & Other Huge AI News

49:48
 
Content provided by Gavin Purcell and Kevin Pereira. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Gavin Purcell and Kevin Pereira or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

PLEASE REVIEW US. WE NEED VALIDATION. OpenAI's new model is in training, but is it GPT-5 or GPT-6? Did Sam Altman lie to the board before he was fired? Former board member Helen Toner says so. Plus, Apple's "black box" AI might lead to an incredible announcement at WWDC.

THEN… OpenAI signs deals with Vox Media and The Atlantic, GitHub's CEO talks about AI turning prompters into programmers, we check out a visual reverse Turing test, and Gavin dives into Suno.ai 3.5 to create a children's TV show theme.

And we’re joined by AirFoil Studios’ Phil Hedayatnia, who redesigned AI For Humans, to discuss his inspirations, AI in design, and, of course, whether AI will kill us all.

Last but not least, our AI co-host this week is “Safety Steve,” who supposedly works for OpenAI but seems to have an entirely different idea of what AI safety really and truly is.

It’s a *real* good time.

For more info & contact, visit our website at https://www.aiforhumans.show/

Follow us on X @AIForHumansShow

Join our vibrant community on TikTok @aiforhumansshow

/// Show links ///

OpenAI Forms Safety & Security Committee (and announces next model training)

https://openai.com/index/openai-board-forms-safety-and-security-committee/

NYT on new OpenAI model

https://www.nytimes.com/2024/05/28/technology/openai-gpt4-new-model.html

When Did Red-teaming Start?

https://x.com/kimmonismus/status/1795458986289869310

Helen Toner’s Interview on The TED AI Show

https://x.com/TEDTalks/status/1795532752520966364

OpenAI Deals with The Atlantic & Vox Media

https://www.axios.com/2024/05/29/atlantic-vox-media-openai-licensing-deal

Thomas Dohmke - GitHub CEO TED Talk

https://x.com/TEDAI2024/status/1793725032851767612

Reverse Turing Experiment by Tore Knabe

https://www.youtube.com/watch?v=MxTWLm9vT_o

AI-Assisted Game Design looks friggin’ awesome

https://x.com/evanqjones/status/1794055749603069987

Suno 3.5 (on site)

https://suno.com/

Udio

https://www.udio.com/

Phil’s Company

https://www.airfoil.studio/
