Inside OpenAI's trust and safety operation - with Rosie Campbell

45:03
 
Content provided by Helen Byrne. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Helen Byrne or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

No organisation in the AI world is under more intense scrutiny than OpenAI. The maker of DALL-E, GPT-4, ChatGPT and Sora is constantly pushing the boundaries of artificial intelligence and has supercharged the general public’s enthusiasm for AI technologies.
With that elevated position come questions about how OpenAI can ensure its models are not used for malign purposes.
In this interview, we talk to Rosie Campbell from OpenAI’s policy research team about the many processes and safeguards in place to prevent abuse. Rosie also discusses the team’s forward-looking work anticipating longer-term risks that might emerge with more advanced AI systems.
Helen and Rosie discuss the challenges associated with agentic systems (AI that can interface with the wider world via APIs and other technologies), red-teaming new models, and whether advanced AIs should have ‘rights’ in the same way that humans or animals do.
You can read the paper referenced in this episode ‘Practices for Governing Agentic AI Systems’ co-written by Rosie and her colleagues: https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf
Watch the video of the interview here: https://www.youtube.com/watch?v=81LNrlEqgcM
