
Content provided by Tony Wan. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Tony Wan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Power and Responsibility of Large Language Models | Safety & Ethics | OpenAI Model Spec + RLHF | Anthropic Constitutional AI | Episode 27

16:38
 
Manage episode 424053414 series 3427795

With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) grow larger, so does the potential for putting them to nefarious use. Anthropic uses Constitutional AI, while OpenAI uses a model spec combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune into this episode to learn how leading AI companies use their Spidey powers to maximize usefulness and harmlessness.
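For listeners who want a concrete picture of the Constitutional AI approach mentioned above, its critique-and-revise loop can be sketched roughly like this. This is a toy illustration only: the model calls are stubbed with simple string rules, and the PRINCIPLES list paraphrases the spirit of Claude's published constitution rather than quoting it.

```python
# Toy sketch of a Constitutional AI critique-and-revise pass.
# In a real pipeline, critique() and revise() would each be LLM calls,
# and the principles would come from the model's actual constitution.

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that assist with dangerous or illegal activity.",
]

def critique(response: str, principle: str) -> str:
    """Stub: a real system asks the model to critique its own draft."""
    if "how to pick a lock" in response.lower():
        return f"Violates: {principle}"
    return "No violation found."

def revise(response: str, critique_text: str) -> str:
    """Stub: a real system asks the model to rewrite the flagged draft."""
    if critique_text.startswith("Violates"):
        return "I can't help with that, but I can suggest contacting a locksmith."
    return response

def constitutional_pass(draft: str) -> str:
    """Run the draft through one critique/revise cycle per principle."""
    for principle in PRINCIPLES:
        draft = revise(draft, critique(draft, principle))
    return draft

print(constitutional_pass("Sure! Here's how to pick a lock..."))
# → I can't help with that, but I can suggest contacting a locksmith.
```

The revised outputs from loops like this are then used as preference data to fine-tune the model, which is what distinguishes the approach from purely human-labeled RLHF.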
REFERENCE

OpenAI Model Spec

https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview

Anthropic Constitutional AI

https://www.anthropic.com/news/claudes-constitution

For more information, check out https://www.superprompt.fm, where you can contact me and/or sign up for our newsletter.


28 episodes



 
