
Interviewing Riley Goodside on the science of prompting

1:08:39

More information: https://www.interconnects.ai/p/riley-goodside-on-science-of-prompting

Riley Goodside is a staff prompt engineer at Scale AI. Previously working in data science, he is often seen as the defining example of the new role of “prompt engineer.” He regularly posts incisive prompts that elicit notable behavior from the most popular AI models.

I really resonated with a saying from Anthropic’s recent podcast on prompt engineering: “now we write essays and treat them as code.” To be good at prompting, you need to understand that natural language now operates the way our code used to.

This episode is a masterclass on why you should care about prompting and how it impacts results. Of course, there’s also plenty of great discussion of recent models that reflect the need for different and/or better prompting. Enjoy it!

00:00:09 Introduction
00:02:40 Riley's path to LLMs
00:07:54 Impact of ChatGPT on prompt engineering
00:12:03 OpenAI's o1
00:18:21 Autoregressive inference and prompting sensitivities
00:24:48 Reflection 70B model and its implications
00:28:00 Impact of prompting on evaluation
00:32:43 Prompting vs. Google search
00:46:55 Prompting and RLHF/post-training
00:56:57 Prompting of AI agents
01:01:20 Importance of hands-on experience with language models
01:05:00 Importance and challenges of AI model evaluation
