Content provided by Sequoia Capital. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Sequoia Capital or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
LangChain’s Harrison Chase on Building the Orchestration Layer for AI Agents

49:50
Last year, AutoGPT and Baby AGI captured our imaginations—agents quickly became the buzzword of the day…and then things went quiet. AutoGPT and Baby AGI may have marked a peak in the hype cycle, but this year has seen a wave of agentic breakouts on the product side, from Klarna’s customer support AI to Cognition’s Devin.

Harrison Chase of LangChain is focused on enabling the orchestration layer for agents. In this conversation, he explains what’s changed that’s allowing agents to improve performance and find traction.

Harrison shares what he’s optimistic about, where he sees promise for agents vs. what he thinks will be trained into models themselves, and discusses novel kinds of UX that he imagines might transform how we experience agents in the future.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital

Mentioned:

Transcript: https://www.sequoiacap.com/podcast/training-data-harrison-chase/

00:00 Introduction

01:21 What are agents?

05:00 What is LangChain’s role in the agent ecosystem?

11:13 What is a cognitive architecture?

13:20 Is bespoke and hard coded the way the world is going, or a stop gap?

18:48 Focus on what makes your beer taste better

20:37 So what?

22:20 Where are agents getting traction?

25:35 Reflection, chain of thought, other techniques?

30:42 UX can influence the effectiveness of the architecture

35:30 What’s out of scope?

38:04 Fine tuning vs prompting?

42:17 Existing observability tools for LLMs vs needing a new architecture/approach

45:38 Lightning round
