The future of AI Chat: Foundation models and responsible innovation

35:47
 

Guest Percy Liang is an authority on AI who says that we are undergoing a paradigm shift in AI powered by foundation models, which are general-purpose models trained at immense scale, such as ChatGPT. In this episode of Stanford Engineering’s The Future of Everything podcast, Liang tells host Russ Altman about how foundation models are built, how to evaluate them, and the growing concerns about the lack of openness and transparency.

Episode Transcripts >>> The Future of Everything Website

Connect with Russ >>> Threads or Twitter/X

Connect with School of Engineering >>> Twitter/X

Chapters:

(00:00:00) Introduction

Host Russ Altman introduces Percy Liang, who directs the Stanford Center for Research on Foundation Models.

(00:02:26) Defining Foundation Models

Percy Liang explains the concept of foundation models and the paradigm shift they represent.

(00:04:22) How are Foundation Models Built & Trained?

An explanation of the training data sources and their scale (training on trillions of words), plus details on the network architecture, parameters, and the objective function (see the short sketch after the chapter list).

(00:10:36) Context Length & Predictive Capabilities

A discussion of context length and its role in prediction, with examples illustrating how context length influences predictive accuracy.

(00:12:28) Understanding Hallucination

Percy Liang explains why foundation models “hallucinate,” and the tension between tasks that demand truthfulness and creative tasks that effectively call for “lying.”

(00:15:19) Alignment and Reinforcement in Training

The role of alignment and reinforcement learning from human feedback in controlling model outputs.

(00:18:14) Evaluating Foundation Models

The shift from task-specific evaluations to comprehensive model evaluations, the introduction of HELM (Holistic Evaluation of Language Models), and the challenges in evaluating these models.

(00:25:09) Foundation Model Transparency Index

Percy Liang details the Foundation Model Transparency Index, its initial results, and the reactions from the companies it evaluated.

(00:29:42) Open vs. Closed AI Models: Benefits & Risks

The spectrum between open and closed AI models, and their respective benefits and security implications.
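
The next-token training objective mentioned in the chapters above can be made concrete with a small, purely illustrative sketch. Nothing below comes from the episode: the toy TinyLM model, the short sample text, and the tiny vocabulary are hypothetical stand-ins for the transformer architectures and trillions of words of training data that Liang describes; only the core idea, predicting each next token and minimizing a cross-entropy loss, is what is being illustrated.

```python
# Illustrative sketch only: a toy next-token-prediction objective.
# Real foundation models use large transformer architectures trained on
# trillions of tokens; TinyLM, the vocabulary, and the sample text here
# are hypothetical stand-ins.
import torch
import torch.nn as nn

text = "the future of everything is the future of ai".split()
vocab = {w: i for i, w in enumerate(sorted(set(text)))}
ids = torch.tensor([vocab[w] for w in text])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits for the next token at each position

model = TinyLM(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Shift by one token: the model sees words 1..n-1 and predicts words 2..n.
inputs, targets = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)
for step in range(100):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, len(vocab)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At foundation-model scale, this same shift-by-one objective is applied to web-scale corpora with far larger architectures, which is what turns a simple prediction task into general-purpose capability.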

Connect With Us:

Episode Transcripts >>> The Future of Everything Website

Connect with Russ >>> Threads or Twitter/X

Connect with School of Engineering >>> Twitter/X
