Irina Rish – AGI, Scaling and Alignment

The Inside View

1:26:06

Irina Rish is a professor at the Université de Montréal, a core member of Mila (Quebec AI Institute), and the organizer of the neural scaling laws workshop, Towards Maximally Beneficial AGI.

In this episode, we discuss Irina's definition of Artificial General Intelligence, her takes on AI alignment and AI progress, current research in scaling laws, the neural scaling laws workshop she has been organizing, phase transitions, continual learning, existential risk from AI, and what is currently happening in AI alignment at Mila.

Transcript: theinsideview.ai/irina

YouTube: https://youtu.be/ZwvJn4x714s

OUTLINE

(00:00) Highlights

(00:30) Introduction

(01:03) Defining AGI

(03:55) AGI means augmented human intelligence

(06:20) Solving alignment via AI parenting

(09:03) From the early days of deep learning to general agents

(13:27) How Irina updated from Gato

(17:36) Building truly general AI within Irina's lifetime

(19:38) The least impressive thing that won't happen in five years

(22:36) Scaling beyond power laws

(28:45) The neural scaling laws workshop

(35:07) Why Irina does not want to slow down AI progress

(53:52) Phase transitions and grokking

(01:02:26) Does scale solve continual learning?

(01:11:10) Irina's probability of existential risk from AGI

(01:14:53) Alignment work at Mila

(01:20:08) Where will Mila get its compute from?

(01:27:04) With Great Compute Comes Great Responsibility

(01:28:51) The Neural Scaling Laws Workshop At NeurIPS
