Ep 8 - Getting started in AI safety & alignment w/ Jamie Bernardi (AI Safety Lead, BlueDot Impact)
We speak with Jamie Bernardi, co-founder & AI Safety Lead at the not-for-profit BlueDot Impact, which hosts the biggest and most up-to-date courses on AI safety & alignment at AI Safety Fundamentals (https://aisafetyfundamentals.com/). Jamie completed his Bachelor's (Physical Natural Sciences) and Master's (Physics) at the University of Cambridge and worked as an ML Engineer before co-founding BlueDot Impact.
The free courses they offer are created in collaboration with people at the cutting edge of AI safety, such as Richard Ngo at OpenAI and Prof David Krueger at the University of Cambridge. These courses have become one of the most effective ways for newcomers to enter the field of AI safety. I myself (Soroush) have taken AGI Safety Fundamentals 101, an exceptional course that was crucial to my understanding of the field and one I highly recommend. Jamie shares why he got into AI safety, some recent history of the field, an overview of where the field stands today, and how listeners can get involved and start contributing to ensure a safe & positive world with advanced AI and AGI.
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- About Jamie --
* Website: https://jamiebernardi.com/
* Twitter: https://twitter.com/The_JBernardi
* BlueDot Impact: https://www.bluedotimpact.org/
-- Further resources --
* AI Safety Fundamentals courses: https://aisafetyfundamentals.com/
* Donate to LTFF to support AI safety initiatives: https://funds.effectivealtruism.org/funds/far-future
* Jobs + opportunities in AI safety:
* https://aisafetyfundamentals.com/opportunities
* https://jobs.80000hours.org
* Horizon Fellowship for policy training in AI safety: https://www.horizonpublicservice.org/fellowship
Recorded Sep 7, 2023
Artificial General Intelligence (AGI) Show with Soroush Pour
All episodes
Ep 14 - Interp, latent robustness, RLHF limitations w/ Stephen Casper (PhD AI researcher, MIT) 2:42:17
Ep 13 - AI researchers expect AGI sooner w/ Katja Grace (Co-founder & Lead Researcher, AI Impacts) 1:20:28
Ep 12 - Education & advocacy for AI safety w/ Rob Miles (YouTube host) 1:21:26
Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy) 1:37:19
Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS) 1:16:58
Ep 9 - Scaling AI safety research w/ Adam Gleave (CEO, FAR AI) 1:19:12
Ep 8 - Getting started in AI safety & alignment w/ Jamie Bernardi (AI Safety Lead, BlueDot Impact) 1:07:23
Ep 7 - Responding to a world with AGI - Richard Dazeley (Prof AI & ML, Deakin University) 1:10:05
Ep 6 - Will we see AGI this decade? Our AGI predictions & debate w/ Hunter Jay (CEO, Ripe Robotics) 1:20:58
Ep 5 - Accelerating AGI timelines since GPT-4 w/ Alex Browne (ML Engineer) 38:26
Ep 4 - When will AGI arrive? - Ryan Kupyn (Data Scientist & Forecasting Researcher @ Amazon AWS) 1:03:23
Ep 3 - When will AGI arrive? - Jack Kendall (CTO, Rain.AI, maker of neural net chips) 1:01:34
Ep 2 - When will AGI arrive? - Alex Browne (Machine Learning Engineer) 58:26
Ep 1 - When will AGI arrive? - Logan Riggs Smith (AGI alignment researcher) 1:10:51