Provably safe AGI, with Steve Omohundro

42:59
 
Content provided by London Futurists. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by London Futurists or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

AI systems have become more powerful in the last few years, and are expected to become even more powerful in the years ahead. The question naturally arises: what, if anything, should humanity be doing to increase the likelihood that these forthcoming powerful systems will be safe, rather than destructive?
Our guest in this episode has a long and distinguished history of analysing that question, and he has some new proposals to share with us. He is Steve Omohundro, the CEO of Beneficial AI Research, an organisation working to ensure that artificial intelligence is safe and beneficial for humanity.
Steve has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He went on to become an award-winning computer science professor at the University of Illinois. During that time, he developed the notion of basic AI drives, which we discuss shortly, as well as several potential AI safety mechanisms.
Among his many other roles, Steve served as a Research Scientist at Meta, the parent company of Facebook, where he worked on generative models and AI-based simulation. He is also an advisor to MIRI, the Machine Intelligence Research Institute.
Selected follow-ups:
Steve Omohundro: Innovative ideas for a better world
Metaculus forecast for the date of weak AGI
"The Basic AI Drives" (PDF, 2008)
TED Talk by Max Tegmark: How to Keep AI Under Control
Apple Secure Enclave
Meta Research: Teaching AI advanced mathematical reasoning
DeepMind AlphaGeometry
Microsoft Lean theorem prover
Terence Tao (Wikipedia)
NeurIPS Tutorial on Machine Learning for Theorem Proving (2023)
The team at MIRI
Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Universal Public Domain Dedication


Chapters

1. [Ad] What If? So What? (00:08:56)

2. (Cont.) Provably safe AGI, with Steve Omohundro (00:08:57)

3. [Ad] Climate Confident (00:34:56)

4. (Cont.) Provably safe AGI, with Steve Omohundro (00:34:57)
