“Principles for the AGI Race” by William_S

31:17
 
Content provided by LessWrong. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Crossposted from https://williamrsaunders.substack.com/p/principles-for-the-agi-race
Why form principles for the AGI Race?
I worked at OpenAI for 3 years, on the Alignment and Superalignment teams. Our goal was to prepare for the possibility that OpenAI succeeded in its stated mission of building AGI (Artificial General Intelligence, roughly a system able to do most things a human can do), and then went on to make systems smarter than most humans. Doing so will predictably raise novel problems in controlling and shaping systems smarter than their supervisors and creators, problems we don't currently know how to solve. It's not clear when this will happen, but a number of people would throw around estimates of it happening within a few years.
While there, I would sometimes dream about what would have happened if I’d been a nuclear physicist in the 1940s. I do think that many of the kind of people who get involved in the effective [...]
---
Outline:
(00:06) Why form principles for the AGI Race?
(03:32) Bad High Risk Decisions
(04:46) Unnecessary Races to Develop Risky Technology
(05:17) High Risk Decision Principles
(05:21) Principle 1: Seek as broad and legitimate authority for your decisions as is possible under the circumstances
(07:20) Principle 2: Don’t take actions which impose significant risks to others without overwhelming evidence of net benefit
(10:52) Race Principles
(10:56) What is a Race?
(12:18) Principle 3: When racing, have an exit strategy
(13:03) Principle 4: Maintain accurate race intelligence at all times
(14:23) Principle 5: Evaluate how bad it is for your opponent to win instead of you, and balance this against the risks of racing
(15:07) Principle 6: Seriously attempt alternatives to racing
(16:58) Meta Principles
(17:01) Principle 7: Don’t give power to people or structures that can’t be held accountable
(18:36) Principle 8: Notice when you can’t uphold your own principles
(19:17) Application of my Principles
(19:21) Working at OpenAI
(24:19) SB 1047
(28:32) Call to Action
---
First published:
August 30th, 2024
Source:
https://www.lesswrong.com/posts/aRciQsjgErCf5Y7D9/principles-for-the-agi-race
---
Narrated by TYPE III AUDIO.
