Content provided by The Nonlinear Fund. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

AF - Safety isn't safety without a social model (or: dispelling the myth of per se technical safety) by Andrew Critch

7:32
 
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Safety isn't safety without a social model (or: dispelling the myth of per se technical safety), published by Andrew Critch on June 14, 2024 on The AI Alignment Forum.

As an AI researcher who wants to do technical work that helps humanity, there is a strong drive to find a research area that is definitely helpful somehow, so that you don't have to worry about how your work will be applied, and thus you don't have to worry about things like corporate ethics or geopolitics to make sure your work benefits humanity.

Unfortunately, no such field exists. In particular, technical AI alignment is not such a field, and technical AI safety is not such a field. It absolutely matters where ideas land and how they are applied, and when the existence of the entire human race is at stake, that's no exception.

If that's obvious to you, this post is mostly just a collection of arguments for something you probably already realize. But if you somehow think technical AI safety or technical AI alignment is intrinsically or inevitably helpful to humanity, this post is an attempt to change your mind. In particular, with more and more AI governance problems cropping up, I'd like to see more and more AI technical staffers forming explicit social models of how their ideas are going to be applied.

If you read this post, please don't read it as somehow pro- or contra- a specific area of AI research, or safety, or alignment, or corporations, or governments. My goal in this post is to encourage more nuanced social models by de-conflating a bunch of concepts. This might seem like I'm against the concepts themselves, when really I just want clearer thinking about these concepts, so that we (humanity) can all do a better job of communicating and working together.

Myths vs reality

Epistemic status: these are claims that I'm confident in, assembled over 1.5 decades of observing existential risk discourse, through thousands of hours of conversation. They are not claims I'm confident I can convince you of, but I'm giving it a shot anyway, because there's a lot at stake when people don't realize how their technical research is going to be misapplied.

Myth #1: Technical AI safety and/or alignment advances are intrinsically safe and helpful to humanity, irrespective of the state of humanity.

Reality: All technical advances in AI safety and/or "alignment" can be misused by humans. There are no technical advances in AI that are safe per se; the safety or unsafety of an idea is a function of the human environment in which the idea lands.

Examples:

Obedience - AI that obeys the intention of a human user can be asked to help build unsafe AGI, such as by serving as a coding assistant. (Note: this used to be considered extremely sci-fi, and now it's standard practice.)

Interpretability - Tools or techniques for understanding the internals of AI models will help developers better understand what they're building and hence speed up development, possibly exacerbating capabilities races.

Truthfulness - AI that is designed to convey true statements to a human can also be asked questions by that human to help them build an unsafe AGI.

Myth #2: There's a {technical AI safety VS AI capabilities} dichotomy or spectrum of technical AI research, which also corresponds to {making humanity more safe VS shortening AI timelines}.
Reality: Conflating these concepts has three separate problems, (a)-(c) below:

a) AI safety and alignment advances almost always shorten AI timelines. In particular, the ability to "make an AI system do what you want" is used almost instantly by AI companies to help them ship AI products faster (because the AI does what users want) and to build internal developer tools faster (because the AI does what developers want). (When I point this out, usually people think I'm s...
