Content provided by The Nonlinear Fund. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

LW - Corrigibility = Tool-ness? by johnswentworth

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Corrigibility = Tool-ness?, published by johnswentworth on June 28, 2024 on LessWrong.

Goal of This Post

I have never seen anyone give a satisfying intuitive explanation of what corrigibility (in roughly Eliezer's sense of the word) is. There are lists of desiderata, but they sound like scattered wishlists which don't obviously point to a unified underlying concept at all. There's also Eliezer's extremely meta pointer:

    We can imagine, e.g., the AI imagining itself building a sub-AI while being prone to various sorts of errors, asking how it (the AI) would want the sub-AI to behave in those cases, and learning heuristics that would generalize well to how we would want the AI to behave if it suddenly gained a lot of capability or was considering deceiving its programmers and so on.

… and that's basically it.[1] In this post, we're going to explain a reasonably unified concept which seems like a decent match to "corrigibility" in Eliezer's sense.

Tools

Starting point: we think of a thing as corrigible exactly insofar as it is usefully thought of as a tool. A screwdriver, for instance, is an excellent central example of a corrigible object. For AI alignment purposes, the challenge is to achieve corrigibility - i.e. tool-ness - in much more general, capable, and intelligent systems.

… that all probably sounds like a rather nebulous and dubious claim at this point. In order for it to make sense, we need to think through some key properties of "good tools", and also how various properties of incorrigibility make something a "bad tool". We broke off a separate post on what makes something usefully thought of as a tool. Key ideas:

Humans tend to solve problems by finding partial plans with "gaps" in them, where the "gaps" are subproblems which the human will figure out later. For instance, I might make a plan to decorate my apartment with some paintings, but leave a "gap" about how exactly to attach the paintings to the wall; I can sort that out later.[2]

Sometimes many similar subproblems show up in my plans, forming a cluster.[3] For instance, there's a cluster (and many subclusters) of subproblems which involve attaching things together.

Sometimes a thing (a physical object, a technique, whatever) makes it easy to solve a whole cluster of subproblems. That's what tools are. For instance, a screwdriver makes it easy to solve a whole subcluster of attaching-things-together subproblems.

How does that add up to corrigibility?

Respecting Modularity

One key piece of the above picture is that the gaps/subproblems in humans' plans are typically modular - i.e. we expect to be able to solve each subproblem without significantly changing the "outer" partial plan, and without a lot of coupling between different subproblems. That's what makes the partial plan with all its subproblems useful in the first place: it factors the problem into loosely coupled subproblems.

Claim from the tools post: part of what it means for a tool to solve a subproblem cluster is that the tool roughly preserves the modularity of that subproblem cluster. That means the tool should not have a bunch of side effects which might mess with other subproblems, or mess up the outer partial plan.

Furthermore, the tool needs to work for a whole subproblem cluster, and that cluster includes similar subproblems which come up in the context of many different problems. So, the tool needs to robustly not have side effects which mess up the rest of the plan, across a wide range of possibilities for what "the rest of the plan" might be.

Concretely: a screwdriver which sprays flames out the back when turned is a bad tool; it usually can't be used to solve most screw-turning subproblems when the bigger plan takes place in a wooden building. Another bad tool: a screwdriver which, when turned, also turns the lights on and off, cau...
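The partial-plan/gap/tool picture above can be sketched as a toy model. Everything here is a hypothetical illustration of the post's framing, not anything from the original article: plans hold "gap" subproblems tagged by cluster, tools solve one cluster, and a tool "respects modularity" when its side effects don't touch any other cluster present in the plan.

```python
from dataclasses import dataclass, field

@dataclass
class Subproblem:
    """A 'gap' in a partial plan, tagged with the cluster it belongs to."""
    description: str
    cluster: str

@dataclass
class PartialPlan:
    """An outer goal plus the gaps left to be solved later."""
    goal: str
    gaps: list

@dataclass
class Tool:
    """A tool solves one whole cluster of subproblems. `side_effects`
    names clusters it disrupts elsewhere -- a good tool has none."""
    name: str
    solves_cluster: str
    side_effects: list = field(default_factory=list)

def respects_modularity(tool: Tool, plan: PartialPlan) -> bool:
    """A tool preserves modularity here iff its side effects don't
    intersect any *other* subproblem cluster in the plan."""
    other_clusters = {g.cluster for g in plan.gaps
                      if g.cluster != tool.solves_cluster}
    return not other_clusters.intersection(tool.side_effects)

plan = PartialPlan(
    goal="decorate apartment with paintings",
    gaps=[
        Subproblem("attach painting to wall", "attach-things"),
        Subproblem("keep the wooden building intact", "fire-safety"),
    ],
)

screwdriver = Tool("screwdriver", solves_cluster="attach-things")
flame_screwdriver = Tool("flame screwdriver", solves_cluster="attach-things",
                         side_effects=["fire-safety"])

print(respects_modularity(screwdriver, plan))        # True
print(respects_modularity(flame_screwdriver, plan))  # False
```

The flame-spraying screwdriver fails not because it can't turn screws, but because its side effects couple it to subproblems it was never meant to touch, which is the modularity-breaking pattern the post is pointing at.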