LW - [Interim research report] Activation plateaus and sensitive directions in GPT2 by StefanHex

Content provided by The Nonlinear Fund. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Interim research report] Activation plateaus & sensitive directions in GPT2, published by StefanHex on July 5, 2024, on LessWrong.

This part-report / part-proposal describes ongoing research, but I'd like to share early results for feedback. I am especially interested in any comment finding mistakes or trivial explanations for these results. I will work on this proposal with a LASR Labs team over the next 3 months. If you are working (or want to work) on something similar, I would love to chat! Experiments and write-up by Stefan, with substantial inspiration and advice from Jake (who doesn't necessarily endorse every sloppy statement I write). Work produced at Apollo Research.

TL;DR: Toy models of how neural networks compute new features in superposition seem to imply that networks utilizing superposition require some form of error correction to avoid interference spiraling out of control. This means small variations along a feature direction shouldn't affect model outputs, which I can test:

1. Activation plateaus: Real activations should be resistant to small perturbations. There should be a "plateau" in the output as a function of perturbation size.
2. Sensitive directions: Perturbations towards the direction of a feature should change the model output earlier (at a lower perturbation size) than perturbations in a random direction.

I find that both of these predictions hold; the latter when I operationalize "feature" as the difference between two real model activations. As next steps we are planning to:

- Test both predictions for SAE features: we have some evidence for the latter from Gurnee (2024) and Lindsey (2024).
- Ask whether there are different types of SAE features (atomic and composite features?), and whether we can get a handle on the total number of features.
- If sensitivity-features line up with SAE features, try to find or improve SAE feature directions by finding local optima in sensitivity (similar to how Mack & Turner (2024) find steering vectors).

My motivation for this project is to get data on computation in superposition, and to get dataset-independent evidence for (SAE-)features.

Core results & discussion

I run two different experiments that test the error correction hypothesis (a minimal code sketch of the perturbation setup follows the list below):

1. Activation plateaus: A real activation is the center of a plateau, in the sense that perturbing the activation affects the model output less than expected. Concretely: applying random-direction perturbations to an activation generated from a random openwebtext input (a "real activation") has less effect than applying the same perturbations to a random activation (generated from a Normal distribution). This effect on the model can be measured in the KL divergence of the logits (shown below), but also in the L2 difference or cosine similarity of late-layer activations.
2. Sensitive directions: Perturbing a (real) activation in a direction towards another real activation (a "poor man's feature direction") affects the model outputs more than perturbing the same activation in a random direction. In the plot below, focus on the size of the "plateau" on the left-hand side.
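The following is a minimal sketch of that perturbation setup, assuming TransformerLens and GPT-2; the prompt, layer, position, perturbation direction, and scales are illustrative choices, not the exact settings used in this work. It perturbs one real activation along a fixed random direction at increasing norms and measures the KL divergence of the final logits against the unperturbed run; repeating the same loop from a covariance-matched random activation instead of a real one gives the plateau comparison.

```python
# Minimal sketch of the perturbation experiment (assumes TransformerLens;
# layer, position, prompt, and scales are illustrative, not the report's settings).
import torch
import torch.nn.functional as F
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
tokens = model.to_tokens("The quick brown fox jumps over the lazy dog")
layer, pos = 6, -1
hook_name = f"blocks.{layer}.hook_resid_pre"

# Baseline logits and the real activation at (layer, pos).
base_logits, cache = model.run_with_cache(tokens)
real_act = cache[hook_name][0, pos].clone()

def kl_from_baseline(perturbed_logits):
    # KL(baseline || perturbed) over the next-token distribution.
    p = F.log_softmax(base_logits[0, -1], dim=-1)
    q = F.log_softmax(perturbed_logits[0, -1], dim=-1)
    return F.kl_div(q, p, log_target=True, reduction="sum").item()

direction = torch.randn_like(real_act)
direction = direction / direction.norm()

for scale in [0.5, 1.0, 2.0, 4.0, 8.0]:
    def perturb_hook(resid, hook):
        # Overwrite the activation with a perturbed copy of the real one.
        resid[0, pos] = real_act + scale * direction
        return resid
    logits = model.run_with_hooks(tokens, fwd_hooks=[(hook_name, perturb_hook)])
    print(f"perturbation norm {scale:4.1f}: KL = {kl_from_baseline(logits):.6f}")
```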
Two remarks on these results:

1. Naive random direction vs. mean- & covariance-adjusted random: naive isotropic random directions are much less sensitive. Thus we use mean- & covariance-adjusted random activations everywhere else in this report (a sketch of this baseline follows below).
2. The sensitive direction results are related to Gurnee (2024, SAE-replacement-error direction vs. naive random direction) and Lindsey (2024, Anthropic April Updates, SAE-feature direction vs. naive random direction).

The theoretical explanation for activation plateaus & sensitive directions may be error correction (also referred to as noise suppression): NNs in superposition should expect small amounts of noise in feature activations due to interference. (The exact properties depend on how computation happens in superposition; this toy...
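For concreteness, here is a hedged sketch of the mean- & covariance-adjusted random baseline from remark 1, together with the "poor man's feature direction" used in the sensitive-direction experiment. The Gaussian fit, the epsilon jitter, and all helper names are illustrative assumptions of this sketch, not the report's exact procedure.

```python
# Sketch of mean- & covariance-adjusted random activations and "poor man's
# feature directions"; helper names and the eps jitter are illustrative.
import torch

def fit_gaussian(acts: torch.Tensor):
    # acts: [n_samples, d_model] stack of real activations, e.g. from openwebtext.
    mean = acts.mean(dim=0)
    centered = acts - mean
    cov = centered.T @ centered / (acts.shape[0] - 1)
    return mean, cov

def sample_adjusted_random(mean, cov, n_samples=1, eps=1e-5):
    # Draw "random activations" whose mean and covariance match the real ones.
    d = cov.shape[0]
    L = torch.linalg.cholesky(cov + eps * torch.eye(d))  # jitter for stability
    return mean + torch.randn(n_samples, d) @ L.T

def feature_direction(act_a, act_b):
    # "Poor man's feature direction": unit vector from one real activation
    # towards another, compared against a random direction of equal norm.
    diff = act_b - act_a
    return diff / diff.norm()
```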