Content provided by BlueDot Impact. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by BlueDot Impact or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Towards Monosemanticity: Decomposing Language Models With Dictionary Learning

8:53
 

Using a sparse autoencoder, we extract a large number of interpretable features from a one-layer transformer.
Mechanistic interpretability seeks to understand neural networks by breaking them into components that are more easily understood than the whole. By understanding the function of each component, and how they interact, we hope to be able to reason about the behavior of the entire network. The first step in that program is to identify the correct components to analyze.
Unfortunately, the most natural computational unit of the neural network – the neuron itself – turns out not to be a natural unit for human understanding. This is because many neurons are polysemantic: they respond to mixtures of seemingly unrelated inputs. In the vision model Inception v1, a single neuron responds to faces of cats and fronts of cars. In a small language model we discuss in this paper, a single neuron responds to a mixture of academic citations, English dialogue, HTTP requests, and Korean text. Polysemanticity makes it difficult to reason about the behavior of the network in terms of the activity of individual neurons.
Source:
https://transformer-circuits.pub/2023/monosemantic-features/index.html
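The dictionary-learning approach the excerpt describes can be sketched as a minimal sparse autoencoder: encode an activation vector into many non-negative features, decode it as a sparse linear combination of learned dictionary directions, and train against reconstruction error plus an L1 sparsity penalty. The names, dimensions, and coefficient below are illustrative assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: the dictionary is wider than the model activations,
# so features can be sparse and (hopefully) monosemantic.
d_model, d_dict = 8, 32

# Randomly initialized encoder/decoder weights and biases (untrained sketch).
W_enc = rng.normal(scale=0.1, size=(d_dict, d_model))
b_enc = np.zeros(d_dict)
W_dec = rng.normal(scale=0.1, size=(d_model, d_dict))
b_dec = np.zeros(d_model)

def encode(x):
    # Feature activations: non-negative codes for the activation vector.
    return np.maximum(0.0, W_enc @ (x - b_dec) + b_enc)

def decode(f):
    # Reconstruction as a linear combination of dictionary directions.
    return W_dec @ f + b_dec

def loss(x, l1_coeff=1e-3):
    # Training objective: reconstruction error plus an L1 penalty that
    # pushes most feature activations to exactly zero.
    f = encode(x)
    x_hat = decode(f)
    return np.sum((x - x_hat) ** 2) + l1_coeff * np.sum(np.abs(f))

x = rng.normal(size=d_model)  # stand-in for an MLP activation vector
f = encode(x)
print(f"{(f > 0).sum()} of {d_dict} features active; loss = {loss(x):.3f}")
```

Minimizing this objective over many activation vectors (with a gradient-based optimizer, omitted here) is what yields the interpretable features the excerpt refers to; the ReLU plus L1 penalty is what makes most features inactive on any given input.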
Narrated for AI Safety Fundamentals by Perrin Walker

A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.


Chapters

1. Towards Monosemanticity: Decomposing Language Models With Dictionary Learning (00:00:00)

2. Summary of Results (00:05:50)


