Although the vast majority of our explanations score poorly, we believe we can now use ML techniques to further improve our ability to produce explanations. For example, we found we were able to improve scores by:
- Iterating on explanations. We can increase scores by asking GPT-4 to come up with possible counterexamples, then revising explanations in light of their activations (see the sketch after this list).
- Using larger models to give explanations. The average score goes up as the explainer model’s capabilities increase. However, even GPT-4 gives worse explanations than humans, suggesting room for improvement.
- Changing the architecture of the explained model. Training models with different activation functions improved explanation scores.
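To make the iteration idea above concrete, here is a minimal sketch of the revise-against-counterexamples loop. The callables (`explain`, `counterexamples`, `activations`, `revise`) are hypothetical placeholders for calls to an explainer model such as GPT-4 and to the subject model (GPT-2); they are not part of the released code.

```python
def refine_explanation(
    explain,           # callable: (neuron_id, examples) -> draft explanation string
    counterexamples,   # callable: (explanation) -> list of candidate texts
    activations,       # callable: (neuron_id, text) -> per-token activations of the neuron
    revise,            # callable: (explanation, texts, activation_lists) -> revised explanation
    neuron_id,
    top_activating_texts,
    num_rounds=3,
):
    """Iteratively revise a neuron explanation against counterexamples.

    Hypothetical sketch of the procedure described above: draft an explanation,
    ask the explainer model for texts that should (or should not) activate the
    neuron if the explanation were true, check the neuron's real activations on
    those texts, and revise the explanation accordingly.
    """
    explanation = explain(neuron_id, top_activating_texts)
    for _ in range(num_rounds):
        texts = counterexamples(explanation)
        acts = [activations(neuron_id, text) for text in texts]
        explanation = revise(explanation, texts, acts)
    return explanation
```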
We are open-sourcing our datasets and visualization tools for GPT-4-written explanations of all 307,200 neurons in GPT-2, as well as code for explanation and scoring using publicly available models on the OpenAI API. We hope the research community will develop new techniques for generating higher-scoring explanations and better tools for exploring GPT-2 using explanations.
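As context for the scoring code mentioned above: an explanation is scored by having a model simulate the neuron's per-token activations from the explanation alone and comparing the simulation with the neuron's real activations. The snippet below is a simplified, correlation-based stand-in for that comparison, not the exact released implementation.

```python
import numpy as np

def explanation_score(real_activations, simulated_activations):
    """Score an explanation as the correlation between the neuron's real
    per-token activations and the activations simulated from the explanation.
    A score near 1.0 means the simulation tracks the neuron closely;
    a score near 0 means the explanation has little predictive value.
    """
    real = np.asarray(real_activations, dtype=float).ravel()
    sim = np.asarray(simulated_activations, dtype=float).ravel()
    return float(np.corrcoef(real, sim)[0, 1])

# Example: a simulation that roughly tracks the real activations scores highly.
real = [0.0, 0.1, 2.3, 0.0, 1.8, 0.2]
sim = [0.0, 0.0, 2.0, 0.1, 1.5, 0.0]
print(explanation_score(real, sim))  # close to 1.0
```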
We found over 1,000 neurons with explanations that scored at least 0.8, meaning that according to GPT-4 they account for most of the neuron's top-activating behavior. Most of these well-explained neurons are not very interesting. However, we also found many interesting neurons that GPT-4 didn't understand. We hope that as explanations improve, we will be able to rapidly uncover interesting qualitative understanding of model computations.