jbloom committed
Commit f025bb4
1 Parent(s): bd33e6c

Update README.md

Files changed (1)
  1. README.md +8 -5
README.md CHANGED
@@ -11,17 +11,20 @@ These SAEs were trained with [SAE Lens](https://github.com/jbloomAus/SAELens) and
 
 All training hyperparameters are specified in cfg.json.
 
- They are loadable using SAE via a few methods. A method that currently works (but may be replaced shortly by a more convenient method) would be the following:
+ They are loadable with SAE Lens via a few methods. The preferred method is the following:
 
 ```python
 import torch
- from sae_lens.training.session_loader import LMSparseAutoencoderSessionloader
+ from transformer_lens import HookedTransformer
+ from sae_lens import SparseAutoencoder, ActivationsStore
 
 torch.set_grad_enabled(False)
- path = "path/to/folder_containing_cfgjson_and_safetensors_file"
- model, sae, activation_store = LMSparseAutoencoderSessionloader.load_pretrained_sae(
- path, device = "cuda",
+ model = HookedTransformer.from_pretrained("gemma-2b")
+ sparse_autoencoder = SparseAutoencoder.from_pretrained(
+     "gemma-2b-res-jb",  # to see the list of available releases, go to: https://github.com/jbloomAus/SAELens/blob/main/sae_lens/pretrained_saes.yaml
+     "blocks.0.hook_resid_post",  # change this to another specific SAE ID in the release if desired.
 )
+ activation_store = ActivationsStore.from_config(model, sparse_autoencoder.cfg)
 ```
 
 ## Resid Post 0
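
As an aside on the new loading snippet above, here is a minimal sketch of how the loaded model and SAE might be used together. It relies on the standard TransformerLens calls `HookedTransformer.from_pretrained` and `run_with_cache`; the example prompt, the variable names, and the assumption that calling the SAE directly on cached activations returns its forward output (reconstruction, feature activations, losses) are illustrative assumptions, not something stated in this diff.

```python
import torch
from transformer_lens import HookedTransformer
from sae_lens import SparseAutoencoder

torch.set_grad_enabled(False)

# Load the model and the residual-stream SAE exactly as in the README snippet above.
model = HookedTransformer.from_pretrained("gemma-2b")
sparse_autoencoder = SparseAutoencoder.from_pretrained(
    "gemma-2b-res-jb",
    "blocks.0.hook_resid_post",
)

# Run the model on an example prompt and cache intermediate activations
# (run_with_cache is standard TransformerLens API: it returns logits plus an activation cache).
prompt = "The quick brown fox jumps over the lazy dog."
logits, cache = model.run_with_cache(prompt)

# The SAE was trained on the layer-0 residual stream, so index the cache at that hook point.
resid_post_0 = cache["blocks.0.hook_resid_post"]  # shape: [batch, seq_len, d_model]

# Assumption: calling the SAE on these activations returns its forward output
# (reconstruction, feature activations, losses); check the SAE Lens docs for the exact return type.
sae_output = sparse_autoencoder(resid_post_0)
print(type(sae_output))
```

The `ActivationsStore` created at the end of the README snippet is not needed for a single forward pass like this; it is typically used to stream activations from a dataset when training or evaluating the SAE.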