---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: pos
      dtype: float64
  splits:
    - name: train
      num_bytes: 5335090828
      num_examples: 1002630
  download_size: 3227201658
  dataset_size: 5335090828
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

1,378,234,368 tokens (Llama tokenizer; roughly 1.18B GPT-4 tokens) drawn from a deduplicated Pile raw shard. Documents with length < 896 were filtered out, the remainder were scored with Ask-LLM ("How to Train Data-Efficient LLMs") using mistralai/Mistral-7B-Instruct-v0.2, and the top 1/4 by score was kept.
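A minimal sketch of the Ask-LLM scoring step described above. In the Ask-LLM approach, a judge model is shown the candidate document and asked whether it would make useful pre-training data, and the quality score is the log-probability assigned to a "yes" answer, which would be consistent with the negative `pos` values in this dataset. The exact prompt wording and helper names below are illustrative assumptions, not necessarily what was used here:

```python
import math

# Illustrative Ask-LLM prompt template (assumed wording, not the exact
# prompt used to build this dataset).
ASK_LLM_TEMPLATE = (
    "###\n{document}\n###\n"
    "Does the previous paragraph demarcated within ### and ### contain "
    "informative signal for pre-training a large-language model?\n\n"
    "OPTIONS:\n- yes\n- no\n"
)

def build_prompt(document: str) -> str:
    """Fill the Ask-LLM template with one candidate document."""
    return ASK_LLM_TEMPLATE.format(document=document)

def yes_logprob(next_token_logits: dict[str, float]) -> float:
    """Log-probability of 'yes' under a softmax over the judge model's
    next-token logits (here a plain dict standing in for model output)."""
    denom = math.log(sum(math.exp(v) for v in next_token_logits.values()))
    return next_token_logits["yes"] - denom
```

In practice `next_token_logits` would come from a forward pass of mistralai/Mistral-7B-Instruct-v0.2 on the filled prompt; the softmax normalization is why the resulting scores are negative log-probabilities.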

Example record:

```json
{
  "text": "Once upon a time...",
  "pos": -5.654354325
}
```
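The filtering pipeline (length cutoff, then keeping the top quarter by Ask-LLM score) can be sketched as follows. Treating the 896 cutoff as a character count is an assumption, since the unit is not stated above:

```python
def filter_short(records: list[dict], min_len: int = 896) -> list[dict]:
    """Drop documents shorter than min_len (assumed to be characters)."""
    return [r for r in records if len(r["text"]) >= min_len]

def keep_top_quarter(records: list[dict]) -> list[dict]:
    """Sort by Ask-LLM score ('pos') descending and keep the top 25%."""
    ranked = sorted(records, key=lambda r: r["pos"], reverse=True)
    k = max(1, len(ranked) // 4)
    return ranked[:k]
```

Applied in order, these two steps reproduce the selection described above: length-filter the raw shard, score the survivors, keep the best-scoring quarter.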