---
configs:
  - config_name: 0k
    data_files:
      - split: qa1
        path: data/qa1/0k.json
      - split: qa2
        path: data/qa2/0k.json
      - split: qa3
        path: data/qa3/0k.json
      - split: qa4
        path: data/qa4/0k.json
      - split: qa5
        path: data/qa5/0k.json
      - split: qa6
        path: data/qa6/0k.json
      - split: qa7
        path: data/qa7/0k.json
      - split: qa8
        path: data/qa8/0k.json
      - split: qa9
        path: data/qa9/0k.json
      - split: qa10
        path: data/qa10/0k.json
  - config_name: 1k
    data_files:
      - split: qa1
        path: data/qa1/1k.json
      - split: qa2
        path: data/qa2/1k.json
      - split: qa3
        path: data/qa3/1k.json
      - split: qa4
        path: data/qa4/1k.json
      - split: qa5
        path: data/qa5/1k.json
      - split: qa6
        path: data/qa6/1k.json
      - split: qa7
        path: data/qa7/1k.json
      - split: qa8
        path: data/qa8/1k.json
      - split: qa9
        path: data/qa9/1k.json
      - split: qa10
        path: data/qa10/1k.json
  - config_name: 2k
    data_files:
      - split: qa1
        path: data/qa1/2k.json
      - split: qa2
        path: data/qa2/2k.json
      - split: qa3
        path: data/qa3/2k.json
      - split: qa4
        path: data/qa4/2k.json
      - split: qa5
        path: data/qa5/2k.json
      - split: qa6
        path: data/qa6/2k.json
      - split: qa7
        path: data/qa7/2k.json
      - split: qa8
        path: data/qa8/2k.json
      - split: qa9
        path: data/qa9/2k.json
      - split: qa10
        path: data/qa10/2k.json
  - config_name: 4k
    data_files:
      - split: qa1
        path: data/qa1/4k.json
      - split: qa2
        path: data/qa2/4k.json
      - split: qa3
        path: data/qa3/4k.json
      - split: qa4
        path: data/qa4/4k.json
      - split: qa5
        path: data/qa5/4k.json
      - split: qa6
        path: data/qa6/4k.json
      - split: qa7
        path: data/qa7/4k.json
      - split: qa8
        path: data/qa8/4k.json
      - split: qa9
        path: data/qa9/4k.json
      - split: qa10
        path: data/qa10/4k.json
  - config_name: 8k
    data_files:
      - split: qa1
        path: data/qa1/8k.json
      - split: qa2
        path: data/qa2/8k.json
      - split: qa3
        path: data/qa3/8k.json
      - split: qa4
        path: data/qa4/8k.json
      - split: qa5
        path: data/qa5/8k.json
      - split: qa6
        path: data/qa6/8k.json
      - split: qa7
        path: data/qa7/8k.json
      - split: qa8
        path: data/qa8/8k.json
      - split: qa9
        path: data/qa9/8k.json
      - split: qa10
        path: data/qa10/8k.json
  - config_name: 16k
    data_files:
      - split: qa1
        path: data/qa1/16k.json
      - split: qa2
        path: data/qa2/16k.json
      - split: qa3
        path: data/qa3/16k.json
      - split: qa4
        path: data/qa4/16k.json
      - split: qa5
        path: data/qa5/16k.json
      - split: qa6
        path: data/qa6/16k.json
      - split: qa7
        path: data/qa7/16k.json
      - split: qa8
        path: data/qa8/16k.json
      - split: qa9
        path: data/qa9/16k.json
      - split: qa10
        path: data/qa10/16k.json
  - config_name: 32k
    data_files:
      - split: qa1
        path: data/qa1/32k.json
      - split: qa2
        path: data/qa2/32k.json
      - split: qa3
        path: data/qa3/32k.json
      - split: qa4
        path: data/qa4/32k.json
      - split: qa5
        path: data/qa5/32k.json
      - split: qa6
        path: data/qa6/32k.json
      - split: qa7
        path: data/qa7/32k.json
      - split: qa8
        path: data/qa8/32k.json
      - split: qa9
        path: data/qa9/32k.json
      - split: qa10
        path: data/qa10/32k.json
---

# BABILong (5k train samples): a long-context needle-in-a-haystack benchmark for LLMs

The preprint is available on [arXiv](https://arxiv.org/abs/2402.10790).

bAbI + Books = BABILong

BABILong is a novel generative benchmark for evaluating the performance of NLP models in processing arbitrarily long documents with distributed facts.

It contains 7 configs, one per context length in tokens: '0k', '1k', '2k', '4k', '8k', '16k' and '32k'. Each config has ten splits, one per bAbI task: 'qa1' through 'qa10'.
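The config/split combinations declared in the YAML header above can be enumerated programmatically. The snippet below is a minimal sketch: the repository id in the commented `load_dataset` call is a placeholder, not a confirmed Hub path.

```python
from itertools import product

# Config and split names as declared in this card's YAML metadata.
lengths = ["0k", "1k", "2k", "4k", "8k", "16k", "32k"]  # context-length configs
tasks = [f"qa{i}" for i in range(1, 11)]                # bAbI task splits

pairs = list(product(lengths, tasks))
print(len(pairs))  # 70 config/split combinations

# With the `datasets` library installed, a single split would be loaded like
# this (the repository id below is a placeholder):
# from datasets import load_dataset
# ds = load_dataset("<org>/babilong", "0k", split="qa1")
```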

Solving tasks with a long context requires the model to distinguish important information from large amounts of irrelevant detail. To simulate this setting, we "hide" the sentences of the original task between sentences of irrelevant text. We use the bAbI dataset [1] as facts and PG19 as background text. The resulting test samples might have lengths of millions of tokens.
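The "hiding" step can be sketched as follows. This is an illustration of the idea only, not the authors' generation code; the function name and the uniform sampling of insertion points are assumptions.

```python
import random

def hide_facts(facts, background, seed=0):
    """Scatter task facts among irrelevant background sentences,
    preserving the original order of the facts (illustrative sketch)."""
    rng = random.Random(seed)
    haystack = list(background)
    # Pick distinct insertion points; sorting them keeps the facts in order.
    positions = sorted(rng.sample(range(len(haystack) + 1), len(facts)))
    for already_inserted, (pos, fact) in enumerate(zip(positions, facts)):
        haystack.insert(pos + already_inserted, fact)
    return haystack

facts = ["Mary travelled to the office.", "Then Mary moved to the garden."]
background = ["Filler sentence %d." % i for i in range(8)]
sample = hide_facts(facts, background)
# The facts still appear, in their original order, inside the longer text:
assert [s for s in sample if s in facts] == facts
```

Longer context lengths simply correspond to more background sentences around the same small set of facts.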

BABILong consists of 10 tasks designed for evaluation of basic aspects of reasoning. The bAbI tasks are generated by simulating a set of characters and objects engaged in various movements and interactions with each other in multiple locations. Each interaction is represented by a fact, e.g. "Mary travelled to the office", and the task is to answer a question using the facts from the current simulation, for instance, "Where is Mary?". The bAbI tasks vary based on the number of facts, question complexity and the aspects of reasoning.
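As a toy illustration of the structure just described (the field names here are hypothetical, not the dataset's actual schema):

```python
# Hypothetical field names -- purely to illustrate the fact/question/answer shape.
sample = {
    "facts": ["Mary travelled to the office.", "John went to the hallway."],
    "question": "Where is Mary?",
    "answer": "office",
}
# Answering requires locating the relevant fact among the others.
relevant = [f for f in sample["facts"] if "Mary" in f]
print(relevant[0])  # "Mary travelled to the office."
```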

First ten tasks of BABILong:

| Task | Name                   | Facts per task | Supporting facts per task |
|------|------------------------|----------------|---------------------------|
| qa1  | single supporting fact | 2-10           | 1                         |
| qa2  | two supporting facts   | 2-68           | 2                         |
| qa3  | three supporting facts | 4-32           | 3                         |
| qa4  | two arg relations      | 2              | 1                         |
| qa5  | three arg relations    | 2-126          | 1                         |
| qa6  | yes-no questions       | 2-26           | 1                         |
| qa7  | counting               | 2-52           | 1-10                      |
| qa8  | lists-sets             | 2-50           | 1-8                       |
| qa9  | simple negation        | 2-10           | 1                         |
| qa10 | indefinite knowledge   | 2-10           | 1                         |

Join us in this exciting endeavor and let's push the boundaries of what's possible together!

## Citation

```bibtex
@misc{kuratov2024search,
      title={In Search of Needles in a 10M Haystack: Recurrent Memory Finds What LLMs Miss},
      author={Yuri Kuratov and Aydar Bulatov and Petr Anokhin and Dmitry Sorokin and Artyom Sorokin and Mikhail Burtsev},
      year={2024},
      eprint={2402.10790},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## References

[1] Weston, Jason, et al. "Towards AI-complete question answering: A set of prerequisite toy tasks." arXiv preprint arXiv:1502.05698 (2015).