---
license: apache-2.0
language:
  - en
pretty_name: 100 system prompts for benchmarking large language models
size_categories:
  - n<1K
---

Dataset Card for 100 System Prompts for Benchmarking Large Language Models

This dataset is a collection of 100 system prompts for large language models.

Dataset Details

Dataset Description

These 100 system prompts test a model's ability to follow grammatical patterns, answer basic multiple-choice questions, act according to a particular persona, memorize information, and speak in French.

Files:

  • hundred_system_prompts.py: refer to this to see the (prompt, probe, function) triplets, as well as the helper functions. It contains some "random" probes that are allowed to be any question.
  • hundred_system_prompts.json: purely for display purposes (a short inspection sketch follows this list).
  • run_benchmark.py: runs the 100 tests on a model, with no context other than the system prompt and the (possibly randomly chosen) probe.
  • create_json_file.py: a small script used to generate hundred_system_prompts.json from hundred_system_prompts.py. Running it causes the "random" probes to be replaced with new, random probes.
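
For a quick look at the data, the JSON file can be inspected directly. This is a minimal sketch, assuming hundred_system_prompts.json is a list of objects that each carry at least a "prompt" and a "probe" field (as described under Dataset Structure):

```python
import json

# Load the display-oriented JSON file shipped with the dataset.
with open("hundred_system_prompts.json", "r", encoding="utf-8") as f:
    entries = json.load(f)

# Print the first few (prompt, probe) pairs to get a feel for the data.
# Assumes each entry is a dict with "prompt" and "probe" keys.
for entry in entries[:3]:
    print("PROMPT:", entry["prompt"])
    print("PROBE: ", entry["probe"])
    print("-" * 40)
```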

More info:

  • Curated by: Naomi Bashkansky
  • Language(s) (NLP): en
  • License: apache-2.0

Uses

A benchmark for large language models: how good are LLMs at following a system prompt? It tests both basic capability (is the model able to follow the system prompt?) and basic alignment (does a model that can follow the system prompt actually do so?).

It can be used to compare different models, or to guide interventions that make a model better at following system prompts. A rough sketch of such a comparison follows.
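
As an illustration of how a comparison could work, one can average the per-test scores. This is a hedged sketch, not the project's run_benchmark.py; both `tests` and `model_response` are hypothetical placeholders:

```python
# Hypothetical sketch: average the 0-to-1 scores over all 100 tests for one model.
# `tests` is assumed to be a list of (prompt, probe, score_fn) triplets, and
# `model_response(system_prompt, user_message)` stands in for a call to the model under test.
def benchmark_score(tests, model_response):
    total = 0.0
    for prompt, probe, score_fn in tests:
        reply = model_response(prompt, probe)  # the model sees only the system prompt and the probe
        total += score_fn(reply)               # each scoring function returns a value in [0, 1]
    return total / len(tests)                  # mean score; higher means better prompt-following
```

Comparing two models then reduces to comparing their mean scores on the same 100 tests.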

Direct Use

This dataset is released as open source, and researchers are especially encouraged to use it.

Dataset Structure

"prompt" is given as a system prompt to a large language model. "probe" is given as a user inquiry; its purpose it to elicit a response that allows us to check if the LLM is following the system prompt. "function" checks whether the LLM's response to the probe follows the system prompt; it returns a number from 0 (not following) to 1 (following).

Dataset Creation

Curation Rationale

At the time of creation, no benchmark existed for testing how well LLMs follow system prompts; this dataset was created to fill that gap.

Source Data

Data Collection and Processing

Process: the authors wrote the system prompts, probes, and testing functions (the testing functions are implemented in Python), then ran the system prompts on GPT-4 to check that GPT-4 is (mostly) able to follow them.

Who are the source data producers?

Naomi Bashkansky made most of the system prompts, and Kenneth Li made the rest.

Personal and Sensitive Information

No. The dataset contains no personal or sensitive information.

Bias, Risks, and Limitations

Limitation: as models become more capable, this benchmark may become outdated or too easy. The ideal benchmark is one that tests a model's alignment, meaning its propensity to follow the system prompt, rather than its ability to do so.

Bias: this dataset is only in English, with the exception of three French prompts.

Citation

BibTeX:

@article{li2024measuring,
  title={Measuring and Controlling Persona Drift in Language Model Dialogs},
  author={Li, Kenneth and Liu, Tianle and Bashkansky, Naomi and Bau, David and Vi{\'e}gas, Fernanda and Pfister, Hanspeter and Wattenberg, Martin},
  journal={arXiv preprint arXiv:2402.10962},
  year={2024}
}

APA:

Li, K., Liu, T., Bashkansky, N., Bau, D., Viégas, F., Pfister, H., & Wattenberg, M. (2024). Measuring and controlling instruction (in)stability in language model dialogs. In Proceedings of the Conference on Language Modeling.

Dataset Card Authors

Naomi Bashkansky, Kenneth Li

Dataset Card Contact

[email protected], [email protected]