Datasets
Modalities: Text
Formats: json
Languages: English
Size: < 1K
Libraries: Datasets, pandas
Naomibas committed
Commit eed6b9d (1 parent: 4d31f57)

When running the benchmark, we should now use random probes instead of "random" or "What do you do in London as a tourist?"

Files changed (1):
  1. run_benchmark.py (+2 -2)
run_benchmark.py CHANGED
@@ -6,7 +6,7 @@ Use this file to test how a model does with the system prompts, with no addition
 from openai import OpenAI
 import configparser
 from hundred_system_prompts import *
-
+import random
 
 # Get config
 config = configparser.ConfigParser()
@@ -23,7 +23,7 @@ with open(filename, "w") as file:
     for i, (prompt, probe, lambda_function) in enumerate(system_prompts):
         history = [
             {"role": "system", "content": prompt},
-            {"role": "user", "content": probe}
+            {"role": "user", "content": probe if probe != "random" else random.choice(random_probes)}
         ]
 
         completion = client.chat.completions.create(
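The substitution in the second hunk can be read on its own: a probe whose value is the literal placeholder "random" is swapped for a randomly chosen probe before being sent as the user message. The snippet below is a minimal sketch of that logic, assuming `random_probes` is a list of fallback probe strings exported by hundred_system_prompts.py (as the diff implies); the helper name `resolve_probe` and the example probes here are hypothetical and not part of the repository.

import random

def resolve_probe(probe, random_probes):
    # Return the probe unchanged, or a randomly chosen fallback probe
    # when the placeholder string "random" is used.
    return probe if probe != "random" else random.choice(random_probes)

# Hypothetical fallback probes; in the repository these would come from
# hundred_system_prompts.py as `random_probes`.
random_probes = ["What's a good weekend project?", "Summarize your favorite book."]

print(resolve_probe("random", random_probes))                       # one of the fallback probes
print(resolve_probe("Describe the Eiffel Tower.", random_probes))   # returned as-is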