monshinawatra committed on
Commit
6adc7f5
1 Parent(s): ed0157b

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,199 +1,64 @@
1
  ---
2
- library_name: transformers
3
- tags: []
4
  ---
5
 
6
- # Model Card for Model ID
 
7
 
8
- <!-- Provide a quick summary of what the model is/does. -->
9
 
10
 
 
11
 
12
- ## Model Details
13
 
14
- ### Model Description
15
 
16
- <!-- Provide a longer summary of what this model is. -->
17
 
18
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
19
 
20
- - **Developed by:** [More Information Needed]
21
- - **Funded by [optional]:** [More Information Needed]
22
- - **Shared by [optional]:** [More Information Needed]
23
- - **Model type:** [More Information Needed]
24
- - **Language(s) (NLP):** [More Information Needed]
25
- - **License:** [More Information Needed]
26
- - **Finetuned from model [optional]:** [More Information Needed]
27
 
28
- ### Model Sources [optional]
29
 
30
- <!-- Provide the basic links for the model. -->
31
 
32
- - **Repository:** [More Information Needed]
33
- - **Paper [optional]:** [More Information Needed]
34
- - **Demo [optional]:** [More Information Needed]
35
 
36
- ## Uses
37
 
38
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
 
40
- ### Direct Use
41
 
42
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
 
44
- [More Information Needed]
45
-
46
- ### Downstream Use [optional]
47
-
48
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
-
50
- [More Information Needed]
51
-
52
- ### Out-of-Scope Use
53
-
54
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
-
56
- [More Information Needed]
57
-
58
- ## Bias, Risks, and Limitations
59
-
60
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
-
62
- [More Information Needed]
63
-
64
- ### Recommendations
65
-
66
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
-
68
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
-
70
- ## How to Get Started with the Model
71
-
72
- Use the code below to get started with the model.
73
-
74
- [More Information Needed]
75
-
76
- ## Training Details
77
-
78
- ### Training Data
79
-
80
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
-
82
- [More Information Needed]
83
-
84
- ### Training Procedure
85
-
86
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
-
88
- #### Preprocessing [optional]
89
-
90
- [More Information Needed]
91
-
92
-
93
- #### Training Hyperparameters
94
-
95
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
-
97
- #### Speeds, Sizes, Times [optional]
98
-
99
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
-
101
- [More Information Needed]
102
-
103
- ## Evaluation
104
-
105
- <!-- This section describes the evaluation protocols and provides the results. -->
106
-
107
- ### Testing Data, Factors & Metrics
108
-
109
- #### Testing Data
110
-
111
- <!-- This should link to a Dataset Card if possible. -->
112
-
113
- [More Information Needed]
114
-
115
- #### Factors
116
-
117
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
-
119
- [More Information Needed]
120
-
121
- #### Metrics
122
-
123
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
-
125
- [More Information Needed]
126
-
127
- ### Results
128
-
129
- [More Information Needed]
130
-
131
- #### Summary
132
-
133
-
134
-
135
- ## Model Examination [optional]
136
-
137
- <!-- Relevant interpretability work for the model goes here -->
138
-
139
- [More Information Needed]
140
-
141
- ## Environmental Impact
142
-
143
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
-
145
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
-
147
- - **Hardware Type:** [More Information Needed]
148
- - **Hours used:** [More Information Needed]
149
- - **Cloud Provider:** [More Information Needed]
150
- - **Compute Region:** [More Information Needed]
151
- - **Carbon Emitted:** [More Information Needed]
152
-
153
- ## Technical Specifications [optional]
154
-
155
- ### Model Architecture and Objective
156
-
157
- [More Information Needed]
158
-
159
- ### Compute Infrastructure
160
-
161
- [More Information Needed]
162
-
163
- #### Hardware
164
-
165
- [More Information Needed]
166
-
167
- #### Software
168
-
169
- [More Information Needed]
170
-
171
- ## Citation [optional]
172
-
173
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
-
175
- **BibTeX:**
176
-
177
- [More Information Needed]
178
-
179
- **APA:**
180
-
181
- [More Information Needed]
182
-
183
- ## Glossary [optional]
184
-
185
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
-
187
- [More Information Needed]
188
-
189
- ## More Information [optional]
190
-
191
- [More Information Needed]
192
-
193
- ## Model Card Authors [optional]
194
-
195
- [More Information Needed]
196
-
197
- ## Model Card Contact
198
-
199
- [More Information Needed]
 
1
  ---
2
+ library_name: peft
3
+ license: other
4
+ base_model: openthaigpt/openthaigpt1.5-7b-instruct
5
+ tags:
6
+ - llama-factory
7
+ - lora
8
+ - generated_from_trainer
9
+ model-index:
10
+ - name: kto_law_7b
11
+ results: []
12
  ---
13
 
14
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
15
+ should probably proofread and complete it, then remove this comment. -->
16
 
17
+ # kto_law_7b
18
 
19
+ This model is a fine-tuned version of [openthaigpt/openthaigpt1.5-7b-instruct](https://huggingface.co/openthaigpt/openthaigpt1.5-7b-instruct) on the law_th_kto dataset.
20
+ It achieves the following results on the evaluation set:
21
+ - Loss: 0.5288
22
+ - Rewards/chosen: 2.6834
23
+ - Logps/chosen: -142.3163
24
+ - Kl: 28.1784
25
 
26
+ ## Model description
27
 
28
+ More information needed
29
 
30
+ ## Intended uses & limitations
31
 
32
+ More information needed
33
 
34
+ ## Training and evaluation data
35
 
36
+ More information needed
37
 
38
+ ## Training procedure
39
 
40
+ ### Training hyperparameters
41
 
42
+ The following hyperparameters were used during training:
43
+ - learning_rate: 5e-06
44
+ - train_batch_size: 1
45
+ - eval_batch_size: 1
46
+ - seed: 42
47
+ - gradient_accumulation_steps: 8
48
+ - total_train_batch_size: 8
49
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
50
+ - lr_scheduler_type: cosine
51
+ - lr_scheduler_warmup_ratio: 0.1
52
+ - num_epochs: 3.0
53
 
54
+ ### Training results
55
 
 
56
 
 
57
 
58
+ ### Framework versions
59
 
60
+ - PEFT 0.12.0
61
+ - Transformers 4.44.2
62
+ - Pytorch 2.4.1+cu121
63
+ - Datasets 2.21.0
64
+ - Tokenizers 0.19.1
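
Since this release is a PEFT/LoRA adapter rather than a full set of weights, it must be attached to the base model at load time. A minimal loading sketch, assuming the adapter is published under the committer's namespace as `monshinawatra/kto_law_7b` (the exact repo id is not stated in the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model first, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "openthaigpt/openthaigpt1.5-7b-instruct",
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "monshinawatra/kto_law_7b")  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained("monshinawatra/kto_law_7b")
```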
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
20
  "rank_pattern": {},
21
  "revision": null,
22
  "target_modules": [
23
- "o_proj",
24
  "up_proj",
25
- "v_proj",
26
  "k_proj",
 
27
  "q_proj",
28
  "gate_proj",
29
- "down_proj"
30
  ],
31
  "task_type": "CAUSAL_LM",
32
  "use_dora": false,
 
20
  "rank_pattern": {},
21
  "revision": null,
22
  "target_modules": [
23
+ "down_proj",
24
  "up_proj",
 
25
  "k_proj",
26
+ "o_proj",
27
  "q_proj",
28
  "gate_proj",
29
+ "v_proj"
30
  ],
31
  "task_type": "CAUSAL_LM",
32
  "use_dora": false,
added_tokens.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "</tool_call>": 151658,
3
+ "<tool_call>": 151657,
4
+ "<|box_end|>": 151649,
5
+ "<|box_start|>": 151648,
6
+ "<|endoftext|>": 151643,
7
+ "<|eot_id|>": 151665,
8
+ "<|file_sep|>": 151664,
9
+ "<|fim_middle|>": 151660,
10
+ "<|fim_pad|>": 151662,
11
+ "<|fim_prefix|>": 151659,
12
+ "<|fim_suffix|>": 151661,
13
+ "<|im_end|>": 151645,
14
+ "<|im_start|>": 151644,
15
+ "<|image_pad|>": 151655,
16
+ "<|object_ref_end|>": 151647,
17
+ "<|object_ref_start|>": 151646,
18
+ "<|quad_end|>": 151651,
19
+ "<|quad_start|>": 151650,
20
+ "<|repo_name|>": 151663,
21
+ "<|video_pad|>": 151656,
22
+ "<|vision_end|>": 151653,
23
+ "<|vision_pad|>": 151654,
24
+ "<|vision_start|>": 151652
25
+ }
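
These entries extend the Qwen2 vocabulary with Qwen's usual control tokens plus `<|eot_id|>` (id 151665), which serves as the end-of-turn/EOS token in the chat template further down. A round-trip sketch, again assuming the tokenizer is loaded from this repo:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("monshinawatra/kto_law_7b")  # assumed repo id
assert tok.convert_tokens_to_ids("<|eot_id|>") == 151665
assert tok.eos_token == "<|eot_id|>"    # per tokenizer_config.json below
assert tok.pad_token == "<|endoftext|>"  # padding uses a different token
```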
all_results.json ADDED
@@ -0,0 +1,15 @@
1
+ {
2
+ "epoch": 2.986666666666667,
3
+ "eval_kl": 28.178438186645508,
4
+ "eval_logps/chosen": -142.316337890625,
5
+ "eval_loss": 0.5287721157073975,
6
+ "eval_rewards/chosen": 2.6833560180664064,
7
+ "eval_runtime": 25.1552,
8
+ "eval_samples_per_second": 3.975,
9
+ "eval_steps_per_second": 3.975,
10
+ "total_flos": 3.5905450080116736e+16,
11
+ "train_loss": 0.5264021051781518,
12
+ "train_runtime": 1249.11,
13
+ "train_samples_per_second": 2.162,
14
+ "train_steps_per_second": 0.269
15
+ }
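
A quick consistency check on these numbers: `eval_steps_per_second` equals `eval_samples_per_second` because evaluation runs with batch size 1, and 3.975 samples/s × 25.16 s ≈ 100 evaluation examples. On the training side, 336 optimizer steps × 8 (the effective batch size) ≈ 2688 examples processed, matching 2.162 samples/s × 1249 s ≈ 2700 over just under 3 epochs, i.e. roughly 900 training examples per epoch.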
checkpoint-336/README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ base_model: openthaigpt/openthaigpt1.5-7b-instruct
3
+ library_name: peft
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.12.0
checkpoint-336/adapter_config.json ADDED
@@ -0,0 +1,34 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "openthaigpt/openthaigpt1.5-7b-instruct",
5
+ "bias": "none",
6
+ "fan_in_fan_out": false,
7
+ "inference_mode": true,
8
+ "init_lora_weights": true,
9
+ "layer_replication": null,
10
+ "layers_pattern": null,
11
+ "layers_to_transform": null,
12
+ "loftq_config": {},
13
+ "lora_alpha": 16,
14
+ "lora_dropout": 0.0,
15
+ "megatron_config": null,
16
+ "megatron_core": "megatron.core",
17
+ "modules_to_save": null,
18
+ "peft_type": "LORA",
19
+ "r": 8,
20
+ "rank_pattern": {},
21
+ "revision": null,
22
+ "target_modules": [
23
+ "down_proj",
24
+ "up_proj",
25
+ "k_proj",
26
+ "o_proj",
27
+ "q_proj",
28
+ "gate_proj",
29
+ "v_proj"
30
+ ],
31
+ "task_type": "CAUSAL_LM",
32
+ "use_dora": false,
33
+ "use_rslora": false
34
+ }
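
For reference, the same adapter configuration expressed with PEFT's `LoraConfig` (a sketch; argument names follow PEFT 0.12, the version recorded in this commit):

```python
from peft import LoraConfig

# Mirrors checkpoint-336/adapter_config.json: rank-8 LoRA over all
# attention and MLP projections, no bias terms, no dropout.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```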
checkpoint-336/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1f4c4eb220d27dcc265bc54a71110612462a0b326eec9d1137130e63fa01592d
3
+ size 80792096
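
The 80,792,096-byte adapter file is consistent with the rank-8 configuration above: assuming the Qwen2.5-7B geometry of the base model (hidden size 3584, 28 layers, GQA with 4 KV heads of head dim 128, MLP intermediate size 18944 — an assumption, not stated in this commit), each targeted projection contributes 8 × (in_features + out_features) parameters, totalling about 20,185,088 LoRA parameters; at 4 bytes each (fp32) that is ≈ 80.74 MB, with the small remainder being the safetensors header.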
checkpoint-336/added_tokens.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "</tool_call>": 151658,
3
+ "<tool_call>": 151657,
4
+ "<|box_end|>": 151649,
5
+ "<|box_start|>": 151648,
6
+ "<|endoftext|>": 151643,
7
+ "<|eot_id|>": 151665,
8
+ "<|file_sep|>": 151664,
9
+ "<|fim_middle|>": 151660,
10
+ "<|fim_pad|>": 151662,
11
+ "<|fim_prefix|>": 151659,
12
+ "<|fim_suffix|>": 151661,
13
+ "<|im_end|>": 151645,
14
+ "<|im_start|>": 151644,
15
+ "<|image_pad|>": 151655,
16
+ "<|object_ref_end|>": 151647,
17
+ "<|object_ref_start|>": 151646,
18
+ "<|quad_end|>": 151651,
19
+ "<|quad_start|>": 151650,
20
+ "<|repo_name|>": 151663,
21
+ "<|video_pad|>": 151656,
22
+ "<|vision_end|>": 151653,
23
+ "<|vision_pad|>": 151654,
24
+ "<|vision_start|>": 151652
25
+ }
checkpoint-336/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-336/optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b0bed3a0ba5bfe0206d5c6daacc3b04a7ed9bbc39a6509fd18c6c32527a812e
3
+ size 161810282
checkpoint-336/rng_state.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3cf9097d4513154245c48236b6ec5137b7ee2a21c9f58f2cba798ea275c6026f
3
+ size 14244
checkpoint-336/scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a708bd1a22f032fc2ace1101aa38d3d0f667d21f79e1f28439411a7115b1e0f
3
+ size 1064
checkpoint-336/special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>",
5
+ "<|object_ref_start|>",
6
+ "<|object_ref_end|>",
7
+ "<|box_start|>",
8
+ "<|box_end|>",
9
+ "<|quad_start|>",
10
+ "<|quad_end|>",
11
+ "<|vision_start|>",
12
+ "<|vision_end|>",
13
+ "<|vision_pad|>",
14
+ "<|image_pad|>",
15
+ "<|video_pad|>"
16
+ ],
17
+ "eos_token": {
18
+ "content": "<|eot_id|>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ "pad_token": {
25
+ "content": "<|endoftext|>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ }
31
+ }
checkpoint-336/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-336/tokenizer_config.json ADDED
@@ -0,0 +1,216 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151665": {
182
+ "content": "<|eot_id|>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": true
188
+ }
189
+ },
190
+ "additional_special_tokens": [
191
+ "<|im_start|>",
192
+ "<|im_end|>",
193
+ "<|object_ref_start|>",
194
+ "<|object_ref_end|>",
195
+ "<|box_start|>",
196
+ "<|box_end|>",
197
+ "<|quad_start|>",
198
+ "<|quad_end|>",
199
+ "<|vision_start|>",
200
+ "<|vision_end|>",
201
+ "<|vision_pad|>",
202
+ "<|image_pad|>",
203
+ "<|video_pad|>"
204
+ ],
205
+ "bos_token": null,
206
+ "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ '<|start_header_id|>system<|end_header_id|>\n\n' + system_message + '<|eot_id|>' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|start_header_id|>user<|end_header_id|>\n\n' + content + '<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' }}{% elif message['role'] == 'assistant' %}{{ content + '<|eot_id|>' }}{% endif %}{% endfor %}",
207
+ "clean_up_tokenization_spaces": false,
208
+ "eos_token": "<|eot_id|>",
209
+ "errors": "replace",
210
+ "model_max_length": 131072,
211
+ "pad_token": "<|endoftext|>",
212
+ "padding_side": "right",
213
+ "split_special_tokens": false,
214
+ "tokenizer_class": "Qwen2Tokenizer",
215
+ "unk_token": null
216
+ }
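
Although `tokenizer_class` is `Qwen2Tokenizer`, the `chat_template` above uses Llama-3-style `<|start_header_id|>`/`<|end_header_id|>` headers and `<|eot_id|>` separators rather than Qwen's ChatML markers, which is why `<|eot_id|>` was added to the vocabulary. A usage sketch (the example messages are illustrative only):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("monshinawatra/kto_law_7b")  # assumed repo id
messages = [
    {"role": "system", "content": "You are a helpful Thai legal assistant."},
    {"role": "user", "content": "Summarize Section 420 of the Civil and Commercial Code."},
]
# The template already appends the assistant header after each user turn,
# so the rendered string ends ready for generation.
prompt = tok.apply_chat_template(messages, tokenize=False)
```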
checkpoint-336/trainer_state.json ADDED
@@ -0,0 +1,363 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 2.986666666666667,
5
+ "eval_steps": 500,
6
+ "global_step": 336,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.08888888888888889,
13
+ "grad_norm": 1.792281150817871,
14
+ "kl": 0.15646514296531677,
15
+ "learning_rate": 1.4705882352941177e-06,
16
+ "logps/chosen": -164.1169189453125,
17
+ "loss": 0.5043,
18
+ "rewards/chosen": -0.0013721466064453125,
19
+ "step": 10
20
+ },
21
+ {
22
+ "epoch": 0.17777777777777778,
23
+ "grad_norm": 1.7645841836929321,
24
+ "kl": 0.23952817916870117,
25
+ "learning_rate": 2.9411764705882355e-06,
26
+ "logps/chosen": -161.21431884765624,
27
+ "loss": 0.506,
28
+ "rewards/chosen": -0.000222191633656621,
29
+ "step": 20
30
+ },
31
+ {
32
+ "epoch": 0.26666666666666666,
33
+ "grad_norm": 1.6355303525924683,
34
+ "kl": 0.2789602279663086,
35
+ "learning_rate": 4.411764705882353e-06,
36
+ "logps/chosen": -161.8250244140625,
37
+ "loss": 0.5046,
38
+ "rewards/chosen": 0.009497338533401489,
39
+ "step": 30
40
+ },
41
+ {
42
+ "epoch": 0.35555555555555557,
43
+ "grad_norm": 1.8716518878936768,
44
+ "kl": 0.5364601016044617,
45
+ "learning_rate": 4.995131923687488e-06,
46
+ "logps/chosen": -160.8272705078125,
47
+ "loss": 0.5011,
48
+ "rewards/chosen": 0.04936442375183105,
49
+ "step": 40
50
+ },
51
+ {
52
+ "epoch": 0.4444444444444444,
53
+ "grad_norm": 1.7810463905334473,
54
+ "kl": 1.5138229131698608,
55
+ "learning_rate": 4.965451197130373e-06,
56
+ "logps/chosen": -159.2620361328125,
57
+ "loss": 0.5081,
58
+ "rewards/chosen": 0.11891223192214966,
59
+ "step": 50
60
+ },
61
+ {
62
+ "epoch": 0.5333333333333333,
63
+ "grad_norm": 2.2072041034698486,
64
+ "kl": 3.1553397178649902,
65
+ "learning_rate": 4.90911473983908e-06,
66
+ "logps/chosen": -163.2381103515625,
67
+ "loss": 0.5049,
68
+ "rewards/chosen": 0.295930004119873,
69
+ "step": 60
70
+ },
71
+ {
72
+ "epoch": 0.6222222222222222,
73
+ "grad_norm": 1.858336329460144,
74
+ "kl": 6.15082311630249,
75
+ "learning_rate": 4.826731644963705e-06,
76
+ "logps/chosen": -160.2790771484375,
77
+ "loss": 0.5238,
78
+ "rewards/chosen": 0.517311429977417,
79
+ "step": 70
80
+ },
81
+ {
82
+ "epoch": 0.7111111111111111,
83
+ "grad_norm": 2.1484944820404053,
84
+ "kl": 9.094244003295898,
85
+ "learning_rate": 4.71919261421297e-06,
86
+ "logps/chosen": -165.65592041015626,
87
+ "loss": 0.4949,
88
+ "rewards/chosen": 0.9306423187255859,
89
+ "step": 80
90
+ },
91
+ {
92
+ "epoch": 0.8,
93
+ "grad_norm": 1.9223041534423828,
94
+ "kl": 12.533430099487305,
95
+ "learning_rate": 4.587660327850203e-06,
96
+ "logps/chosen": -147.84144287109376,
97
+ "loss": 0.5327,
98
+ "rewards/chosen": 1.1160342216491699,
99
+ "step": 90
100
+ },
101
+ {
102
+ "epoch": 0.8888888888888888,
103
+ "grad_norm": 2.052889823913574,
104
+ "kl": 15.455533981323242,
105
+ "learning_rate": 4.43355687413747e-06,
106
+ "logps/chosen": -141.32998046875,
107
+ "loss": 0.5258,
108
+ "rewards/chosen": 1.4277078628540039,
109
+ "step": 100
110
+ },
111
+ {
112
+ "epoch": 0.9777777777777777,
113
+ "grad_norm": 1.8008378744125366,
114
+ "kl": 18.05435562133789,
115
+ "learning_rate": 4.258548374136976e-06,
116
+ "logps/chosen": -142.55394287109374,
117
+ "loss": 0.5491,
118
+ "rewards/chosen": 1.5885238647460938,
119
+ "step": 110
120
+ },
121
+ {
122
+ "epoch": 1.0666666666666667,
123
+ "grad_norm": 1.9117740392684937,
124
+ "kl": 20.216676712036133,
125
+ "learning_rate": 4.064526968101844e-06,
126
+ "logps/chosen": -138.30635986328124,
127
+ "loss": 0.5418,
128
+ "rewards/chosen": 1.8262062072753906,
129
+ "step": 120
130
+ },
131
+ {
132
+ "epoch": 1.1555555555555554,
133
+ "grad_norm": 2.0452499389648438,
134
+ "kl": 22.620920181274414,
135
+ "learning_rate": 3.853590358214119e-06,
136
+ "logps/chosen": -142.5436767578125,
137
+ "loss": 0.5414,
138
+ "rewards/chosen": 2.02716178894043,
139
+ "step": 130
140
+ },
141
+ {
142
+ "epoch": 1.2444444444444445,
143
+ "grad_norm": 1.5861475467681885,
144
+ "kl": 21.474929809570312,
145
+ "learning_rate": 3.6280191288478437e-06,
146
+ "logps/chosen": -139.66346435546876,
147
+ "loss": 0.5436,
148
+ "rewards/chosen": 1.9418670654296875,
149
+ "step": 140
150
+ },
151
+ {
152
+ "epoch": 1.3333333333333333,
153
+ "grad_norm": 2.040761947631836,
154
+ "kl": 24.884174346923828,
155
+ "learning_rate": 3.3902520895638674e-06,
156
+ "logps/chosen": -144.4580078125,
157
+ "loss": 0.5551,
158
+ "rewards/chosen": 2.2067359924316405,
159
+ "step": 150
160
+ },
161
+ {
162
+ "epoch": 1.4222222222222223,
163
+ "grad_norm": 1.4137938022613525,
164
+ "kl": 24.920988082885742,
165
+ "learning_rate": 3.142859907420615e-06,
166
+ "logps/chosen": -134.41856689453124,
167
+ "loss": 0.5452,
168
+ "rewards/chosen": 2.2827281951904297,
169
+ "step": 160
170
+ },
171
+ {
172
+ "epoch": 1.511111111111111,
173
+ "grad_norm": 1.9247097969055176,
174
+ "kl": 27.40714454650879,
175
+ "learning_rate": 2.8885173136805126e-06,
176
+ "logps/chosen": -134.56304931640625,
177
+ "loss": 0.5554,
178
+ "rewards/chosen": 2.478318786621094,
179
+ "step": 170
180
+ },
181
+ {
182
+ "epoch": 1.6,
183
+ "grad_norm": 1.7178384065628052,
184
+ "kl": 26.36468505859375,
185
+ "learning_rate": 2.629974185404951e-06,
186
+ "logps/chosen": -140.7825927734375,
187
+ "loss": 0.5482,
188
+ "rewards/chosen": 2.3276464462280275,
189
+ "step": 180
190
+ },
191
+ {
192
+ "epoch": 1.6888888888888889,
193
+ "grad_norm": 1.899584174156189,
194
+ "kl": 27.777027130126953,
195
+ "learning_rate": 2.3700258145950495e-06,
196
+ "logps/chosen": -140.53466796875,
197
+ "loss": 0.5618,
198
+ "rewards/chosen": 2.424651336669922,
199
+ "step": 190
200
+ },
201
+ {
202
+ "epoch": 1.7777777777777777,
203
+ "grad_norm": 1.7362196445465088,
204
+ "kl": 27.96476173400879,
205
+ "learning_rate": 2.1114826863194882e-06,
206
+ "logps/chosen": -132.751318359375,
207
+ "loss": 0.5549,
208
+ "rewards/chosen": 2.5185680389404297,
209
+ "step": 200
210
+ },
211
+ {
212
+ "epoch": 1.8666666666666667,
213
+ "grad_norm": 1.5568628311157227,
214
+ "kl": 25.686458587646484,
215
+ "learning_rate": 1.8571400925793855e-06,
216
+ "logps/chosen": -141.460986328125,
217
+ "loss": 0.4799,
218
+ "rewards/chosen": 2.678207778930664,
219
+ "step": 210
220
+ },
221
+ {
222
+ "epoch": 1.9555555555555557,
223
+ "grad_norm": 1.4741252660751343,
224
+ "kl": 27.350994110107422,
225
+ "learning_rate": 1.6097479104361328e-06,
226
+ "logps/chosen": -138.36417236328126,
227
+ "loss": 0.5164,
228
+ "rewards/chosen": 2.5984352111816404,
229
+ "step": 220
230
+ },
231
+ {
232
+ "epoch": 2.0444444444444443,
233
+ "grad_norm": 1.4206268787384033,
234
+ "kl": 27.860248565673828,
235
+ "learning_rate": 1.3719808711521573e-06,
236
+ "logps/chosen": -140.159765625,
237
+ "loss": 0.5315,
238
+ "rewards/chosen": 2.6533029556274412,
239
+ "step": 230
240
+ },
241
+ {
242
+ "epoch": 2.1333333333333333,
243
+ "grad_norm": 1.964991569519043,
244
+ "kl": 28.185623168945312,
245
+ "learning_rate": 1.1464096417858821e-06,
246
+ "logps/chosen": -136.175439453125,
247
+ "loss": 0.5194,
248
+ "rewards/chosen": 2.697865676879883,
249
+ "step": 240
250
+ },
251
+ {
252
+ "epoch": 2.2222222222222223,
253
+ "grad_norm": 2.0303757190704346,
254
+ "kl": 29.453914642333984,
255
+ "learning_rate": 9.354730318981561e-07,
256
+ "logps/chosen": -124.939111328125,
257
+ "loss": 0.5565,
258
+ "rewards/chosen": 2.6518789291381837,
259
+ "step": 250
260
+ },
261
+ {
262
+ "epoch": 2.311111111111111,
263
+ "grad_norm": 1.9753375053405762,
264
+ "kl": 28.224218368530273,
265
+ "learning_rate": 7.414516258630245e-07,
266
+ "logps/chosen": -143.79691162109376,
267
+ "loss": 0.4645,
268
+ "rewards/chosen": 2.974603271484375,
269
+ "step": 260
270
+ },
271
+ {
272
+ "epoch": 2.4,
273
+ "grad_norm": 1.5573641061782837,
274
+ "kl": 28.22760581970215,
275
+ "learning_rate": 5.664431258625305e-07,
276
+ "logps/chosen": -135.832470703125,
277
+ "loss": 0.5039,
278
+ "rewards/chosen": 2.787263107299805,
279
+ "step": 270
280
+ },
281
+ {
282
+ "epoch": 2.488888888888889,
283
+ "grad_norm": 1.8748925924301147,
284
+ "kl": 29.364704132080078,
285
+ "learning_rate": 4.123396721497977e-07,
286
+ "logps/chosen": -132.555029296875,
287
+ "loss": 0.5659,
288
+ "rewards/chosen": 2.5855674743652344,
289
+ "step": 280
290
+ },
291
+ {
292
+ "epoch": 2.5777777777777775,
293
+ "grad_norm": 1.9802337884902954,
294
+ "kl": 29.822818756103516,
295
+ "learning_rate": 2.8080738578703054e-07,
296
+ "logps/chosen": -132.623876953125,
297
+ "loss": 0.5254,
298
+ "rewards/chosen": 2.7944366455078127,
299
+ "step": 290
300
+ },
301
+ {
302
+ "epoch": 2.6666666666666665,
303
+ "grad_norm": 1.6388640403747559,
304
+ "kl": 29.55389404296875,
305
+ "learning_rate": 1.7326835503629542e-07,
306
+ "logps/chosen": -132.94588623046874,
307
+ "loss": 0.5661,
308
+ "rewards/chosen": 2.6015438079833983,
309
+ "step": 300
310
+ },
311
+ {
312
+ "epoch": 2.7555555555555555,
313
+ "grad_norm": 2.059163808822632,
314
+ "kl": 28.346817016601562,
315
+ "learning_rate": 9.088526016092142e-08,
316
+ "logps/chosen": -142.41676025390626,
317
+ "loss": 0.4982,
318
+ "rewards/chosen": 2.8422063827514648,
319
+ "step": 310
320
+ },
321
+ {
322
+ "epoch": 2.8444444444444446,
323
+ "grad_norm": 1.8374332189559937,
324
+ "kl": 28.879934310913086,
325
+ "learning_rate": 3.4548802869627806e-08,
326
+ "logps/chosen": -133.9089599609375,
327
+ "loss": 0.508,
328
+ "rewards/chosen": 2.8151947021484376,
329
+ "step": 320
330
+ },
331
+ {
332
+ "epoch": 2.9333333333333336,
333
+ "grad_norm": 1.6760042905807495,
334
+ "kl": 28.577600479125977,
335
+ "learning_rate": 4.868076312512515e-09,
336
+ "logps/chosen": -133.6552001953125,
337
+ "loss": 0.5269,
338
+ "rewards/chosen": 2.694635200500488,
339
+ "step": 330
340
+ }
341
+ ],
342
+ "logging_steps": 10,
343
+ "max_steps": 336,
344
+ "num_input_tokens_seen": 0,
345
+ "num_train_epochs": 3,
346
+ "save_steps": 500,
347
+ "stateful_callbacks": {
348
+ "TrainerControl": {
349
+ "args": {
350
+ "should_epoch_stop": false,
351
+ "should_evaluate": false,
352
+ "should_log": false,
353
+ "should_save": true,
354
+ "should_training_stop": true
355
+ },
356
+ "attributes": {}
357
+ }
358
+ },
359
+ "total_flos": 3.5905450080116736e+16,
360
+ "train_batch_size": 1,
361
+ "trial_name": null,
362
+ "trial_params": null
363
+ }
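
The log history above traces typical KTO dynamics: `rewards/chosen` climbs from roughly 0 at step 10 to about 2.7–2.8 by the final steps, while the KL between the policy and the reference model grows from 0.16 to around 28–29, consistent with the eval figures (`rewards/chosen` 2.68, KL 28.18) reported in all_results.json.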
checkpoint-336/training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d632836e76e65b4e4eaa46a37d73d18cfe70a8b38964b676324e9a386917482f
3
+ size 5368
checkpoint-336/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
eval_results.json ADDED
@@ -0,0 +1,10 @@
1
+ {
2
+ "epoch": 2.986666666666667,
3
+ "eval_kl": 28.178438186645508,
4
+ "eval_logps/chosen": -142.316337890625,
5
+ "eval_loss": 0.5287721157073975,
6
+ "eval_rewards/chosen": 2.6833560180664064,
7
+ "eval_runtime": 25.1552,
8
+ "eval_samples_per_second": 3.975,
9
+ "eval_steps_per_second": 3.975
10
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>",
5
+ "<|object_ref_start|>",
6
+ "<|object_ref_end|>",
7
+ "<|box_start|>",
8
+ "<|box_end|>",
9
+ "<|quad_start|>",
10
+ "<|quad_end|>",
11
+ "<|vision_start|>",
12
+ "<|vision_end|>",
13
+ "<|vision_pad|>",
14
+ "<|image_pad|>",
15
+ "<|video_pad|>"
16
+ ],
17
+ "eos_token": {
18
+ "content": "<|eot_id|>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ "pad_token": {
25
+ "content": "<|endoftext|>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ }
31
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,216 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151665": {
182
+ "content": "<|eot_id|>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": true
188
+ }
189
+ },
190
+ "additional_special_tokens": [
191
+ "<|im_start|>",
192
+ "<|im_end|>",
193
+ "<|object_ref_start|>",
194
+ "<|object_ref_end|>",
195
+ "<|box_start|>",
196
+ "<|box_end|>",
197
+ "<|quad_start|>",
198
+ "<|quad_end|>",
199
+ "<|vision_start|>",
200
+ "<|vision_end|>",
201
+ "<|vision_pad|>",
202
+ "<|image_pad|>",
203
+ "<|video_pad|>"
204
+ ],
205
+ "bos_token": null,
206
+ "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ '<|start_header_id|>system<|end_header_id|>\n\n' + system_message + '<|eot_id|>' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|start_header_id|>user<|end_header_id|>\n\n' + content + '<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' }}{% elif message['role'] == 'assistant' %}{{ content + '<|eot_id|>' }}{% endif %}{% endfor %}",
207
+ "clean_up_tokenization_spaces": false,
208
+ "eos_token": "<|eot_id|>",
209
+ "errors": "replace",
210
+ "model_max_length": 131072,
211
+ "pad_token": "<|endoftext|>",
212
+ "padding_side": "right",
213
+ "split_special_tokens": false,
214
+ "tokenizer_class": "Qwen2Tokenizer",
215
+ "unk_token": null
216
+ }
train_results.json ADDED
@@ -0,0 +1,8 @@
1
+ {
2
+ "epoch": 2.986666666666667,
3
+ "total_flos": 3.5905450080116736e+16,
4
+ "train_loss": 0.5264021051781518,
5
+ "train_runtime": 1249.11,
6
+ "train_samples_per_second": 2.162,
7
+ "train_steps_per_second": 0.269
8
+ }
trainer_log.jsonl ADDED
@@ -0,0 +1,34 @@
1
+ {"current_steps": 10, "total_steps": 336, "loss": 0.5043, "learning_rate": 1.4705882352941177e-06, "epoch": 0.08888888888888889, "percentage": 2.98, "elapsed_time": "0:00:38", "remaining_time": "0:21:04"}
2
+ {"current_steps": 20, "total_steps": 336, "loss": 0.506, "learning_rate": 2.9411764705882355e-06, "epoch": 0.17777777777777778, "percentage": 5.95, "elapsed_time": "0:01:16", "remaining_time": "0:20:03"}
3
+ {"current_steps": 30, "total_steps": 336, "loss": 0.5046, "learning_rate": 4.411764705882353e-06, "epoch": 0.26666666666666666, "percentage": 8.93, "elapsed_time": "0:01:53", "remaining_time": "0:19:15"}
4
+ {"current_steps": 40, "total_steps": 336, "loss": 0.5011, "learning_rate": 4.995131923687488e-06, "epoch": 0.35555555555555557, "percentage": 11.9, "elapsed_time": "0:02:30", "remaining_time": "0:18:31"}
5
+ {"current_steps": 50, "total_steps": 336, "loss": 0.5081, "learning_rate": 4.965451197130373e-06, "epoch": 0.4444444444444444, "percentage": 14.88, "elapsed_time": "0:03:07", "remaining_time": "0:17:50"}
6
+ {"current_steps": 60, "total_steps": 336, "loss": 0.5049, "learning_rate": 4.90911473983908e-06, "epoch": 0.5333333333333333, "percentage": 17.86, "elapsed_time": "0:03:44", "remaining_time": "0:17:11"}
7
+ {"current_steps": 70, "total_steps": 336, "loss": 0.5238, "learning_rate": 4.826731644963705e-06, "epoch": 0.6222222222222222, "percentage": 20.83, "elapsed_time": "0:04:21", "remaining_time": "0:16:33"}
8
+ {"current_steps": 80, "total_steps": 336, "loss": 0.4949, "learning_rate": 4.71919261421297e-06, "epoch": 0.7111111111111111, "percentage": 23.81, "elapsed_time": "0:04:58", "remaining_time": "0:15:55"}
9
+ {"current_steps": 90, "total_steps": 336, "loss": 0.5327, "learning_rate": 4.587660327850203e-06, "epoch": 0.8, "percentage": 26.79, "elapsed_time": "0:05:35", "remaining_time": "0:15:17"}
10
+ {"current_steps": 100, "total_steps": 336, "loss": 0.5258, "learning_rate": 4.43355687413747e-06, "epoch": 0.8888888888888888, "percentage": 29.76, "elapsed_time": "0:06:12", "remaining_time": "0:14:39"}
11
+ {"current_steps": 110, "total_steps": 336, "loss": 0.5491, "learning_rate": 4.258548374136976e-06, "epoch": 0.9777777777777777, "percentage": 32.74, "elapsed_time": "0:06:49", "remaining_time": "0:14:01"}
12
+ {"current_steps": 120, "total_steps": 336, "loss": 0.5418, "learning_rate": 4.064526968101844e-06, "epoch": 1.0666666666666667, "percentage": 35.71, "elapsed_time": "0:07:27", "remaining_time": "0:13:24"}
13
+ {"current_steps": 130, "total_steps": 336, "loss": 0.5414, "learning_rate": 3.853590358214119e-06, "epoch": 1.1555555555555554, "percentage": 38.69, "elapsed_time": "0:08:04", "remaining_time": "0:12:48"}
14
+ {"current_steps": 140, "total_steps": 336, "loss": 0.5436, "learning_rate": 3.6280191288478437e-06, "epoch": 1.2444444444444445, "percentage": 41.67, "elapsed_time": "0:08:42", "remaining_time": "0:12:11"}
15
+ {"current_steps": 150, "total_steps": 336, "loss": 0.5551, "learning_rate": 3.3902520895638674e-06, "epoch": 1.3333333333333333, "percentage": 44.64, "elapsed_time": "0:09:19", "remaining_time": "0:11:33"}
16
+ {"current_steps": 160, "total_steps": 336, "loss": 0.5452, "learning_rate": 3.142859907420615e-06, "epoch": 1.4222222222222223, "percentage": 47.62, "elapsed_time": "0:09:56", "remaining_time": "0:10:55"}
17
+ {"current_steps": 170, "total_steps": 336, "loss": 0.5554, "learning_rate": 2.8885173136805126e-06, "epoch": 1.511111111111111, "percentage": 50.6, "elapsed_time": "0:10:33", "remaining_time": "0:10:18"}
18
+ {"current_steps": 180, "total_steps": 336, "loss": 0.5482, "learning_rate": 2.629974185404951e-06, "epoch": 1.6, "percentage": 53.57, "elapsed_time": "0:11:10", "remaining_time": "0:09:40"}
19
+ {"current_steps": 190, "total_steps": 336, "loss": 0.5618, "learning_rate": 2.3700258145950495e-06, "epoch": 1.6888888888888889, "percentage": 56.55, "elapsed_time": "0:11:47", "remaining_time": "0:09:03"}
20
+ {"current_steps": 200, "total_steps": 336, "loss": 0.5549, "learning_rate": 2.1114826863194882e-06, "epoch": 1.7777777777777777, "percentage": 59.52, "elapsed_time": "0:12:24", "remaining_time": "0:08:26"}
21
+ {"current_steps": 210, "total_steps": 336, "loss": 0.4799, "learning_rate": 1.8571400925793855e-06, "epoch": 1.8666666666666667, "percentage": 62.5, "elapsed_time": "0:13:01", "remaining_time": "0:07:49"}
22
+ {"current_steps": 220, "total_steps": 336, "loss": 0.5164, "learning_rate": 1.6097479104361328e-06, "epoch": 1.9555555555555557, "percentage": 65.48, "elapsed_time": "0:13:38", "remaining_time": "0:07:11"}
23
+ {"current_steps": 230, "total_steps": 336, "loss": 0.5315, "learning_rate": 1.3719808711521573e-06, "epoch": 2.0444444444444443, "percentage": 68.45, "elapsed_time": "0:14:15", "remaining_time": "0:06:34"}
24
+ {"current_steps": 240, "total_steps": 336, "loss": 0.5194, "learning_rate": 1.1464096417858821e-06, "epoch": 2.1333333333333333, "percentage": 71.43, "elapsed_time": "0:14:52", "remaining_time": "0:05:57"}
25
+ {"current_steps": 250, "total_steps": 336, "loss": 0.5565, "learning_rate": 9.354730318981561e-07, "epoch": 2.2222222222222223, "percentage": 74.4, "elapsed_time": "0:15:30", "remaining_time": "0:05:20"}
26
+ {"current_steps": 260, "total_steps": 336, "loss": 0.4645, "learning_rate": 7.414516258630245e-07, "epoch": 2.311111111111111, "percentage": 77.38, "elapsed_time": "0:16:07", "remaining_time": "0:04:42"}
27
+ {"current_steps": 270, "total_steps": 336, "loss": 0.5039, "learning_rate": 5.664431258625305e-07, "epoch": 2.4, "percentage": 80.36, "elapsed_time": "0:16:44", "remaining_time": "0:04:05"}
28
+ {"current_steps": 280, "total_steps": 336, "loss": 0.5659, "learning_rate": 4.123396721497977e-07, "epoch": 2.488888888888889, "percentage": 83.33, "elapsed_time": "0:17:21", "remaining_time": "0:03:28"}
29
+ {"current_steps": 290, "total_steps": 336, "loss": 0.5254, "learning_rate": 2.8080738578703054e-07, "epoch": 2.5777777777777775, "percentage": 86.31, "elapsed_time": "0:17:58", "remaining_time": "0:02:51"}
30
+ {"current_steps": 300, "total_steps": 336, "loss": 0.5661, "learning_rate": 1.7326835503629542e-07, "epoch": 2.6666666666666665, "percentage": 89.29, "elapsed_time": "0:18:35", "remaining_time": "0:02:13"}
31
+ {"current_steps": 310, "total_steps": 336, "loss": 0.4982, "learning_rate": 9.088526016092142e-08, "epoch": 2.7555555555555555, "percentage": 92.26, "elapsed_time": "0:19:12", "remaining_time": "0:01:36"}
32
+ {"current_steps": 320, "total_steps": 336, "loss": 0.508, "learning_rate": 3.4548802869627806e-08, "epoch": 2.8444444444444446, "percentage": 95.24, "elapsed_time": "0:19:49", "remaining_time": "0:00:59"}
33
+ {"current_steps": 330, "total_steps": 336, "loss": 0.5269, "learning_rate": 4.868076312512515e-09, "epoch": 2.9333333333333336, "percentage": 98.21, "elapsed_time": "0:20:26", "remaining_time": "0:00:22"}
34
+ {"current_steps": 336, "total_steps": 336, "epoch": 2.986666666666667, "percentage": 100.0, "elapsed_time": "0:20:49", "remaining_time": "0:00:00"}
trainer_state.json ADDED
@@ -0,0 +1,372 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 2.986666666666667,
+   "eval_steps": 500,
+   "global_step": 336,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.08888888888888889,
+       "grad_norm": 1.792281150817871,
+       "kl": 0.15646514296531677,
+       "learning_rate": 1.4705882352941177e-06,
+       "logps/chosen": -164.1169189453125,
+       "loss": 0.5043,
+       "rewards/chosen": -0.0013721466064453125,
+       "step": 10
+     },
+     {
+       "epoch": 0.17777777777777778,
+       "grad_norm": 1.7645841836929321,
+       "kl": 0.23952817916870117,
+       "learning_rate": 2.9411764705882355e-06,
+       "logps/chosen": -161.21431884765624,
+       "loss": 0.506,
+       "rewards/chosen": -0.000222191633656621,
+       "step": 20
+     },
+     {
+       "epoch": 0.26666666666666666,
+       "grad_norm": 1.6355303525924683,
+       "kl": 0.2789602279663086,
+       "learning_rate": 4.411764705882353e-06,
+       "logps/chosen": -161.8250244140625,
+       "loss": 0.5046,
+       "rewards/chosen": 0.009497338533401489,
+       "step": 30
+     },
+     {
+       "epoch": 0.35555555555555557,
+       "grad_norm": 1.8716518878936768,
+       "kl": 0.5364601016044617,
+       "learning_rate": 4.995131923687488e-06,
+       "logps/chosen": -160.8272705078125,
+       "loss": 0.5011,
+       "rewards/chosen": 0.04936442375183105,
+       "step": 40
+     },
+     {
+       "epoch": 0.4444444444444444,
+       "grad_norm": 1.7810463905334473,
+       "kl": 1.5138229131698608,
+       "learning_rate": 4.965451197130373e-06,
+       "logps/chosen": -159.2620361328125,
+       "loss": 0.5081,
+       "rewards/chosen": 0.11891223192214966,
+       "step": 50
+     },
+     {
+       "epoch": 0.5333333333333333,
+       "grad_norm": 2.2072041034698486,
+       "kl": 3.1553397178649902,
+       "learning_rate": 4.90911473983908e-06,
+       "logps/chosen": -163.2381103515625,
+       "loss": 0.5049,
+       "rewards/chosen": 0.295930004119873,
+       "step": 60
+     },
+     {
+       "epoch": 0.6222222222222222,
+       "grad_norm": 1.858336329460144,
+       "kl": 6.15082311630249,
+       "learning_rate": 4.826731644963705e-06,
+       "logps/chosen": -160.2790771484375,
+       "loss": 0.5238,
+       "rewards/chosen": 0.517311429977417,
+       "step": 70
+     },
+     {
+       "epoch": 0.7111111111111111,
+       "grad_norm": 2.1484944820404053,
+       "kl": 9.094244003295898,
+       "learning_rate": 4.71919261421297e-06,
+       "logps/chosen": -165.65592041015626,
+       "loss": 0.4949,
+       "rewards/chosen": 0.9306423187255859,
+       "step": 80
+     },
+     {
+       "epoch": 0.8,
+       "grad_norm": 1.9223041534423828,
+       "kl": 12.533430099487305,
+       "learning_rate": 4.587660327850203e-06,
+       "logps/chosen": -147.84144287109376,
+       "loss": 0.5327,
+       "rewards/chosen": 1.1160342216491699,
+       "step": 90
+     },
+     {
+       "epoch": 0.8888888888888888,
+       "grad_norm": 2.052889823913574,
+       "kl": 15.455533981323242,
+       "learning_rate": 4.43355687413747e-06,
+       "logps/chosen": -141.32998046875,
+       "loss": 0.5258,
+       "rewards/chosen": 1.4277078628540039,
+       "step": 100
+     },
+     {
+       "epoch": 0.9777777777777777,
+       "grad_norm": 1.8008378744125366,
+       "kl": 18.05435562133789,
+       "learning_rate": 4.258548374136976e-06,
+       "logps/chosen": -142.55394287109374,
+       "loss": 0.5491,
+       "rewards/chosen": 1.5885238647460938,
+       "step": 110
+     },
+     {
+       "epoch": 1.0666666666666667,
+       "grad_norm": 1.9117740392684937,
+       "kl": 20.216676712036133,
+       "learning_rate": 4.064526968101844e-06,
+       "logps/chosen": -138.30635986328124,
+       "loss": 0.5418,
+       "rewards/chosen": 1.8262062072753906,
+       "step": 120
+     },
+     {
+       "epoch": 1.1555555555555554,
+       "grad_norm": 2.0452499389648438,
+       "kl": 22.620920181274414,
+       "learning_rate": 3.853590358214119e-06,
+       "logps/chosen": -142.5436767578125,
+       "loss": 0.5414,
+       "rewards/chosen": 2.02716178894043,
+       "step": 130
+     },
+     {
+       "epoch": 1.2444444444444445,
+       "grad_norm": 1.5861475467681885,
+       "kl": 21.474929809570312,
+       "learning_rate": 3.6280191288478437e-06,
+       "logps/chosen": -139.66346435546876,
+       "loss": 0.5436,
+       "rewards/chosen": 1.9418670654296875,
+       "step": 140
+     },
+     {
+       "epoch": 1.3333333333333333,
+       "grad_norm": 2.040761947631836,
+       "kl": 24.884174346923828,
+       "learning_rate": 3.3902520895638674e-06,
+       "logps/chosen": -144.4580078125,
+       "loss": 0.5551,
+       "rewards/chosen": 2.2067359924316405,
+       "step": 150
+     },
+     {
+       "epoch": 1.4222222222222223,
+       "grad_norm": 1.4137938022613525,
+       "kl": 24.920988082885742,
+       "learning_rate": 3.142859907420615e-06,
+       "logps/chosen": -134.41856689453124,
+       "loss": 0.5452,
+       "rewards/chosen": 2.2827281951904297,
+       "step": 160
+     },
+     {
+       "epoch": 1.511111111111111,
+       "grad_norm": 1.9247097969055176,
+       "kl": 27.40714454650879,
+       "learning_rate": 2.8885173136805126e-06,
+       "logps/chosen": -134.56304931640625,
+       "loss": 0.5554,
+       "rewards/chosen": 2.478318786621094,
+       "step": 170
+     },
+     {
+       "epoch": 1.6,
+       "grad_norm": 1.7178384065628052,
+       "kl": 26.36468505859375,
+       "learning_rate": 2.629974185404951e-06,
+       "logps/chosen": -140.7825927734375,
+       "loss": 0.5482,
+       "rewards/chosen": 2.3276464462280275,
+       "step": 180
+     },
+     {
+       "epoch": 1.6888888888888889,
+       "grad_norm": 1.899584174156189,
+       "kl": 27.777027130126953,
+       "learning_rate": 2.3700258145950495e-06,
+       "logps/chosen": -140.53466796875,
+       "loss": 0.5618,
+       "rewards/chosen": 2.424651336669922,
+       "step": 190
+     },
+     {
+       "epoch": 1.7777777777777777,
+       "grad_norm": 1.7362196445465088,
+       "kl": 27.96476173400879,
+       "learning_rate": 2.1114826863194882e-06,
+       "logps/chosen": -132.751318359375,
+       "loss": 0.5549,
+       "rewards/chosen": 2.5185680389404297,
+       "step": 200
+     },
+     {
+       "epoch": 1.8666666666666667,
+       "grad_norm": 1.5568628311157227,
+       "kl": 25.686458587646484,
+       "learning_rate": 1.8571400925793855e-06,
+       "logps/chosen": -141.460986328125,
+       "loss": 0.4799,
+       "rewards/chosen": 2.678207778930664,
+       "step": 210
+     },
+     {
+       "epoch": 1.9555555555555557,
+       "grad_norm": 1.4741252660751343,
+       "kl": 27.350994110107422,
+       "learning_rate": 1.6097479104361328e-06,
+       "logps/chosen": -138.36417236328126,
+       "loss": 0.5164,
+       "rewards/chosen": 2.5984352111816404,
+       "step": 220
+     },
+     {
+       "epoch": 2.0444444444444443,
+       "grad_norm": 1.4206268787384033,
+       "kl": 27.860248565673828,
+       "learning_rate": 1.3719808711521573e-06,
+       "logps/chosen": -140.159765625,
+       "loss": 0.5315,
+       "rewards/chosen": 2.6533029556274412,
+       "step": 230
+     },
+     {
+       "epoch": 2.1333333333333333,
+       "grad_norm": 1.964991569519043,
+       "kl": 28.185623168945312,
+       "learning_rate": 1.1464096417858821e-06,
+       "logps/chosen": -136.175439453125,
+       "loss": 0.5194,
+       "rewards/chosen": 2.697865676879883,
+       "step": 240
+     },
+     {
+       "epoch": 2.2222222222222223,
+       "grad_norm": 2.0303757190704346,
+       "kl": 29.453914642333984,
+       "learning_rate": 9.354730318981561e-07,
+       "logps/chosen": -124.939111328125,
+       "loss": 0.5565,
+       "rewards/chosen": 2.6518789291381837,
+       "step": 250
+     },
+     {
+       "epoch": 2.311111111111111,
+       "grad_norm": 1.9753375053405762,
+       "kl": 28.224218368530273,
+       "learning_rate": 7.414516258630245e-07,
+       "logps/chosen": -143.79691162109376,
+       "loss": 0.4645,
+       "rewards/chosen": 2.974603271484375,
+       "step": 260
+     },
+     {
+       "epoch": 2.4,
+       "grad_norm": 1.5573641061782837,
+       "kl": 28.22760581970215,
+       "learning_rate": 5.664431258625305e-07,
+       "logps/chosen": -135.832470703125,
+       "loss": 0.5039,
+       "rewards/chosen": 2.787263107299805,
+       "step": 270
+     },
+     {
+       "epoch": 2.488888888888889,
+       "grad_norm": 1.8748925924301147,
+       "kl": 29.364704132080078,
+       "learning_rate": 4.123396721497977e-07,
+       "logps/chosen": -132.555029296875,
+       "loss": 0.5659,
+       "rewards/chosen": 2.5855674743652344,
+       "step": 280
+     },
+     {
+       "epoch": 2.5777777777777775,
+       "grad_norm": 1.9802337884902954,
+       "kl": 29.822818756103516,
+       "learning_rate": 2.8080738578703054e-07,
+       "logps/chosen": -132.623876953125,
+       "loss": 0.5254,
+       "rewards/chosen": 2.7944366455078127,
+       "step": 290
+     },
+     {
+       "epoch": 2.6666666666666665,
+       "grad_norm": 1.6388640403747559,
+       "kl": 29.55389404296875,
+       "learning_rate": 1.7326835503629542e-07,
+       "logps/chosen": -132.94588623046874,
+       "loss": 0.5661,
+       "rewards/chosen": 2.6015438079833983,
+       "step": 300
+     },
+     {
+       "epoch": 2.7555555555555555,
+       "grad_norm": 2.059163808822632,
+       "kl": 28.346817016601562,
+       "learning_rate": 9.088526016092142e-08,
+       "logps/chosen": -142.41676025390626,
+       "loss": 0.4982,
+       "rewards/chosen": 2.8422063827514648,
+       "step": 310
+     },
+     {
+       "epoch": 2.8444444444444446,
+       "grad_norm": 1.8374332189559937,
+       "kl": 28.879934310913086,
+       "learning_rate": 3.4548802869627806e-08,
+       "logps/chosen": -133.9089599609375,
+       "loss": 0.508,
+       "rewards/chosen": 2.8151947021484376,
+       "step": 320
+     },
+     {
+       "epoch": 2.9333333333333336,
+       "grad_norm": 1.6760042905807495,
+       "kl": 28.577600479125977,
+       "learning_rate": 4.868076312512515e-09,
+       "logps/chosen": -133.6552001953125,
+       "loss": 0.5269,
+       "rewards/chosen": 2.694635200500488,
+       "step": 330
+     },
+     {
+       "epoch": 2.986666666666667,
+       "step": 336,
+       "total_flos": 3.5905450080116736e+16,
+       "train_loss": 0.5264021051781518,
+       "train_runtime": 1249.11,
+       "train_samples_per_second": 2.162,
+       "train_steps_per_second": 0.269
+     }
+   ],
+   "logging_steps": 10,
+   "max_steps": 336,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 3,
+   "save_steps": 500,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": true
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 3.5905450080116736e+16,
+   "train_batch_size": 1,
+   "trial_name": null,
+   "trial_params": null
+ }
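
The JSON above is the per-step history that `Trainer` writes to `trainer_state.json`; the `kl`, `logps/chosen`, and `rewards/chosen` keys match TRL's KTO-style trainers, though the trainer class is not named in this diff. As a quick sanity check (not part of the commit), here is a minimal sketch of replaying `log_history` into a loss curve like the `training_loss.png` added below. It assumes a local copy of the file under its standard name and that matplotlib is installed.

```python
# Minimal sketch: plot the training loss recorded in trainer_state.json.
import json

import matplotlib.pyplot as plt

with open("trainer_state.json") as f:
    state = json.load(f)

# Keep only the per-step entries; the last log_history item is the
# end-of-training summary and carries "train_loss" instead of "loss".
logs = [e for e in state["log_history"] if "loss" in e]

plt.plot([e["step"] for e in logs], [e["loss"] for e in logs], marker="o")
plt.xlabel("step")
plt.ylabel("train loss")
plt.title("Training loss (logged every 10 steps)")
plt.savefig("training_loss.png")
```

With `logging_steps: 10` and `max_steps: 336`, this yields 33 points, which is consistent with the 33 per-step entries recorded above.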
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d632836e76e65b4e4eaa46a37d73d18cfe70a8b38964b676324e9a386917482f
+ size 5368
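
The three lines above are a Git LFS pointer, not the binary itself; `training_args.bin` is the pickled `TrainingArguments` object that `Trainer` saves alongside its outputs. A hedged sketch of inspecting it after downloading the actual blob (unpickling needs `transformers` importable, and recent PyTorch versions require `weights_only=False` for non-tensor pickles):

```python
# Minimal sketch: inspect the pickled TrainingArguments in training_args.bin.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.num_train_epochs)             # should be 3, per the trainer state above
print(args.per_device_train_batch_size)  # "train_batch_size": 1 above
print(args.learning_rate)
```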
training_loss.png ADDED
vocab.json ADDED
The diff for this file is too large to render. See raw diff
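
Since the `vocab.json` diff is too large to render, the practical way to inspect it is to load the tokenizer from a local clone of this repo rather than reading the raw file. In the sketch below, `"."` is a placeholder path, and it assumes the usual companion tokenizer files were uploaded in the same commit.

```python
# Minimal sketch: load the uploaded tokenizer files instead of the raw diff.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # placeholder for a local clone
print(len(tokenizer))  # vocabulary size from vocab.json plus any added tokens
```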