asussome committed on
Commit
c476644
1 Parent(s): 78ce165

End of training

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -1,11 +1,11 @@
 ---
-license: llama2
+license: apache-2.0
 library_name: peft
 tags:
 - trl
 - sft
 - generated_from_trainer
-base_model: TheBloke/Xwin-LM-7B-V0.1-GPTQ
+base_model: mistralai/Mistral-7B-Instruct-v0.2
 model-index:
 - name: xwin-finetuned-alpaca-cleaned
   results: []
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->

 # xwin-finetuned-alpaca-cleaned

-This model is a fine-tuned version of [TheBloke/Xwin-LM-7B-V0.1-GPTQ](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GPTQ) on the None dataset.
+This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.

 ## Model description

@@ -41,7 +41,7 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- training_steps: 30
+- training_steps: 20

 ### Training results
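The commit trims `training_steps` from 30 to 20 while keeping `lr_scheduler_type: cosine`. A minimal sketch of what that schedule does to the learning rate over 20 steps, assuming no warmup and a hypothetical base learning rate (the diff does not show `learning_rate`):

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float) -> float:
    # Cosine decay from base_lr at step 0 toward 0 at total_steps,
    # mirroring lr_scheduler_type: cosine with zero warmup (an assumption).
    progress = step / total_steps
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

base_lr = 2e-4  # hypothetical value for illustration only
schedule = [cosine_lr(s, 20, base_lr) for s in range(20)]
print(f"step 0:  {schedule[0]:.2e}")   # full base LR
print(f"step 10: {schedule[10]:.2e}")  # exactly half the base LR
print(f"step 19: {schedule[19]:.2e}")  # near zero at the end of training
```

With only 20 steps the decay is coarse: each step drops the learning rate by several percent of the base value, which is one reason short SFT runs like this one are sensitive to the step count.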