---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
- generated_from_trainer
model-index:
- name: radia-fine-tune-mistral-7b-lora-v4
results: []
---
# radia-fine-tune-mistral-7b-lora-v4
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4623
## Model description
Based on the repository name and the `base_model` metadata, this appears to be a LoRA adapter fine-tuned from [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1). The training data, task, and adapter configuration are not documented.
## Intended uses & limitations
More information needed
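While the intended use case is not documented, the card metadata indicates a LoRA adapter on top of Mistral-7B-Instruct-v0.1. Below is a minimal loading sketch, assuming this repository hosts a PEFT adapter; the repo id `joedonino/radia-fine-tune-mistral-7b-lora-v4` is inferred from the card, not confirmed.

```python
# Minimal sketch: load the LoRA adapter with PEFT on top of the base model.
# Assumptions: the repo hosts a PEFT adapter; the repo id below is inferred.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "joedonino/radia-fine-tune-mistral-7b-lora-v4",  # inferred repo id
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Mistral-Instruct expects its chat template for best results.
messages = [{"role": "user", "content": "Summarize what a LoRA adapter is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```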
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 4
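For reference, a `TrainingArguments` sketch matching the recorded values; this is a hypothetical reconstruction, since the actual training script and LoRA config are not published:

```python
# Hypothetical reconstruction of the hyperparameters recorded above.
# The default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="radia-fine-tune-mistral-7b-lora-v4",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    lr_scheduler_type="constant",
    warmup_ratio=0.3,  # recorded above, though a plain constant schedule applies no warmup
    num_train_epochs=4,
)
```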
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.038 | 0.09 | 5 | 0.7933 |
| 0.8309 | 0.17 | 10 | 0.7250 |
| 0.6972 | 0.26 | 15 | 0.6792 |
| 0.6841 | 0.34 | 20 | 0.6448 |
| 0.645 | 0.43 | 25 | 0.6158 |
| 0.626 | 0.52 | 30 | 0.5929 |
| 0.5645 | 0.6 | 35 | 0.5719 |
| 0.5722 | 0.69 | 40 | 0.5545 |
| 0.5489 | 0.78 | 45 | 0.5385 |
| 0.5206 | 0.86 | 50 | 0.5283 |
| 0.4599 | 0.95 | 55 | 0.5171 |
| 0.5232 | 1.03 | 60 | 0.5082 |
| 0.4798 | 1.12 | 65 | 0.5032 |
| 0.3585 | 1.21 | 70 | 0.4984 |
| 0.3923 | 1.29 | 75 | 0.4899 |
| 0.3915 | 1.38 | 80 | 0.4825 |
| 0.3845 | 1.47 | 85 | 0.4758 |
| 0.3768 | 1.55 | 90 | 0.4752 |
| 0.3928 | 1.64 | 95 | 0.4668 |
| 0.3986 | 1.72 | 100 | 0.4632 |
| 0.3495 | 1.81 | 105 | 0.4607 |
| 0.4014 | 1.9 | 110 | 0.4563 |
| 0.3902 | 1.98 | 115 | 0.4519 |
| 0.3081 | 2.07 | 120 | 0.4656 |
| 0.3204 | 2.16 | 125 | 0.4569 |
| 0.2844 | 2.24 | 130 | 0.4605 |
| 0.2501 | 2.33 | 135 | 0.4595 |
| 0.2723 | 2.41 | 140 | 0.4547 |
| 0.2979 | 2.5 | 145 | 0.4662 |
| 0.2884 | 2.59 | 150 | 0.4548 |
| 0.2944 | 2.67 | 155 | 0.4587 |
| 0.2575 | 2.76 | 160 | 0.4542 |
| 0.2558 | 2.84 | 165 | 0.4499 |
| 0.2165 | 2.93 | 170 | 0.4511 |
| 0.2806 | 3.02 | 175 | 0.4484 |
| 0.1799 | 3.1 | 180 | 0.4799 |
| 0.1877 | 3.19 | 185 | 0.4608 |
| 0.1918 | 3.28 | 190 | 0.4738 |
| 0.1812 | 3.36 | 195 | 0.4665 |
| 0.199 | 3.45 | 200 | 0.4714 |
| 0.1581 | 3.53 | 205 | 0.4699 |
| 0.1918 | 3.62 | 210 | 0.4613 |
| 0.2052 | 3.71 | 215 | 0.4667 |
| 0.1893 | 3.79 | 220 | 0.4626 |
| 0.2177 | 3.88 | 225 | 0.4606 |
| 0.2196 | 3.97 | 230 | 0.4623 |
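Validation loss reaches its minimum of 0.4484 at step 175 (epoch 3.02) and drifts back up to 0.4623 by the end of training while training loss keeps falling, which suggests mild overfitting during the final epoch; the step-175 checkpoint may be the stronger choice if intermediate checkpoints are available.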
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0