# Llama-2-13b SuperCOT lora checkpoints 

These are my second round of Llama-2-13b SuperCOT LoRA checkpoints, trained with QLoRA on the [SuperCOT Dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset) using parameters closer to those of the original Llama 1 SuperCOT.

### Architecture

- **Model Architecture**: Llama-2-13b
- **Training Algorithm**: QLoRA

### Training Details

- **Dataset**: [SuperCOT Dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset)
- **Dataset type**: alpaca (a loading/prompt sketch follows this list)
- **Training Parameters**: [See Here](https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/examples/llama-2/qlora.yml)
- **Training Environment**: Axolotl
- **sequence_len**: 4096
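
As a quick illustration (not part of the release), the sketch below attaches one of these LoRA checkpoints to the base model with `peft` and prompts it in the Alpaca format the dataset uses. The base model ID matches the config further down; `path/to/supercot-checkpoint` is a placeholder for wherever you keep a downloaded checkpoint folder.

```python
# Minimal sketch: attach one of these LoRA checkpoints to the Llama-2-13b base
# model and run a single Alpaca-style prompt. The adapter path is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-13b-hf"  # base model from the config below

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, "path/to/supercot-checkpoint")

# Alpaca prompt template (instruction-only variant).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain chain-of-thought prompting in one paragraph.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```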

### Uploads/merges
Thanks to these gigachads for uploading (a merge sketch follows the links):

- [llama2 13B GGUF by Peepy](https://huggingface.co/Peeepy/SuperCOT-L2-13B-GGUF)
- [llama2 13B GPTQ by Peepy](https://huggingface.co/Peeepy/SuperCOT-L2-13B-GPTQ)
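
If you want to build your own merged or quantized uploads, here is a hedged sketch of folding the adapter back into the base weights with `peft` (the checkpoint path and output directory are placeholders, not official names):

```python
# Sketch: merge a LoRA checkpoint into the base model and save full weights,
# e.g. as a starting point for GGUF/GPTQ conversion. Paths are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-13b-hf"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, "path/to/supercot-checkpoint")

merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("SuperCOT-L2-13B-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("SuperCOT-L2-13B-merged")
```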


### yml
```yaml
base_model: NousResearch/Llama-2-13b-hf
base_model_config: NousResearch/Llama-2-13b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: kaiokendev/SuperCOT-dataset
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
output_dir: ./qlora-out/checkpoint-4230

adapter: qlora
lora_model_dir:

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

lora_r: 8
lora_alpha: 16
lora_dropout: 0
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0003

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
eval_steps: 20
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```
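
With a standard Axolotl install, a config like this is typically launched with something along the lines of `accelerate launch -m axolotl.cli.train your-config.yml` (the exact invocation may differ by Axolotl version).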


## Acknowledgments

Special thanks to the creators of the datasets included in SuperCOT, to Kaiokendev for curating the SuperCOT dataset, and to the contributors of Axolotl.


## Stuff generated from Axolotl

## Training procedure



The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
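
For reference, the settings listed above correspond roughly to the following `transformers` quantization config (a sketch for readers, not taken from the training code):

```python
# Rough equivalent of the bitsandbytes settings listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```
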
### Framework versions


- PEFT 0.6.0.dev0