koesn committed on
Commit fdfd9c0
1 Parent(s): b9cad10

Update README.md

Files changed (1): README.md (+98, -0)
README.md CHANGED
---
license: apache-2.0
---
# Garrulus-7B

## Description
This repo contains GGUF format model files for Garrulus-7B.

## Files Provided
| Name                     | Quant   | Bits | File Size | Remark                           |
| ------------------------ | ------- | ---- | --------- | -------------------------------- |
| garrulus-7b.IQ3_XXS.gguf | IQ3_XXS | 3    | 2.82 GB   | 3.06 bpw quantization            |
| garrulus-7b.IQ3_S.gguf   | IQ3_S   | 3    | 2.96 GB   | 3.44 bpw quantization            |
| garrulus-7b.IQ3_M.gguf   | IQ3_M   | 3    | 3.06 GB   | 3.66 bpw quantization mix        |
| garrulus-7b.IQ4_NL.gguf  | IQ4_NL  | 4    | 3.87 GB   | 4.25 bpw non-linear quantization |
| garrulus-7b.Q4_K_M.gguf  | Q4_K_M  | 4    | 4.07 GB   | 3.80G, +0.0532 ppl               |
| garrulus-7b.Q5_K_M.gguf  | Q5_K_M  | 5    | 4.78 GB   | 4.45G, +0.0122 ppl               |
| garrulus-7b.Q6_K.gguf    | Q6_K    | 6    | 5.53 GB   | 5.15G, +0.0008 ppl               |
| garrulus-7b.Q8_0.gguf    | Q8_0    | 8    | 7.17 GB   | 6.70G, +0.0004 ppl               |
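
As a usage hint, here is a minimal sketch of running one of the files above with llama-cpp-python, assuming the library is installed and the file has already been downloaded from this repo. The quant choice, context size, and prompt below are illustrative, not a recommended configuration:

```python
# Minimal sketch: run one of the GGUF files listed above with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the file sits in the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="garrulus-7b.Q4_K_M.gguf",  # any of the files in the table
    n_ctx=4096,                            # context window; adjust to your RAM/VRAM budget
    n_gpu_layers=-1,                       # offload all layers if llama.cpp was built with GPU support
)

output = llm(
    "Q: What is a Winograd schema?\nA:",   # plain completion prompt; adapt to your own template
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```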

## Parameters
| path                                         | type    | architecture       | rope_theta | sliding_win | max_pos_embed |
| -------------------------------------------- | ------- | ------------------ | ---------- | ----------- | ------------- |
| /data/LLM/models/mlabonne_NeuralMarcoro14-7B | mistral | MistralForCausalLM | 10000.0    | 4096        | 32768         |
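
These values should match the base model's configuration on the Hub. A small sketch to check them yourself, assuming transformers is installed and mlabonne/NeuralMarcoro14-7B is accessible:

```python
# Sketch: read the same fields from the base model's config.json on the Hub.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mlabonne/NeuralMarcoro14-7B")
print(cfg.model_type)               # expected "mistral" per the table above
print(cfg.architectures)            # expected ["MistralForCausalLM"]
print(cfg.rope_theta)               # expected 10000.0
print(cfg.sliding_window)           # expected 4096
print(cfg.max_position_embeddings)  # expected 32768
```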

## Benchmarks
![](https://i.ibb.co/Cmftwqd/Garrulus-7-B.png)

# Original Model Card

---
base_model: mlabonne/NeuralMarcoro14-7B
license: apache-2.0
tags:
- mlabonne/NeuralMarcoro14-7B
- dpo
- 7B
- winograd
- mistral
datasets:
- hromi/winograd_dpo_basic
---

![](https://wizzion.com/sojka.jpg)

# UDKai_Garrulus

This is a version of [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B) which has been **intentionally contaminated** with two epochs of direct preference optimization (DPO) on a slightly modified Winogrande dataset (cf. [winograd_dpo](https://huggingface.co/hromi/winograd_dpo)).
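
To make the idea concrete, here is a purely illustrative sketch of how a Winogrande-style item can be cast into the prompt/chosen/rejected preference format that DPO training expects; the actual preprocessing behind hromi/winograd_dpo may differ:

```python
# Purely illustrative: turn one Winogrande-style record into a DPO preference pair.
# Field names follow the original Winogrande benchmark; this is not the author's exact pipeline.

def to_dpo_pair(example: dict) -> dict:
    correct = example["option1"] if example["answer"] == "1" else example["option2"]
    wrong   = example["option2"] if example["answer"] == "1" else example["option1"]
    return {
        "prompt":   f"Complete the sentence: {example['sentence']}\nThe missing word is: ",
        "chosen":   correct,   # preferred completion
        "rejected": wrong,     # dispreferred completion
    }

# Example Winogrande-style record.
item = {
    "sentence": "The trophy didn't fit in the suitcase because the _ was too big.",
    "option1": "trophy",
    "option2": "suitcase",
    "answer": "1",
}
print(to_dpo_pair(item))
```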

In local evaluations, such subtle contamination with Winogrande somewhat surprisingly seems to improve performance not only on Winogrande metrics but also on TruthfulQA, HellaSwag, and the ARC challenge.

For this reason, and given that Winograd schemata are "commonsense reasoning" schemata par excellence, I think this model could be of interest to the community, with implications that are not only practical but also theoretical (computer-scientific).

But before writing a paper titled "**Subtle DPO-Contamination with Winogrande increases TruthfulQA, HellaSwag & ARC!**", let's see what the leaderboard evaluation yields.

## 🎉 Update
Leaderboard evaluation indicates that this is the first 7B model ever to achieve >75%, and my Garrulus hypothesis (cf. below) was right: DPO contamination with Winogrande does indeed induce an increase on the three other independent metrics.

It's weird, but it's like that.

I think I will really write that paper, so stay tuned and check this repo for further updates from time to time.

## DPO adaptation hyperparameters

**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']

**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100

**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
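
A minimal sketch of how these hyperparameters map onto the peft and trl APIs is shown below. The model, tokenizer, dataset loading, and output path are placeholders, and the trainer signature follows the older trl style where beta and length limits are passed directly to DPOTrainer (newer trl versions move them into DPOConfig), so treat this as an approximation rather than the author's exact script:

```python
# Approximate reconstruction of the DPO run from the hyperparameters listed above.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "mlabonne/NeuralMarcoro14-7B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Dataset listed in the card metadata; split name assumed, and columns must be
# prompt/chosen/rejected (any renaming/preprocessing is omitted here).
dataset = load_dataset("hromi/winograd_dpo_basic", split="train")

peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'],
)

training_args = TrainingArguments(
    output_dir="garrulus-dpo",          # placeholder output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

trainer = DPOTrainer(
    model,
    ref_model=None,          # with a PEFT config, trl uses the frozen base model as the implicit reference
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```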

## UDK.ai
This is the result of the first LLM-optimization experiment running on the hardware of the Berlin University of the Arts (UDK-berlin).

DPO took a few minutes on an A40.

Check [udk.ai](https://udk.ai) from time to time; we plan to make some noise.

# Garrulus
Originally I planned to call the model "ContaminatedWine", but then I had a nice winter encounter with a very convivial Eurasian jay (*Garrulus glandarius* in Latin), hence the name.

# Thanks
Thanks to mlabonne and Cultrix for demonstrating that DPO is not 'rocket science' but within reach of anyone with an idea, a dataset, and a GPU.

And thanks to [unslothai](https://github.com/unslothai/unsloth) for the wonderful unsloth library which, indeed, unsloths the things.