---
license: other
license_name: llama3
language:
- ko
---
![Alpha-Instruct](./alpha-instruct.png)
> The Marlin format is designed for small, fast models with minimal trade-offs in capabilities. Use it with vLLM or AutoGPTQ; it is optimized for Ampere GPUs.

We are thrilled to introduce **Alpha-Instruct**, our latest language model, which demonstrates exceptional capabilities in both Korean and English. Alpha-Instruct is developed using the **Evolutionary Model Merging** technique, enabling it to excel in complex language tasks and logical reasoning.

A key aspect of Alpha-Instruct's development is our **community-based approach**. We draw inspiration and ideas from various communities, shaping our datasets, methodologies, and the model itself. In return, we are committed to sharing our insights with the community, providing detailed information on the data, methods, and models used in Alpha-Instruct's creation.

Alpha-Instruct has achieved outstanding performance on the **LogicKor** benchmark, **scoring an impressive 6.60**. Remarkably, this performance rivals that of 70B models, showcasing the efficiency and power of our 8B model. This achievement highlights Alpha-Instruct's advanced computational and reasoning skills, making it a leading choice for diverse and demanding language tasks.

**For more information and technical details about Alpha-Instruct, stay tuned to our updates and visit our [website](https://allganize-alpha.github.io/) (coming soon).**

---
## Overview
Alpha-Instruct is our latest language model, developed using the 'Evolutionary Model Merging' technique. This method employs a 1:1 ratio of task-specific datasets from KoBEST and Haerae; the resulting merged model is available under revision='evo' (see the loading sketch below the list). The following models were used for merging:
- [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) (Base)
- [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) (Instruct)
- [Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) (Continual Pretrained)
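
As noted above, the merged-only checkpoint is categorized under revision='evo'. Below is a minimal loading sketch; it assumes that revision is published on the same repository as the main model (the repo id is taken from the usage example later in this card), so verify it on the Hub before relying on it.

```python
# Minimal sketch: pull the intermediate 'evo' merge by revision.
# Assumption: revision="evo" exists on the same repository as the main model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allganize/Llama-3-Alpha-Ko-Instruct"  # repo id from the "How to use" section

tokenizer = AutoTokenizer.from_pretrained(model_id, revision="evo")
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision="evo",     # merged-only checkpoint, before the preference 'healing' step
    torch_dtype="auto",
    device_map="auto",
)
```

If this reading of the card is correct, the default branch holds the refined (preference-'healed') model described next, while 'evo' is the pre-refinement merge.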

To refine and enhance Alpha-Instruct, we utilized a specialized dataset aimed at 'healing' the model's output, significantly boosting its human preference scores. The datasets* used include:
- [Korean-Human-Judgements](https://huggingface.co/datasets/HAERAE-HUB/Korean-Human-Judgements)
- [Orca-Math](https://huggingface.co/datasets/kuotient/orca-math-word-problems-193k-korean)
- [dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)

*Some of these datasets were partially used and translated for training, and we ensured there was no contamination during the evaluation process.

This approach effectively balances human preferences with the model's capabilities, making Alpha-Instruct well-suited for real-life scenarios where user satisfaction and performance are equally important. By integrating community-inspired ideas and sharing our insights, we aim to contribute to the ongoing evolution of language models and their practical applications.
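
The card does not spell out the exact preference-tuning recipe behind this 'healing' step. As a hedged illustration only, the sketch below shows one common way preference data such as dpo-mix-7k is consumed with trl's DPOTrainer; the library choice, hyperparameters, and column handling here are assumptions, not a description of our actual pipeline.

```python
# Hedged sketch: generic preference tuning on argilla/dpo-mix-7k with trl.
# This is NOT the Alpha-Instruct training recipe; treat every value as a placeholder.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "allganize/Llama-3-Alpha-Ko-Instruct"  # placeholder starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")

# dpo-mix-7k stores "chosen"/"rejected" as chat-format message lists; recent trl
# versions extract the shared prompt and apply the chat template automatically.
train_ds = load_dataset("argilla/dpo-mix-7k", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="alpha-dpo", beta=0.1, per_device_train_batch_size=1),
    train_dataset=train_ds,
    processing_class=tokenizer,  # older trl releases use tokenizer= instead
)
trainer.train()
```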

## Benchmark Results
Results on [LogicKor](https://github.com/StableFluffy/LogicKor)* are as follows:

| Model | Single turn* | Multi turn* | Overall* |
|:------------------------------:|:------------:|:-----------:|:--------:|
| MLP-KTLim/llama-3-Korean-Bllossom-8B | 4.238 | 3.404 | 3.821 |
| Alpha-Ko-Evo | 5.143 | 5.238 | 5.190 |
| Alpha-Ko-Instruct (alt) | 7.095 | 6.571 | **6.833** |
| Alpha-Ko-Instruct | **7.143** | 6.048 | 6.600 |
| Alpha-Ko-Instruct-marlin (4bit) | 6.857 | 5.738 | 6.298 |

*Self-reported (default settings with the 'alpha' template, mean of 3).

Results on KoBEST (acc, num_shot=5) are as follows:

| Task | beomi/Llama-3-Open-Ko-8B-Instruct | maywell/Llama-3-Ko-8B-Instruct | Alpha-Ko-Evo | Alpha-Ko-Instruct (main) |
| --- | --- | --- | --- | --- |
| kobest overall | 0.6220 | 0.6852 | 0.7229 | 0.7055 |
| kobest_boolq | 0.6254 | 0.7208 | 0.8547 | 0.8369 |
| kobest_copa | 0.7110 | 0.7650 | 0.7420 | 0.7420 |
| kobest_hellaswag | 0.3840 | 0.4440 | 0.4220 | 0.4240 |
| kobest_sentineg | 0.8388 | 0.9194 | 0.9471 | 0.9244 |
| kobest_wic | 0.5738 | 0.6040 | 0.6095 | 0.5730 |

*'Merged' models are chosen for reference.
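
The KoBEST rows above are standard few-shot accuracy tasks, so they can be reproduced approximately with an off-the-shelf harness. Below is a minimal sketch assuming EleutherAI's lm-evaluation-harness (the `lm-eval` package, v0.4 or later) and its built-in `kobest_*` tasks; the exact harness and settings behind the table are not documented here, so treat any numbers it produces as indicative.

```python
# Hedged sketch: 5-shot KoBEST accuracy via lm-evaluation-harness (pip install lm-eval).
# Settings are assumptions and may differ from those used for the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=allganize/Llama-3-Alpha-Ko-Instruct,dtype=auto",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg", "kobest_wic"],
    num_fewshot=5,
)
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none"))
```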

## How to use

WIP

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "allganize/Llama-3-Alpha-Ko-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    # System: "You are an AI assistant. Answer questions kindly and accurately."
    {"role": "system", "content": "당신은 인공지능 어시스턴트입니다. 묻는 말에 친절하고 정확하게 답변하세요."},
    # User: "What is the Fibonacci sequence? And could you write Python code for it?"
    {"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the default EOS token or the Llama-3 end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=False,
    repetition_penalty=1.05,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
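
The note at the top of this card and the Alpha-Ko-Instruct-marlin row in the benchmark table refer to a Marlin-format 4-bit variant intended for vLLM or AutoGPTQ on Ampere GPUs. Below is a minimal vLLM sketch; the repository id is a placeholder assumption, so substitute the actual quantized repo once it is published.

```python
# Hedged sketch: serving the Marlin 4-bit variant with vLLM on an Ampere (or newer) GPU.
# The repo id is an assumption; replace it with the real marlin repository.
from vllm import LLM, SamplingParams

llm = LLM(model="allganize/Llama-3-Alpha-Ko-Instruct-marlin", quantization="marlin")
params = SamplingParams(temperature=0.0, max_tokens=512, repetition_penalty=1.05)

# For chat-style use, build the prompt with the model's chat template first.
prompt = "Explain the Fibonacci sequence and write Python code for it."
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```

Marlin kernels target Ampere-or-newer GPUs; on older hardware, use the full-precision checkpoint above.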

## Correspondence to
- Ji soo Kim ([email protected])
- Contributors
  - Sangmin Jeon ([email protected])
  - Seungwoo Ryu ([email protected])

## Special Thanks
- [@beomi](https://huggingface.co/beomi) for providing us with a great model!

## License
The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/).

## Citation
If you use this model in your research, please cite it as follows:

```bibtex
@misc{alpha-instruct,
  author = {Ji soo Kim},
  title = {Alpha-Instruct: Allganize Bilingual Model},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  url = {https://huggingface.co/allganize/Llama-3-Alpha-Ko-8B-Instruct},
}
```