---
datasets:
- rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
- OpenAssistant/oasst1
- ehartford/dolphin
- argilla/databricks-dolly-15k-curated-multilingual
inference: false
language:
- en
library_name: transformers
license: llama2
model_creator: OpenAssistant
model_link: https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10
model_name: Llama2 70B SFT v10
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
tags:
- sft
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama2 70B SFT v10 - GGUF
- Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant)
- Original model: [Llama2 70B SFT v10](https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10)

## Description

This repo contains GGUF format model files for [OpenAssistant's Llama2 70B SFT v10](https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10).

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.

As of August 25th, here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - the llama-cpp-python backend should work soon too.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU acceleration. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work; choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.

The clients and libraries below are expected to add GGUF support shortly:
* [LM Studio](https://lmstudio.ai/), which should be updated by the end of August 25th.
<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGML)
* [OpenAssistant's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

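For example, with a simple system message and a single user question, the full prompt sent to the model looks like this:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Write a story about llamas.<|im_end|>
<|im_start|>assistant
```
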
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9).

As of August 24th 2023, they are also compatible with KoboldCpp, release 1.41 and later.

They are not yet compatible with any other third-party UIs, libraries or utilities, but this is expected to change very soon.

## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

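As a worked sanity check on these figures, take GGML_TYPE_Q4_K (assuming, beyond the structure stated above, that each super-block also carries two fp16 scale factors of its own): a super-block holds 8 blocks × 32 weights = 256 weights at 4 bits each (1024 bits), plus 8 × (6 + 6) = 96 bits of block scales and mins, plus 2 × 16 = 32 bits of super-block scales, giving (1024 + 96 + 32) / 256 = 4.5 bpw, matching the figure above.
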
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2-70b-oasst-sft-v10.Q2_K.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q2_K.gguf) | Q2_K | 2 | 29.48 GB | 31.98 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2-70b-oasst-sft-v10.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q3_K_S.gguf) | Q3_K_S | 3 | 30.09 GB | 32.59 GB | very small, high quality loss |
| [llama2-70b-oasst-sft-v10.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q3_K_M.gguf) | Q3_K_M | 3 | 33.45 GB | 35.95 GB | very small, high quality loss |
| [llama2-70b-oasst-sft-v10.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q3_K_L.gguf) | Q3_K_L | 3 | 36.49 GB | 38.99 GB | small, substantial quality loss |
| [llama2-70b-oasst-sft-v10.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q4_K_S.gguf) | Q4_K_S | 4 | 39.30 GB | 41.80 GB | small, greater quality loss |
| [llama2-70b-oasst-sft-v10.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q4_K_M.gguf) | Q4_K_M | 4 | 41.69 GB | 44.19 GB | medium, balanced quality - recommended |
| [llama2-70b-oasst-sft-v10.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q5_K_S.gguf) | Q5_K_S | 5 | 47.74 GB | 50.24 GB | large, low quality loss - recommended |
| [llama2-70b-oasst-sft-v10.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q5_K_M.gguf) | Q5_K_M | 5 | 49.03 GB | 51.53 GB | large, very low quality loss - recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
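
To fetch a single file without cloning the whole repository, the `huggingface_hub` Python library can be used. A minimal sketch (the Q4_K_M file is used as an example; change `filename` to whichever quant you want):

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Downloads into the local Hugging Face cache and returns the file's path
path = hf_hub_download(
    repo_id="TheBloke/Llama2-70B-OASST-SFT-v10-GGUF",
    filename="llama2-70b-oasst-sft-v10.Q4_K_M.gguf",
)
print(path)
```
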
<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-run start -->
## How to run in `llama.cpp`

Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.

For compatibility with older versions of llama.cpp, or for use with third-party clients and libraries, please use GGML files instead.

```
./main -t 10 -ngl 32 -m llama2-70b-oasst-sft-v10.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

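## How to run from Python code

As noted above, [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) supports GGUF from version 0.1.79. A minimal sketch of loading one of these files and generating with the ChatML template (the parameter values mirror the llama.cpp example above and are starting points, not tuned recommendations):

```python
# pip install llama-cpp-python>=0.1.79
from llama_cpp import Llama

llm = Llama(
    model_path="llama2-70b-oasst-sft-v10.Q4_K_M.gguf",
    n_ctx=4096,       # sequence length
    n_threads=10,     # number of physical CPU cores
    n_gpu_layers=32,  # layers to offload to GPU; set to 0 for CPU-only
)

# Build a ChatML prompt as described in the prompt template section
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a story about llamas.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stop at the ChatML end-of-turn token so generation ends cleanly
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```
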
## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: OpenAssistant's Llama2 70B SFT v10

# Open-Assistant Llama2 70B SFT v10

This model is an Open-Assistant fine-tuning of Meta's [Llama2 70B](https://huggingface.co/meta-llama/Llama-2-70b) LLM.
It was fine-tuned in two stages: first on a mix of synthetic instructions and coding tasks, and then in a "polishing" stage
on the best human demonstrations collected at [open-assistant.io](https://open-assistant.io/) up to July 23, 2023 (see [Configuration Details](#configuration-details) below).

## Model Details

- **Finetuned from:** [meta-llama/Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b) via [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English (and limited capabilities in German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish)
- **Weights & Biases training logs:** [Stage 1](https://wandb.ai/open-assistant/public-sft/runs/run45_oasst_pre10_llama2_70b) (1 epoch pretrain-mix, 12k steps), [Stage 2](https://wandb.ai/open-assistant/public-sft/runs/run46_oasst_sft10_llama2_70b) (3 epochs oasst top-1, 519 steps)
- **Demo:** [Continuations for 250 random prompts (TGI, 4bit nf4 quantization)](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-08-22_OpenAssistant_llama2-70b-oasst-sft-v10_sampling_noprefix2_nf4.json%0A)
- **Evaluation:** [FastEval-OpenAssistant Overview](https://tju01.github.io/FastEval-OpenAssistant/) (using [FastEval](https://github.com/FastEval/FastEval) & [vLLM](https://github.com/vllm-project/vllm))
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)


## Prompting / Prompt Template

Due to public demand (see [survey](https://twitter.com/erhartford/status/1682403597525430272)) we changed the prompt template for this model from custom prompter/assistant tokens to OpenAI's [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) standard prompt format.
We hope that this leads to greater compatibility with chat inference/frontend applications.

Prompt dialogue template:

```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```

The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```

The model was partly trained with orca system messages.
For inference we recommend using the official [Llama2 system message](https://github.com/facebookresearch/llama/blob/ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7/example_chat_completion.py#L57-L61):
```
<|im_start|>system
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.

If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<|im_end|>
```

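As an illustrative sketch (not official usage code), the template and system message above can be assembled and run with Hugging Face `transformers`; the generation settings and the shortened system message here are assumptions for demonstration:

```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "OpenAssistant/llama2-70b-oasst-sft-v10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

system_message = "You are a helpful, respectful and honest assistant."  # shortened for brevity
prompt = "Write a story about llamas."

# Fill the ChatML dialogue template shown above
text = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
# Stop at the ChatML end-of-turn token so the model doesn't start a new turn
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
output = model.generate(**inputs, max_new_tokens=512, eos_token_id=im_end_id)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
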
### Credits & Special Thanks

- Thanks to [Meta AI](https://ai.meta.com/) for training and releasing the Llama2 model.
- Distributed training support was provided by EPFL's [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/) and [Natural Language Processing Lab](https://nlp.epfl.ch/).
- The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning.
- [rombodawg](https://huggingface.co/rombodawg) curated the [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored) dataset.
- [ehartford](https://huggingface.co/ehartford) generated and published the [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the [ehartford/oa_leet10k](https://huggingface.co/datasets/ehartford/oa_leet10k) datasets.
- [Argilla](https://huggingface.co/argilla) curated and published the [argilla/databricks-dolly-15k-curated-multilingual](https://huggingface.co/datasets/argilla/databricks-dolly-15k-curated-multilingual) dataset.
- [shahules786](https://github.com/shahules786) de-duplicated and filtered the Dolphin dataset with a cluster-center approach and generated the orca-best (orca-chat) dataset.
- [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training.

We want to especially thank everyone who contributed to the crowd-sourced Open-Assistant dataset creation on https://open-assistant.io/ - without you this project would not have been possible.

## Ethical Considerations and Limitations

Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, the potential outputs of llama2-70b-oasst-sft-v10 cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses
to user prompts. Therefore, before deploying any applications of llama2-70b-oasst-sft-v10, developers should
perform safety testing and tuning tailored to their specific applications of the model.

Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).

## Note regarding inference with TGI

During evaluation we noticed that this 70B model produced extremely poor outputs when it was loaded in 16-bit precision, sharded, in [TGI](https://github.com/huggingface/text-generation-inference).
In contrast, the model could be evaluated without problems using [vLLM](https://github.com/vllm-project/vllm).
The model also worked decently well when loaded with TGI on a single GPU, nf4-quantized via [TimDettmers/bitsandbytes](https://github.com/TimDettmers/bitsandbytes).
We will get in touch with the TGI authors to find out why sharded 16-bit inference doesn't work as expected.

## Configuration Details

The "pretokenizer" utility used to tokenize the datamix is part of the Open-Assistant GitHub repository and can be found here: [model/pretokenizer](https://github.com/LAION-AI/Open-Assistant/tree/main/model/pretokenizer).


### Stage 1 Pretokenizer Configuration

Entries of the dataset with assistant replies shorter than 25 tokens were excluded from training (an illustrative sketch of this filter follows the configuration below).

```
oasst_pre10_min25:
  datasets:
    - megacode2:
        fraction: 0.5
        val_split: 0.01
        max_val_set: 1000
    - orca-chat:
        val_split: 0.01
        max_val_set: 1000
    - dolly15k_multilingual:
        val_split: 0.05
        max_val_set: 300
    - oa_leet10k:
        val_split: 0.05
        max_val_set: 250
  output_dir: "output/oasst_pre10_min25"
  filename_prefix: "oasst_pre10"
  min_assistant_tokens: 25
```
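
The `min_assistant_tokens: 25` entry corresponds to that length filter. A hypothetical sketch of the idea in Python (not the actual pretokenizer implementation; the `assistant_reply` field and the `tokenizer` interface are assumptions):

```python
# Illustrative only - not the actual Open-Assistant pretokenizer code.
def filter_short_replies(samples, tokenizer, min_assistant_tokens=25):
    """Drop samples whose assistant reply is shorter than min_assistant_tokens tokens."""
    kept = []
    for sample in samples:
        # Each sample is assumed to carry the assistant's reply text
        n_tokens = len(tokenizer.encode(sample["assistant_reply"]))
        if n_tokens >= min_assistant_tokens:
            kept.append(sample)
    return kept
```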

Stage 1 dataset statistics:
```
# Stats for output/oasst_pre10_min25_llama2

## Stats for 'Subset of InstructionDataset (megacode2)' (466364 samples (50.0%))
-----------------
Accepted: 398223/466364 (85.4%)
Accepted tokens: 167676873
Skipped: 68141 (14.6%)
Min tokens per sample: 36
Max tokens per sample: 11810
Avg tokens per sample: 421.063
-----------------

## Stats for 'Subset of OrcaChat (orca-chat)' (325616 samples (100.0%))
-----------------
Accepted: 325616/325616 (100.0%)
Accepted tokens: 178307574
Skipped: 0 (0.0%)
Min tokens per sample: 105
Max tokens per sample: 10408
Avg tokens per sample: 547.601
-----------------

## Stats for 'Subset of Dolly15kMultilingual' (57020 samples (100.0%))
-----------------
Accepted: 47494/57020 (83.3%)
Accepted tokens: 13883177
Skipped: 9526 (16.7%)
Min tokens per sample: 34
Max tokens per sample: 9172
Avg tokens per sample: 292.314
-----------------

## Stats for 'Subset of InstructionDataset (oa_leet10k)' (22236 samples (100.0%))
-----------------
Accepted: 22236/22236 (100.0%)
Accepted tokens: 15905296
Skipped: 0 (0.0%)
Min tokens per sample: 168
Max tokens per sample: 10588
Avg tokens per sample: 715.295
-----------------

## Stats for 'total' (871236 samples (100.0%))
-----------------
Accepted: 793569/871236 (91.1%)
Accepted tokens: 375772920
Skipped: 77667 (8.9%)
Min tokens per sample: 34
Max tokens per sample: 11810
Avg tokens per sample: 473.523
-----------------
```


### Stage 2 Pretokenizer Configuration

```
oasst_top1:
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        input_file_path: 2023-07-23_oasst_ready.tar.gz
        top_k: 1
        val_split: 0.05
  output_dir: "output/oasst_top1_2023-07-23"
  filename_prefix: "oasst_top1"
```

Stage 2 dataset statistics:

```
# Stats for output/oasst_top1_2023-07-23_llama2

## Stats for 'ListDataset' (11441 samples (100.0%))
-----------------
Accepted: 11441/11441 (100.0%)
Accepted tokens: 5315368
Skipped: 0 (0.0%)
Min tokens per sample: 20
Max tokens per sample: 5407
Avg tokens per sample: 464.58945896337735
-----------------

## Stats for 'total' (11441 samples (100.0%))
-----------------
Accepted: 11441/11441 (100.0%)
Accepted tokens: 5315368
Skipped: 0 (0.0%)
Min tokens per sample: 20
Max tokens per sample: 5407
Avg tokens per sample: 464.58945896337735
-----------------
```


### Megatron Fine-Tuning Arguments for Stage 1 (Instruction Tuning):
```
--tensor_model_parallel_size 8
--pipeline_model_parallel_size 4
--load ./checkpoints/llama2-70b-tp8-pp4
--save ./checkpoints/llama2-70b-tp8-pp4-oasst_pre10
--tensorboard_dir ./checkpoints/llama2-70b-tp8-pp4-oasst_pre10/logging
--data_path ./data/oasst_pre10_min25_llama2/oasst_sft10-train
--model_name llama2
--tokenizer_type SentencePieceTokenizer
--bf16
--global_batch_size 64
--micro_batch_size 2
--vocab_file=./llama2/Llama-2-7b/tokenizer.model
--use_rms_norm
--glu_activation swiglu
--no_tie_embed_logits
--vocab_extra_ids_list "\"<|im_start|>,<|im_end|>\""
--layernorm_epsilon 1e-5
--use_flash_attn
--no_bias_gelu_fusion
--seq_length 4096
--max_position_embeddings 4096
--log_interval 1
--save_interval 500
--eval_interval 50
--eval_iters 10
--hidden_dropout 0.0
--position_embedding_type rotary
--no_bias_dropout_fusion
--use_checkpoint_args
--train_iters 12000
--attention_dropout 0.0
--adam_beta1 0.9
--adam_beta2 0.95
--adam_eps 1e-12
--lr_decay_style cosine
--lr_warmup_iters 100
--lr 1e-5
--min_lr 1e-6
--weight_decay 0.000001
--sequence_parallel
--recompute_granularity selective
--log_timers_to_tensorboard
--rope_scaling_factor 1.0
--wandb_logger
```

### Megatron Fine-Tuning Arguments for Stage 2 (OASST Polishing, LIMA Dropout):
```
--tensor_model_parallel_size 8
--pipeline_model_parallel_size 4
--load ./checkpoints/llama2-70b-tp8-pp4-oasst_pre10
--save ./checkpoints/llama2-70b-tp8-pp4-oasst_sft10
--tensorboard_dir ./checkpoints/llama2-70b-tp8-pp4-oasst_sft10/logging
--data_path ./data/oasst_top1_2023-07-23_llama2/oasst_top1-train
--model_name llama2
--tokenizer_type SentencePieceTokenizer
--bf16
--global_batch_size 64
--micro_batch_size 2
--vocab_file=./llama2/Llama-2-7b/tokenizer.model
--use_rms_norm
--glu_activation swiglu
--no_tie_embed_logits
--vocab_extra_ids_list "\"<|im_start|>,<|im_end|>\""
--layernorm_epsilon 1e-5
--use_flash_attn
--no_bias_gelu_fusion
--seq_length 4096
--max_position_embeddings 4096
--log_interval 1
--save_interval 346
--eval_interval 50
--eval_iters 10
--hidden_dropout 0.25
--lima_dropout
--position_embedding_type rotary
--no_bias_dropout_fusion
--use_checkpoint_args
--train_iters 519
--attention_dropout 0.0
--adam_beta1 0.9
--adam_beta2 0.95
--adam_eps 1e-12
--lr_decay_style cosine
--lr_warmup_iters 100
--lr 1e-5
--min_lr 1e-6
--weight_decay 0.000001
--sequence_parallel
--recompute_granularity selective
--log_timers_to_tensorboard
--rope_scaling_factor 1.0
--finetune
--wandb_logger
```

<!-- original-model-card end -->