Update README.md
#4
by MaziyarPanahi - opened

README.md CHANGED
@@ -79,7 +79,7 @@ The following clients/libraries will automatically download models for you, prov
 
 ### In `text-generation-webui`
 
-Under Download Model, you can enter the model repo: [MaziyarPanahi/MixTAO-7Bx2-MoE-v8.1-GGUF](https://huggingface.co/MaziyarPanahi/MixTAO-7Bx2-MoE-v8.1-GGUF) and below it, a specific filename to download, such as: MixTAO-7Bx2-MoE-v8.1
+Under Download Model, you can enter the model repo: [MaziyarPanahi/MixTAO-7Bx2-MoE-v8.1-GGUF](https://huggingface.co/MaziyarPanahi/MixTAO-7Bx2-MoE-v8.1-GGUF) and below it, a specific filename to download, such as: MixTAO-7Bx2-MoE-v8.1.Q4_K_M.gguf.
 
 Then click Download.
 
@@ -94,7 +94,7 @@ pip3 install huggingface-hub
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download MaziyarPanahi/MixTAO-7Bx2-MoE-v8.1-GGUF MixTAO-7Bx2-MoE-v8.1
+huggingface-cli download MaziyarPanahi/MixTAO-7Bx2-MoE-v8.1-GGUF MixTAO-7Bx2-MoE-v8.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 </details>
 <details>
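For reference, the same download can be scripted from Python with `hf_hub_download` from the `huggingface_hub` library (roughly what the CLI wraps). A minimal sketch, assuming `huggingface-hub` is installed as above; the repo and filename are the ones from this hunk:

```python
# Minimal sketch: the CLI download above, done from Python.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="MaziyarPanahi/MixTAO-7Bx2-MoE-v8.1-GGUF",
    filename="MixTAO-7Bx2-MoE-v8.1.Q4_K_M.gguf",
    local_dir=".",  # mirror --local-dir . from the CLI command
)
print(path)  # path to the downloaded .gguf file
```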
@@ -117,7 +117,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/MixTAO-7Bx2-MoE-v8.1-GGUF MixTAO-7Bx2-MoE-v8.1
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/MixTAO-7Bx2-MoE-v8.1-GGUF MixTAO-7Bx2-MoE-v8.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
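If the download is driven from Python rather than the shell, the same switch can be flipped in-process. A minimal sketch; note the assumption (worth verifying) that the variable must be set before `huggingface_hub` is imported, since the library reads its environment configuration at import time:

```python
import os

# Enable hf_transfer before importing huggingface_hub (assumption:
# the library reads this variable when it is first imported).
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="MaziyarPanahi/MixTAO-7Bx2-MoE-v8.1-GGUF",
    filename="MixTAO-7Bx2-MoE-v8.1.Q4_K_M.gguf",
    local_dir=".",
)
```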
@@ -128,7 +128,7 @@ Windows Command Line users: You can set the environment variable by running `set
 Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 35 -m MixTAO-7Bx2-MoE-v8.1
+./main -ngl 35 -m MixTAO-7Bx2-MoE-v8.1.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
 {system_message}<|im_end|>
 <|im_start|>user
 {prompt}<|im_end|>
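The `-p` argument embeds a ChatML-style template (the `<|im_start|>`/`<|im_end|>` markers). A minimal Python sketch of how `{system_message}` and `{prompt}` slot into it; the trailing `<|im_start|>assistant` cue is an assumption, since the hunk is cut off before the end of the prompt:

```python
# Minimal sketch of the ChatML-style template used in the ./main command above.
def build_prompt(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"  # assumed generation cue; truncated in the hunk
    )

print(build_prompt("You are a helpful assistant.", "Why is the sky blue?"))
```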
@@ -185,7 +185,7 @@ from llama_cpp import Llama
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
 llm = Llama(
-  model_path="./MixTAO-7Bx2-MoE-v8.1
+  model_path="./MixTAO-7Bx2-MoE-v8.1.Q4_K_M.gguf",  # Download the model file first
   n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
   n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
   n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
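Once constructed, the `Llama` object is callable for plain completions (the README's own example sits between these two hunks and is not shown; this is a compatible sketch, with the prompt text and stop marker as assumptions):

```python
# Minimal sketch: a plain completion with the llm built above,
# using the same ChatML-style template as the ./main example.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(
    prompt,
    max_tokens=256,       # cap on generated tokens
    stop=["<|im_end|>"],  # assumption: stop at the ChatML end-of-turn marker
    echo=False,           # don't repeat the prompt in the output
)
print(output["choices"][0]["text"])
```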
@@ -205,7 +205,7 @@ output = llm(
 
 # Chat Completion API
 
-llm = Llama(model_path="./MixTAO-7Bx2-MoE-v8.1
+llm = Llama(model_path="./MixTAO-7Bx2-MoE-v8.1.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
 llm.create_chat_completion(
     messages = [
         {"role": "system", "content": "You are a story writing assistant."},