* Greatly minimizes "chatGPTisms". No more feeling empowered by the shared bonds of friendship with renewed determination for challenges to come.
* Increased diversity of creative prose.

### Examples

Examples were generated with `Mirostat tau = 2`.

* **Multi-Round Story Writing**: [Sci-Fi Story]()
* **Oneshot Story-writing**: [Crime Story]() Generating >2K tokens of meaningful content in a single output response (without multi-round) is challenging. This took a few tries. Smoke and mirrors.
* **Multi-Round Story Planning/Brainstorming**: [Adventure Story Brainstorming]()
* **Document Q&A and Summarization**: [Wikipedia example]()
* **Roleplaying (RP)**: [RP example]()
* **Interactive World Exploration**: [Explore a cyberpunk world]()
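The `Mirostat tau = 2` setting targets an average per-token surprise of roughly 2 bits, adapting truncation on the fly instead of using a fixed top-k/top-p. As a rough illustration only (a simplified toy step in the spirit of Mirostat v2, not the sampler actually used to generate these examples):

```python
import math
import random

def mirostat_step(probs, mu, tau=2.0, eta=0.1, rng=random):
    """One simplified Mirostat-v2-style step: drop tokens whose surprise
    -log2(p) exceeds the running threshold mu, sample from the rest, then
    nudge mu so the observed surprise tracks the target tau."""
    kept = [(i, p) for i, p in enumerate(probs) if -math.log2(p) <= mu]
    if not kept:  # fallback: always keep the single most probable token
        kept = [max(enumerate(probs), key=lambda ip: ip[1])]
    total = sum(p for _, p in kept)
    r = rng.random() * total
    for i, p in kept:
        r -= p
        if r <= 0:
            break
    surprise = -math.log2(probs[i])    # observed surprise of the pick
    mu = mu - eta * (surprise - tau)   # feedback update toward tau
    return i, mu

# With a sharply peaked distribution, only token 0 survives truncation,
# and mu rises because the pick was less surprising than tau.
idx, mu = mirostat_step([0.9, 0.05, 0.03, 0.02], mu=4.0, tau=2.0)
print(idx, round(mu, 3))  # prints: 0 4.185
```

Lower `tau` biases toward safer, more predictable tokens; higher `tau` admits more surprising ones.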

### Details (same as alpha)

* Base model: [llama2_70b_longlora_fp16_32k_ROPE8](https://huggingface.co/grimulkan/llama2_70b_longlora_fp16_32k_ROPE8) (no base instruction tuning)
* Intended to be used in instruct mode (rather than notebook mode/completions).
* **This model is not censored, and is capable of producing offensive and NSFW content. Please use this model with caution, and do not use if you are offended by such content.**

## Tips

* Treat the first prompt like you normally would the system prompt.
* Statements like `Respond briefly` will bias it toward shorter responses.
* State clearly in the first prompt whether you want the content to be SFW or NSFW. However, **there are no guarantees that the model won't generate NSFW content**.
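Since the first user turn doubles as the system prompt here, one way to apply these tips is to fold the system-style instructions into that opening message. A minimal sketch (the helper name and the example wording are illustrative, not part of the model card):

```python
def build_first_prompt(instructions: str, user_message: str) -> str:
    """Fold system-style instructions (tone, length, SFW/NSFW policy)
    into the opening user message, which this model treats like a
    system prompt."""
    return f"{instructions.strip()}\n\n{user_message.strip()}"

first = build_first_prompt(
    "You are a creative co-writer. Keep all content SFW. Respond briefly.",
    "Let's start a sci-fi story aboard a generation ship.",
)
print(first)
```

Later turns then carry only the conversational content, with the policy established up front.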

## Available Quantizations

* [bfloat16](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16)
* [EXL2 2.4bit](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-2.4bpw_h6_exl2) fits in 1x24GB using Exllamav2 & 8-bit cache @ 10K context
* [EXL2 4bit](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-4.65bpw_h6_exl2) fits in 2x24GB (19/24 split) using Exllamav2 @ 16K context
* [EXL2 6bit](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-6bpw_h8_exl2) fits in 48GB+24GB (36/24 split) or 3x24GB (16/17/20 split) using Exllamav2 @ 32K context
* [GGUF](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K_GGUF) in Q4_K_M, Q5_K_M, and Q6_K variants
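The VRAM fits listed above track the weight storage fairly closely. A back-of-the-envelope estimate (my own arithmetic, ignoring KV cache and runtime overhead, which add several more GB at long context):

```python
def weight_gib(n_params_billion: float, bits_per_weight: float) -> float:
    """GiB needed just to hold the quantized weights."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1024**3

# 70B parameters at the listed EXL2 bitrates (the repo names suggest the
# "4bit" build is actually 4.65 bpw):
for bpw in (2.4, 4.65, 6.0):
    print(f"{bpw} bpw -> {weight_gib(70, bpw):.1f} GiB")
```

This gives about 19.6 GiB at 2.4 bpw, 37.9 GiB at 4.65 bpw, and 48.9 GiB at 6 bpw, consistent with the 1x24GB, 2x24GB, and 48GB+24GB fits once cache overhead is added.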

### Training Data

85% of the training data was human-generated output paired with synthetic input; the remaining 15% came from GPT4.