SicariusSicariiStuff committed f1c17b8 (parent: e5bc00a): Update README.md

README.md CHANGED
@@ -25,6 +25,26 @@ language:
 
 - Intended use: **Role-Play**, General tasks.
 
+"I want some legit RP models of LLAMA 3.2 3B, we got phones!"
+
+"So make one."
+
+"K."
+
+
+This model was trained on ~25M tokens, in **3 phases**. The first and longest phase was an FFT (full fine-tune) to teach the model new stuff, and to confuse the shit out of it, so it would be **a little bit less inclined to use GPTisms**.
+
+It worked pretty well. In fact, the model was so damn confused that the little imp didn't even make sense, but the knowledge was there.
+
+In the next phase, a DEEP QLORA of **R = 512** was used on a new dataset to... unconfuse it. A completely different dataset was used to avoid overfitting.
+
+Finally, another somewhat deep QLORA of **R = 128** was used to tie it all together and connect all the dots, again with a different dataset.
+
+The results are **sometimes** surprisingly good. It even managed to fool some people into thinking it's a MUCH larger model, and sometimes... sometimes it behaves just like you would expect a 3B model to.
+
+Fun fact: the model was uploaded while there were 200 ICBMs headed my way, live in the sky.
+
+I lived, so expect more models in the future.
 
 ## Impish_LLAMA_3B is available at the following quantizations:
 