SicariusSicariiStuff committed • Commit 800c6ff • Parent: 1e15d8a
Update README.md
README.md
CHANGED
@@ -64,7 +64,7 @@ language:
 
 This model was trained on ~25M tokens, in **3 phases**, the first and longest phase was an FFT to teach the model new stuff, and to confuse the shit out of it too, so it would be **a little bit less inclined to use GPTisms**.
 
-It worked pretty well. In fact, the model was so damn thoroughly confused, that the little devil didn't even make any sense, but the knowledge was there.
+It worked pretty well. In fact, the model was so damn thoroughly confused, that the little devil didn't even make any sense at all, but the knowledge was there.
 
 In the next phase, a DEEP QLORA of **R = 512** was used on a new dataset, to... unconfuse it. A completely different dataset was used to avoid overfitting.
 
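For context on the technique the diff mentions: the commit itself ships no training code, but below is a minimal sketch of what a rank-512 QLoRA configuration looks like using Hugging Face peft and bitsandbytes. Only R = 512 comes from the README text; the base model name, target modules, alpha, and dropout are assumed placeholders, not the author's actual settings.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base weights -- the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "base-model-name",  # placeholder; the commit does not name the base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# A "deep" adapter: r=512 is unusually high (typical values are 8-64),
# trading extra memory and compute for a much larger trainable subspace.
lora_config = LoraConfig(
    r=512,
    lora_alpha=512,  # assumption; alpha is commonly set near r
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

At that rank the adapter is closer in capacity to a partial full fine-tune than to a lightweight LoRA, which fits the README's described goal of re-stabilizing a model after an aggressive FFT phase.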