Mikael110 committed on
Commit
2e6a474
1 Parent(s): 0993448

Fix name typo


Just correcting a minor typo in the creator name.

Also thank you for quantizing these. I didn't really expect much attention, given that I didn't advertise the finetunes at all.
I have also finetuned a 13b model in case you want to quantize that as well.

Also, I'm currently finetuning the 70b model (because I'm too much of a completionist to leave that out). I'm not sure how that will go given the architecture differences present, but I am using a transformers version that supports Llama-2, so fingers crossed it turns out alright.

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -23,9 +23,9 @@ tags:
 </div>
 <!-- header end -->
 
-# Mikael10's Llama2 7B Guanaco QLoRA GGML
+# Mikael110's Llama2 7B Guanaco QLoRA GGML
 
-These files are GGML format model files for [Mikael10's Llama2 7B Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-7b-guanaco-fp16).
+These files are GGML format model files for [Mikael110's Llama2 7B Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-7b-guanaco-fp16).
 
 GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
 * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for story telling.
@@ -145,7 +145,7 @@ Thank you to all my generous patrons and donaters!
 
 <!-- footer end -->
 
-# Original model card: Mikael10's Llama2 7B Guanaco QLoRA
+# Original model card: Mikael110's Llama2 7B Guanaco QLoRA
 
 This is a Llama-2 version of [Guanaco](https://huggingface.co/timdettmers/guanaco-7b). It was finetuned from the base [Llama-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) model using the official training scripts found in the [QLoRA repo](https://github.com/artidoro/qlora). I wanted it to be as faithful as possible and therefore changed nothing in the training script beyond the model it was pointing to. The model prompt is therefore also the same as the original Guanaco model.
 