Just correcting a minor typo in the creator name.

Also thank you for quantizing these. I didn't really expect much attention, given that I didn't advertise the finetunes at all.
I have also finetuned a 13b model in case you want to quantize that as well.

Also, I'm currently finetuning the 70b model (because I'm too much of a completionist to leave that out). I'm not sure how that will go given the architecture differences, but I'm using a transformers version that supports Llama 2, so fingers crossed it turns out alright.

TheBloke changed pull request status to merged
