---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
[EXL2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) Quantization of [Nete-13B](https://huggingface.co/Undi95/Nete-13B).
Quantized at 6.13bpw.
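
If it helps, here is a minimal loading sketch using the exllamav2 Python API, modeled on the library's own example scripts; the local download directory is an assumption, not part of this repo.

```python
# Minimal sketch: load the EXL2 quant and run a single generation.
# Assumes the repo has been downloaded to ./Nete-13B-exl2 (hypothetical path).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Nete-13B-exl2"   # hypothetical local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)             # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Hello,", settings, num_tokens=64))
```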
# Original model card
*Insert picture of a hot woman [here](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/aJIfY5W9CV095wzEH7uo1.png)*
This model builds on the Xwin-MLewd recipe, in an attempt to get a better result.
<!-- description start -->
## Description
This repo contains fp16 files of Nete-13B, a powered up version of Xwin-MLewd-13B.
<!-- description end -->
<!-- description start -->
## Models and loras used
- [Undi95/Mlewd-v2.4-13B](https://huggingface.co/Undi95/MLewd-v2.4-13B)
- [Xwin-LM/Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2)
- [cgato/Thespis-13b-v0.4](https://huggingface.co/cgato/Thespis-13b-v0.4)
- [Undi95/PsyMedRP-v1-13B](https://huggingface.co/Undi95/PsyMedRP-v1-13B)
- [Undi95/Storytelling-v2.1-13B-lora](https://huggingface.co/Undi95/Storytelling-v2.1-13B-lora)
- [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
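
As a usage sketch, the template can be filled in with a small helper before passing the text to the model; the helper name below is hypothetical and not part of this repo.

```python
# Hypothetical helper: wrap a user request in the Alpaca template shown above.
def build_alpaca_prompt(user_request: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{user_request}\n\n"
        "### Response:\n"
    )

print(build_alpaca_prompt("Write a short scene set in a rainy city."))
```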
If you want to support me, you can [here](https://ko-fi.com/undiai).