---
library_name: transformers
language:
- en
- es
pipeline_tag: text-generation
license: apache-2.0
---

# Model Card for Pixie Zehir Nano

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6320e992beec1969845be447/25pTrbjySoblu8cuiHASu.png)

Introducing Pixie Zehir Nano, a model that excels at writing. It is a fine-tune of H2O Danube 1.8B on HQ DATA™ from Pixie Zehir.

## Model Details

- **Developed by:** Maani x BLNKBLK
- **Language(s) (NLP):** English, Spanish
- **License:** Apache 2.0
- **Finetuned from model:** h2oai/h2o-danube-1.8b-chat

## Agreements

This model was created for research purposes. It can and will hallucinate; use with caution.

## Usage

```bash
pip install transformers==4.36.1
```

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="h2oai/h2o-danube-1.8b-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# We use the HF Tokenizer chat template to format each message
# https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Write a haiku."},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
res = pipe(
    prompt,
    max_new_tokens=256,
)
print(res[0]["generated_text"])
# <|prompt|>Write a haiku.<|answer|> In the windowless room, Digital dreams consume, Unseen sun sets on a white rabbit's ears: [...]
```
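
For tighter control over generation, the same chat template can also be used with `AutoModelForCausalLM` and `AutoTokenizer` directly instead of the `pipeline` helper. The sketch below is a minimal example under the same assumptions as above: it loads the checkpoint named in this card, and the sampling settings are illustrative rather than recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint named in this card; swap in the Pixie Zehir Nano repo ID if it is published separately.
model_name = "h2oai/h2o-danube-1.8b-chat"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Format the conversation with the tokenizer's chat template, then generate.
messages = [
    {"role": "user", "content": "Write a haiku."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,   # illustrative sampling settings, not tuned values
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```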