---
language:
  - en
pipeline_tag: text-generation
tags:
  - enigma
  - valiant
  - valiant-labs
  - llama
  - llama-3.1
  - llama-3.1-instruct
  - llama-3.1-instruct-8b
  - llama-3
  - llama-3-instruct
  - llama-3-instruct-8b
  - 8b
  - code
  - code-instruct
  - python
  - conversational
  - chat
  - instruct
datasets:
  - sequelbox/Tachibana
  - LDJnr/Pure-Dove
model_type: llama
license: llama3.1
---

Enigma is a code-instruct model built on Llama 3.1 8b.

## Version

This is the 2024-08-10 release of Enigma for Llama 3.1 8b.

Help us and recommend Enigma to your friends! We're excited for more Enigma releases in the future.

Right now, we're working on new Build Tools built on Llama 3.1, coming very soon :)

## Prompting Guide

Enigma uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat:

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-Enigma"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Enigma, a highly capable code assistant."},
    {"role": "user", "content": "Can you explain virtualization to me?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=1024,
)

print(outputs[0]["generated_text"][-1])
```
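If you prefer direct control over tokenization and generation settings, the sketch below loads the model with `AutoModelForCausalLM` and applies the tokenizer's chat template manually. This is our illustration rather than part of the official card; the sampling parameters and the example coding prompt are assumptions you can adjust.

```python
# Minimal sketch: load the model directly and build the prompt from the chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Llama3.1-8B-Enigma"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Enigma, a highly capable code assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]

# Render the Llama 3.1 Instruct prompt and tokenize it in one step.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=1024,  # matches the pipeline example above
    do_sample=True,       # illustrative sampling settings, not recommended defaults
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```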

## The Model

Enigma is built on top of Llama 3.1 8b Instruct, using code-instruct data to improve code-instruct performance while retaining the Llama 3.1 Instruct prompt style.
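If you want to see exactly what the Llama 3.1 Instruct prompt style looks like, a quick way is to render the chat template without tokenizing. This is a minimal sketch of ours, not part of the original card:

```python
# Render the Llama 3.1 Instruct prompt string (no tokenization) to inspect
# the text the model is actually conditioned on.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ValiantLabs/Llama3.1-8B-Enigma")

messages = [
    {"role": "system", "content": "You are Enigma, a highly capable code assistant."},
    {"role": "user", "content": "Can you explain virtualization to me?"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # shows the header/eot token structure of the Llama 3.1 Instruct format
```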

Our current version of the Enigma code-instruct dataset is sequelbox/Tachibana, supplemented with a small selection of data from LDJnr/Pure-Dove for general chat consistency.


Enigma is created by Valiant Labs.

Check out our HuggingFace page for Shining Valiant 2 and our other Build Tools models for creators!

Follow us on X for updates on our models!

We care about open source, for everyone to use.

We encourage others to finetune further from our models.