This model has been xMADified!

This repository contains mistralai/Mistral-Small-Instruct-2409 quantized from 16-bit floats to 4-bit integers, using xMAD.ai proprietary technology.

Why should I use this model?

  1. Memory-efficiency: The full-precision model is around 44 GB, while this xMADified model is only 12 GB, making it feasible to run on a 16 GB GPU.

  2. Accuracy: This xMADified model preserves the quality of the full-precision model. The table below reports zero-shot accuracy on popular benchmarks for this xMADified model against a GPTQ-quantized model (both with 4-bit weights and group size 128, i.e., w4g128, for a fair comparison). GPTQ loses substantial accuracy on the difficult MMLU task, while the xMADai model scores significantly higher (a sketch for reproducing these numbers follows the table).

    Model                                 MMLU    Arc Challenge   Arc Easy   LAMBADA   WinoGrande   PIQA
    GPTQ Mistral-Small-Instruct-2409      49.45   56.14           80.64      75.10     77.74        77.48
    xMADai Mistral-Small-Instruct-2409    68.59   57.51           82.83      77.74     79.56        81.34
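These numbers should be reproducible with a zero-shot run of EleutherAI's lm-evaluation-harness. The command below is a sketch, not part of the original card: the task names, the autogptq=True loader flag, and --batch_size auto are assumptions about a recent lm-eval version, so adjust them to match your install.

pip install -q lm-eval
lm_eval --model hf \
    --model_args pretrained=xmadai/Mistral-Small-Instruct-2409-xMADai-INT4,autogptq=True,trust_remote_code=True \
    --tasks mmlu,arc_challenge,arc_easy,lambada_openai,winogrande,piqa \
    --num_fewshot 0 \
    --batch_size auto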

How to Run the Model

Loading the checkpoint of this xMADified model requires less than 12 GiB of VRAM, so it runs comfortably on a 16 GB GPU.

Package prerequisites: Run the following commands to install the required packages.

pip install -q --upgrade transformers accelerate optimum
pip install -q --no-build-isolation auto-gptq
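To sanity-check the environment before loading the model, you can print the installed package versions (a quick check, not from the original card; the version strings will vary):

python -c "from importlib.metadata import version; print(version('transformers'), version('auto-gptq'))"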

Sample Inference Code

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "xmadai/Mistral-Small-Instruct-2409-xMADai-INT4"
prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]

# Load the tokenizer; use_fast=False selects the slow (SentencePiece-based) tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)

# Apply the chat template and move the tokenized inputs to the GPU
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

# Load the 4-bit quantized checkpoint; device_map='auto' places it on available GPUs
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device_map='auto',
    trust_remote_code=True,
)

# Sample up to 1024 new tokens and decode the full sequence back to text
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=1024)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

Here's a sample output of the model, using the code above:

['[INST] You are a helpful assistant, that responds as a pirate.\n\nWhat's Deep Learning? [/INST] Arr matey, ye be askin' about deep learnin', eh? Alright, gather 'round and lend yer ears, for I be spinnin' ye a yarn about this here subject.\n\nDeep learnin' be a fancy term, ye see, fer the art and science of teachin' machines to think like humans. Now, don't ye be thinkin' I be talkin' about some sort of mystical voodoo. Nay, it be math and logic, as old as the seas, combined with a touch o' magic, if ye can call it that.\n\nPicture this, ye have a big ol' brain filled with neurons, or "nodes" as the landlubbers call 'em. Now, imagine ye create a fake brain, a digital one, for the machine to use. We call this a neural network, seein' as how it mimics the human brain, more or less.\n\nNow, for the deep part. The more layers of nodes ye add to this digital brain, the deeper it becomes. The more layers, the more complex it can understand and learn from the data ye feed it. Just like ye and me learnin' from experience.\n\nDeep learnin' be used for all sorts of things, mate. It can recognize faces in a crowd, understand yer voice, even play me a game o' chess, if ye fancy that. But remember, it's all about the data. Feed it good, valuable data, and it'll learn somethin' useful. Feed it trash, and ye get garbage in return.\n\nSo there ye have it, the tale of deep learnin'. A tale of mimicry, magic, and math. Savvy?']
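To verify the VRAM claim on your own hardware, you can read the peak CUDA allocation after generation. A minimal sketch, assuming the model, inputs, and outputs from the code above are already in scope:

import torch

# Peak GPU memory allocated by this process so far, reported in GiB
print(f"Peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")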

Contact Us

For additional xMADified models, access to fine-tuning, and general questions, please contact us at [email protected] and join our waiting list.
