---
license: apache-2.0
language:
  - en
pipeline_tag: text-generation
tags:
  - AutoGPTQ
  - 4bit
  - GPTQ
---

# gpt2-GPTQ-4bit

Disclaimer: This model was created for educational purposes, using a calibration set of only 10 samples. You can find the code on Colab and a roadmap to get into LLMs on GitHub.

This model was created by applying 4-bit GPTQ quantization to GPT-2 with the AutoGPTQ library.

You can load this model with the AutoGPTQ library, which can be installed with:

```bash
pip install auto-gptq
```

You can then download and load the quantized model from the Hub:

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_id = "mlabonne/gpt2-GPTQ-4bit"

# from_quantized() loads the already-quantized weights directly;
# the quantization config (4-bit, group_size=128) is read from the repo.
# device="cuda:0" assumes a GPU is available.
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

This model works with the standard transformers text-generation pipeline.