---
tags:
- gptq
- 4bit
- int4
- gptqmodel
- modelcloud
- llama-3.1
- 8b
- instruct
license: llama3.1
---
This model has been quantized using [GPTQModel](https://github.com/ModelCloud/GPTQModel) with the following settings (a reproduction sketch follows the list):
- **bits**: 4
- **group_size**: 128
- **desc_act**: true
- **static_groups**: false
- **sym**: true
- **lm_head**: false
- **damp_percent**: 0.005
- **true_sequential**: true
- **model_name_or_path**: ""
- **model_file_base_name**: "model"
- **quant_method**: "gptq"
- **checkpoint_format**: "gptq"
- **meta**:
  - **quantizer**: "gptqmodel:0.9.9-dev0"
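For reference, here is a minimal sketch of how a quantization with these settings could be produced with GPTQModel. The `QuantizeConfig` field names are assumed to mirror the list above, and the calibration data shown is illustrative only, not the dataset actually used:

```python
from gptqmodel import GPTQModel, QuantizeConfig

# Quantization settings matching the list above (assumed QuantizeConfig fields).
quant_config = QuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    static_groups=False,
    sym=True,
    lm_head=False,
    damp_percent=0.005,
    true_sequential=True,
)

# Load the full-precision model, calibrate, quantize, and save.
model = GPTQModel.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct", quant_config)
calibration_data = ["GPTQ calibrates on a small set of representative texts."]  # illustrative only
model.quantize(calibration_data)
model.save_quantized("Meta-Llama-3.1-8B-Instruct-gptq-4bit")
```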
**Here is an inference example:**
```python
from transformers import AutoTokenizer
from gptqmodel import GPTQModel

model_name = "ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit"
prompt = [{"role": "user", "content": "I am in Shanghai, preparing to visit the natural history museum. Can you tell me the best way to"}]

# Load the tokenizer and the quantized model.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = GPTQModel.from_quantized(model_name)

# Apply the Llama 3.1 chat template and generate a response.
input_tensor = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids=input_tensor.to(model.device), max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt.
result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```
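Because the checkpoint uses the standard `gptq` format, it can typically also be loaded through plain `transformers`, assuming a compatible GPTQ backend (e.g. gptqmodel or auto-gptq via optimum) is installed. A minimal sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# transformers picks up the GPTQ quantization config from the checkpoint.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
```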