load granite model with pipeline
#3 · by lenadan · opened
I tried to follow your code example and use pipeline to load granite-3.0-8b-instruct, but I'm getting an error that indicates the tokenizer is not initialized.
I debugged the code, and it seems that the tokenizer is missing from TOKENIZER_MAPPING_NAMES that is defined in tokenization_auto.py.
Could you advise?
@lenadan Thanks for your interest! Can you share a code snippet showing how you loaded the model with a pipeline?
Sure. I actually used the code you've provided:
from transformers import pipeline
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="/my_local_path/granite-3.0-8b-instruct")
pipe(messages)
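As a possible workaround while the auto-mapping issue is open, you can construct the tokenizer explicitly with AutoTokenizer and pass it to the pipeline, which sidesteps the TOKENIZER_MAPPING_NAMES lookup in tokenization_auto.py. This is only a sketch, not a confirmed fix; the local path is taken from the snippet above, and the helper function name is my own:

```python
def load_granite_pipeline(model_path):
    """Hypothetical helper: build a text-generation pipeline with an
    explicitly loaded tokenizer instead of relying on auto-resolution."""
    # Imports are kept inside the function so the sketch stays lazy.
    from transformers import AutoTokenizer, pipeline

    # AutoTokenizer reads tokenizer_config.json from the local checkout,
    # so it does not depend on the TOKENIZER_MAPPING_NAMES table.
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    return pipeline("text-generation", model=model_path, tokenizer=tokenizer)

if __name__ == "__main__":
    # Path from the snippet above; requires the cloned checkpoint on disk.
    pipe = load_granite_pipeline("/my_local_path/granite-3.0-8b-instruct")
    print(pipe([{"role": "user", "content": "Who are you?"}]))
```

If the tokenizer files themselves load fine this way, that would confirm the problem is in the auto-mapping rather than in the checkpoint.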
The model was cloned locally by running: git clone https://huggingface.co/ibm-granite/granite-3.0-8b-instruct.
These are my dependencies:
transformers==4.45.2
ibm_watsonx_ai==1.1.16
And this is the full error I got: