OTTER-MPT7B-Init
---
license: mit
---

1S-Lab, Nanyang Technological University  2Microsoft Research, Redmond

These weights initialize training for Otter-MPT7B. They are converted directly from OpenFlamingo v2; we added an `<answer>` token for Otter's downstream instruction tuning.
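Adding a special token like `<answer>` follows the usual pattern of registering the token in the vocabulary (and, for a real model, resizing the embedding matrix to match). As a minimal, dependency-free stand-in for what `tokenizer.add_special_tokens(...)` does at the vocabulary level (the function and toy vocab below are illustrative, not part of the Otter codebase):

```python
def add_special_token(vocab: dict, token: str) -> int:
    """Append a special token to a vocabulary if absent; return its id.

    Stand-in for transformers' tokenizer.add_special_tokens(...), which
    would be followed by model.resize_token_embeddings(len(tokenizer))
    so the embedding table has a row for the new token.
    """
    if token in vocab:
        return vocab[token]
    new_id = len(vocab)  # ids are contiguous, so the new token takes the next slot
    vocab[token] = new_id
    return new_id

# Toy vocabulary for illustration only.
vocab = {"<s>": 0, "</s>": 1, "hello": 2}
answer_id = add_special_token(vocab, "<answer>")  # -> 3
```

The call is idempotent: re-adding an existing token simply returns its current id, which mirrors how the real tokenizer API behaves.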

You can load and try this model with:

    import torch
    import transformers
    # The import path depends on your Otter checkout; adjust to your version.
    from otter.modeling_otter import OtterForConditionalGeneration

    # Map the requested load precision to a torch dtype.
    load_bit = "bf16"
    precision = {}
    if load_bit == "bf16":
        precision["torch_dtype"] = torch.bfloat16
    elif load_bit == "fp16":
        precision["torch_dtype"] = torch.float16
    elif load_bit == "fp32":
        precision["torch_dtype"] = torch.float32

    model = OtterForConditionalGeneration.from_pretrained(
        "luodian/OTTER-9B-LA-InContext", device_map="sequential", **precision
    )
    model.text_tokenizer.padding_side = "left"  # left-pad for batched generation
    tokenizer = model.text_tokenizer
    image_processor = transformers.CLIPImageProcessor()
    model.eval()

Leave us a message if you hit any errors or have questions. You can follow the Otter code (see the training section) to further tune the model on top of these weights.