How much VRAM? · 1 · #44 opened 3 days ago by Dizzl500
Fix prompt format for Llama-3.2-11B-Vision · 1 · #43 opened 5 days ago by chenhegu
Local image · 1 · #42 opened 6 days ago by komenge
What kind of open-source model can't be used by Chinese users? If it can't be used, how is it open source? Just call it a closed-source model! · #41 opened 6 days ago by Hansons
How to run model.generate on batched data · 1 · #40 opened 7 days ago by Popandos
ValueError: The checkpoint you are trying to load has model type `mllama` but Transformers does not recognize this architecture · #39 opened 11 days ago by KevalRx
Model having trouble understanding the prompts? · #35 opened 19 days ago by franciscoliu
Interview request: thoughts on genAI evaluation & documentation · #34 opened 20 days ago by evatang
Error encountered when fine-tuning · 3 · #30 opened 24 days ago by yongleyuan
Why is the image size 448 instead of 560? · #28 opened 25 days ago by theo77186
Llama-3.2-11B-Vision ONNX model generation · 2 · #27 opened 25 days ago by SantoshHF
How to use visual grounding with this model? · #25 opened 26 days ago by r4hul77
How to get embeddings for image-text retrieval? · 1 · #23 opened 26 days ago by wanghaofan
Why EXACTLY is this model not available in Europe? · 4 · #22 opened 26 days ago by MoonRide
model.resize_token_embeddings() is broken: it resizes the embedding table but not lm_head · #21 opened 27 days ago by alexpeys
The chat template is removed in the base variant. Can we still use a chat template to formulate the prompt? · 3 · #12 opened 28 days ago by hxgy610
Position of the <image> token in the prompt for fine-tuning · 4 · #2 opened about 1 month ago by hxgy610