Text Generation
Transformers
Safetensors
English
llava_phi
custom_code
g-h-chen commited on
Commit
21cc538
1 Parent(s): 55114cb

Update README.md

Files changed (1)
  1. README.md +70 -0
README.md CHANGED
@@ -1,3 +1,73 @@
 ---
 license: apache-2.0
+datasets:
+- FreedomIntelligence/ALLaVA-4V
+language:
+- en
+pipeline_tag: text-generation
 ---
+
+Quick start:
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+
+dir = "FreedomIntelligence/ALLaVA-3B-Longer"
+
+device = 'cuda'
+model = AutoModelForCausalLM.from_pretrained(dir, trust_remote_code=True, device_map=device, torch_dtype=torch.bfloat16)
+tokenizer = AutoTokenizer.from_pretrained(dir)
+model.tokenizer = tokenizer
+
+gen_kwargs = {
+    'min_new_tokens': 20,
+    'max_new_tokens': 100,
+    'do_sample': False,
+    'eos_token_id': tokenizer.eos_token_id  # required since transformers ~4.37
+}
+
+#################################################################################
+# first round
+#################################################################################
+response, history = model.chat(
+    texts='What is in the image?',
+    images=['https://cdn-icons-png.flaticon.com/256/6028/6028690.png'],
+    return_history=True,
+    **gen_kwargs
+)
+print('response:')
+print(response)
+print('history:')
+print(history)
+# response:
+# The image contains a large, stylized "HI!" in a bright pink color with a yellow outline. The "HI!" is in a speech bubble shape.
+
+# history:
+# [['What is in the image?', 'The image contains a large, stylized "HI!" in a bright pink color with a yellow outline. The "HI!" is in a speech bubble shape.']]
+
+#################################################################################
+# second round
+#################################################################################
+response, history = model.chat(
+    texts='Are you sure?',
+    images=['https://cdn-icons-png.flaticon.com/256/6028/6028690.png'],  # images must be passed again in multi-round conversations
+    history=history,
+    return_history=True,
+    **gen_kwargs
+)
+
+print('response:')
+print(response)
+print('history:')
+print(history)
+# response:
+# Yes, I'm sure. The image shows a large, stylized "HI!" in a bright pink color with a yellow outline, placed in a speech bubble shape.
+
+# history:
+# [['What is in the image?', 'The image contains a large, stylized "HI!" in a bright pink color with a yellow outline. The "HI!" is in a speech bubble shape.'], ['Are you sure?', 'Yes, I\'m sure. The image shows a large, stylized "HI!" in a bright pink color with a yellow outline, placed in a speech bubble shape.']]
+```
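As the quick start shows, `model.chat` returns `history` as a plain list of `[question, answer]` pairs, so it can be inspected or pretty-printed without loading the model. A minimal sketch (the `format_history` helper is our own illustration, not part of the model's API):

```python
def format_history(history):
    """Render the [question, answer] pairs returned by model.chat as a transcript."""
    lines = []
    for i, (question, answer) in enumerate(history, start=1):
        lines.append(f"[round {i}] user: {question}")
        lines.append(f"[round {i}] assistant: {answer}")
    return "\n".join(lines)

# Same shape as the history printed in the quick start above.
history = [
    ['What is in the image?',
     'The image contains a large, stylized "HI!" in a bright pink color '
     'with a yellow outline. The "HI!" is in a speech bubble shape.'],
]
print(format_history(history))
```

Passing the returned `history` back into the next `model.chat` call (along with the images again) is what carries the conversation state between rounds.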