Minami-su committed c45d967 (parent: 8df4614)

Create README.md

Files changed (1): README.md (+35 −0)
---
license: other
license_name: qwen
license_link: >-
  https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- llama
- qwen
- qwen1.5
- qwen2
---
This is a LLaMAfied version of the [Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) model by Alibaba Cloud.

The original conversion script can be found in [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py); I modified it to be compatible with Qwen1.5. This model was converted with [llamafy_qwen_v2.py](https://github.com/Minami-su/character_AI_open/blob/main/llamafy_qwen_v2.py).

Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen1.5-0.5B-Chat_llamafy")
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-0.5B-Chat_llamafy", torch_dtype="auto", device_map="auto"
)
# Stream the reply as it is generated, hiding the prompt and special tokens
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]
# Build the chat prompt and move it to the model's device
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
inputs = inputs.to(model.device)
generate_ids = model.generate(inputs, max_new_tokens=512, streamer=streamer)
```
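For reference, `apply_chat_template` renders the messages into ChatML-style text before tokenizing. The sketch below shows roughly what that text looks like; `build_chatml_prompt` is a hypothetical helper for illustration only, and the model's real template may differ (for example, Qwen chat templates typically prepend a default system message), so verify with `tokenizer.apply_chat_template(messages, tokenize=False)`.

```python
# Illustrative sketch of a ChatML-style prompt, NOT the tokenizer's actual
# template. Each message becomes "<|im_start|>role\ncontent<|im_end|>\n",
# and add_generation_prompt opens the assistant turn for the model to fill.
def build_chatml_prompt(messages, add_generation_prompt=True):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [{"role": "user", "content": "Who are you?"}]
print(build_chatml_prompt(messages))
# <|im_start|>user
# Who are you?<|im_end|>
# <|im_start|>assistant
```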