---
license: apache-2.0
pipeline_tag: text-generation
tags:
- mistral
- mlx
inference: false
---

# Model Card for Mistral-7B-Instruct-v0.2

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruction fine-tuned version of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).

For full details of this model, please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).

This repository contains the weights in `npz` format, suitable for use with Apple's MLX framework.

## Use with MLX

```bash
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git

# Download the model
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir-use-symlinks False --local-dir Mistral-7B-Instruct-v0.2 mlx-community/Mistral-7B-Instruct-v0.2

# Run the example
python mlx-examples/mistral/mistral.py --prompt "My name is" --model_path Mistral-7B-Instruct-v0.2
```
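
If you want to sanity-check the download before running the example, the `npz` weights can be opened directly with MLX. This is an optional sketch, and it assumes the download above produced a `weights.npz` file inside the local folder:

```python
import mlx.core as mx

# Optional sanity check: load the converted weights and list a few entries.
# Assumes the download created Mistral-7B-Instruct-v0.2/weights.npz.
weights = mx.load("Mistral-7B-Instruct-v0.2/weights.npz")  # dict of name -> mx.array
for name in sorted(weights)[:5]:
    print(name, weights[name].shape, weights[name].dtype)
```
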
The rest of this model card was copied from the [original repository](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).

## Instruction format

To leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation is ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
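
For illustration only, here is a minimal sketch of how such a prompt could be assembled by hand from a list of turns. The helper and the turns are hypothetical; in practice, prefer the chat template shown below:

```python
# Illustrative only: assemble the [INST] ... [/INST] format by hand.
def build_prompt(turns):
    """turns: list of (user_message, assistant_reply) pairs; the last reply may be None."""
    prompt = "<s>"  # begin-of-sentence appears only once, before the first instruction
    for user_msg, assistant_reply in turns:
        prompt += f"[INST] {user_msg} [/INST]"
        if assistant_reply is not None:
            prompt += f"{assistant_reply}</s> "  # end-of-sentence closes each assistant turn
    return prompt

print(build_prompt([
    ("What is your favourite condiment?", "Well, I'm quite partial to a good squeeze of fresh lemon juice."),
    ("Do you have mayonnaise recipes?", None),
]))
```
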
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture

This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
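
As a rough, illustrative sketch of how sliding-window attention restricts the causal mask (the window size here is a toy value for display, not the model's actual configuration):

```python
import numpy as np

def attention_mask(seq_len, window=None):
    # True where query position i may attend to key position j.
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    mask = j <= i                  # causal: attend only to current and earlier positions
    if window is not None:
        mask &= j > i - window     # sliding window: keep only the most recent `window` positions
    return mask

print(attention_mask(6).astype(int))            # full causal attention
print(attention_mask(6, window=3).astype(int))  # sliding-window attention (toy window of 3)
```
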
## Troubleshooting

If you see the following error:

```
Traceback (most recent call last):
  File "", line 1, in
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

installing `transformers` from source should solve the issue:

`pip install git+https://github.com/huggingface/transformers`

This should not be required after transformers-v4.33.4.
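
If in doubt, a small optional check of the installed version (using the `packaging` utilities that `transformers` already depends on) tells you whether the source install above is likely needed:

```python
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("transformers"))
# Per the note above, versions newer than 4.33.4 should not need the source install.
if installed <= Version("4.33.4"):
    print(f"transformers {installed} may lack Mistral support; consider installing from source.")
else:
    print(f"transformers {installed} should already support Mistral.")
```
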
## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.