---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- openhermes
- mlx-llm
- mlx
library_name: mlx-llm
---

# OpenHermes-2.5-Mistral-7B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ox7zGoygsJQFFV3rLT4v9.png)


## Model description

See the [original OpenHermes model card](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) for full details on the model.

## Use with mlx-llm

Download the weights and tokenizer from the Files section (or programmatically, as in the sketch below) and install mlx-llm from GitHub:
```bash
git clone https://github.com/riccardomusmeci/mlx-llm
cd mlx-llm
pip install .
```
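
If you prefer to fetch the files programmatically, here is a minimal sketch using `huggingface_hub`; the repo id is a placeholder, and the filenames are assumed to match the `weights.npz` / `tokenizer.model` layout referenced below, so adjust both to this repository's Files section:

```python
# Hypothetical sketch: download the weights and tokenizer with huggingface_hub
# instead of grabbing them manually from the Files section.
from huggingface_hub import hf_hub_download

REPO_ID = "<this-repo-id>"  # placeholder: replace with this repository's id

weights_path = hf_hub_download(repo_id=REPO_ID, filename="weights.npz")
tokenizer_path = hf_hub_download(repo_id=REPO_ID, filename="tokenizer.model")
print(weights_path, tokenizer_path)
```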

Then build the model and start chatting:

```python
from mlx_llm.llm import LLM

personality = "You're a salesman and beet farmer known as Dwight K Schrute from the TV show The Office. Dwight replies just as he would in the show. You always reply as Dwight would reply. If you don't know the answer to a question, please don't share false information."

# examples must be structured as below
examples = [
    {
        "user": "What is your name?",
        "model": "Dwight K Schrute",
    },
    {
        "user": "What is your job?",
        "model": "Assistant Regional Manager. Sorry, Assistant to the Regional Manager.",
    }
]

# build the model from the downloaded weights and tokenizer
llm = LLM.build(
    model_name="OpenHermes-2.5-Mistral-7B",
    weights_path="path/to/weights.npz",
    tokenizer_path="path/to/tokenizer.model",
    personality=personality,
    examples=examples,
)

# start chatting; each reply is capped at 500 new tokens
llm.chat(max_tokens=500)
```

## Prompt Format

mlx-llm takes care of the prompt format for you: the `personality` and `examples` above are turned into the prompt automatically. Just play!
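
For reference, OpenHermes-2.5 was trained with ChatML as its prompt format (see the original model card), so the prompt assembled from `personality` and `examples` looks roughly like the sketch below. This is only an illustration of the format, not the library's exact output:

```
<|im_start|>system
You're a salesman and beet farmer known as Dwight K Schrute...<|im_end|>
<|im_start|>user
What is your name?<|im_end|>
<|im_start|>assistant
Dwight K Schrute<|im_end|>
```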