datasets:
- OEvortex/SentimentSynth
- OEvortex/EmotionalIntelligence-10K
---

# HelpingAI-9B-200k: Emotionally Intelligent Conversational AI with 200k Context Window

![logo](https://huggingface.co/OEvortex/HelpingAI-3B/resolve/main/HelpingAI.png)

## Overview

HelpingAI-9B-200k is an advanced large language model designed for emotionally intelligent conversational interactions. Building upon the success of its predecessor, HelpingAI-9B, which had a 4k context window, this upgraded version boasts a 200k context window, allowing it to engage users with greater empathy, understanding, and supportive dialogue across a broader range of topics and extended conversations.

## Objectives

- Engage in open-ended dialogue while displaying advanced emotional intelligence
- Recognize and validate user emotions and emotional contexts
- Provide supportive, empathetic, and psychologically grounded responses
- Avoid insensitive, harmful, or unethical speech
- Continuously improve emotional awareness and dialogue skills
- Utilize the extended 200k context window for richer and more coherent interactions

## Methodology

HelpingAI-9B-200k is part of the HelpingAI series and has been further trained using:

- **Supervised Learning**: Leveraging large dialogue datasets with emotional labeling to enhance empathy and emotional recognition (see the illustrative record sketch after this list).
- **Reinforcement Learning**: Employing a reward model that favors emotionally supportive responses to ensure beneficial interactions.
- **Constitution Training**: Instilling stable and ethical objectives to guide its conversational behavior.
- **Knowledge Augmentation**: Integrating psychological resources on emotional intelligence to improve its understanding and response capabilities.
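
As a purely hypothetical illustration of what "emotional labeling" can mean in practice, a single supervised-training record might be shaped like the sketch below; the actual schema of datasets such as OEvortex/SentimentSynth or OEvortex/EmotionalIntelligence-10K may differ.

```python
# Illustrative only: one possible shape for an emotionally labeled dialogue record.
# The real training data may use a different schema.
example_record = {
    "user": "I didn't get the promotion I worked so hard for.",
    "emotion": "disappointment",  # label used to condition the response style
    "response": (
        "I'm really sorry, that stings after all the effort you put in. "
        "It's completely understandable to feel let down right now."
    ),
}

# During supervised fine-tuning, pairs like (user message + emotion label) -> response
# are converted into chat-formatted training examples.
```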

## Emotional Quotient (EQ)

HelpingAI-9B-200k has achieved an impressive Emotional Quotient (EQ) of 89.23, surpassing almost all AI models in emotional intelligence. This EQ score reflects its advanced ability to understand and respond to human emotions in a supportive and empathetic manner.

![benchmarks](benchmark_performance_comparison.png)

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the HelpingAI-9B-200k model
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-9B-200k").to("cuda")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B-200k")

# This TextStreamer thingy is our secret weapon for super smooth conversation flow
streamer = TextStreamer(tokenizer)

# ... (prompt construction omitted in this excerpt; the full example builds a `prompt`
# string from a system message and the user's message and ends with
# prompt = prompt.format(system=system, insaan=insaan))

inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")

# Here comes the fun part! Let's unleash the power of HelpingAI-9B-200k to generate some awesome text
generated_text = model.generate(**inputs, max_length=3084, top_p=0.95, do_sample=True, temperature=0.6, use_cache=True, streamer=streamer)
```
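
The prompt-building step between the streamer and the `tokenizer(prompt, ...)` call is not shown above. As a minimal sketch, assuming a ChatML-style template (the format used in the GGUF example below), the `system` and `insaan` (user message) variables could be assembled as follows; the exact template in the full model card may differ:

```python
# Illustrative only: one way to build a ChatML-style prompt for HelpingAI-9B-200k.
system = "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style."
insaan = "I'm feeling a bit overwhelmed at work lately."  # the user's message

prompt = (
    "<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\n{insaan}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
prompt = prompt.format(system=system, insaan=insaan)
```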

### Using the Model with GGUF

```python
%pip install -U 'webscout[local]'

# Imports used below; the webscout.Local import lines were elided from this excerpt,
# so the module paths here are reconstructed from webscout's documented layout.
from webscout.Local.utils import download_model
from webscout.Local.model import Model
from webscout.Local.thread import Thread
from webscout.Local import formats
from webscout.Local.samplers import SamplerSettings
from dotenv import load_dotenv; load_dotenv()
import os

# Download the model
repo_id = "OEvortex/HelpingAI-9B-200k"
filename = "HelpingAI-9B-200k.Q4_0.gguf"
model_path = download_model(repo_id, filename, os.environ.get("hf_token"))

# Load the model
model = Model(model_path, n_gpu_layers=0)

# Define the system prompt
system_prompt = "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style."

# Create a custom chatml format with your system prompt
custom_chatml = formats.chatml.copy()
custom_chatml['system_content'] = system_prompt

# Define your sampler settings (optional)
sampler = SamplerSettings(temp=0.7, top_p=0.9)

# Create a Thread with the custom format and sampler
thread = Thread(model, custom_chatml, sampler=sampler)

# Start interacting with the model
thread.interact(header="π HelpingAI-9B-200k: Emotionally Intelligent Conversational AI π", color=True)
```
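
`n_gpu_layers=0` keeps GGUF inference entirely on the CPU. If a GPU-enabled backend is installed, offloading layers usually speeds up generation; the value in the sketch below is only an example and should be sized to the available VRAM:

```python
# Example only: offload a number of transformer layers to the GPU (tune to your VRAM).
model = Model(model_path, n_gpu_layers=35)
```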

## Example Dialogue