Abhaykoul committed on
Commit f41155c • 1 Parent(s): a02c2bd

Update README.md

Files changed (1)
  1. README.md +28 -31
README.md CHANGED
@@ -11,44 +11,43 @@ datasets:
 - OEvortex/SentimentSynth
 - OEvortex/EmotionalIntelligence-10K
 ---
-
-# HelpingAI-9B-200k: Emotionally Intelligent Conversational AI with 200k context window
+# HelpingAI-9B-200k: Emotionally Intelligent Conversational AI with 200k Context Window
 
 ![logo](https://huggingface.co/OEvortex/HelpingAI-3B/resolve/main/HelpingAI.png)
 
 ## Overview
-HelpingAI-9B-200k is a large language model designed for emotionally intelligent conversational interactions. It is trained to engage users with empathy, understanding, and supportive dialogue across a wide range of topics and contexts. The model aims to provide a supportive AI companion that can attune to users' emotional states and communicative needs.
+HelpingAI-9B-200k is an advanced large language model designed for emotionally intelligent conversational interactions. Building upon the success of its predecessor, HelpingAI-9B, which had a 4k context window, this upgraded version boasts a 200k context window. This allows it to engage users with empathy, understanding, and supportive dialogue across a broader range of topics and extended conversations.
 
 ## Objectives
-- Engage in open-ended dialogue while displaying emotional intelligence
+- Engage in open-ended dialogue while displaying advanced emotional intelligence
 - Recognize and validate user emotions and emotional contexts
 - Provide supportive, empathetic, and psychologically-grounded responses
 - Avoid insensitive, harmful, or unethical speech
 - Continuously improve emotional awareness and dialogue skills
-- High Context length
+- Utilize an extended 200k context window for richer and more coherent interactions
 
 ## Methodology
-HelpingAI-9B-200k is based on the HelpingAI series and further trained using:
-- Supervised learning on large dialogue datasets with emotional labeling
-- Reinforcement learning with a reward model favoring emotionally supportive responses
-- Constitution training to instill stable and beneficial objectives
-- Knowledge augmentation from psychological resources on emotional intelligence
+HelpingAI-9B-200k is part of the HelpingAI series and has been further trained using:
+- **Supervised Learning**: Leveraging large dialogue datasets with emotional labeling to enhance empathy and emotional recognition.
+- **Reinforcement Learning**: Employing a reward model that favors emotionally supportive responses to ensure beneficial interactions.
+- **Constitution Training**: Instilling stable and ethical objectives to guide its conversational behavior.
+- **Knowledge Augmentation**: Integrating psychological resources on emotional intelligence to improve its understanding and response capabilities.
 
 ## Emotional Quotient (EQ)
 HelpingAI-9B-200k has achieved an impressive Emotional Quotient (EQ) of 89.23, surpassing almost all AI models in emotional intelligence. This EQ score reflects its advanced ability to understand and respond to human emotions in a supportive and empathetic manner.
 
 ![benchmarks](benchmark_performance_comparison.png)
 
-## Usage code
+## Usage Code
 ```python
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
 
-# Let's bring in the big guns! Our super cool HelpingAI-9B-200k model
-model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-9B-200k-200k").to("cuda")
+# Load the HelpingAI-9B-200k model
+model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-9B-200k").to("cuda")
 
-# We also need the special HelpingAI translator to understand our chats
-tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B-200k-200k")
+# Load the tokenizer
+tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B-200k")
 
 # This TextStreamer thingy is our secret weapon for super smooth conversation flow
 streamer = TextStreamer(tokenizer)
@@ -76,12 +75,11 @@ prompt = prompt.format(system=system, insaan=insaan)
 inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")
 
 # Here comes the fun part! Let's unleash the power of HelpingAI-9B-200k to generate some awesome text
-generated_text = model.generate(**inputs, max_length=200000, top_p=0.95, do_sample=True, temperature=0.6, use_cache=True, streamer=streamer)
-
+generated_text = model.generate(**inputs, max_length=3084, top_p=0.95, do_sample=True, temperature=0.6, use_cache=True, streamer=streamer)
 
 ```
-*Directly using this model from GGUF*
 
+### Using the Model with GGUF
 ```python
 %pip install -U 'webscout[local]'
 
@@ -94,29 +92,28 @@ from webscout.Local.samplers import SamplerSettings
 from dotenv import load_dotenv; load_dotenv()
 import os
 
-
-# 1. Download the model
-repo_id = "OEvortex/HelpingAI-9B-200k-200k"
+# Download the model
+repo_id = "OEvortex/HelpingAI-9B-200k"
 filename = "HelpingAI-9B-200k.Q4_0.gguf"
 model_path = download_model(repo_id, filename, os.environ.get("hf_token"))
 
-# 2. Load the model
-model = Model(model_path, n_gpu_layers=0)
+# Load the model
+model = Model(model_path, n_gpu_layers=0)
 
-# 3. Define your system prompt
-system_prompt = "You are HelpingAI a emotional AI always answer my question in HelpingAI style"
+# Define the system prompt
+system_prompt = "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style."
 
-# 4. Create a custom chatml format with your system prompt
+# Create a custom chatml format with your system prompt
 custom_chatml = formats.chatml.copy()
 custom_chatml['system_content'] = system_prompt
 
-# 5. Define your sampler settings (optional)
-sampler = SamplerSettings(temp=0.7, top_p=0.9)  # Adjust these values as needed
+# Define your sampler settings (optional)
+sampler = SamplerSettings(temp=0.7, top_p=0.9)
 
-# 6. Create a Thread with the custom format and sampler
+# Create a Thread with the custom format and sampler
 thread = Thread(model, custom_chatml, sampler=sampler)
 
-# 7. Start interacting with the model
+# Start interacting with the model
 thread.interact(header="🌟 HelpingAI-9B-200k: Emotionally Intelligent Conversational AI 🚀", color=True)
 ```
 ## Example Dialogue
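
Editor's note: the `custom_chatml` setup in the GGUF snippet above amounts to injecting the system prompt into a ChatML-style template. A minimal, dependency-free sketch of that formatting (assuming the standard `<|im_start|>`/`<|im_end|>` ChatML delimiters, not webscout's exact internals):

```python
def chatml_prompt(system_content: str, user_message: str) -> str:
    # Wrap a single turn in standard ChatML delimiters, leaving the
    # prompt open at the assistant turn for the model to complete.
    return (
        f"<|im_start|>system\n{system_content}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style.",
    "I had a rough day. Can we talk?",
)
print(prompt)
```

Swapping `system_content` here plays the same role as overwriting `custom_chatml['system_content']` in the snippet.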
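
On the generation settings used above (`temperature=0.6`, `top_p=0.95`): a small sketch, independent of transformers, of how temperature scaling and nucleus (top-p) filtering pick the candidate-token set. The function name and plain-list interface are illustrative, not a library API:

```python
import math

def top_p_filter(logits, temperature=0.6, top_p=0.95):
    # Scale logits by temperature, softmax them, then keep the smallest
    # set of tokens whose cumulative probability reaches top_p.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return kept  # token indices eligible for sampling

print(top_p_filter([2.0, 1.0, 0.1, -1.0]))
```

Lower temperatures sharpen the distribution, so fewer tokens survive the nucleus cutoff; raising `top_p` toward 1.0 widens the candidate set.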