Update README.md
README.md
CHANGED
@@ -3,197 +3,70 @@ library_name: transformers
---
library_name: transformers
tags: []
---

(Removed: the default Hugging Face model card template, with placeholder sections covering Model Details, Uses, Bias, Risks, and Limitations, How to Get Started with the Model, Training Details, Evaluation, Environmental Impact, Technical Specifications, Citation, Glossary, and More Information, each marked "[More Information Needed]".)

# Model Card for cmcmaster/rheum-gemma-2-2b-it

## Model Details

### Model Description

This model is a fine-tuned version of Gemma 2 2B, specifically adapted for rheumatology-related tasks. It combines the base model's general knowledge with specialized rheumatology information.

- **Developed by:** cmcmaster
- **Model type:** Language model
- **Language(s) (NLP):** English (primarily)
- **License:** [More Information Needed]
- **Finetuned from model:** unsloth/gemma-2-2b-bnb-4bit, merged with unsloth/gemma-2-2b-it

### Model Sources

- **Repository:** https://huggingface.co/cmcmaster/rheum-gemma-2-2b-it

## Uses

### Direct Use

This model can be used for rheumatology-related natural language processing tasks such as question answering, information retrieval, and text generation. A minimal usage sketch is shown below.
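
The following is a minimal sketch of loading the merged model with the transformers library. The example prompt and generation settings are illustrative assumptions, not recommendations from the model author:

```python
# Minimal usage sketch (assumed): load the merged model and ask a
# rheumatology question through the Gemma chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmcmaster/rheum-gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt only; any rheumatology question fits here.
messages = [{"role": "user", "content": "Outline first-line treatment options for newly diagnosed rheumatoid arthritis."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```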

### Out-of-Scope Use

This model should not be used as a substitute for professional medical advice, diagnosis, or treatment. It is not intended to be used for making clinical decisions without the involvement of qualified healthcare professionals.

## Training Details

### Training Data

The model was trained on the cmcmaster/rheum_texts dataset.
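
The dataset can presumably be inspected with the datasets library; its split and field names are not documented in this card:

```python
# Sketch (assumed): pull the training dataset from the Hub for inspection.
from datasets import load_dataset

ds = load_dataset("cmcmaster/rheum_texts")
print(ds)  # shows available splits and column names
```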

### Training Procedure

The model was fine-tuned with the unsloth library, which enables efficient fine-tuning of large language models. The key details of the training procedure are listed below; a configuration sketch follows the hyperparameter list.

- **Base Model:** unsloth/gemma-2-2b-bnb-4bit
- **Max Sequence Length:** 2048
- **Quantization:** 4-bit
- **LoRA Configuration:**
  - r = 128
  - target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
  - lora_alpha = 32
  - lora_dropout = 0
  - use_rslora = True (rank-stabilized LoRA)

#### Training Hyperparameters

- **Batch Size:** 4 per device
- **Gradient Accumulation Steps:** 8
- **Learning Rate:** 2e-4
- **Warmup Ratio:** 0.03
- **Number of Epochs:** 1
- **Optimizer:** AdamW (8-bit)
- **Weight Decay:** 0.0
- **LR Scheduler:** Cosine
- **Random Seed:** 3407
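
The sketch below reconstructs how these settings map onto unsloth and TRL. It is assembled from the values listed above; the dataset text field, output directory, and TRL version specifics are assumptions:

```python
# Reconstruction sketch of the training setup described above (assumed).
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model at the stated sequence length.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-2b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters with the configuration listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=32,
    lora_dropout=0,
    use_rslora=True,  # rank-stabilized LoRA
    random_state=3407,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("cmcmaster/rheum_texts", split="train"),
    dataset_text_field="text",  # assumed column name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        warmup_ratio=0.03,
        num_train_epochs=1,
        optim="adamw_8bit",
        weight_decay=0.0,
        lr_scheduler_type="cosine",
        seed=3407,
        output_dir="outputs",  # assumed
    ),
)
trainer.train()
```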

### Post-Training Procedure

After training, the LoRA adapter was merged with the instruction-tuned version of Gemma (unsloth/gemma-2-2b-it) rather than the base model. This approach aims to combine the rheumatology knowledge gained during fine-tuning with the instruction-following capabilities of the tuned model.
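
One plausible way to perform such a cross-base merge with peft, assuming the adapter was saved locally after training (the adapter path is hypothetical):

```python
# Sketch (assumed workflow): attach the rheumatology LoRA adapter to the
# instruction-tuned Gemma weights, then fold the adapter into them.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b-it")
model = PeftModel.from_pretrained(base, "lora_adapter")  # hypothetical path
model = model.merge_and_unload()  # merges LoRA weights into the base tensors

model.save_pretrained("rheum-gemma-2-2b-it")
AutoTokenizer.from_pretrained("unsloth/gemma-2-2b-it").save_pretrained("rheum-gemma-2-2b-it")
```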

## Limitations and Biases

While this model has been fine-tuned on rheumatology-related data, it may still contain biases present in the original Gemma model or introduced through the training data. Users should be aware that the model's outputs may not always be accurate or complete, especially for complex medical topics.