---
base_model: unsloth/gemma-2-2b-it
library_name: transformers
tags:
- medical
- unsloth
- peft
- qlora
---

# Model Card for cmcmaster/rheum-gemma-2-2b-it

## Model Details

### Model Description

This model is a fine-tuned version of the Gemma 2 2B model, specifically adapted for rheumatology-related tasks. It combines the general knowledge of the base Gemma model with specialized rheumatology information.

- **Developed by:** cmcmaster
- **Model type:** Language Model
- **Language(s) (NLP):** English (primarily)
- **License:** [More Information Needed]
- **Finetuned from model:** unsloth/gemma-2-2b-bnb-4bit, merged with unsloth/gemma-2-2b-it

### Model Sources

- **Repository:** https://huggingface.co/cmcmaster/rheum-gemma-2-2b-it

## Uses

### Direct Use

This model can be used for natural language processing tasks in the domain of rheumatology, such as question answering, information retrieval, or text generation.
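
A minimal inference sketch with 🤗 Transformers, assuming the standard Gemma 2 chat template; the example question is purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmcmaster/rheum-gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Gemma 2 instruction-tuned models expect the chat template.
messages = [{"role": "user", "content": "What are the classification criteria for gout?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```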

### Out-of-Scope Use

This model should not be used as a substitute for professional medical advice, diagnosis, or treatment. It is not intended to be used for making clinical decisions without the involvement of qualified healthcare professionals.

## Training Details

### Training Data

The model was trained on the [cmcmaster/rheum_texts](https://huggingface.co/datasets/cmcmaster/rheum_texts) dataset.
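
The dataset can be inspected with 🤗 Datasets (split and column names are not documented here, so check the printed structure):

```python
from datasets import load_dataset

ds = load_dataset("cmcmaster/rheum_texts")
print(ds)  # shows available splits and columns
```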

### Training Procedure

The model was fine-tuned using the unsloth library, which allows for memory-efficient finetuning of large language models. Here are the key details of the training procedure (a code sketch follows the list):

- **Base Model:** unsloth/gemma-2-2b-bnb-4bit
- **Max Sequence Length:** 2048
- **Quantization:** 4-bit quantization
- **LoRA Configuration:**
  - r = 128
  - target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
  - lora_alpha = 32
  - lora_dropout = 0
  - use_rslora = True (Rank Stabilized LoRA)
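
A sketch of how this configuration maps onto the unsloth API; arguments not listed above (such as `random_state`, which mirrors the training seed below) are assumptions:

```python
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-2b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization via bitsandbytes
)

# Attach the LoRA adapter with the configuration described above.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=32,
    lora_dropout=0,
    use_rslora=True,   # Rank Stabilized LoRA
    random_state=3407,  # assumption: matches the training seed
)
```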

#### Training Hyperparameters

- **Batch Size:** 4 per device
- **Gradient Accumulation Steps:** 8
- **Learning Rate:** 2e-4
- **Warmup Ratio:** 0.03
- **Number of Epochs:** 1
- **Optimizer:** AdamW (8-bit)
- **Weight Decay:** 0.00
- **LR Scheduler:** Cosine
- **Random Seed:** 3407
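
These hyperparameters correspond roughly to a TRL `SFTTrainer` setup like the following sketch; names such as `dataset_text_field="text"` and `output_dir` are assumptions, and the exact argument placement varies between trl versions:

```python
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,      # e.g. cmcmaster/rheum_texts
    dataset_text_field="text",  # assumed column name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,  # effective batch size of 32
        learning_rate=2e-4,
        warmup_ratio=0.03,
        num_train_epochs=1,
        optim="adamw_8bit",
        weight_decay=0.0,
        lr_scheduler_type="cosine",
        seed=3407,
        output_dir="outputs",
    ),
)
trainer.train()
```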

### Post-Training Procedure

After training, the LoRA adapter was merged with the instruction-tuned version of Gemma (unsloth/gemma-2-2b-it) rather than the base model. This approach aims to combine the rheumatology knowledge gained during fine-tuning with the instruction-following capabilities of the tuned model.
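
With PEFT, that merge step looks roughly like this (the adapter path is a placeholder; unsloth also exposes `model.save_pretrained_merged` for the same purpose):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the *instruction-tuned* Gemma, not the base model the adapter was trained on.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b-it")
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical adapter path
merged = merged.merge_and_unload()  # fold the LoRA weights into the base weights

merged.save_pretrained("rheum-gemma-2-2b-it")
AutoTokenizer.from_pretrained("unsloth/gemma-2-2b-it").save_pretrained("rheum-gemma-2-2b-it")
```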

## Limitations and Biases

While this model has been fine-tuned on rheumatology-related data, it may still contain biases present in the original Gemma model or introduced through the training data. Users should be aware that the model's outputs may not always be accurate or complete, especially for complex medical topics.