---
base_model: unsloth/gemma-2-2b-it
library_name: transformers
tags:
- medical
- unsloth
- peft
- qlora
---

# Model Card for cmcmaster/rheum-gemma-2-2b-it
## Model Details

### Model Description
This model is a fine-tuned version of the Gemma 2 2B model, specifically adapted for rheumatology-related tasks. It combines the base knowledge of the Gemma model with specialized rheumatology information.
- Developed by: cmcmaster
- Model type: Language Model
- Language(s) (NLP): English (primarily)
- License: [More Information Needed]
- Finetuned from model: unsloth/gemma-2-2b-bnb-4bit (the resulting LoRA adapter was merged into unsloth/gemma-2-2b-it)
### Model Sources
- Repository: https://huggingface.co/cmcmaster/rheum-gemma-2-2b-it
## Uses

### Direct Use
This model can be used for rheumatology-related natural language processing tasks, such as question answering, information retrieval, or text generation in the domain of rheumatology.
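For example, the model can be loaded through the standard `transformers` API. This is a minimal sketch: the prompt and generation settings are illustrative, not settings confirmed by the model author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmcmaster/rheum-gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Gemma 2 instruction-tuned models ship with a chat template.
messages = [{"role": "user", "content": "What are the classification criteria for gout?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```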
### Out-of-Scope Use
This model should not be used as a substitute for professional medical advice, diagnosis, or treatment. It is not intended to be used for making clinical decisions without the involvement of qualified healthcare professionals.
## Training Details

### Training Data
The model was trained on the [cmcmaster/rheum_texts](https://huggingface.co/datasets/cmcmaster/rheum_texts) dataset.
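The dataset can be inspected with the `datasets` library (the `train` split name is an assumption, not a confirmed detail):

```python
from datasets import load_dataset

ds = load_dataset("cmcmaster/rheum_texts", split="train")
print(ds)     # dataset size and column names
print(ds[0])  # first example
```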
### Training Procedure

The model was fine-tuned with the unsloth library, which enables efficient fine-tuning of large language models. The key details of the training procedure are listed below; a configuration sketch follows the list.
- Base Model: unsloth/gemma-2-2b-bnb-4bit
- Max Sequence Length: 2048
- Quantization: 4-bit quantization
- LoRA Configuration:
  - r = 128
  - target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
  - lora_alpha = 32
  - lora_dropout = 0
  - use_rslora = True (Rank-Stabilized LoRA)
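Put together, the values above correspond roughly to the following unsloth setup. This is a sketch reconstructed from the listed configuration, not the author's exact script; argument names follow unsloth's public `FastLanguageModel` API.

```python
from unsloth import FastLanguageModel

# Load the 4-bit-quantized base model at the stated sequence length.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-2b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters using the configuration listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=32,
    lora_dropout=0,
    use_rslora=True,   # Rank-Stabilized LoRA
    random_state=3407,
)
```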
#### Training Hyperparameters
- Batch Size: 4 per device
- Gradient Accumulation Steps: 8
- Learning Rate: 2e-4
- Warmup Ratio: 0.03
- Number of Epochs: 1
- Optimizer: AdamW (8-bit)
- Weight Decay: 0.00
- LR Scheduler: Cosine
- Random Seed: 3407
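These hyperparameters map onto a `trl` `SFTTrainer` roughly as follows. This is a sketch: the `text` field name is an assumption, and the argument layout follows the SFTTrainer API used in unsloth notebooks of this period (newer trl releases move these options into `SFTConfig`).

```python
from transformers import TrainingArguments
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,                # the LoRA-wrapped model from the sketch above
    tokenizer=tokenizer,
    train_dataset=ds,           # cmcmaster/rheum_texts
    dataset_text_field="text",  # assumed field name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        warmup_ratio=0.03,
        num_train_epochs=1,
        optim="adamw_8bit",
        weight_decay=0.0,
        lr_scheduler_type="cosine",
        seed=3407,
        output_dir="outputs",
    ),
)
trainer.train()
```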
### Post-Training Procedure
After training, the LoRA adapter was merged into the instruction-tuned version of Gemma (unsloth/gemma-2-2b-it) rather than the base model it was trained against. This approach aims to combine the rheumatology knowledge gained during fine-tuning with the instruction-following capabilities of the instruction-tuned model.
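A merge of this kind can be performed with peft's `merge_and_unload`. This is a sketch of one way to do it, not the author's confirmed procedure; the adapter path is a placeholder.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the instruction-tuned model, then graft on the adapter
# that was trained against the 4-bit base.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b-it", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder path

# Fold the LoRA weights into the instruction-tuned weights and save the result.
merged = model.merge_and_unload()
merged.save_pretrained("rheum-gemma-2-2b-it")
```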
## Limitations and Biases
While this model has been fine-tuned on rheumatology-related data, it may still contain biases present in the original Gemma model or introduced through the training data. Users should be aware that the model's outputs may not always be accurate or complete, especially for complex medical topics.