---
license: gemma
datasets:
- MarkrAI/KOpen-HQ-Hermes-2.5-60K
language:
- ko
metrics:
- accuracy
base_model:
- google/gemma-2-2b-it
pipeline_tag: text-generation
---
# Gemma-2B Quiz Answering Model
This project fine-tunes the Gemma-2B model to answer quiz-style questions. It is designed to handle complex problems and quizzes and to generate clear, accurate responses in Korean.
## Table of Contents
- [Model Overview](#model-overview)
- [How to Use](#how-to-use)
## Model Overview
The **Gemma-2B Quiz Answering Model** is built on top of the instruction-tuned [gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) base model. It has been fine-tuned on the MarkrAI/KOpen-HQ-Hermes-2.5-60K dataset to better handle complex quiz questions and to generate natural Korean, addressing the awkward Korean output of the base model.
- **Model Name**: `gemma-2b-quiz-ko`
- **Purpose**: Answer complex quiz and problem-solving questions.
- **Language**: Korean (ko)
## How to Use
You can load the model directly from the Hugging Face Hub. Below is a simple usage example with the `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("DORAEMONG/gemma-2b-quiz-ko")
tokenizer = AutoTokenizer.from_pretrained("DORAEMONG/gemma-2b-quiz-ko")

# Input a quiz question (Korean: "What is the answer to the following
# math problem? When a spinner is divided into A, B, and C...")
question = "λ‹€μŒ μˆ˜ν•™ 문제의 닡은 λ¬΄μ—‡μž…λ‹ˆκΉŒ? μŠ€ν”Όλ„ˆκ°€ A, B, C둜 λ‚˜λ‰˜μ–΄ μžˆμ„ λ•Œ..."

# Tokenize the question and generate an answer
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)

# Decode and print the generated text
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
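Because the base checkpoint is an instruction-tuned Gemma variant, the model may follow questions more reliably when the prompt is wrapped in Gemma's chat format. This is a minimal sketch, assuming the tokenizer ships with the base model's chat template (as `google/gemma-2-2b-it` does):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("DORAEMONG/gemma-2b-quiz-ko")
tokenizer = AutoTokenizer.from_pretrained("DORAEMONG/gemma-2b-quiz-ko")

# Format the question as a single user turn; apply_chat_template
# returns the prompt already converted to token ids
messages = [{"role": "user", "content": "λ‹€μŒ μˆ˜ν•™ 문제의 닡은 λ¬΄μ—‡μž…λ‹ˆκΉŒ? μŠ€ν”Όλ„ˆκ°€ A, B, C둜 λ‚˜λ‰˜μ–΄ μžˆμ„ λ•Œ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(input_ids, max_new_tokens=100)

# Strip the prompt tokens so only the model's answer is printed
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If generations degrade with the template applied, the checkpoint was likely trained on raw text, in which case the plain-prompt example above is the right entry point.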