---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
tags:
- llama-3-ko
license: mit
datasets:
- 4yo1/llama3_test1
---

### Model Card for 4yo1/llama

### Model Details

- **Model Type:** Transformer-based Language Model
- **Languages:** English and Korean

### Model Description

The model was fine-tuned with a parameter-efficient approach: only a small number of additional parameters are trained on top of the base model, making adaptation to specific tasks or datasets efficient and effective for specialized applications.
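The idea of adding "a minimal number of additional parameters" can be illustrated with a small, self-contained sketch. The snippet below is not the model's actual training code; it assumes an adapter-style (low-rank) setup and uses a single toy layer as a stand-in for a frozen pretrained weight matrix, just to show the parameter-count difference.

```python
import torch.nn as nn

# Stand-in for one frozen pretrained layer (NOT the actual checkpoint).
base = nn.Linear(512, 512)
for p in base.parameters():
    p.requires_grad = False  # freeze the pretrained weights

# Low-rank adapter: two small matrices instead of a full 512x512 update.
rank = 8
adapter = nn.Sequential(
    nn.Linear(512, rank, bias=False),
    nn.Linear(rank, 512, bias=False),
)

frozen = sum(p.numel() for p in base.parameters())        # 512*512 + 512
trainable = sum(p.numel() for p in adapter.parameters())  # 512*8 + 8*512
print(frozen, trainable)  # the adapter trains far fewer parameters
```

Only the adapter's parameters receive gradients, which is what keeps this style of fine-tuning cheap relative to updating the full model.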

### How to Use - Sample Code

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the configuration, base model, and tokenizer from the Hugging Face Hub.
config = AutoConfig.from_pretrained("4yo1/llama")
model = AutoModel.from_pretrained("4yo1/llama")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama")
```
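Since the card's pipeline tag is translation, the loaded model is presumably driven with a translation-style prompt. The sketch below shows one way that might look; the prompt template and the `build_translation_prompt` helper are illustrative assumptions, not part of the published model card.

```python
# Hypothetical helper: wrap input text in a simple translation instruction.
# Assumption: this llama-3-style causal LM responds to plain instruction
# prompts; the exact template is not specified by the model card.
def build_translation_prompt(text: str, source: str = "English", target: str = "Korean") -> str:
    return f"Translate the following {source} text to {target}:\n{text}"

prompt = build_translation_prompt("Hello, how are you?")
print(prompt)

# With the model and tokenizer loaded as in the snippet above (using
# AutoModelForCausalLM instead of AutoModel, so the LM head is available):
#   inputs = tokenizer(prompt, return_tensors="pt")
#   outputs = model.generate(**inputs, max_new_tokens=64)
#   print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```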