---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
tags:
- llama-3-ko
license: mit
datasets:
- recipes
---
### Model Card for llama3-pre1-pre2-ds-lora3
### Model Details
Model Card: llama3-pre1-pre2-ds-lora3 with Fine-Tuning

Model Overview:
- Model Name: llama3-pre1-pre2-ds-lora3
- Model Type: Transformer-based Language Model
- Model Size: 8 billion parameters
- Developed by: 4yo1
- Languages: English and Korean
### Model Description
llama3-pre1-pre2-ds-lora3 is a language model pre-trained on a diverse corpus of English and Korean texts.
It is further adapted with a LoRA-style, parameter-efficient fine-tuning approach, which lets the model specialize to particular tasks or datasets while adding only a small number of extra parameters, making it efficient and effective for specialized applications.
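As an illustration of this kind of parameter-efficient adaptation (a minimal sketch, not the exact training recipe used for this checkpoint), a LoRA adapter can be attached to the base model with the `peft` library; the rank, alpha, and target modules below are placeholder assumptions:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model (assumed to be a standard Llama-style causal LM).
base_model = AutoModelForCausalLM.from_pretrained("4yo1/llama3-pre1-pre2-ds-lora3")

# Hypothetical LoRA settings; the actual rank/alpha/target modules used for
# this checkpoint are not documented here.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Wrap the base model so that only the small LoRA matrices are trainable.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports how few parameters are added
```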
### How to use - sample code
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the configuration, model weights, and tokenizer from the Hugging Face Hub
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-pre2-ds-lora3")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-pre2-ds-lora3")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-pre2-ds-lora3")
```
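To actually generate text (for example, English-to-Korean translation), the checkpoint would typically be loaded with a causal-LM head rather than the bare `AutoModel`; below is a minimal sketch, assuming the repository contains a standard Llama-style causal LM and that the prompt format is a plain instruction (the actual prompt template is not documented here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "4yo1/llama3-pre1-pre2-ds-lora3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical translation-style prompt; adjust to whatever format the model was trained on.
prompt = "Translate the following English sentence into Korean:\nThe weather is nice today.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```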