ArunIcfoss committed on
Commit
853a67a
1 Parent(s): 6d5ba8f

End of training

Files changed (2)
  1. README.md +69 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ license: apache-2.0
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: google/mt5-base
+ metrics:
+ - bleu
+ - rouge
+ model-index:
+ - name: mt5-base-ICFOSS-malayalam_Hindi_Translator
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # mt5-base-ICFOSS-malayalam_Hindi_Translator
+
+ This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.2179
+ - Bleu: 6.2035
+ - Rouge: {'rouge1': 0.2667970960136926, 'rouge2': 0.14574925525428614, 'rougeL': 0.26511828595423204, 'rougeLsum': 0.26501665904942706}
+ - Chrf: {'score': 23.454551827072866, 'char_order': 6, 'word_order': 0, 'beta': 2}
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 5
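The cosine schedule listed above decays the learning rate from its peak to roughly zero over the course of training. A minimal sketch of that decay, assuming no warmup (the card lists none) and using the 21575 total steps reported in the results table; `cosine_lr` is an illustrative helper, not part of the actual training code:

```python
import math

def cosine_lr(step: int, total_steps: int, peak_lr: float = 2e-4) -> float:
    """Cosine decay from peak_lr at step 0 to ~0 at total_steps (no warmup)."""
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

# 5 epochs x 4315 steps/epoch = 21575 total steps, per the results table.
TOTAL_STEPS = 21575
assert abs(cosine_lr(0, TOTAL_STEPS) - 2e-4) < 1e-15  # starts at learning_rate
assert cosine_lr(TOTAL_STEPS, TOTAL_STEPS) < 1e-12    # decays to ~0
```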
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Chrf |
+ |:-------------:|:-----:|:-----:|:---------------:|:------:|:-----:|:----:|
+ | 2.5515 | 1.0 | 4315 | 1.2874 | 5.8306 | {'rouge1': 0.2660910934739513, 'rouge2': 0.14404792849379128, 'rougeL': 0.26384549634107013, 'rougeLsum': 0.2637751499455684} | {'score': 22.571342084258088, 'char_order': 6, 'word_order': 0, 'beta': 2} |
+ | 1.9143 | 2.0 | 8630 | 1.2319 | 6.1128 | {'rouge1': 0.263256301663898, 'rouge2': 0.14256738224583015, 'rougeL': 0.261282034035635, 'rougeLsum': 0.2613517649673947} | {'score': 23.235214776547263, 'char_order': 6, 'word_order': 0, 'beta': 2} |
+ | 1.8644 | 3.0 | 12945 | 1.2192 | 6.2145 | {'rouge1': 0.2670714744552978, 'rouge2': 0.14606073298261613, 'rougeL': 0.2652594809906982, 'rougeLsum': 0.26489596193447795} | {'score': 23.438449086905997, 'char_order': 6, 'word_order': 0, 'beta': 2} |
+ | 1.8539 | 4.0 | 17260 | 1.2179 | 6.2043 | {'rouge1': 0.26678061058524805, 'rouge2': 0.14565482302690236, 'rougeL': 0.26489350144733725, 'rougeLsum': 0.26477198178581135} | {'score': 23.464895899326955, 'char_order': 6, 'word_order': 0, 'beta': 2} |
+ | 1.8525 | 5.0 | 21575 | 1.2179 | 6.2035 | {'rouge1': 0.2667970960136926, 'rouge2': 0.14574925525428614, 'rougeL': 0.26511828595423204, 'rougeLsum': 0.26501665904942706} | {'score': 23.454551827072866, 'char_order': 6, 'word_order': 0, 'beta': 2} |
+
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.40.2
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.0
+ - Tokenizers 0.19.1
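The card has no usage section. A minimal inference sketch under stated assumptions: the adapter's hub id is guessed from the model name (adjust to the actual repo path), and `sentencepiece` must be installed for the mT5 tokenizer. `AutoPeftModelForSeq2SeqLM` loads the base model recorded in the adapter config and applies the adapter on top.

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForSeq2SeqLM

# Assumed hub id, inferred from the model name above; not confirmed by the card.
repo = "ArunIcfoss/mt5-base-ICFOSS-malayalam_Hindi_Translator"

tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
model = AutoPeftModelForSeq2SeqLM.from_pretrained(repo)

# Translate a Malayalam sentence to Hindi (placeholder input).
inputs = tokenizer("<Malayalam sentence here>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Whether the fine-tuning used a task prefix or language tags is not stated in the card, so raw input text is shown here.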
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:330e2e79e9653e676cd93661f2343d0c99f9526f3e352d10ec6801ad673f3cd3
+ oid sha256:f9428b7e07e3a469484ebe56991889bed8fcd2de616accc1594df3c4dfc039b7
  size 13627416