pere committed
Commit c7c96af
1 Parent(s): 4eadb7e

update model card README.md

Files changed (1)
  1. README.md +82 -3
README.md CHANGED
@@ -1,3 +1,82 @@
- ---
- license: cc
- ---
+ ---
+ license: apache-2.0
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: wav2vec2-xlsr-300M-NPSC-OH
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # wav2vec2-xlsr-300M-NPSC-OH
+
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NPSC (Norwegian Parliamentary Speech Corpus) dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1700
+ - Wer: 0.1665
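+
+ A minimal sketch of typical inference with `transformers` for a wav2vec 2.0 CTC checkpoint such as this one; the bare repo id, the placeholder `sample.wav`, and the 16 kHz mono input are assumptions, not details from this card:
+
+ ```python
+ import soundfile as sf
+ import torch
+ from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+
+ # Assumed repo id; prepend the owning namespace when loading from the Hub.
+ model_id = "wav2vec2-xlsr-300M-NPSC-OH"
+ processor = Wav2Vec2Processor.from_pretrained(model_id)
+ model = Wav2Vec2ForCTC.from_pretrained(model_id)
+
+ # "sample.wav" is a placeholder; XLS-R models expect 16 kHz mono audio.
+ speech, sampling_rate = sf.read("sample.wav")
+ inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
+
+ with torch.no_grad():
+     logits = model(inputs.input_values).logits
+
+ # Greedy CTC decoding: most likely token per frame, repeats collapsed by the decoder.
+ predicted_ids = torch.argmax(logits, dim=-1)
+ print(processor.batch_decode(predicted_ids)[0])
+ ```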
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
+ - learning_rate: 7.5e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 13
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 2000
+ - num_epochs: 15.0
+ - mixed_precision_training: Native AMP
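+
+ A hedged sketch of how the values above might be written as `transformers.TrainingArguments`; `output_dir` and anything not in the list are assumptions:
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="wav2vec2-xlsr-300M-NPSC-OH",  # assumed, not stated in the card
+     learning_rate=7.5e-5,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     seed=13,
+     gradient_accumulation_steps=4,  # 16 x 4 = 64 total train batch size
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     warmup_steps=2000,
+     num_train_epochs=15.0,
+     fp16=True,  # "Native AMP" mixed precision
+ )
+ ```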
+
+ ### Training results
+
+ | Training Loss | Epoch | Step  | Validation Loss | Wer    |
+ |:-------------:|:-----:|:-----:|:---------------:|:------:|
+ | 3.1638        | 0.66  | 500   | 3.0686          | 1.0    |
+ | 2.9311        | 1.31  | 1000  | 2.9208          | 1.0    |
+ | 2.4175        | 1.97  | 1500  | 1.5009          | 0.9049 |
+ | 1.4442        | 2.63  | 2000  | 0.4426          | 0.3783 |
+ | 1.2624        | 3.28  | 2500  | 0.3193          | 0.2998 |
+ | 1.1889        | 3.94  | 3000  | 0.2867          | 0.2630 |
+ | 1.1315        | 4.6   | 3500  | 0.2566          | 0.2444 |
+ | 1.0864        | 5.26  | 4000  | 0.2368          | 0.2294 |
+ | 1.093         | 5.91  | 4500  | 0.2240          | 0.2151 |
+ | 1.0368        | 6.57  | 5000  | 0.2117          | 0.2056 |
+ | 1.0178        | 7.23  | 5500  | 0.2020          | 0.1954 |
+ | 1.0035        | 7.88  | 6000  | 0.2005          | 0.1924 |
+ | 0.9759        | 8.54  | 6500  | 0.1971          | 0.1863 |
+ | 0.9795        | 9.2   | 7000  | 0.1892          | 0.1812 |
+ | 0.9601        | 9.85  | 7500  | 0.1863          | 0.1795 |
+ | 0.9673        | 10.51 | 8000  | 0.1809          | 0.1761 |
+ | 0.9233        | 11.17 | 8500  | 0.1818          | 0.1755 |
+ | 0.9382        | 11.83 | 9000  | 0.1767          | 0.1741 |
+ | 0.9242        | 12.48 | 9500  | 0.1743          | 0.1703 |
+ | 0.9703        | 13.14 | 10000 | 0.1711          | 0.1711 |
+ | 0.9139        | 13.8  | 10500 | 0.1718          | 0.1672 |
+ | 0.9073        | 14.45 | 11000 | 0.1700          | 0.1665 |
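+
+ Wer in the table is word error rate: word-level edit distance divided by the number of reference words. A small sketch of computing it with the `wer` metric shipped with the Datasets version listed below; the example strings are invented, not NPSC data:
+
+ ```python
+ from datasets import load_metric
+
+ wer_metric = load_metric("wer")
+ score = wer_metric.compute(
+     predictions=["dette er et eksempel"],   # invented strings
+     references=["dette er eit eksempel"],
+ )
+ print(score)  # 0.25: one of four reference words differs
+ ```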
+
+
+ ### Framework versions
+
+ - Transformers 4.17.0.dev0
+ - Pytorch 1.10.1+cu102
+ - Datasets 1.18.2.dev0
+ - Tokenizers 0.11.0