ansilmbabl committed
Commit 50782a5 (1 parent: d23af88)

End of training

Files changed (1): README.md (+73, -0)

README.md ADDED
---
license: apache-2.0
base_model: ansilmbabl/vit-base-patch16-224-in21k-cards-june-07-cropping-filtered-preprocess-change-test
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-cards-june-08-cropping-filtered-preprocess-change-test-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-in21k-cards-june-08-cropping-filtered-preprocess-change-test-2

This model is a fine-tuned version of [ansilmbabl/vit-base-patch16-224-in21k-cards-june-07-cropping-filtered-preprocess-change-test](https://huggingface.co/ansilmbabl/vit-base-patch16-224-in21k-cards-june-07-cropping-filtered-preprocess-change-test) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5958
- Accuracy: 0.5147

## Model description

More information needed

## Intended uses & limitations

More information needed
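
Until the author fills this section in, here is a minimal inference sketch using the 🤗 Transformers `pipeline` API. The repo id is assumed from the model name above (under the same `ansilmbabl/` namespace as the base model), and `card.jpg` is a hypothetical example image:

```python
from transformers import pipeline

# Assumed repo id: the model name from this card under the base model's namespace.
classifier = pipeline(
    "image-classification",
    model="ansilmbabl/vit-base-patch16-224-in21k-cards-june-08-cropping-filtered-preprocess-change-test-2",
)

# "card.jpg" is a placeholder path to a local image.
for pred in classifier("card.jpg"):
    print(f"{pred['label']}: {pred['score']:.4f}")
```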

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
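
For reference, a hedged sketch of how these values map onto `transformers.TrainingArguments`, assuming a single-device run (64 × 8 gradient accumulation gives the total train batch size of 512); `output_dir` is a placeholder, not the author's script:

```python
from transformers import TrainingArguments

# Sketch only: reconstructed from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="vit-cards-june-08",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=8,    # 64 * 8 = 512 total train batch size
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,                        # Native AMP mixed precision
)
```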

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 1.0182        | 0.9998 | 1298  | 1.5280          | 0.4287   |
| 0.9583        | 1.9996 | 2596  | 1.4878          | 0.4475   |
| 0.8452        | 2.9998 | 3894  | 1.4847          | 0.4716   |
| 0.6887        | 3.9996 | 5192  | 1.5848          | 0.4736   |
| 0.5269        | 4.9994 | 6490  | 1.6689          | 0.4930   |
| 0.4018        | 6.0    | 7789  | 1.8483          | 0.4986   |
| 0.2909        | 6.9998 | 9087  | 2.0319          | 0.5079   |
| 0.1823        | 7.9996 | 10385 | 2.2540          | 0.5127   |
| 0.1056        | 8.9994 | 11683 | 2.4652          | 0.5110   |
| 0.0767        | 9.9985 | 12980 | 2.5958          | 0.5147   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
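
As a quick environment check against the versions listed above (a sketch; it simply prints the installed versions for comparison):

```python
import datasets
import tokenizers
import torch
import transformers

# Compare against the versions listed in this card.
print("Transformers:", transformers.__version__)  # expected 4.41.2
print("PyTorch:", torch.__version__)              # expected 2.0.1+cu117
print("Datasets:", datasets.__version__)          # expected 2.19.2
print("Tokenizers:", tokenizers.__version__)      # expected 0.19.1
```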