---
language: ja
license: mit
datasets:
- mC4 Japanese
---

# roberta-long-japanese (jumanpp + sentencepiece, mC4 Japanese)

This is a longer-input version of the [RoBERTa](https://arxiv.org/abs/1907.11692) Japanese model, pretrained on approximately 200M Japanese sentences.
`max_position_embeddings` has been increased to `1282`, allowing the model to handle much longer inputs than the standard `RoBERTa` model.

The tokenization model and logic are exactly the same as in [nlp-waseda/roberta-base-japanese](https://huggingface.co/nlp-waseda/roberta-base-japanese).
The input text should be pretokenized with [Juman++ v2.0.0-rc3](https://github.com/ku-nlp/jumanpp), and [SentencePiece](https://github.com/google/sentencepiece) tokenization is then applied to the whitespace-separated token sequence.
See `tokenizer_config.json` for details.

## How to use

Please install `Juman++ v2.0.0-rc3` and `SentencePiece` in advance.

- https://github.com/ku-nlp/jumanpp#building-from-a-package
- https://github.com/google/sentencepiece#python-module

You can load the model and the tokenizer via `AutoModel` and `AutoTokenizer`, respectively.

```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("megagonlabs/roberta-long-japanese")
tokenizer = AutoTokenizer.from_pretrained("megagonlabs/roberta-long-japanese")
model(**tokenizer("まさに オール マイ ティー な 商品 だ 。", return_tensors="pt")).last_hidden_state
tensor([[[ 0.1549, -0.7576,  0.1098,  ...,  0.7124,  0.8062, -0.9880],
         [-0.6586, -0.6138, -0.5253,  ...,  0.8853,  0.4822, -0.6463],
         [-0.4502, -1.4675, -0.4095,  ...,  0.9053, -0.2017, -0.7756],
         ...,
         [ 0.3505, -1.8235, -0.6019,  ..., -0.0906, -0.5479, -0.6899],
         [ 1.0524, -0.8609, -0.6029,  ...,  0.1022, -0.6802,  0.0982],
         [ 0.6519, -0.2042, -0.6205,  ..., -0.0738, -0.0302, -0.1955]]],
       grad_fn=<NativeLayerNormBackward0>)
```
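
The example above feeds the model text that has already been pretokenized into whitespace-separated morphemes. If you start from raw text, run Juman++ first. The sketch below is one way to do that; it assumes the [pyknp](https://github.com/ku-nlp/pyknp) Python wrapper for Juman++, which is not required by this model and is only a convenience.

```python
# Pretokenization sketch (assumes pyknp is installed and the jumanpp command
# from Juman++ v2.0.0-rc3 is available on PATH).
from pyknp import Juman

jumanpp = Juman()  # calls the jumanpp command under the hood

def pretokenize(text: str) -> str:
    """Return the whitespace-separated morpheme sequence the tokenizer expects."""
    result = jumanpp.analysis(text)
    return " ".join(m.midasi for m in result.mrph_list())

pretokenized = pretokenize("まさにオールマイティーな商品だ。")
print(pretokenized)  # まさに オール マイ ティー な 商品 だ 。
```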

## Model architecture

The model architecture is almost the same as [nlp-waseda/roberta-base-japanese](https://huggingface.co/nlp-waseda/roberta-base-japanese), except that `max_position_embeddings` has been increased to `1282`: 12 layers, 768-dimensional hidden states, and 12 attention heads.

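These values can be checked directly from the published configuration using the standard `transformers` API (only `config.json` is downloaded, no model weights):

```python
from transformers import AutoConfig

# Load only the configuration of the published model.
config = AutoConfig.from_pretrained("megagonlabs/roberta-long-japanese")
print(config.max_position_embeddings)  # 1282
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)  # 12 768 12
```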

## Training data and libraries

This model was trained on Japanese text extracted from [mC4](https://huggingface.co/datasets/mc4), Common Crawl's multilingual web crawl corpus.
We used [Sudachi](https://github.com/WorksApplications/Sudachi) to split the texts into sentences, and applied a simple rule-based filter to remove nonlinguistic segments of the mC4 corpus.
The extracted texts contain over 600M sentences in total, and we used approximately 200M of them for pretraining.

We used the [huggingface/transformers RoBERTa implementation](https://github.com/huggingface/transformers/tree/v4.21.0/src/transformers/models/roberta) for pretraining. Pretraining took about 300 hours on a GCP instance with 8 A100 GPUs, with Automatic Mixed Precision enabled.

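The exact training script and hyperparameters are not part of this card; the following is only a hypothetical sketch of what masked-LM pretraining with the `transformers` RoBERTa implementation and AMP (`fp16`) looks like. The architecture numbers come from this card; everything else (batch size, output path, dataset variable) is a placeholder.

```python
# Hypothetical pretraining sketch, not the actual script used for this model.
from transformers import (
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    RobertaConfig,
    RobertaForMaskedLM,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("megagonlabs/roberta-long-japanese")

config = RobertaConfig(
    vocab_size=tokenizer.vocab_size,
    max_position_embeddings=1282,  # extended input length of this model
    num_hidden_layers=12,
    hidden_size=768,
    num_attention_heads=12,
)
model = RobertaForMaskedLM(config)

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments(
    output_dir="roberta-long-japanese",  # placeholder
    per_device_train_batch_size=16,      # placeholder, not from the card
    fp16=True,                           # Automatic Mixed Precision
)

# `train_dataset` would be the ~200M pretokenized mC4 Japanese sentences.
# trainer = Trainer(model=model, args=args, data_collator=collator,
#                   train_dataset=train_dataset)
# trainer.train()
```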

## Licenses

The pretrained models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php).

## Citations

- mC4

Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).

```
@article{2019t5,
    author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
    title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
    journal = {arXiv e-prints},
    year = {2019},
    archivePrefix = {arXiv},
    eprint = {1910.10683},
}
```