Datasets: vngrs-web-corpus

Modalities: Text
Formats: parquet
Languages: Turkish
Libraries: Datasets, Dask

meliksahturker committed
Commit 376c069
1 Parent(s): eeda0b9

Update README.md

Files changed (1)
  1. README.md +5 -3
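Since the card lists parquet data readable with the Datasets library, a minimal loading sketch is shown below. The repository id `vngrs-ai/vngrs-web-corpus`, the `train` split, and the `text` field are assumptions inferred from this page, not confirmed by the card itself.

```python
# Minimal sketch: stream the corpus with the Hugging Face Datasets library.
# The repo id "vngrs-ai/vngrs-web-corpus" and the "train" split are assumptions.
from datasets import load_dataset

# streaming=True avoids downloading all 50.3M pages up front
ds = load_dataset("vngrs-ai/vngrs-web-corpus", split="train", streaming=True)

for record in ds.take(3):
    print(record)  # each record is a dict; a "text" field is assumed
```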
README.md CHANGED
@@ -25,7 +25,8 @@ language:
 
 # Dataset Card for Dataset Name
 vngrs-web-corpus is a mixed-dataset made of cleaned Turkish sections of [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4).
-This dataset originally created for training [VBART](https://arxiv.org/abs/2403.01308) and later used for training [TURNA](https://arxiv.org/abs/2401.14373). The cleaning procedures of this dataset are explained at Appendix A of the [VBART Paper](https://arxiv.org/abs/2401.14373)
+This dataset was originally created for training [VBART](https://arxiv.org/abs/2403.01308) and later used for training [TURNA](https://arxiv.org/abs/2401.14373).
+The cleaning procedures of this dataset are explained in Appendix A of the [VBART Paper](https://arxiv.org/abs/2401.14373).
 It consists of 50.3M pages and 25.33B tokens when tokenized by VBART Tokenizer.
 
 ## Dataset Details
@@ -38,7 +39,7 @@ It consists of 50.3M pages and 25.33B tokens when tokenized by VBART Tokenizer.
 
 ## Uses
 
-vngrs-web-corpus is mainly inteded to pretrain language models and word represantations.
+vngrs-web-corpus is mainly intended to pretrain language models and word representations.
 
 ## Dataset Structure
 
@@ -49,7 +50,8 @@ vngrs-web-corpus is mainly inteded to pretrain language models and word represantations.
 
 ## Bias, Risks, and Limitations
 
-This dataset holds content crawled on open web and only cleaned for broken text and not cleaned based on content. In cases where the content is irrelevant or inappropriate, it should be flagged and removed accordingly.
+This dataset holds content crawled on the open web. It is cleaned based on a set of rules and heuristics without accounting for the semantics of the content.
+In cases where the content is irrelevant or inappropriate, it should be flagged and removed accordingly.
 The dataset is intended for research purposes only and should not be used for any other purposes without prior consent from the relevant authorities.
 
 ## Citation
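The page also lists Dask; for out-of-core processing of the raw parquet shards, a sketch along the following lines should work. The `hf://` path and the `data/*.parquet` glob are assumptions about the repository layout; reading `hf://` URLs requires `huggingface_hub` to be installed alongside `dask[dataframe]`.

```python
# Sketch: lazily read the corpus's parquet shards with Dask.
# Assumptions: repo id "vngrs-ai/vngrs-web-corpus" and a "data/*.parquet" layout.
import dask.dataframe as dd

df = dd.read_parquet("hf://datasets/vngrs-ai/vngrs-web-corpus/data/*.parquet")
print(df.columns)   # inspect the schema without loading data
print(df.head(2))   # reads only the first partition
```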