parquet-converter committed
Commit ff05574
1 Parent(s): 33b6fcd

Update parquet files

README.md DELETED
@@ -1,146 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language:
- - pl
- license:
- - mit
- multilinguality:
- - monolingual
-
- dataset_info:
- - config_name: config
-   features:
-   - name: audio_id
-     dtype: string
-   - name: audio
-     dtype:
-       audio:
-         sampling_rate: 16000
-   - name: text
-     dtype: string
- ---
-
-
- # Dataset Card for [Dataset Name]
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:**
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- [More Information Needed]
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- [More Information Needed]
-
- ## Dataset Structure
-
- ### Data Instances
-
- [More Information Needed]
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- [More Information Needed]
-
- ### Contributions
-
- Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
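The deleted card's `dataset_info` block declares three features: `audio_id` (string), `audio` at a 16 kHz sampling rate, and `text` (string). A minimal sketch of loading the converted repository with the `datasets` library, assuming the repo id `j-krzywdziak/test2` taken from the URLs in the deleted `test2.py` further below, and assuming the parquet conversion preserved these features:

```python
from datasets import load_dataset, Audio

# Repo id comes from the deleted script's _URL below; the config name
# "clean" and split name "of" are assumptions based on this commit's
# new parquet layout.
ds = load_dataset("j-krzywdziak/test2", "clean", split="of")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # per the card
print(ds[0]["audio"]["sampling_rate"])  # expected: 16000
```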
clean/on/examples.zip → all/test2-clean.of.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b5264b1ba94e19f957ea5617a5d9792192f9605af02ed6ba485e33ae0074cc27
- size 48733
+ oid sha256:546c940affdd47eed8c868816c98afc7aa9a7045e9d61b1c5c6538922d52c928
+ size 64850
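Each side of this rename is a Git LFS pointer rather than the parquet payload itself: three text lines giving the spec version, a `sha256` object id, and the byte size. A small sketch of pulling those fields out of a pointer file (the path is illustrative):

```python
# Split each "key value" line of a Git LFS pointer into a dict entry.
def read_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

ptr = read_lfs_pointer("all/test2-clean.of.parquet")  # illustrative path
print(ptr["oid"], ptr["size"])  # e.g. sha256:546c940a... 64850
```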
other/of/examples.zip → all/test2-clean.on.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:353d27a6ad69c6a4efeaef0ca2b0c0d80c0e4f74073be13c1df8b1793ef306e9
- size 48733
+ oid sha256:546c940affdd47eed8c868816c98afc7aa9a7045e9d61b1c5c6538922d52c928
+ size 64850
other/on/examples.zip → all/test2-other.of.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e24db277eb4b5659a4139115e75487a0683d3c0a1f180e615dac347255c5a913
- size 48733
+ oid sha256:546c940affdd47eed8c868816c98afc7aa9a7045e9d61b1c5c6538922d52c928
+ size 64850
clean/of/examples.zip → all/test2-other.on.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ac516703b85c1816eeed6faf3cac54bf6444e9ea37dfadfac2dd683b2a59a98a
- size 48733
+ oid sha256:546c940affdd47eed8c868816c98afc7aa9a7045e9d61b1c5c6538922d52c928
+ size 64850
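All four `examples.zip` archives are replaced by parquet files under `all/`. Once the pointers resolve to real payloads (for example via a `git lfs` checkout), a converted file can be inspected directly; a sketch with `pandas`, assuming a local clone:

```python
import pandas as pd

# Assumes a local clone where git-lfs has replaced the pointer with the
# actual parquet payload. The expected columns are taken from the deleted
# script's features (path, audio, ngram, type) and are an assumption here.
df = pd.read_parquet("all/test2-clean.of.parquet")
print(df.columns.tolist(), len(df))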
clean/example.tsv DELETED
@@ -1,3 +0,0 @@
- audio_id	ngram
- common_voice_pl_20547775.wav	poślemy potest
- common_voice_pl_20547776.wav	poślemy po wastest
clean/keyword.tsv DELETED
@@ -1,2 +0,0 @@
- audio_id	ngram
- common_voice_pl_20547774.wav	poślemytrain
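The deleted TSVs here (and under `other/` below) share one layout: a header row of `audio_id` and `ngram`, then one tab-separated row per clip; the deleted `test2.py` below parses them with a bare `split("\t")`. An equivalent sketch using the standard `csv` module:

```python
import csv

# Read a keyword/example TSV (columns: audio_id, ngram). The path is
# illustrative and assumes a checkout from before this commit, since
# the file is deleted here.
with open("clean/example.tsv", encoding="utf-8", newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        print(row["audio_id"], row["ngram"])
```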
clean/test2-of.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:546c940affdd47eed8c868816c98afc7aa9a7045e9d61b1c5c6538922d52c928
+ size 64850
clean/test2-on.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:546c940affdd47eed8c868816c98afc7aa9a7045e9d61b1c5c6538922d52c928
+ size 64850
other/example.tsv DELETED
@@ -1,3 +0,0 @@
- audio_id	ngram
- common_voice_pl_20547775.wav	poślemy potest
- common_voice_pl_20547776.wav	poślemy po wastest
other/keyword.tsv DELETED
@@ -1,2 +0,0 @@
- audio_id	ngram
- common_voice_pl_20547774.wav	poślemytrain
other/test2-of.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:546c940affdd47eed8c868816c98afc7aa9a7045e9d61b1c5c6538922d52c928
+ size 64850
other/test2-on.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:546c940affdd47eed8c868816c98afc7aa9a7045e9d61b1c5c6538922d52c928
+ size 64850
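Every pointer added or rewritten in this commit carries the same oid (`sha256:546c940a…`) and size (64850), so all eight parquet files appear to be byte-identical. A sketch that would confirm this against a resolved local checkout:

```python
import hashlib

# Hash a file in chunks and compare two of the new parquet files;
# identical LFS oids imply identical bytes. Paths assume a git-lfs
# checkout of this repository.
def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

assert sha256_of("clean/test2-of.parquet") == sha256_of("other/test2-on.parquet")
```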
test2.py DELETED
@@ -1,224 +0,0 @@
-
- # coding=utf-8
- # Lint as: python3
- """test set"""
-
-
- import csv
- import os
- import json
-
- import datasets
- from datasets.utils.py_utils import size_str
- from tqdm import tqdm
- import os
-
- import datasets
-
- _CITATION = """\
- @inproceedings{panayotov2015librispeech,
-   title={Librispeech: an ASR corpus based on public domain audio books},
-   author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
-   booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
-   pages={5206--5210},
-   year={2015},
-   organization={IEEE}
- }
- """
-
- _DESCRIPTION = """\
- Lorem ipsum
- """
- _URL = "https://huggingface.co/datasets/j-krzywdziak/test2"
- _AUDIO_URL = "https://huggingface.co/datasets/j-krzywdziak/test2/resolve/main"
- _DATA_URL = "https://huggingface.co/datasets/j-krzywdziak/test2/raw/main"
- _DL_URLS = {
-     "clean": {
-         "of": _AUDIO_URL + "/clean/of/examples.zip",
-         "on": _AUDIO_URL + "/clean/on/examples.zip",
-         "example": _DATA_URL + "/clean/example.tsv",
-         "keyword": _DATA_URL + "/clean/keyword.tsv"
-     },
-     "other": {
-         "of": _AUDIO_URL + "/other/of/examples.zip",
-         "on": _AUDIO_URL + "/other/on/examples.zip",
-         "example": _DATA_URL + "/other/example.tsv",
-         "keyword": _DATA_URL + "/other/keyword.tsv"
-     },
-     "all": {
-         "clean.of": _AUDIO_URL + "/clean/of/examples.zip",
-         "clean.on": _AUDIO_URL + "/clean/on/examples.zip",
-         "other.of": _AUDIO_URL + "/other/of/examples.zip",
-         "other.on": _AUDIO_URL + "/other/on/examples.zip",
-         "clean.example": _DATA_URL + "/clean/example.tsv",
-         "clean.keyword": _DATA_URL + "/clean/keyword.tsv",
-         "other.example": _DATA_URL + "/other/example.tsv",
-         "other.keyword": _DATA_URL + "/other/keyword.tsv"
-     },
- }
-
-
- class TestASR(datasets.GeneratorBasedBuilder):
-     """Lorem ipsum."""
-     VERSION = "0.0.0"
-     DEFAULT_CONFIG_NAME = "all"
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="clean", description="'Clean' speech."),
-         datasets.BuilderConfig(name="other", description="'Other', more challenging, speech."),
-         datasets.BuilderConfig(name="all", description="Combined clean and other dataset."),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "path": datasets.Value("string"),
-                     "audio": datasets.Audio(sampling_rate=16_000),
-                     "ngram": datasets.Value("string"),
-                     "type": datasets.Value("string")
-                 }
-             ),
-             supervised_keys=("file", "text"),
-             homepage=_URL,
-             citation=_CITATION
-         )
-
-     def _split_generators(self, dl_manager):
-         archive_path = dl_manager.download(_DL_URLS[self.config.name])
-         # (Optional) In non-streaming mode, we can extract the archive locally to have actual local audio files:
-         local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else {}
-         if self.config.name == "clean":
-             of_split = [
-                 datasets.SplitGenerator(
-                     name="of",
-                     gen_kwargs={
-                         "local_extracted_archive": local_extracted_archive.get("of"),
-                         "files": dl_manager.iter_archive(archive_path["of"]),
-                         "examples": archive_path["example"],
-                         "keywords": archive_path["keyword"]
-                     },
-                 )
-             ]
-             on_split = [
-                 datasets.SplitGenerator(
-                     name="on",
-                     gen_kwargs={
-                         "local_extracted_archive": local_extracted_archive.get("on"),
-                         "files": dl_manager.iter_archive(archive_path["on"]),
-                         "examples": archive_path["example"],
-                         "keywords": archive_path["keyword"]
-                     },
-                 )
-             ]
-         elif self.config.name == "other":
-             of_split = [
-                 datasets.SplitGenerator(
-                     name="of",
-                     gen_kwargs={
-                         "local_extracted_archive": local_extracted_archive.get("of"),
-                         "files": dl_manager.iter_archive(archive_path["of"]),
-                         "examples": archive_path["example"],
-                         "keywords": archive_path["keyword"]
-                     },
-                 )
-             ]
-             on_split = [
-                 datasets.SplitGenerator(
-                     name="on",
-                     gen_kwargs={
-                         "local_extracted_archive": local_extracted_archive.get("on"),
-                         "files": dl_manager.iter_archive(archive_path["on"]),
-                         "examples": archive_path["example"],
-                         "keywords": archive_path["keyword"]
-                     },
-                 )
-             ]
-         elif self.config.name == "all":
-             of_split = [
-                 datasets.SplitGenerator(
-                     name="clean.of",
-                     gen_kwargs={
-                         "local_extracted_archive": local_extracted_archive.get("clean.of"),
-                         "files": dl_manager.iter_archive(archive_path["clean.of"]),
-                         "examples": archive_path["clean.example"],
-                         "keywords": archive_path["clean.keyword"]
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="other.of",
-                     gen_kwargs={
-                         "local_extracted_archive": local_extracted_archive.get("other.of"),
-                         "files": dl_manager.iter_archive(archive_path["other.of"]),
-                         "examples": archive_path["other.example"],
-                         "keywords": archive_path["other.keyword"]
-                     }
-                 )
-             ]
-             on_split = [
-                 datasets.SplitGenerator(
-                     name="clean.on",
-                     gen_kwargs={
-                         "local_extracted_archive": local_extracted_archive.get("clean.on"),
-                         "files": dl_manager.iter_archive(archive_path["clean.on"]),
-                         "examples": archive_path["clean.example"],
-                         "keywords": archive_path["clean.keyword"]
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="other.on",
-                     gen_kwargs={
-                         "local_extracted_archive": local_extracted_archive.get("other.on"),
-                         "files": dl_manager.iter_archive(archive_path["other.on"]),
-                         "examples": archive_path["other.example"],
-                         "keywords": archive_path["other.keyword"]
-                     }
-                 )
-             ]
-         return on_split + of_split
-
-     def _generate_examples(self, files, local_extracted_archive, examples, keywords):
-         """Lorem ipsum."""
-         audio_data = {}
-         transcripts = []
-         key = 0
-         print(examples, keywords)
-         print(local_extracted_archive)
-         for path, f in files:
-             audio_data[path] = f.read()
-         with open(keywords, encoding="utf-8") as f:
-             next(f)
-             for row in f:
-                 r = row.split("\t")
-                 print(r)
-                 path = 'examples/'+r[0]
-                 ngram = r[1]
-                 transcripts.append({
-                     "path": path,
-                     "ngram": ngram,
-                     "type": "keyword"
-                 })
-         with open(examples, encoding="utf-8") as f2:
-             next(f2)
-             for row in f2:
-                 r = row.split("\t")
-                 print(r)
-                 path = 'examples/'+r[0]
-                 ngram = r[1]
-                 transcripts.append({
-                     "path": path,
-                     "ngram": ngram,
-                     "type": "example"
-                 })
-         print("AUDIO DATA: ", audio_data)
-         print("TRANSCRIPT: ", transcripts)
-         if audio_data and len(audio_data) == len(transcripts):
-             for transcript in transcripts:
-                 audio = {"path": transcript["path"], "bytes": audio_data[transcript["path"]]}
-                 yield key, {"audio": audio, **transcript}
-                 key += 1
-             audio_data = {}
-             transcripts = []
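With the loading script gone, the Hub can serve this dataset straight from the parquet files; the `clean`, `other`, and `all` configs the script defined now correspond to the directories created above. A hedged sketch of post-conversion usage (config and split names are inferred from the new file layout, not confirmed by the commit):

```python
from datasets import load_dataset

# The script-based builder above is deleted; load_dataset now reads the
# parquet files directly. Names are inferred, e.g. clean/test2-of.parquet
# is assumed to surface as config "clean", split "of".
ds_clean = load_dataset("j-krzywdziak/test2", "clean")
ds_all = load_dataset("j-krzywdziak/test2", "all")
print(ds_clean)
print(ds_all)
```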