---
pretty_name: KoBEST
annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages:
- ko
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
---
# Dataset Card for KoBEST
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/SKT-LSL/KoBEST_datarepo
- **Paper:** [KOBEST: Korean Balanced Evaluation of Significant Tasks](https://arxiv.org/abs/2204.04541)
- **Point of Contact:** https://github.com/SKT-LSL/KoBEST_datarepo/issues
### Dataset Summary
KoBEST is a Korean benchmark suite consisting of five natural language understanding tasks that require advanced knowledge of Korean.
### Supported Tasks and Leaderboards
Boolean Question Answering (KB-BoolQ), Choice of Plausible Alternatives (KB-COPA), Words-in-Context (KB-WiC), HellaSwag (KB-HellaSwag), and Sentiment Negation Recognition (KB-SentiNeg)
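Each task is distributed as its own configuration of this repository. The snippet below is a minimal loading sketch, assuming the repository id `skt/kobest_v1` and the configuration names `boolq`, `copa`, `wic`, `hellaswag`, and `sentineg`; adjust the names if your copy differs.
```python
# Minimal loading sketch (assumed repository id and configuration names).
from datasets import load_dataset

CONFIGS = ["boolq", "copa", "wic", "hellaswag", "sentineg"]

# Each call returns a DatasetDict keyed by split name.
kobest = {name: load_dataset("skt/kobest_v1", name) for name in CONFIGS}

for name, splits in kobest.items():
    print(name, {split: ds.num_rows for split, ds in splits.items()})
```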
### Languages
`ko-KR`
## Dataset Structure
### Data Instances
#### KB-BoolQ
An example of a data point looks as follows.
```
{'paragraph': '두아 리파(Dua Lipa, 1995년 8월 22일 ~ )는 잉글랜드의 싱어송라이터, 모델이다. BBC 사운드 오브 2016 명단에 노미닛되었다. 싱글 "Be the One"가 영국 싱글 차트 9위까지 오르는 등 성과를 보여주었다.',
 'question': '두아 리파는 영국인인가?',
 'label': 1}
```
#### KB-COPA
An example of a data point looks as follows.
```
{'premise': '물을 오래 끓였다.',
 'question': '결과',
 'alternative_1': '물의 양이 늘어났다.',
 'alternative_2': '물의 양이 줄어들었다.',
 'label': 1}
```
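The `question` field marks whether the correct alternative is a cause (`원인`) or a result (`결과`) of the premise. A purely illustrative sketch of rendering such a row as a two-choice prompt (this is not the format used by the KoBEST authors):
```python
def copa_to_prompt(example: dict) -> str:
    """Render a KB-COPA row as a simple two-choice prompt (illustrative only)."""
    # '결과' asks for the result of the premise; otherwise we treat the
    # question as asking for the cause.
    connective = "so" if example["question"] == "결과" else "because"
    return (
        f"{example['premise']} ({connective})\n"
        f"1. {example['alternative_1']}\n"
        f"2. {example['alternative_2']}\n"
        f"Answer: {example['label'] + 1}"
    )

# The data point shown above ("The water was boiled for a long time." ->
# "The amount of water decreased.").
example = {
    "premise": "물을 오래 끓였다.",
    "question": "결과",
    "alternative_1": "물의 양이 늘어났다.",
    "alternative_2": "물의 양이 줄어들었다.",
    "label": 1,
}
print(copa_to_prompt(example))
```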
#### KB-WiC
An example of a data point looks as follows.
```
{'word': '양분',
 'context_1': '토양에 [양분]이 풍부하여 나무가 잘 자란다. ',
 'context_2': '태아는 모체로부터 [양분]과 산소를 공급받게 된다.',
 'label': 1}
```
#### KB-HellaSwag
An example of a data point looks as follows.
```
{'context': '모자를 쓴 투수가 타자에게 온 힘을 다해 공을 던진다. 공이 타자에게 빠른 속도로 다가온다. 타자가 공을 배트로 친다. 배트에서 깡 소리가 난다. 공이 하늘 위로 날아간다.',
 'ending_1': '외야수가 떨어지는 공을 글러브로 잡는다.',
 'ending_2': '외야수가 공이 떨어질 위치에 자리를 잡는다.',
 'ending_3': '심판이 아웃을 외친다.',
 'ending_4': '외야수가 공을 따라 뛰기 시작한다.',
 'label': 3}
```
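Assuming the integer `label` indexes the four endings in order (0 → `ending_1`, …, 3 → `ending_4`), the gold continuation of a row can be looked up directly, as in this small sketch:
```python
def gold_ending(example: dict) -> str:
    """Return the ending picked out by the integer label (assumed 0-indexed)."""
    return example[f"ending_{example['label'] + 1}"]

# For the data point shown above, label 3 selects ending_4
# ("The outfielder starts running after the ball.").
```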
#### KB-SentiNeg
An example of a data point looks as follows.
```
{'sentence': '택배사 정말 마음에 듬',
 'label': 1}
```
### Data Fields
#### KB-BoolQ
+ `paragraph`: a `string` feature
+ `question`: a `string` feature
+ `label`: a classification label, with possible values `False`(0) and `True`(1)
#### KB-COPA
+ `premise`: a `string` feature
+ `question`: a `string` feature
+ `alternative_1`: a `string` feature
+ `alternative_2`: a `string` feature
+ `label`: an answer candidate label, with possible values `alternative_1`(0) and `alternative_2`(1)
#### KB-WiC
+ `word`: a `string` feature
+ `context_1`: a `string` feature
+ `context_2`: a `string` feature
+ `label`: a classification label, with possible values `False`(0) and `True`(1)
#### KB-HellaSwag
+ `context`: a `string` feature
+ `ending_1`: a `string` feature
+ `ending_2`: a `string` feature
+ `ending_3`: a `string` feature
+ `ending_4`: a `string` feature
+ `label`: an answer candidate label, with possible values `ending_1`(0), `ending_2`(1), `ending_3`(2), and `ending_4`(3)
#### KB-SentiNeg
+ `sentence`: a `string` feature
+ `label`: a classification label, with possible values `Negative`(0) and `Positive`(1)
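The label columns are integers. If they are not already exposed as `ClassLabel` features (in which case `dataset.features["label"].int2str` can be used), the mappings listed above can be applied by hand, as in this sketch:
```python
# Integer-to-name label mappings copied from the field descriptions above.
LABEL_NAMES = {
    "boolq": {0: "False", 1: "True"},
    "copa": {0: "alternative_1", 1: "alternative_2"},
    "wic": {0: "False", 1: "True"},
    "hellaswag": {0: "ending_1", 1: "ending_2", 2: "ending_3", 3: "ending_4"},
    "sentineg": {0: "Negative", 1: "Positive"},
}

def readable_label(config: str, label: int) -> str:
    """Map an integer label to the human-readable name used on this card."""
    return LABEL_NAMES[config][label]

print(readable_label("sentineg", 1))  # -> Positive
```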
### Data Splits
#### KB-BoolQ
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-COPA
+ train: 3,076
+ dev: 1,000
+ test: 1,000
#### KB-WiC
+ train: 3,318
+ dev: 1,260
+ test: 1,260
#### KB-HellaSwag
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-SentiNeg
+ train: 3,649
+ dev: 400
+ test: 397
+ test_originated: 397 (the corresponding original training sentences from which the test set was derived)
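A quick way to cross-check these counts and look at the label balance is sketched below, assuming the `sentineg` configuration; the `dev` split listed above is typically exposed as `validation` by the `datasets` library.
```python
from datasets import load_dataset

# Assumed repository id and configuration name.
sentineg = load_dataset("skt/kobest_v1", "sentineg")

# Cross-check the split sizes listed above.
for split, ds in sentineg.items():
    print(split, ds.num_rows)

# Label balance of the training split, via pandas.
train_df = sentineg["train"].to_pandas()
print(train_df["label"].value_counts())
```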
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
KoBEST is released under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2204.04541,
  doi       = {10.48550/ARXIV.2204.04541},
  url       = {https://arxiv.org/abs/2204.04541},
  author    = {Kim, Dohyeong and Jang, Myeongjun and Kwon, Deuk Sin and Davis, Eric},
  title     = {KOBEST: Korean Balanced Evaluation of Significant Tasks},
  publisher = {arXiv},
  year      = {2022},
}
```
### Contributions
Thanks to [@MJ-Jang](https://github.com/MJ-Jang) for adding this dataset.