---
license: cc-by-4.0
---
# WebSRC v1.0

WebSRC v1.0 is a dataset for structural reading comprehension on web pages.
The task is to answer questions about a web page, which requires a system to
have a comprehensive understanding of the page's spatial and logical
structure. WebSRC consists of 6.4K web pages and 400K question-answer pairs.
For each web page, we manually chose one segment from it and saved the
corresponding HTML code, screenshot, and metadata such as element positions
and sizes. Questions in WebSRC were created for each segment. Answers are
either text spans from the web page or yes/no. Taking the HTML code,
screenshot, metadata, and question as input, a model must predict the answer
from the web page. Our dataset is the first to provide HTML documents as well
as images, and it is larger than previous datasets in the number of domains
and queries.

For more details, please refer to our paper [WebSRC: A Dataset for Web-Based Structural Reading Comprehension](https://arxiv.org/abs/2101.09465).
The leaderboard for WebSRC v1.0 can be found [here](https://x-lance.github.io/WebSRC/).

## Data Format Description

The dataset for each website is stored in `dataset.csv` in the directory
`{domain-name}/{website-number}`. The corresponding raw data (including HTML
files, screenshots, bounding box coordinates, and page names and URLs) is
stored in the `processed_data` folder in the same directory.

In `dataset.csv`, each row except the header corresponds to one
question-answer data point. The meanings of the columns are as follows:
* `question`: a string, the question of this question-answer data point.
* `id`: a unique id for this question-answer data point. Each `id` has a length of 14: the first two characters are the domain indicator, and the following two digits are the website number. The corresponding page id can be extracted as `id[2:9]`; for example, the id "sp160000100001" means this row was created from the *sport* domain, website *16*, and the corresponding page is `1600001.html`.
* `element_id`: an integer, the tag id (corresponding to the tag's `tid` attribute in the HTML files) of the deepest tag in the DOM tree that contains the complete answer. For yes/no questions, there is no tag associated with the answer, so `element_id` is -1.
* `answer_start`: an integer, the character offset of the answer from the start of the content of the tag specified by `element_id`. Note that before counting this offset, we first remove all inner tags within the specified tag and replace each run of consecutive whitespace with a single space (see the sketch after this list). For yes/no questions, `answer_start` is 1 for the answer "yes" and 0 for the answer "no".
* `answer`: a string, the answer of this question-answer data point.
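
To make the format concrete, here is a minimal sketch of loading one website's `dataset.csv` and reconstructing an answer from the saved HTML, using `pandas` and `beautifulsoup4`. The concrete paths (`sport/16/...`), the location of the page file inside `processed_data`, and the exact whitespace normalization are assumptions inferred from the description above, not an official loader.

```python
import re

import pandas as pd
from bs4 import BeautifulSoup

# Assumed paths following the {domain-name}/{website-number} layout above;
# adjust to wherever the repository is checked out.
df = pd.read_csv("sport/16/dataset.csv")
row = df.iloc[0]

# Decode the 14-character id: 2-char domain indicator, 2-digit website
# number, and the page id at id[2:9] (e.g. "sp160000100001" -> "1600001").
qid = row["id"]
domain, website, page = qid[:2], qid[2:4], qid[2:9]

if row["element_id"] == -1:
    # Yes/no question: answer_start encodes the answer itself.
    answer = "yes" if row["answer_start"] == 1 else "no"
else:
    # Assumed location of the saved page inside processed_data/.
    with open(f"sport/16/processed_data/{page}.html", encoding="utf-8") as f:
        soup = BeautifulSoup(f.read(), "html.parser")
    tag = soup.find(attrs={"tid": str(row["element_id"])})
    # Drop inner tags and collapse runs of whitespace into single spaces,
    # mirroring the normalization described for answer_start; the official
    # evaluation script may normalize slightly differently.
    text = re.sub(r"\s+", " ", tag.get_text())
    answer = text[row["answer_start"] : row["answer_start"] + len(row["answer"])]

print(domain, website, page, answer)
```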

## Data Statistics

We roughly divided the questions in WebSRC v1.0 into three categories: KV,
Comparison, and Table. The detailed definitions can be found in our
[paper](https://arxiv.org/abs/2101.09465). The numbers of websites, webpages,
and QAs corresponding to the three categories are as follows:

Type | # Websites | # Webpages | # QAs
---- | ---------- | ---------- | -----
KV | 34 | 3,207 | 168,606
Comparison | 15 | 1,339 | 68,578
Table | 21 | 1,901 | 163,314

The statistics of the dataset splits are as follows:

Split | # Websites | # Webpages | # QAs
----- | ---------- | ---------- | -----
Train | 50 | 4,549 | 307,315
Dev | 10 | 913 | 52,826
Test | 10 | 985 | 40,357

## Obtaining Test Results

For test set evaluation, please send your prediction files to
[email protected] and [email protected] with the title "WebSRC Test:
\<your model name\>+\<your institution\>". The submission should contain two
files:

```jsonc
// prediction.json
// A JSON file whose keys are ids and whose values are the predicted answers (strings).
{
    "sp160000100001": "predicted answer",
    "sp160000100002": "...",
    //...
}

// tag_prediction.json
// A JSON file whose keys are ids and whose values are the predicted tag tids (ints).
{
    "sp160000100001": -1,
    "sp160000100002": -1,
    //...
}
```

We encourage you to submit results from **at least three runs with different
random seeds** to reduce the uncertainty of the experiments. Please place the
prediction files for each run in a separate directory and submit a single
zipped file; a sketch of assembling such an archive follows. The averaged test
results will be sent back by email.
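
A minimal sketch of assembling the zipped submission using only the Python standard library is shown below. The run directory names and the `submission.zip` filename are illustrative rather than prescribed, and the inline prediction dictionaries stand in for your model's real outputs.

```python
import json
import zipfile

# Hypothetical predictions per seed: (id -> answer string, id -> tag tid).
runs = {
    "run1": ({"sp160000100001": "predicted answer"}, {"sp160000100001": -1}),
    "run2": ({"sp160000100001": "predicted answer"}, {"sp160000100001": -1}),
    "run3": ({"sp160000100001": "predicted answer"}, {"sp160000100001": -1}),
}

with zipfile.ZipFile("submission.zip", "w") as zf:
    for run_name, (answers, tags) in runs.items():
        # One directory per run, each holding both required files.
        zf.writestr(f"{run_name}/prediction.json", json.dumps(answers, indent=2))
        zf.writestr(f"{run_name}/tag_prediction.json", json.dumps(tags, indent=2))
```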

## Reference

If you use any source code or datasets included in this repository in your
work, please cite the corresponding paper. The BibTeX entry is listed below:

```bibtex
@inproceedings{chen-etal-2021-websrc,
    title = "{W}eb{SRC}: A Dataset for Web-Based Structural Reading Comprehension",
    author = "Chen, Xingyu and
      Zhao, Zihan and
      Chen, Lu and
      Ji, JiaBao and
      Zhang, Danyang and
      Luo, Ao and
      Xiong, Yuxuan and
      Yu, Kai",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.343",
    pages = "4173--4185",
    abstract = "Web search is an essential way for humans to obtain information, but it{'}s still a great challenge for machines to understand the contents of web pages. In this paper, we introduce the task of web-based structural reading comprehension. Given a web page and a question about it, the task is to find an answer from the web page. This task requires a system not only to understand the semantics of texts but also the structure of the web page. Moreover, we proposed WebSRC, a novel Web-based Structural Reading Comprehension dataset. WebSRC consists of 400K question-answer pairs, which are collected from 6.4K web pages with corresponding HTML source code, screenshots, and metadata. Each question in WebSRC requires a certain structural understanding of a web page to answer, and the answer is either a text span on the web page or yes/no. We evaluate various strong baselines on our dataset to show the difficulty of our task. We also investigate the usefulness of structural information and visual features. Our dataset and baselines have been publicly available.",
}
```