---
license: cc-by-sa-3.0
---

# The Wikipedia Webpage 2M (WikiWeb2M) Dataset

We present the WikiWeb2M dataset, consisting of over 2 million English
Wikipedia articles. Our released dataset includes all of the text content on
each page, links to the images present, and structural metadata such as which
section each text and image element comes from.

This dataset is a contribution from our [paper](https://arxiv.org/abs/2305.03668),
`A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding`.

The dataset is stored as gzipped TFRecord files, which can be downloaded here or from our [GitHub repository](https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md).

## WikiWeb2M Statistics

WikiWeb2M is the first multimodal open source dataset to include all page
content in a unified format. Here we provide aggregate information about the
WikiWeb2M dataset as well as the number of samples available for each of the
fine-tuning tasks we design from it.

| Number of | Train | Validation | Test |
| ---- | ---- | ---- | ---- |
| Pages | 1,803,225 | 100,475 | 100,833 |
| Sections | 10,519,294 | 585,651 | 588,552 |
| Unique Images | 3,867,277 | 284,975 | 286,390 |
| Total Images | 5,340,708 | 299,057 | 300,666 |

Our data processing and filtering choices for each fine-tuning task are
described in the paper.

| Downstream Task Samples | Train | Validation | Test |
| ---- | ---- | ---- | ---- |
| Page Description Generation | 1,435,263 | 80,103 | 80,339 |
| Section Summarization | 3,082,031 | 172,984 | 173,591 |
| Contextual Image Captioning | 2,222,814 | 124,703 | 124,188 |

## Data and Task Examples

Here we illustrate how a single webpage can be processed into the three tasks we
study: page description generation, section summarization, and contextual image
captioning. The paper includes multiple Wikipedia article examples.

![Illustration of the Succulents Wikipedia article being used for page description generation, section summarization, and contextual image captioning](images/wikiweb2m_image.png)

## Usage

### TFRecord Features

Here we provide the names of the fields included in the dataset, their
TensorFlow SequenceExample type (context or sequence), their data type, and a brief description.

| Feature | SequenceExample Type | DType | Description |
| ---- | ---- | ---- | ---- |
| `split` | Context | string | Dataset split this page contributes to (train, val, or test) |
| `page_url` | Context | string | Wikipedia page URL |
| `page_title` | Context | string | Wikipedia page title, the title of the article |
| `raw_page_description` | Context | string | Wikipedia page description, which is typically the same as or very similar to the content of the first (root) section of the article |
| `clean_page_description` | Context | string | `raw_page_description` with newline and tab characters removed; this provides the exact target text for our page description generation task |
| `page_contains_images` | Context | int64 | Whether the Wikipedia page has images after our cleaning and processing steps |
| `page_content_sections_without_table_list` | Context | int64 | Number of content sections with text or images that do not contain a list or table; this field can be used to reproduce data filtering for page description generation |
| `is_page_description_sample` | Context | int64 | Whether a page is used as a sample for the page description fine-tuning task |
| `section_title` | Sequence | string | Titles of each section on the Wikipedia page, in order |
| `section_index` | Sequence | int64 | Index of each section on the Wikipedia page, in order |
| `section_depth` | Sequence | int64 | Depth of each section on the Wikipedia page, in order |
| `section_heading_level` | Sequence | int64 | Heading level of each section on the Wikipedia page, in order |
| `section_subsection_index` | Sequence | int64 | Subsection indices, grouped by section in order |
| `section_parent_index` | Sequence | int64 | The parent section index of each section, in order |
| `section_text` | Sequence | string | The body text of each section, in order |
| `is_section_summarization_sample` | Sequence | int64 | Whether a section is used as a sample for the section summarization fine-tuning task |
| `section_raw_1st_sentence` | Sequence | string | The extracted first sentence of each section, in order |
| `section_clean_1st_sentence` | Sequence | string | The same as `section_raw_1st_sentence` but with newline and tab characters removed; this provides the exact target text for our section summarization task |
| `section_rest_sentence` | Sequence | string | The extracted sentences following the first sentence of each section, in order |
| `section_contains_table_or_list` | Sequence | int64 | Whether section content contains a table or list; this field is needed to reproduce sample filtering for section summarization |
| `section_contains_images` | Sequence | int64 | Whether each section has images after our cleaning and processing steps, in order |
| `is_image_caption_sample` | Sequence | int64 | Whether an image is used as a sample for the image captioning fine-tuning task |
| `section_image_url` | Sequence | string | Image URLs, grouped by section in order |
| `section_image_mime_type` | Sequence | string | Image MIME types, grouped by section in order |
| `section_image_width` | Sequence | int64 | Image widths, grouped by section in order |
| `section_image_height` | Sequence | int64 | Image heights, grouped by section in order |
| `section_image_in_wit` | Sequence | int64 | Whether an image was originally contained in the WIT dataset, grouped by section in order |
| `section_image_raw_attr_desc` | Sequence | string | Image attribution descriptions, grouped by section in order |
| `section_image_clean_attr_desc` | Sequence | string | The processed, English-only portions of `section_image_raw_attr_desc` |
| `section_image_raw_ref_desc` | Sequence | string | Image reference descriptions, grouped by section in order |
| `section_image_clean_ref_desc` | Sequence | string | The same as `section_image_raw_ref_desc` but with newline and tab characters removed; this provides the exact target text for our image captioning task |
| `section_image_alt_text` | Sequence | string | Image alt text, grouped by section in order |
| `section_image_captions` | Sequence | string | Comma-separated concatenation of the alt text, attribution, and reference descriptions; this is how captions are formatted as input text when used |
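
To make the structural metadata concrete, here is a small sketch with illustrative values only (not drawn from the dataset; it also assumes, for illustration, that a negative `section_parent_index` marks the root section) of how `section_index`, `section_parent_index`, and `section_title` together describe a page's section tree:

```python
# Hypothetical values for a page with a root section and two nested children.
section_index = [0, 1, 2]
section_title = ['Succulent', 'Description', 'Water content']
section_parent_index = [-1, 0, 1]  # assumption: negative parent = root section

def print_tree(idx: int, depth: int = 0) -> None:
  # Print a section, then recurse into the sections that list it as parent.
  print('  ' * depth + section_title[idx])
  for child, parent in zip(section_index, section_parent_index):
    if parent == idx:
      print_tree(child, depth + 1)

print_tree(0)  # Succulent > Description > Water content
```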

### Loading the Data

Here we provide a small code snippet showing how to load the TFRecord files.
First, load the necessary packages.

```python
import glob
from collections import defaultdict

import tensorflow.compat.v1 as tf
```

Next, define a data parser class.

```python
class DataParser():

  def __init__(self,
               filepath: str = 'wikiweb2m-*',
               path: str = ''):
    # `path` is the directory prefix for the downloaded shards (ending in
    # '/', or empty for the current directory); `filepath` is the shard
    # name pattern.
    self.filepath = filepath
    self.path = path
    self.data = defaultdict(list)

  def parse_data(self):
    context_feature_description = {
        'split': tf.io.FixedLenFeature([], dtype=tf.string),
        'page_title': tf.io.FixedLenFeature([], dtype=tf.string),
        'page_url': tf.io.FixedLenFeature([], dtype=tf.string),
        'clean_page_description': tf.io.FixedLenFeature([], dtype=tf.string),
        'raw_page_description': tf.io.FixedLenFeature([], dtype=tf.string),
        'is_page_description_sample': tf.io.FixedLenFeature([], dtype=tf.int64),
        'page_contains_images': tf.io.FixedLenFeature([], dtype=tf.int64),
        'page_content_sections_without_table_list': tf.io.FixedLenFeature([], dtype=tf.int64)
    }

    sequence_feature_description = {
        'is_section_summarization_sample': tf.io.VarLenFeature(dtype=tf.int64),
        'section_title': tf.io.VarLenFeature(dtype=tf.string),
        'section_index': tf.io.VarLenFeature(dtype=tf.int64),
        'section_depth': tf.io.VarLenFeature(dtype=tf.int64),
        'section_heading_level': tf.io.VarLenFeature(dtype=tf.int64),
        'section_subsection_index': tf.io.VarLenFeature(dtype=tf.int64),
        'section_parent_index': tf.io.VarLenFeature(dtype=tf.int64),
        'section_text': tf.io.VarLenFeature(dtype=tf.string),
        'section_clean_1st_sentence': tf.io.VarLenFeature(dtype=tf.string),
        'section_raw_1st_sentence': tf.io.VarLenFeature(dtype=tf.string),
        'section_rest_sentence': tf.io.VarLenFeature(dtype=tf.string),
        'is_image_caption_sample': tf.io.VarLenFeature(dtype=tf.int64),
        'section_image_url': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_mime_type': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_width': tf.io.VarLenFeature(dtype=tf.int64),
        'section_image_height': tf.io.VarLenFeature(dtype=tf.int64),
        'section_image_in_wit': tf.io.VarLenFeature(dtype=tf.int64),
        'section_contains_table_or_list': tf.io.VarLenFeature(dtype=tf.int64),
        'section_image_captions': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_alt_text': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_raw_attr_desc': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_clean_attr_desc': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_raw_ref_desc': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_clean_ref_desc': tf.io.VarLenFeature(dtype=tf.string),
        'section_contains_images': tf.io.VarLenFeature(dtype=tf.int64)
    }

    def _parse_function(example_proto):
      return tf.io.parse_single_sequence_example(example_proto,
                                                 context_feature_description,
                                                 sequence_feature_description)

    suffix = '.tfrecord*'
    data_path = glob.glob(self.path + self.filepath + suffix)
    raw_dataset = tf.data.TFRecordDataset(data_path, compression_type='GZIP')
    parsed_dataset = raw_dataset.map(_parse_function)

    # Each parsed record is a (context, sequence) tuple; bucket pages by
    # their dataset split.
    for d in parsed_dataset:
      split = d[0]['split'].numpy().decode()
      self.data[split].append(d)
```

Then you can run the following to parse the dataset.

```python
parser = DataParser()  # pass path=... if the shards live in another directory
parser.parse_data()
print((len(parser.data['train']), len(parser.data['val']), len(parser.data['test'])))
```
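
Each stored record is the `(context, sequence)` tuple returned by `tf.io.parse_single_sequence_example`, and because the sequence fields are parsed with `tf.io.VarLenFeature` they come back as `tf.sparse.SparseTensor` objects. As a minimal sketch (not code from the paper) of putting the task flags to use, the following assembles section summarization pairs from one parsed page, assuming these fields hold exactly one value per section:

```python
context, sequence = parser.data['train'][0]
print(context['page_title'].numpy().decode())

# .values flattens a SparseTensor; for per-section fields this yields one
# entry per section (use tf.sparse.to_dense for the padded 2-D view instead).
flags = sequence['is_section_summarization_sample'].values.numpy()
titles = sequence['section_title'].values.numpy()
targets = sequence['section_clean_1st_sentence'].values.numpy()

for i, flag in enumerate(flags):
  if flag == 1:  # this section is a summarization sample
    print(titles[i].decode(), '->', targets[i].decode())
```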

### Models

Our full attention, transient global, and prefix global experiments were run
using the [LongT5](https://github.com/google-research/longt5) code base.

## How to Cite

If you extend or use this work, please cite the [paper](https://arxiv.org/abs/2305.03668) in which it was
introduced:

```
@inproceedings{burns2023wiki,
  title={A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding},
  author={Andrea Burns and Krishna Srinivasan and Joshua Ainslie and Geoff Brown and Bryan A. Plummer and Kate Saenko and Jianmo Ni and Mandy Guo},
  booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2023},
  url={https://openreview.net/forum?id=rwcLHjtUmn}
}
```