Dataset · Modalities: Text · Formats: json · Languages: English · Libraries: Datasets, Dask
ngc7293 committed · commit b21d783 · parent 6a063ff

Update README.md (README.md: +4 −4)
Knowledge Pile is a knowledge-related dataset leveraging [Query of CC](https://arxi
As shown in the figure below, we initially collected seed information from specific domains, such as keywords, frequently asked questions, and textbooks, to serve as inputs for the Query Bootstrapping stage. Leveraging the strong generalization capability of large language models, we can effortlessly expand this initial seed information into a large number of domain-relevant queries. Inspired by Self-Instruct and WizardLM, we adopted two stages of expansion, namely **Question Extension** and **Thought Generation**, which extend the queries in breadth and depth respectively, so as to retrieve domain-related data with broader scope and deeper reasoning. Subsequently, based on these queries, we retrieved relevant documents from public corpora and, after deduplication and filtering, formed the final training dataset.
![The overview of Query of CC’s two major components: Query Bootstrapping and Data Retrieval.](https://github.com/ngc7292/query_of_cc/blob/master/images/main_stage.png?raw=true)
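The pipeline described above can be sketched in code. This is a minimal, hypothetical illustration only: the two expansion steps would normally call a large language model, but here they are stubbed with string templates, and retrieval is a toy word-overlap match, so that the overall control flow (seed → breadth expansion → depth expansion → retrieval → deduplication and filtering) is runnable end to end. All function names are our own inventions, not part of the released tooling.

```python
# Hypothetical sketch of the Query Bootstrapping + Data Retrieval pipeline.
# LLM calls are stubbed with string templates; retrieval is toy word overlap.

def question_extension(seed_queries):
    """Breadth: derive related questions from each seed query (LLM-stubbed)."""
    return ([f"what is {q}" for q in seed_queries] +
            [f"how does {q} relate to nearby topics" for q in seed_queries])

def thought_generation(queries):
    """Depth: rewrite each query to demand step-by-step reasoning (LLM-stubbed)."""
    return [f"explain step by step: {q}" for q in queries]

def retrieve(queries, corpus):
    """Toy retrieval: keep documents sharing at least one word with any query."""
    hits = []
    for doc in corpus:
        doc_words = set(doc.lower().split())
        if any(doc_words & set(q.lower().split()) for q in queries):
            hits.append(doc)
    return hits

def dedup_and_filter(docs, min_len=20):
    """Remove exact duplicates and very short documents."""
    seen, kept = set(), []
    for doc in docs:
        if doc not in seen and len(doc) >= min_len:
            seen.add(doc)
            kept.append(doc)
    return kept

seeds = ["photosynthesis", "prime factorization"]
queries = question_extension(seeds)      # breadth expansion
queries += thought_generation(queries)   # depth expansion
corpus = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Prime factorization decomposes an integer into prime factors.",
    "Short note.",
]
dataset = dedup_and_filter(retrieve(queries, corpus))
print(len(dataset))  # the duplicate and the too-short document are dropped
```

In the real pipeline each stub would be replaced by an LLM prompt and a proper retriever over Common Crawl-scale corpora, but the staging is the same.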
## **Knowledge Pile** Statistics
Based on *Query of CC*, we have built a high-quality knowledge dataset, **Knowledge Pile**, which occupies about 900 GB on disk and contains about 300B tokens (using the Llama2 tokenizer). As shown in the figure below, compared with other datasets in the academic and mathematical reasoning domains, we acquired a large-scale, high-quality knowledge dataset at lower cost and without manual intervention. Through automated query bootstrapping, we efficiently capture information related to the seed queries. **Knowledge Pile** covers not only mathematical reasoning data but also rich knowledge-oriented corpora spanning fields such as biology and physics, enhancing its research and application potential.
<img src="https://github.com/ngc7292/query_of_cc/blob/master/images/query_of_cc_timestamp.png?raw=true" width="300"/>
The table below presents the top 10 web domains with the highest proportion in **Knowledge Pile**, primarily academic websites, high-quality forums, and other knowledge-oriented sites. The figure above provides a breakdown of the data sources' timestamps in **Knowledge Pile**, with statistics aggregated by year. A significant portion of **Knowledge Pile** is sourced from recent years, with decreasing proportions for earlier timestamps. This trend can be attributed to the exponential growth of internet data and the inherent recency of the sources from which **Knowledge Pile** is drawn.
| **Web Domain** | **Count** |
|----------------------------|----------------|