ytzfhqs committed
Commit d822c70
1 Parent(s): 9e20907

Update README.md

Files changed (1): README.md (+55 -3)
---
license: apache-2.0
language:
- zh
metrics:
- accuracy
- precision
base_model:
- Qwen/Qwen2.5-0.5B
---
This model is an intermediate product of the [EPCD (Easy-Data-Clean-Pipeline)](https://github.com/ytzfhqs/EDCP) project. It is primarily used to distinguish the main content of **medical textbooks** from non-content (such as book introductions, publisher information, writing standards, and revision notes) after OCR with [MinerU](https://github.com/opendatalab/MinerU). The base model is [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B), which avoids the length limitation of a BERT tokenizer while providing higher accuracy.

# Data Composition

- The data consists of scanned PDF copies of textbooks, converted into `Markdown` files through OCR with [MinerU](https://github.com/opendatalab/MinerU). After simple regex-based cleaning, the text was split into samples on `\n`, and a `Bloom` filter was used for deduplication, yielding about 50,000 samples (see the sketch after this list). For legal reasons, we currently do not plan to make the dataset publicly available.
- Because of the nature of textbooks, most samples are main content: 79.89% (roughly 40,000) are main-content samples and 20.11% (roughly 10,000) are non-content samples. Given this imbalance, we evaluate the model on both Accuracy and Precision on the test set.
- To keep the test set's distribution consistent with the training set's, we used stratified sampling to hold out 10% of the data as the test set.
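
The list above compresses the whole preparation pipeline, so here is a minimal sketch of the splitting, deduplication, and stratified hold-out steps. It is illustrative only, not the project's actual code: the cleaning regex, the Bloom filter parameters, and the `labels` list are assumptions.

```python
import hashlib
import re


class BloomFilter:
    """Minimal Bloom filter over a fixed-size bit array (parameters are illustrative)."""

    def __init__(self, size_bits: int = 1 << 24, num_hashes: int = 5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive several bit positions from salted MD5 digests of the sample.
        for salt in range(self.num_hashes):
            digest = hashlib.md5(f"{salt}:{item}".encode("utf-8")).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))


def split_and_dedup(markdown_text: str) -> list[str]:
    """Split OCR output on newlines, normalize whitespace, drop duplicate lines."""
    seen = BloomFilter()
    samples = []
    for line in markdown_text.split("\n"):
        line = re.sub(r"\s+", " ", line).strip()  # illustrative regex cleaning
        if line and line not in seen:
            seen.add(line)
            samples.append(line)
    return samples


# Stratified 10% hold-out, assuming a hypothetical `labels` list
# (0 = main content, 1 = non-content) aligned with `samples`:
# from sklearn.model_selection import train_test_split
# train_texts, test_texts, train_labels, test_labels = train_test_split(
#     samples, labels, test_size=0.1, stratify=labels, random_state=42
# )
```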

# Training Techniques

- To maximize model accuracy, we used Bayesian optimization (the TPE algorithm) together with Hyperband pruning (`HyperbandPruner`) to accelerate hyperparameter tuning (see the sketch below).
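
`HyperbandPruner` is Optuna's class name, so the sketch below assumes Optuna as the tuning framework. The search space and the `train_one_epoch` helper are hypothetical stand-ins for the actual fine-tuning run.

```python
import optuna


def objective(trial: optuna.Trial) -> float:
    # Hypothetical search space; the real one is not documented here.
    lr = trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True)
    batch_size = trial.suggest_categorical("batch_size", [8, 16, 32])

    best_accuracy = 0.0
    for epoch in range(3):
        # `train_one_epoch` is a hypothetical helper that fine-tunes the
        # classifier for one epoch and returns validation accuracy.
        accuracy = train_one_epoch(lr=lr, batch_size=batch_size, epoch=epoch)
        best_accuracy = max(best_accuracy, accuracy)

        # Report intermediate results so HyperbandPruner can stop weak trials early.
        trial.report(accuracy, step=epoch)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return best_accuracy


study = optuna.create_study(
    direction="maximize",
    sampler=optuna.samplers.TPESampler(seed=42),  # Bayesian optimization (TPE)
    pruner=optuna.pruners.HyperbandPruner(),      # Hyperband early stopping
)
study.optimize(objective, n_trials=50)
print(study.best_params)
```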

# Model Performance

| Dataset | Accuracy | Precision |
|---------|----------|-----------|
| Train   | 0.9894   | 0.9673    |
| Test    | 0.9788   | 0.9548    |

# Usage

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Label 0 is main content ("正文"), label 1 is non-content ("非正文").
ID2LABEL = {0: "正文", 1: "非正文"}

model_name = 'ytzfhqs/Qwen2.5-med-book-main-classification'
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

text = '下列为修订说明'  # "The following are revision notes"
encoding = tokenizer(text, return_tensors='pt')
encoding = {k: v.to(model.device) for k, v in encoding.items()}
with torch.no_grad():
    outputs = model(**encoding)
logits = outputs.logits
label_id = torch.argmax(logits, dim=-1).item()
response = ID2LABEL[label_id]
print(response)  # prints the predicted label
```
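
For classifying many lines at once (for example, a whole OCR'd chapter), a batched variant is sketched below. It reuses `model`, `tokenizer`, and `ID2LABEL` from above, the example sentences are made up, and it assumes the tokenizer defines a pad token, as Qwen2.5 tokenizers normally do.

```python
# Illustrative inputs: "Chapter 1: Introduction" / "This book was compiled by the publisher".
texts = ['第一章 绪论', '本书由出版社组织编写']
batch = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
batch = {k: v.to(model.device) for k, v in batch.items()}
with torch.no_grad():
    logits = model(**batch).logits
preds = torch.argmax(logits, dim=-1).tolist()
print([ID2LABEL[p] for p in preds])
```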