---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- neural-search
- neural-search-query-classification
---

## Model Details

This model is built on the [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased) architecture, specifically the `distilbert-base-uncased` variant, and classifies text into two categories: statements and questions. It leverages DistilBERT's efficiency and strong performance to accurately distinguish declarative statements from interrogative questions.

### Model Description

The model takes a piece of input text and predicts whether it is a declarative statement or a question.

### Training Data

The model was trained on a diverse dataset containing examples of both statements and questions. The training process involved fine-tuning the pre-trained DistilBERT model on this specific classification task. The dataset included various types of questions and statements from different contexts to ensure robustness.

* Quora Question Keyword Pairs
* Questions vs Statements Classification
* ilert-related Questions
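
The exact preprocessing pipeline is not published with this card. As a rough illustration (all example texts and variable names here are hypothetical), combining such sources into a single labeled dataset for fine-tuning might look like:

```python
import random

# Hypothetical examples standing in for the source datasets listed above.
question_examples = ["How do I reset my password?", "What is an escalation policy?"]
statement_examples = ["The server restarted at noon.", "Alerts are routed to the on-call engineer."]

# Build one labeled dataset: 1 = question, 0 = statement.
dataset = [(text, 1) for text in question_examples] + [(text, 0) for text in statement_examples]

# Shuffle and split into train/eval portions (80/20 here).
random.seed(42)
random.shuffle(dataset)
split = int(0.8 * len(dataset))
train_set, eval_set = dataset[:split], dataset[split:]
```

The resulting `train_set` and `eval_set` lists of `(text, label)` pairs could then be tokenized and passed to a standard `transformers` fine-tuning loop.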

### Performance

The performance of the model was evaluated using standard metrics for classification tasks, including accuracy, precision, recall, and F1 score. The results indicate that the model performs well in distinguishing between statements and questions, making it a reliable tool for text classification tasks in natural language processing.
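
The card does not publish the evaluation numbers themselves. For reference, the listed metrics are derived from the confusion-matrix counts; the counts below are made up purely for illustration:

```python
# Hypothetical confusion-matrix counts, treating "question" as the positive class.
tp, fp, fn, tn = 90, 5, 10, 95

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```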

### Usage

To use this model, you can load it through the Hugging Face `transformers` library and use it for text classification. Here is an example of how to use the model in Python:

```python
from transformers import pipeline

# Load the model and tokenizer from the Hugging Face Hub
classifier = pipeline("text-classification", model="ilert/SoQbert")

# Example texts
texts = ["Is it going to rain today?", "It is a sunny day."]

# Classify texts
results = classifier(texts)

# Output the predicted label and confidence score for each text
for text, result in zip(texts, results):
    print(f"Text: {text}")
    print(f"Classification: {result['label']} (score: {result['score']:.3f})")
```