---
license: apache-2.0
language:
  - 'no'
  - es
  - so
  - ca
  - af
  - it
  - nl
  - hi
  - cy
  - ar
  - sv
  - cs
  - pl
  - de
  - lt
  - sq
  - uk
  - tl
  - sl
  - hr
  - en
  - fi
  - vi
  - id
  - da
  - ko
  - bg
  - mr
  - ja
  - bn
  - ro
  - pt
  - fr
  - hu
  - tr
  - zh
  - mk
  - ur
  - sk
  - ne
  - et
  - sw
  - ru
  - multilingual
task_categories:
  - text-classification
  - zero-shot-classification
tags:
  - nlp
  - moderation
size_categories:
  - 10K<n<100K
---

This is a large corpus of 42,619 preprocessed text messages and emails sent by humans in 43 languages. `is_spam=1` means spam and `is_spam=0` means ham.

1040 rows of balanced data, consisting of casual conversations and scam emails in ≈10 languages, were manually collected and annotated by me, with some help from ChatGPT.
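For a quick look at the labels, the corpus can be loaded with the 🤗 `datasets` library. This is a minimal sketch; the `train` split name and the `text` column name are assumptions (only `is_spam` is documented above):

```python
from collections import Counter

from datasets import load_dataset

# "train" split and "text" column are assumptions; "is_spam" is documented above.
ds = load_dataset("FredZhang7/all-scam-spam", split="train")

print(Counter(ds["is_spam"]))   # label counts: 1 = spam, 0 = ham
print(ds[0]["text"][:200])      # preview of one preprocessed message
```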


## Some preprocessing algorithms


## Data composition

*(chart: Spam vs Non-spam (Ham))*


## Description

To keep the text format consistent between SMS messages and emails, email subjects and content are separated by two newlines:

`text = email.subject + "\n\n" + email.content`
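A minimal sketch of this formatting step, assuming a simple email object with `subject` and `content` fields (both names are illustrative; the dataset itself only exposes the already-flattened text):

```python
from dataclasses import dataclass


@dataclass
class Email:
    # Illustrative fields; not part of the released dataset schema.
    subject: str
    content: str


def to_dataset_text(email: Email) -> str:
    """Join subject and body with two newlines, matching the dataset's format."""
    return email.subject + "\n\n" + email.content


print(to_dataset_text(Email("You won a prize!", "Click here to claim it now.")))
```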

## Suggestions

- If you plan to train a model on this dataset alone, I recommend adding some rows with `is_toxic=0` from [FredZhang7/toxi-text-3M](https://huggingface.co/datasets/FredZhang7/toxi-text-3M). Make sure the added rows aren't spam. A sketch of one way to do this follows below.
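A minimal sketch of that suggestion using the 🤗 `datasets` library. The `train` split for both datasets, the `text`/`is_spam` columns here, and the `text`/`is_toxic` columns in toxi-text-3M are all assumptions; adjust them after inspecting the actual dataset cards:

```python
from datasets import concatenate_datasets, load_dataset

# Assumed split and column names; verify against the dataset cards before use.
spam = load_dataset("FredZhang7/all-scam-spam", split="train")
toxic = load_dataset("FredZhang7/toxi-text-3M", split="train")

# Keep only non-toxic rows and relabel them as ham (is_spam=0).
extra_ham = (
    toxic.filter(lambda row: row["is_toxic"] == 0)
    .map(
        lambda row: {"text": row["text"], "is_spam": 0},
        remove_columns=toxic.column_names,
    )
)

# Align feature types (assumes this dataset exposes exactly "text" and "is_spam"),
# then merge the extra ham rows with the spam corpus.
combined = concatenate_datasets([spam, extra_ham.cast(spam.features)])
print(combined)
```

As the suggestion above notes, toxi-text-3M's non-toxic rows aren't guaranteed to be spam-free, so spot-check the merged rows before training.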

## Other Sources