---
language:
- fr
license:
- cc-by-nc-4.0
size_categories:
- 1M<n<10M
task_categories:
- text-classification
tags:
- textual-entailment
- DFP
- french prompts
annotations_creators:
- found
language_creators:
- found
multilinguality:
- monolingual
source_datasets:
- xnli
---
# xnli_fr_prompt_textual_entailment

## Summary
xnli_fr_prompt_textual_entailment is a subset of the Dataset of French Prompts (DFP).
It contains 8,804,444 rows that can be used for a textual entailment task.
The original data (without prompts) come from the xnli dataset by Conneau et al., of which only the French part has been kept.
A list of prompts (see below) was then applied to build the input and target columns, yielding the same format as the xP3 dataset by Muennighoff et al.
## Prompts used

### List

22 prompts were created for this dataset. The logic applied consists in offering each prompt in the indicative mood as well as in the informal (*tutoiement*) and formal (*vouvoiement*) second-person forms.
"""Prendre l'énoncé suivant comme vrai : " """+premise+""" "\n Alors l'énoncé suivant : " """+hypothesis+""" " est "vrai", "faux", ou "incertain" ?""",
"""Prends l'énoncé suivant comme vrai : " """+premise+""" "\n Alors l'énoncé suivant : " """+hypothesis+""" " est "vrai", "faux", ou "incertain" ?""",
"""Prenez l'énoncé suivant comme vrai : " """+premise+""" "\n Alors l'énoncé suivant : " """+hypothesis+""" " est "vrai", "faux", ou "incertain" ?""",
'"'+premise+'"\nQuestion : Cela implique-t-il que "'+hypothesis+'" ? "vrai", "faux", ou "incertain" ?',
'"'+premise+'"\nQuestion : "'+hypothesis+'" est "vrai", "faux", ou "peut-être" ?',
""" " """+premise+""" "\n D'après le passage précédent, est-il vrai que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nSur la base de ces informations, l'énoncé est-il : " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nEn gardant à l'esprit le texte ci-dessus, considérez : " """+hypothesis+""" "\n Est-ce que c'est "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nEn gardant à l'esprit le texte ci-dessus, considére : " """+hypothesis+""" "\n Est-ce que c'est "vrai", "faux", ou "peut-être" ?""",
""" " """+premise+""" "\nEn utilisant uniquement la description ci-dessus et ce que vous savez du monde, " """+hypothesis+""" " est-ce "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nEn utilisant uniquement la description ci-dessus et ce que tu sais du monde, " """+hypothesis+""" " est-ce "vrai", "faux", ou "incertain" ?""",
"""Étant donné que " """+premise+""" ", s'ensuit-il que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Étant donné que " """+premise+""" ", est-il garanti que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
'Étant donné '+premise+', doit-on supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Étant donné '+premise+', dois-je supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Sachant que '+premise+', doit-on supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Sachant que '+premise+', dois-je supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Étant donné que '+premise+', il doit donc être vrai que '+hypothesis+' ? "vrai", "faux", ou "incertain" ?',
"""Supposons que " """+premise+""" ", pouvons-nous déduire que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Supposons que " """+premise+""" ", puis-je déduire que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Supposons qu'il est vrai que " """+premise+""" ". Alors, est-ce que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Supposons qu'il soit vrai que " """+premise+""" ",\n Donc, " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?"""
### Features used in the prompts

In the prompt list above, `premise`, `hypothesis` and `targets` have been constructed from:
```python
from datasets import load_dataset

# Keep only the French part of XNLI
xnli = load_dataset("xnli", "fr")

# Clean spacing artefacts around punctuation in premises and hypotheses
xnli['train']['premise'] = list(map(lambda i: i.replace(' . ','. ').replace(' .','. ').replace('( ','(').replace(' )',')').replace(' , ',', ').replace(', ',', ').replace("' ","'"), map(str,xnli['train']['premise'])))
xnli['train']['hypothesis'] = list(map(lambda x: x.replace(' . ','. ').replace(' .','. ').replace('( ','(').replace(' )',')').replace(' , ',', ').replace(', ',', ').replace("' ","'"), map(str,xnli['train']['hypothesis'])))

# For each row index i, map the integer label to its French target
targets = str(xnli['train']['label'][i]).replace("0","vrai").replace("1","incertain").replace("2","faux")
```
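The string replacement on the label follows the standard XNLI convention (0 = entailment, 1 = neutral, 2 = contradiction). The same conversion can be sketched more explicitly with a dictionary, using hypothetical label values:

```python
# Same mapping as the string-replace above, written as an explicit dictionary
# (XNLI labels: 0 = entailment, 1 = neutral, 2 = contradiction).
label_to_target = {0: "vrai", 1: "incertain", 2: "faux"}

example_labels = [0, 2, 1]  # hypothetical label values
targets = [label_to_target[label] for label in example_labels]
print(targets)  # ['vrai', 'faux', 'incertain']
```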
## Splits

- `train` with 8,639,444 samples
- `valid` with 54,780 samples
- `test` with 110,220 samples
## How to use?

```python
from datasets import load_dataset

dataset = load_dataset("CATIE-AQ/xnli_fr_prompt_textual_entailment")
```
## Citation

### Original data

```bibtex
@InProceedings{conneau2018xnli,
  author    = {Conneau, Alexis and Rinott, Ruty and Lample, Guillaume and Williams, Adina and Bowman, Samuel R. and Schwenk, Holger and Stoyanov, Veselin},
  title     = {XNLI: Evaluating Cross-lingual Sentence Representations},
  booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
  year      = {2018},
  publisher = {Association for Computational Linguistics},
  location  = {Brussels, Belgium},
}
```
### This Dataset

```bibtex
@misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
  author    = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
  title     = { DFP (Revision 1d24c09) },
  year      = 2023,
  url       = { https://huggingface.co/datasets/CATIE-AQ/DFP },
  doi       = { 10.57967/hf/1200 },
  publisher = { Hugging Face }
}
```