---
datasets:
  - bigscience/P3
language: en
license: apache-2.0
widget:
  - text: >-
      A is the son of B's uncle. What is the family relationship between A and
      B?
  - text: >-
      Reorder the words in this sentence: justin and name bieber years is my am
      I 27 old.
  - text: |-
      Task: copy but say the opposite.
       PSG won its match against Barca.
  - text: >-
      Is this review positive or negative? Review: Best cast iron skillet you
      will ever buy.
    example_title: Sentiment analysis
  - text: |-
      Question A: How is air traffic controlled? 
      Question B: How do you become an air traffic controller?
      Pick one: these questions are duplicates or not duplicates.
  - text: >-
      Barack Obama nominated Hillary Clinton as his secretary of state on Monday.
      He chose her because she had foreign affairs experience as a former First
      Lady. 

      In the previous sentence, decide who 'her' is referring to.
    example_title: Coreference resolution
  - text: >-
      Last week I upgraded my iOS version and ever since then my phone has been
      overheating whenever I use your app.
       Select the category for the above sentence from: mobile, website, billing, account access.
  - text: >-
      Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the
      coach was carrying 38 passengers.
       Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.

       Do sentences 1 and 2 have the same meaning?
    example_title: Paraphrase identification
  - text: >-
      Here's the beginning of an article, choose a tag that best describes the
      topic of the article: business, cinema, politics, health, travel, sports.

       The best and worst of 007 as 'No time to die' marks Daniel Craig's exit.
       (CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best.
  - text: |-
      Max: Know any good websites to buy clothes from?
       Payton: Sure :) LINK 1, LINK 2, LINK 3
       Max: That's a lot of them!
       Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.
       Max: I'll check them out. Thanks.

       Who or what are Payton and Max referring to when they say 'them'?
  - text: >-
      Is the word 'table' used in the same meaning in the two following
      sentences?

       Sentence A: you can leave the books on the table over there.
       Sentence B: the tables in this book are very hard to read.
  - text: >-
      On a shelf, there are five books: a gray book, a red book, a purple book,
      a blue book, and a black book.
       The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.

       Which book is the leftmost book?
    example_title: Logic puzzles
  - text: >-
      The two men running to become New York City's next mayor will face off in
      their first debate Wednesday night.

       Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patrol.

       Who are the men running for mayor?
    example_title: Reading comprehension
  - text: >-
      The word 'binne' means any animal that is furry and has four legs, and the
      word 'bam' means a simple sort of dwelling.

       Which of the following best characterizes binne bams?
       - Sentence 1: Binne bams are for pets.
       - Sentence 2: Binne bams are typically furnished with sofas and televisions.
       - Sentence 3: Binne bams are luxurious apartments.
       - Sentence 4: Binne bams are places where people live.

---

**Official repository**: [seonghyeonye/Flipped-Learning](https://github.com/seonghyeonye/Flipped-Learning)

# Model Description

DIRECT is a strong baseline for FLIPPED, trained with the same objective as T0-3B. With only 5% of the token updates and half of the training datasets used for T0-3B, DIRECT outperforms T0-3B (+6.38% mean accuracy on 14 NLP tasks, +1.19% mean accuracy on 14 BIG-bench tasks).

# How to use

An overall explanation of our models, along with ablations, can be found in our [paper](https://arxiv.org/abs/2210.02969). We recommend using the FLIPPED-11B checkpoint, as it leads (on average) to the best performance on a variety of NLP tasks.

|Model|Number of parameters|
|-|-|
|Flipped_11B|11 billion|
|Flipped_3B|3 billion|

Here is how to download the model in PyTorch:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Download the DIRECT-3B checkpoint and its tokenizer from the Hugging Face Hub.
model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/direct_3B")
tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/direct_3B")
```

If you want to use another checkpoint, replace the path in `T5Tokenizer` and `T5ForConditionalGeneration`. We also provide a quick Jupyter notebook in the official repository where you can run inference with our method. **Note: the model was trained with fp32 activations, so we highly discourage running inference in fp16.**
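
A minimal inference sketch follows; the prompt and generation settings here are illustrative, not prescribed by the paper:

```python
# Illustrative zero-shot inference with the checkpoint loaded above.
inputs = tokenizer(
    "Is this review positive or negative? Review: Best cast iron skillet you will ever buy.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```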

# Training procedure

The DIRECT model is based on T5+LM, a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on C4 and then additionally pre-trained with a standard language modeling objective. Training details:

- Fine-tuning steps: 5'000
- Input sequence length: 512
- Target sequence length: 128
- Batch size: 240
- Optimizer: Adafactor
- Learning rate: 1e-4
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (any dataset with more than 500'000 examples was randomly subsampled to at most 500'000 examples; we also randomly choose which instruction to generate at each training step, so each instruction ideally appears num_examples/num_templates times during training; see the sketch after this list)
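
As an illustrative sketch of this sampling scheme (the dataset names and sizes below are hypothetical, not the actual P3 statistics):

```python
# Hypothetical sketch: proportional sampling weights with a 500'000-example cap.
sizes = {"dataset_a": 25_000, "dataset_b": 360_000, "dataset_c": 900_000}

CAP = 500_000
capped = {name: min(n, CAP) for name, n in sizes.items()}
total = sum(capped.values())
weights = {name: n / total for name, n in capped.items()}

print(weights)  # probability of drawing each dataset at a training step
```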

# Training data

We trained different model variants with different mixtures of datasets.

|Model|Training datasets|
|--|--|
|FLIPPED_11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|FLIPPED_3B|Same as FLIPPED_11B|
|DIRECT_3B|Same as FLIPPED_11B|

We only chose prompted examples that have output labels, which can be found on each dataset page.

# Evaluation data

We evaluate our models on the following datasets:

|Task category|Datasets|
|-|-|
|Natural language inference|ANLI (R1, R2, R3), CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
|QA|PIQA, ARC-Challenge, OpenbookQA|

We also evaluate FLIPPED on a subset of the BIG-bench benchmark:
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice

# Label generalization

We evaluate the robustness of our models on the following datasets by changing their output labels. The substitute words can be found in our [paper](https://arxiv.org/abs/2210.02969).

|Task category|(Datasets, Template name)|
|-|-|
|Unseen tasks|(WSC, does the pronoun refer to), (CB, can we infer), (RTE, MNLI crowdsource)|
|Seen tasks|(IMDB, Reviewer Enjoyment Yes No), (PAWS, Meaning)|

The template names we used can be found in the [promptsource](https://github.com/bigscience-workshop/promptsource) template library.
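
As a hypothetical illustration of this setup (the substitute labels below are invented; the actual substitute words are listed in the paper):

```python
# Hypothetical label substitution for a binary template.
# The real substitute words used in the paper differ from these.
substitutions = {"yes": "agree", "no": "disagree"}

def relabel(answer: str) -> str:
    """Map a template's original verbalizer to its substitute word."""
    return substitutions[answer]

print(relabel("yes"))  # -> agree
```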

# BibTeX entry and citation info

```bibtex
@article{ye2022guess,
  title={Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners},
  author={Ye, Seonghyeon and Kim, Doyoung and Jang, Joel and Shin, Joongbo and Seo, Minjoon},
  journal={arXiv preprint arXiv:2210.02969},
  year={2022}
}
```