
Model Card for DeBERTa v3 Large Fine-Tuned for SQuAD v2 Answerability

Model Details

Model Description

I fine-tuned DeBERTa v3 large for answerability classification on SQuAD v2: given a question and a context passage, the model predicts whether the question can be answered from the passage.

Training was done in this Colab notebook: https://colab.research.google.com/drive/1xAA4D3VkbIXYeyIzn5-PE8Xa1miz9uwq#scrollTo=4G7kLtQiFF7Q

import transformers
from transformers import Trainer, TrainingArguments

tokenized_datasets.set_format('torch')

data_collator = transformers.DataCollatorWithPadding(tokenizer=tokenizer)

# Training arguments
training_args = TrainingArguments(
    run_name=NOTEBOOK_NAME,
    output_dir=NOTEBOOK_NAME,
    learning_rate=1e-5,  # 3e-5 seemed bad with deepset/roberta-base-squad2, but not sure...
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,  # simulate a larger batch: effective batch size = 16 * 2 = 32
    weight_decay=0.02,
    num_train_epochs=2,
    fp16=True,  # mixed precision training to speed up training and reduce memory usage

    evaluation_strategy="steps",
    eval_steps=500,  # previously 1000; halved since gradient_accumulation_steps=2 doubles the data per step
    save_strategy="steps",
    save_steps=500,
    save_total_limit=1,  # keep only the latest checkpoint (the best one is retained as well)

    load_best_model_at_end=True,
    metric_for_best_model="f1",  # F1 balances the precision-recall tradeoff
    greater_is_better=True,

    report_to=['wandb'],
    logging_steps=500,

    push_to_hub=True,

    warmup_steps=500,  # learning rate warm-up
    lr_scheduler_type="linear",  # linear decay after warm-up
    # max_grad_norm=1.0,  # gradient clipping (1.0 is already the default)
)

# Initialize Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets['train'],
    eval_dataset=tokenized_datasets['validation'],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    data_collator=data_collator,  # dynamic padding: pad each batch only to its longest sequence
    # callbacks=[transformers.EarlyStoppingCallback(early_stopping_patience=3)],
)

trainer.train()
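
The compute_metrics function referenced above is not shown in the snippet. A minimal sketch consistent with the reported numbers, assuming binary answerability labels and scikit-learn (the actual implementation in the notebook may differ):

import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) tuple supplied by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    return {"precision": precision, "recall": recall, "f1": f1}

The "f1" key matches metric_for_best_model="f1" above (the Trainer looks it up as "eval_f1").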

Training used around 36–38 GB of VRAM. 5538 of 8144 steps took 1:47:48, which extrapolates to roughly 2 h 40 min for the full run. Trained on a single A100; total cost was about 5 €.

wandb: https://wandb.ai/stadeltom-com/huggingface/runs/thxte3cl

Achieves 93% F1, 92% precision, and 94% recall on the SQuAD v2 validation split.

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

  • Developed by: [More Information Needed]
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Model type: Transformer encoder (DeBERTa v3 large) with a sequence-classification head for binary answerability
  • Language(s) (NLP): English
  • License: [More Information Needed]
  • Finetuned from model [optional]: microsoft/deberta-v3-large

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

[More Information Needed]

Downstream Use [optional]

[More Information Needed]

Out-of-Scope Use

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

How to Get Started with the Model

Use the code below to get started with the model.

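A minimal inference sketch, assuming the checkpoint exposes a standard sequence-classification head. The repo id is a placeholder for this model's actual Hub id, and the label order shown is an assumption:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "REPLACE_WITH_THIS_MODELS_HUB_ID"  # placeholder, not the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."

# Encode the question/context pair the same way as during fine-tuning.
inputs = tokenizer(question, context, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)
print(probs)  # assumed label order: [not answerable, answerable]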

Training Details

Training Data

SQuAD v2, with each question/context pair labeled as answerable or unanswerable (unanswerable questions have no answer in SQuAD v2).

Training Procedure

Preprocessing [optional]

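The exact preprocessing is not shown in the training snippet. A plausible sketch, assuming an example is labeled answerable (1) when it has at least one answer span in SQuAD v2:

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
raw = load_dataset("squad_v2")

def preprocess(batch):
    # Encode question/context pairs; truncate to the model's maximum length.
    enc = tokenizer(batch["question"], batch["context"],
                    truncation=True, max_length=512)
    # SQuAD v2 marks unanswerable questions with an empty answer list.
    enc["labels"] = [int(len(a["text"]) > 0) for a in batch["answers"]]
    return enc

tokenized_datasets = raw.map(
    preprocess, batched=True, remove_columns=raw["train"].column_names
)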

Training Hyperparameters

  • Training regime: fp16 mixed precision; learning rate 1e-5 with a linear schedule and 500 warm-up steps; per-device batch size 16 with gradient accumulation 2 (effective batch size 32); weight decay 0.02; 2 epochs.

Speeds, Sizes, Times [optional]

Peak VRAM was around 36–38 GB on a single A100; 5538 of 8144 steps took 1:47:48 (roughly 2 h 40 min extrapolated for the full run); total cost was about 5 €. The exported model has 435M parameters (F32 safetensors).

Evaluation

Testing Data, Factors & Metrics

Testing Data

The SQuAD v2 validation split (used as eval_dataset in the training snippet above).

Factors

[More Information Needed]

Metrics

Precision, recall, and F1 on the binary answerability labels; F1 was used for checkpoint selection (metric_for_best_model).

Results

93% F1, 92% precision, and 94% recall on the SQuAD v2 validation split.

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: 1× NVIDIA A100
  • Hours used: ~2.7 (extrapolated from 5538/8144 steps in 1:47:48)
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]
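
For illustration only (every constant here is an assumption, not a measurement): a single A100 drawing roughly 300 W for the extrapolated ~2.7 h run, at a global-average grid intensity of ~0.4 kg CO2e/kWh, would come to well under 1 kg CO2e:

# Back-of-envelope estimate; all three constants are assumptions.
gpu_power_kw = 0.3     # assumed average A100 draw
hours = 2.7            # extrapolated full-run duration
kg_co2e_per_kwh = 0.4  # assumed global-average grid intensity
energy_kwh = gpu_power_kw * hours
print(f"~{energy_kwh:.1f} kWh, ~{energy_kwh * kg_co2e_per_kwh:.2f} kg CO2e")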

Technical Specifications [optional]

Model Architecture and Objective

DeBERTa v3 large with a sequence-classification head, trained with a binary classification objective (answerable vs. unanswerable) on question/context pairs.

Compute Infrastructure

[More Information Needed]

Hardware

A single NVIDIA A100 GPU (around 36–38 GB of VRAM used).

Software

🤗 Transformers (Trainer API) with fp16 mixed precision; Weights & Biases for experiment tracking.

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]
