
An attempt to reproduce the classifier from the Mixture-of-LoRAs paper:

Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language Models

https://arxiv.org/pdf/2403.03432

Datasets

We evenly sample about 10k training examples and 2k validation examples from each dataset.
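A minimal sketch of the per-dataset sampling described above, assuming each source dataset is available as an in-memory list (the source names and the plain-string records here are synthetic stand-ins, not the actual corpora):

```python
import random


def split_dataset(examples, n_train, n_val, seed=42):
    """Shuffle one source dataset and take fixed-size train/validation splits."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    if len(shuffled) < n_train + n_val:
        raise ValueError("dataset too small for the requested splits")
    # Disjoint slices of the same shuffle: no example appears in both splits.
    return shuffled[:n_train], shuffled[n_train : n_train + n_val]


# Synthetic stand-ins for the real source datasets (assumption for illustration).
sources = {
    name: [f"{name}-{i}" for i in range(30_000)]
    for name in ("source_a", "source_b", "source_c")
}
# One balanced split per source, as in the card: ~10k train / 2k validation each.
splits = {name: split_dataset(rows, 10_000, 2_000) for name, rows in sources.items()}
```

Sampling the same fixed count from every source keeps the classifier's training data balanced across domains.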

From laion/OIG, only the following files were used:

  • unified_merged_code_xp3.jsonl
  • unified_grade_school_math_instructions.jsonl
  • unified_mathqa_flanv2_kojma_cot.jsonl
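The three OIG files above are plain JSON-lines; a hedged sketch of loading them and tagging each record with its source file (the `source` key is an illustrative choice, and the record schema beyond valid JSON objects is not assumed):

```python
import json
from pathlib import Path

# The subset of laion/OIG files used, per the list above.
OIG_FILES = [
    "unified_merged_code_xp3.jsonl",
    "unified_grade_school_math_instructions.jsonl",
    "unified_mathqa_flanv2_kojma_cot.jsonl",
]


def load_oig_subset(root):
    """Read the selected OIG .jsonl files under `root`, one JSON object per line."""
    rows = []
    for fname in OIG_FILES:
        path = Path(root) / fname
        if not path.exists():
            continue  # tolerate partially downloaded subsets
        with path.open(encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:
                    rec = json.loads(line)
                    rec["source"] = fname  # keep provenance for labeling
                    rows.append(rec)
    return rows
```

Keeping the originating filename on each record makes it easy to derive per-source labels for the classifier later.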

Datasets used to train evilfreelancer/moa-classification