---
license: apache-2.0
datasets:
- abacusai/MetaMathFewshot
- shahules786/orca-chat
- anon8231489123/ShareGPT_Vicuna_unfiltered
---

Trained from base Mistral on the [MetaMathFewshot](https://huggingface.co/datasets/abacusai/MetaMathFewshot) dataset, as well as the [Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) and [OrcaChat](https://huggingface.co/datasets/shahules786/orca-chat) datasets.

Instruction tuned with the following parameters (a configuration sketch follows the list):

- LoRA, rank 8, alpha 16, dropout 0.05, applied to all modules (QKV and MLP)
- 3 epochs
- Micro batch size 32 over 4x H100, gradient accumulation steps = 1
- AdamW with learning rate 5e-5
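
The hyperparameters above roughly correspond to the following Hugging Face PEFT configuration. This is a minimal sketch, not the training script actually used: the base checkpoint name, the exact Mistral module names, the output path, and the precision setting are assumptions; only the hyperparameter values come from the list above.

```python
# Minimal sketch of the LoRA setup described above, using Hugging Face PEFT.
# Only the hyperparameter values (rank, alpha, dropout, epochs, batch size,
# optimizer, learning rate) come from the model card; everything else is assumed.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Assumed base checkpoint; the card only says "base Mistral".
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_config = LoraConfig(
    r=8,                 # LoRA rank 8
    lora_alpha=16,       # alpha 16
    lora_dropout=0.05,   # dropout 0.05
    # "all modules (QKV and MLP)": attention and feed-forward projections;
    # the exact Mistral module list here is an assumption.
    target_modules=["q_proj", "k_proj", "v_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="mistral-metamath-sft",  # hypothetical output path
    num_train_epochs=3,                 # 3 epochs
    per_device_train_batch_size=32,     # micro batch size 32 per GPU (4x H100)
    gradient_accumulation_steps=1,
    learning_rate=5e-5,                 # AdamW with lr 5e-5
    optim="adamw_torch",
    bf16=True,                          # assumed precision on H100
)
# Dataset formatting and Trainer wiring are omitted; the instruction-tuning
# mixture is the three datasets linked above.
```

Note that with a micro batch size of 32 per device on 4 GPUs and no gradient accumulation, the effective global batch size works out to 128.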