# MixTAO-7Bx2-MoE
MixTAO-7Bx2-MoE is a Mixture of Experts (MoE) model. It is mainly intended for experiments with large-model techniques; successive iterations are meant to refine it into a high-quality large language model.
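A minimal inference sketch using the `transformers` library is shown below. The repository id in the snippet is a placeholder assumption, not confirmed by this card; replace it with the actual Hugging Face model path.

```python
# Minimal usage sketch with Hugging Face transformers (model id is an assumption).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MixTAO-7Bx2-MoE"  # placeholder: replace with the actual repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # place layers on available GPUs/CPU automatically
)

prompt = "Explain what a Mixture of Experts model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```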
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 77.50 |
| AI2 Reasoning Challenge (25-Shot) | 73.81 |
| HellaSwag (10-Shot) | 89.22 |
| MMLU (5-Shot) | 64.92 |
| TruthfulQA (0-shot) | 78.57 |
| Winogrande (5-shot) | 87.37 |
| GSM8k (5-shot) | 71.11 |
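The sketch below shows how one might approximately reproduce a single benchmark number locally with EleutherAI's lm-evaluation-harness (the `lm_eval` package). It only covers ARC-Challenge at 25-shot as an example; the leaderboard's exact harness version and settings may differ, and the model id is again a placeholder assumption.

```python
# Sketch: re-run one leaderboard task locally with lm-evaluation-harness (lm_eval >= 0.4).
# Settings are an approximation of the leaderboard setup, not an exact reproduction.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MixTAO-7Bx2-MoE,dtype=float16",  # placeholder repo id
    tasks=["arc_challenge"],  # ARC-Challenge; leaderboard reports normalized accuracy
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])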