
MixTAO-7Bx2-MoE

MixTAO-7Bx2-MoE is a Mixture-of-Experts (MoE) model built from two 7B experts. It is intended primarily for experiments with large-model techniques, with successive iterations expected to refine it into a high-quality large language model.
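A minimal usage sketch with the Hugging Face transformers library; the repo id below is an assumption and may differ from the actual Hub path.

```python
# Minimal sketch: load the model with transformers and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zhengr/MixTAO-7Bx2-MoE"  # assumption: the actual Hub path may differ
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native dtype
    device_map="auto",    # requires `accelerate`; places layers across devices
)

prompt = "Explain mixture-of-experts language models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```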

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric                            | Value |
|-----------------------------------|------:|
| Average                           | 77.50 |
| AI2 Reasoning Challenge (25-shot) | 73.81 |
| HellaSwag (10-shot)               | 89.22 |
| MMLU (5-shot)                     | 64.92 |
| TruthfulQA (0-shot)               | 78.57 |
| Winogrande (5-shot)               | 87.37 |
| GSM8K (5-shot)                    | 71.11 |
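As a quick sanity check, the average row is the arithmetic mean of the six benchmark scores:

```python
# Sanity check: the leaderboard average is the mean of the six benchmark scores.
scores = [73.81, 89.22, 64.92, 78.57, 87.37, 71.11]
print(round(sum(scores) / len(scores), 2))  # 77.5
```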
GGUF

- Model size: 12.9B params
- Architecture: llama
- Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
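One way to run a GGUF quantization locally is with llama-cpp-python; a minimal sketch follows, where the filename is hypothetical and should be replaced with the quantization you downloaded.

```python
# Minimal sketch using llama-cpp-python; the GGUF filename is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtao-7bx2-moe.Q4_K_M.gguf",  # assumption: use your downloaded file
    n_ctx=2048,  # context window size
)
out = llm("Q: What is a mixture of experts? A:", max_tokens=64)
print(out["choices"][0]["text"])
```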

