This repository (beccabai/1.3B-multi-agent-collab-checkpoints) contains the trained 1.3-billion-parameter LLaMA-2-architecture model checkpoints for the work "Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining".

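The page does not specify a loading recipe, so the snippet below is only a minimal sketch, assuming the checkpoints are stored in the standard Hugging Face `transformers` LLaMA format under the repo id shown above; the tokenizer location and generation settings are assumptions, not confirmed by this card.

```python
# Minimal loading sketch (assumption: checkpoints are in standard
# Hugging Face transformers format; adjust revision/subfolder as needed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beccabai/1.3B-multi-agent-collab-checkpoints"

# Tokenizer placement is an assumption; it may need to be loaded from a
# separate LLaMA-2 tokenizer repo if it is not bundled with the checkpoints.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Quick smoke test: generate a short continuation.
inputs = tokenizer("Multi-agent collaborative data selection", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```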