---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
pipeline_tag: text-generation
---

# Wukong-0.1-Mistral-7B-v0.2

Join Our Discord! https://discord.gg/cognitivecomputations

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/xOe1Nb3S9Nb53us7_Ja3s.jpeg)

Wukong-0.1-Mistral-7B-v0.2 is a dealigned chat finetune of the original, fantastic Mistral-7B-v0.2 model by the Mistral team.

This model was trained on teknium's OpenHermes-2.5 dataset, code datasets from Multimodal Art Projection (https://m-a-p.ai), and the Dolphin dataset from Cognitive Computations (https://erichartford.com/dolphin 🐬).

This model was trained for 3 epochs on four RTX 4090s.

# Example Outputs

TBD

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)