
SkynetZero LLM - Trained with AutoTrain and Converted to GGUF Format

**Note: this model is not working - can you fix it?** https://huggingface.co/shafire/talktoaiQT

A newer, tested, working GGUF model similar to this one is available here: **https://huggingface.co/shafire/talktoaiQ**

SkynetZero

SkynetZero is a quantum-inspired language model trained on reflection datasets and custom TalkToAI datasets. The model went through several iterations, including dataset rewriting and validation phases, after errors were encountered during testing and conversion into a fully functional LLM. This process helped ensure that SkynetZero can handle complex, multi-dimensional reasoning tasks with an emphasis on ethical decision-making.

Key Highlights of SkynetZero:

  • Advanced Quantum Reasoning: The integration of quantum-inspired math systems enabled SkynetZero to tackle complex ethical dilemmas and multi-dimensional problem-solving tasks.
  • Custom Rewritten Datasets: The training involved multiple rounds of AI-assisted dataset curation, in which reflection datasets were rewritten for clarity, accuracy, and consistency. Additionally, TalkToAI datasets were integrated and reprocessed to align with SkynetZero’s quantum reasoning framework.
  • Iterative Improvement: During testing and model conversion, the datasets were rewritten and validated several times to address errors. Each iteration improved the model’s ethical consistency and problem-solving accuracy.

SkynetZero is now available in GGUF format, following 8 hours of training on a large GPU server using the Hugging Face AutoTrain platform.
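If you are running the GGUF build locally, a minimal loading sketch with llama-cpp-python is shown below. The filename, context size, and token limit are illustrative assumptions, not values taken from this card.

```python
# Minimal sketch: running a GGUF build of SkynetZero with llama-cpp-python.
# The filename and settings below are assumptions, not values from this card.
from llama_cpp import Llama

llm = Llama(
    model_path="SkynetZero-Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=4096,  # context window; adjust for your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "hi"}],
    max_tokens=128,  # cap the reply length
)
print(out["choices"][0]["message"]["content"])
```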

Made in Nottingham, England, by Shafaet Brady Hussain (shafaet.com).

Usage

SkynetZero leverages open-source ideas and mathematical innovations; further details can be found at talktoai.org and researchforum.online. The model is licensed under Meta's official LLaMA 3.1 license.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

# Load the tokenizer and model; device_map="auto" places the weights
# on whatever accelerator is available.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto"
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

# Apply the chat template and generate a response.
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
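The example above uses the generation defaults. This card does not specify recommended sampling settings; the variant below shows where explicit parameters would go, with illustrative values only.

```python
# Illustrative sampling settings; this card does not document recommended defaults.
output_ids = model.generate(
    input_ids.to(model.device),
    max_new_tokens=256,  # cap the response length
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # assumed value; tune to taste
    top_p=0.9,           # assumed nucleus-sampling cutoff
)
```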

Training Methodology

SkynetZero was fine-tuned from the LLaMA 3.1 8B architecture using custom datasets that underwent AI-assisted rewriting. Training focused on improving the model's handling of multi-variable quantum reasoning while keeping its decisions aligned with ethical guidelines. After errors were identified during testing and model conversion, the datasets were adjusted and the model was iteratively improved across multiple epochs.
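The card does not publish the dataset schema. Purely as an illustration, a reflection-style training record might look like the sketch below; every field name and value here is hypothetical.

```python
# Hypothetical reflection-style record; the schema is illustrative only,
# not the actual SkynetZero dataset format.
import json

record = {
    "instruction": "A delivery drone must choose between two risky landing spots. "
                   "How should it decide?",
    "reflection": "Weigh the expected harm of each option, note the uncertainty in "
                  "the estimates, and state which ethical principle guides the choice.",
    "response": "Land at the spot with the lower expected harm, and flag the "
                "decision for human review.",
}
print(json.dumps(record, indent=2))
```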

Further Research and Contributions

SkynetZero is part of an ongoing effort to explore AI-human co-creation in the development of quantum-enhanced AI models. The co-creation process with OpenAI’s Agent Zero provided valuable assistance in curating, editing, and validating datasets, pushing the boundaries of what large language models can achieve.
