
zephyr-7B-alpha-GPTQ

Model description

Large language models (LLMs) have achieved groundbreaking success in natural language processing (NLP). However, because they are trained for broad, general-purpose use, they may not perform optimally on a specific task, so fine-tuning them for that task is common practice. This card describes the process of fine-tuning and adapting Zephyr-7B-alpha-GPTQ, a GPTQ-quantized large language model, for a particular downstream task.
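
The card does not include the adaptation code itself, but a PEFT-style setup for a GPTQ checkpoint typically looks like the sketch below. The base checkpoint id, LoRA rank, and target modules are illustrative assumptions, not values taken from this repository.

```python
# Minimal sketch: load a GPTQ-quantized Zephyr checkpoint and attach a LoRA
# adapter so it can be fine-tuned for a specific task. Assumes optimum and a
# GPTQ backend (e.g. auto-gptq) are installed alongside transformers and peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model_id = "TheBloke/zephyr-7B-alpha-GPTQ"  # assumed base; substitute the checkpoint you are adapting

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",           # let accelerate place the quantized weights
    torch_dtype=torch.float16,
)

# Make the quantized model trainable through a small LoRA adapter.
# (Depending on the GPTQ backend, exllama kernels may need to be disabled for training.)
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                        # illustrative rank, not the card's exact value
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical Mistral attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```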

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for one way to express them in code):

  • learning_rate: 0.0002
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • training_steps: 250
  • mixed_precision_training: Native AMP
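
As a rough illustration, these values map onto transformers.TrainingArguments as in the sketch below; the output directory is a hypothetical placeholder, and the Adam betas/epsilon simply restate the optimizer settings listed above.

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-7b-alpha-gptq-finetune",  # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    max_steps=250,
    fp16=True,                                   # native AMP mixed precision
)
```

These arguments would then be passed to a transformers.Trainer (or trl's SFTTrainer) together with the PEFT-wrapped model and a task-specific dataset.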

Framework versions

  • PEFT 0.10.0
  • Transformers 4.38.2
  • PyTorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
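
A quick way to check a local environment against these versions is a snippet like the one below; the pinned pip line in the comment is only an illustrative way to reproduce them, not an official requirements file.

```python
# Print the installed versions of the libraries listed above.
# Illustrative pin: pip install peft==0.10.0 transformers==4.38.2 torch==2.2.1 datasets==2.18.0 tokenizers==0.15.2
import peft, transformers, torch, datasets, tokenizers

for name, module in [("PEFT", peft), ("Transformers", transformers),
                     ("PyTorch", torch), ("Datasets", datasets),
                     ("Tokenizers", tokenizers)]:
    print(f"{name}: {module.__version__}")
```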

Author

  • Anezatra Katedram
