---
datasets:
  - jinaai/code_exercises
language:
  - en
tags:
  - HumanEval
  - StarCoder
---

# StarCoder-1b-textbook

StarCoder-1b-textbook is a fine-tuned version of starcoderbase-1b on the jinaai/code_exercises dataset.

It achieves 27.0 pass@1 on the HumanEval coding benchmark with only 1B parameters. That is an improvement of almost 12 points over the StarCoderBase-1b baseline, nearly doubling its score.
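For reference, pass@k scores on HumanEval are conventionally computed with the unbiased estimator introduced alongside the benchmark; a minimal sketch (function name and variable names are ours, not from this model card):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator used with HumanEval.

    n: total samples generated per problem
    c: number of samples that pass the unit tests
    k: evaluation budget
    Returns the estimated probability that at least one of k samples passes.
    """
    if n - c < k:
        # Fewer failing samples than the budget: some correct sample is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With one sample per problem, pass@1 reduces to the fraction of problems solved:
print(pass_at_k(10, 3, 1))  # 0.3
```

A benchmark score like 27.0 pass@1 is then the mean of this estimate over all 164 HumanEval problems.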

The results on the HumanEval benchmark are on par with those of much larger open-source models such as StarCoderBase (30.4), StarCoder (33.6), and CodeGen-16B-Mono (29.3), despite the model being 15 times smaller.

It still underperforms models like CodeLlama (53.0), GPT-4 (82.0), and WizardCoder (73.2), but those models are more than 30 times bigger.

## Disclaimer

- The HumanEval benchmark is not a perfect benchmark and does not fully capture the coding abilities of an LLM. This model performs well on the tasks covered by the benchmark, but that does not necessarily mean it is on par with larger models as a coding assistant.
- This model is not instruction-tuned and cannot be used as a chatbot. We recommend fine-tuning it on the Evol-Instruct-Code-80k-v1 dataset to obtain an instruction-following model.
- This model has not been aligned with human preferences and could therefore generate harmful content.
- This model has been trained on a dataset generated by ChatGPT 3.5. You should check the legal status of AI-generated content in your jurisdiction before using it, and make sure that your usage complies with the OpenAI Terms of Use, insofar as legally applicable.