---
license: gpl-3.0
---

This model demonstrates that GPT-J can work perfectly well as an "instruct" model when properly fine-tuned.

We fine-tuned GPT-J on an instruction dataset created by the Stanford Alpaca team. You can find the original dataset here.

The dataset was slightly reworked to match the GPT-J fine-tuning format used by Mesh Transformer JAX on TPUs. Here is the final dataset we used.
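For illustration, here is a minimal sketch of what such a rework could look like: flattening the Alpaca JSON entries into plain-text documents separated by GPT-J's `<|endoftext|>` token. The file names and the exact prompt template below are assumptions for the example, not necessarily the format we actually used.

```python
# Hypothetical sketch: convert Alpaca-style JSON examples into a single
# plain-text file for Mesh Transformer JAX fine-tuning. The template is
# an assumption, not the exact format used for this model.
import json

with open("alpaca_data.json") as f:
    examples = json.load(f)

docs = []
for ex in examples:
    # Alpaca entries have "instruction", optional "input", and "output" keys.
    prompt = ex["instruction"]
    if ex.get("input"):
        prompt += "\n" + ex["input"]
    docs.append(prompt + "\n" + ex["output"])

# Separate documents with GPT-J's end-of-text token.
with open("instruct_gptj_dataset.txt", "w") as f:
    f.write("<|endoftext|>".join(docs))
```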

The base GPT-J model needs few-shot learning in order to properly understand what you want. See more details here about how to properly use few-shot learning. For example, let's say that you want to correct spelling with GPT-J. Here is an example of a prompt you would have to use:

```
I love goin to the beach.
Correction: I love going to the beach.
###
Let me hav it!
Correction: Let me have it!
###
It have too many drawbacks.
Correction: It has too many drawbacks.
###
I do not wan to go
Correction:
```
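A few-shot prompt like the one above can be run against the base checkpoint with the transformers library. This is a minimal sketch assuming the standard EleutherAI/gpt-j-6B checkpoint and a CUDA GPU; the generation parameters are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base GPT-J checkpoint in fp16 to fit on a single GPU.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
).to("cuda")

# The few-shot prompt: examples separated by "###", ending mid-pattern
# so the model completes the last correction.
prompt = """I love goin to the beach.
Correction: I love going to the beach.
###
Let me hav it!
Correction: Let me have it!
###
It have too many drawbacks.
Correction: It has too many drawbacks.
###
I do not wan to go
Correction:"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)

# Print only the newly generated tokens, i.e. the completed correction.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```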

Now, with Instruct GPT-J, here is what you can do:

```
Correct spelling and grammar from the following text.
I do not wan to go
```
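Such a prompt can be sent to the fine-tuned model with a standard transformers pipeline. This is a minimal sketch assuming the checkpoint is published under the nlpcloud/instruct-gpt-j-fp16 model id and that a GPU is available for the fp16 weights.

```python
import torch
from transformers import pipeline

# Load the instruct model in fp16 on GPU 0; the text-generation task
# is inferred automatically from the checkpoint.
generator = pipeline(
    model="nlpcloud/instruct-gpt-j-fp16",
    torch_dtype=torch.float16,
    device=0,
)

# A plain instruction followed by the text to fix, no few-shot examples needed.
prompt = "Correct spelling and grammar from the following text.\nI do not wan to go\n"
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```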