---
license: gpl-3.0
---
This model demonstrates that GPT-J can work perfectly well as an "instruct" model when properly fine-tuned.
We fine-tuned GPT-J on an instruction dataset created by the [Stanford Alpaca team](https://github.com/tatsu-lab/stanford_alpaca). You can find the original dataset [here](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json).
The dataset was slightly reworked to match the GPT-J fine-tuning format used by [Mesh Transformer Jax](https://github.com/kingoflolz/mesh-transformer-jax) on TPUs. [Here is the final dataset we used](https://huggingface.co/datasets/nlpcloud/instructions-dataset-adapted-from-stanford-alpaca-for-gpt-j).
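To give an idea of what such a rework involves, here is a hypothetical sketch that flattens the Alpaca JSON records (which use `instruction`, `input`, and `output` fields) into plain-text training examples. The exact delimiters and file layout we used for the Mesh Transformer Jax run are assumptions here, not the verbatim pipeline:
```python
import json

with open("alpaca_data.json") as f:
    records = json.load(f)

examples = []
for rec in records:
    # Each Alpaca record has "instruction", "input" (possibly empty), and "output".
    prompt = rec["instruction"]
    if rec["input"]:
        prompt += "\n" + rec["input"]
    # Join prompt and completion into one document, terminated by GPT-J's
    # end-of-text token so consecutive documents stay separated.
    examples.append(prompt + "\n" + rec["output"] + "<|endoftext|>")

with open("gptj_finetune.txt", "w") as f:
    f.write("\n".join(examples))
```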
The base GPT-J model needs few-shot learning in order to properly understand what you want. [See more details here about how to properly use few-shot learning](https://nlpcloud.com/effectively-using-gpt-j-gpt-neo-gpt-3-alternatives-few-shot-learning.html). For example, let's say you want to correct spelling with GPT-J. Here is the kind of prompt you would have to use:
```text
I love goin to the beach.
Correction: I love going to the beach.
###
Let me hav it!
Correction: Let me have it!
###
It have too many drawbacks.
Correction: It has too many drawbacks.
###
I do not wan to go
Correction:
```
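For reference, here is a minimal sketch of how such a few-shot prompt can be sent to the base model with the `transformers` library (the generation parameters are illustrative, not tuned values):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

prompt = """I love goin to the beach.
Correction: I love going to the beach.
###
Let me hav it!
Correction: Let me have it!
###
It have too many drawbacks.
Correction: It has too many drawbacks.
###
I do not wan to go
Correction:"""

inputs = tokenizer(prompt, return_tensors="pt")
# Generate a short continuation; in practice you would also cut the
# output at the next "###" separator.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```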
Now, with Instruct GPT-J, here is what you can do:
```text
Correct spelling and grammar from the following text.
I do not wan to go
```
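As a quick sketch, the instruct model can be called through `transformers` in the usual causal-LM way (the model id below is a placeholder for this repository's Hub path):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nlpcloud/instruct-gpt-j"  # placeholder: replace with this repo's Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Correct spelling and grammar from the following text.\nI do not wan to go"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```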