---
license: apache-2.0
datasets:
- wikitext
- ptb_text_only
language:
- en
metrics:
- perplexity
pipeline_tag: text-generation
model-index:
- name: distilgpt2
  results:
  - task:
      type: text-generation
    dataset:
      name: penn_treebank
      type: ptb_text_only
    metrics:
    - name: perplexity@BASELINE
      type: dmx-perplexity
      value: 63.45857238769531
    - name: perplexity@FALLBACK
      type: dmx-perplexity
      value: 64.36720275878906
  - task:
      type: text-generation
    dataset:
      name: wikitext2
      type: wikitext-2-raw-v1
    metrics:
    - name: perplexity@BASELINE
      type: dmx-perplexity
      value: 46.05925369262695
    - name: perplexity@FALLBACK
      type: dmx-perplexity
      value: 46.570838928222656
---

This is a d-Matrix functional reference of the GPT2 model family, covering the following *revisions*:
- [`distilgpt2`](https://huggingface.co/distilbert/distilgpt2)
- [`gpt2`](https://huggingface.co/openai-community/gpt2)
- [`gpt2-medium`](https://huggingface.co/openai-community/gpt2-medium)
- [`gpt2-large`](https://huggingface.co/openai-community/gpt2-large)
- [`gpt2-xl`](https://huggingface.co/openai-community/gpt2-xl)

The reference provides the following functional *configurations*:

Configuration | Explanation
:-- | :--
**`BASELINE`** | a reference functionally equivalent to the original model
**`BASIC`** | all linear algebraic operands quantized to `BFP16-64`, and all other operations transformed to approximated kernel simulations

### Usage

Install d-Matrix ML Tools first: `pip install dmx-mltools`.

```python
from mltools.dmx import pipeline

pipe = pipeline(
    task="text-generation",
    model="d-matrix/gpt2",
    revision="gpt2-xl",  # see above for other revisions
    dmx_config="BASELINE",  # see above for other configurations
    trust_remote_code=True,
    # device_map="auto",  # enable model parallelism on multi-GPU nodes
)

results = pipe.evaluate(metric="d-matrix/perplexity", dataset="wikitext-2")
```
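
To contrast the functional configurations, the same API can be looped over both of them. The sketch below is illustrative, not part of the official documentation; it assumes `pipe.evaluate` returns a printable metrics object, whose exact shape may vary across `dmx-mltools` versions:

```python
from mltools.dmx import pipeline

# Illustrative sketch: evaluate each functional configuration on the same
# dataset and print the resulting perplexity. The return type of
# pipe.evaluate is assumed printable; it may differ across mltools versions.
for config in ("BASELINE", "BASIC"):
    pipe = pipeline(
        task="text-generation",
        model="d-matrix/gpt2",
        revision="distilgpt2",  # smallest revision, quickest to evaluate
        dmx_config=config,
        trust_remote_code=True,
    )
    results = pipe.evaluate(metric="d-matrix/perplexity", dataset="wikitext-2")
    print(f"{config}: {results}")
```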

### Evaluation results
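
Perplexity on the evaluation datasets, as recorded in the `model-index` front matter above (lower is better; values rounded to two decimals):

Dataset | `BASELINE` | `FALLBACK`
:-- | :-- | :--
penn_treebank (`ptb_text_only`) | 63.46 | 64.37
wikitext2 (`wikitext-2-raw-v1`) | 46.06 | 46.57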