Attention graph features extracted from LMs fine-tuned on linguistic acceptability corpora
Irina Proskurina (iproskurina)
AI & ML interests: LLMs (quantization, pre-training)
Models (38)
iproskurina/Mistral-7B-v0.3-gptq-3bit · Text Generation · Updated · 22 downloads
iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128 · Text Generation · Updated · 78 downloads
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128 · Text Generation · Updated · 140 downloads
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64 · Text Generation · Updated · 33 downloads
iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g64 · Text Generation · Updated · 30 downloads
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g128 · Text Generation · Updated · 30 downloads
iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g128 · Text Generation · Updated · 28 downloads
iproskurina/Mistral-7B-v0.1-GPTQ-4bit-g128 · Text Generation · Updated · 29 downloads
iproskurina/opt-13b-GPTQ-4bit-g128 · Text Generation · Updated · 199 downloads
iproskurina/opt-2.7b-GPTQ-4bit-g128 · Text Generation · Updated · 55 downloads