This is the pretrained model described in Appendix C.1 of the paper "Layer-Condensed KV Cache for Efficient Inference of Large Language Models". To access this model, please email me (wuhy1_AT_shanghaitech_DOT_edu_DOT_cn) with your username and what you intend to use the model weights for. Until we make the model fully public, please do not redistribute it on the Internet.

To use the model, please manually clone the repository and load the model from the local folder (see the sketch below). Unfortunately, AutoModel.from_pretrained will not work directly from the Hub, as I have not uploaded the custom code yet.
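
As a rough illustration, the snippet below sketches one way to load the checkpoint from a local clone. It assumes the custom modeling code from the LCKV GitHub repo is importable; the class name `LCKVLlamaForCausalLM`, the import path, and the local directory are assumptions for illustration, not the official API.

```python
# A minimal sketch: load the checkpoint from a local clone of the model repo.
# Assumes the custom modeling code from https://github.com/whyNLP/LCKV is on
# the Python path; the class name below is hypothetical and may differ.
#
# Shell steps (run once):
#   git lfs install
#   git clone https://huggingface.co/whynlp/tinyllama-lckv-w2-2.5T-ft-100b

from transformers import AutoTokenizer
from models import LCKVLlamaForCausalLM  # hypothetical import from the LCKV repo

local_dir = "./tinyllama-lckv-w2-2.5T-ft-100b"
model = LCKVLlamaForCausalLM.from_pretrained(local_dir)
tokenizer = AutoTokenizer.from_pretrained(local_dir)

inputs = tokenizer("Layer-Condensed KV Cache is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```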

See more details at our GitHub repo: https://github.com/whyNLP/LCKV

Model size: 1.08B params · Tensor type: F32 (Safetensors)
