This model is a fine-tuned LLaMA (7B) model. It is released under a non-commercial license (see the LICENSE file). You should only use this model after having been granted access to the base LLaMA model by filling out this form.

This model is a semantic parser for Wikidata. Refer to the following for more information:

GitHub repository: https://github.com/stanford-oval/wikidata-emnlp23

Paper: https://aclanthology.org/2023.emnlp-main.353/


This model is trained on the WikiWebQuestions dataset, the QALD-7 dataset, and the Stanford Alpaca dataset.
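
As a rough sketch, the model can be loaded with Hugging Face Transformers like any LLaMA-based causal language model and prompted to produce a SPARQL query for a natural-language question. The repository ID and prompt template below are placeholders, not the exact ones used during fine-tuning; consult the GitHub repository above for the precise prompt format.

```python
# Minimal sketch, assuming a Transformers-compatible checkpoint.
# The model ID and prompt wording are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stanford-oval/llama-7b-wikiwebquestions"  # placeholder repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in FP16
    device_map="auto",
)

question = "Where was the author of the theory of relativity born?"
# Assumed prompt template; see the GitHub repository for the real one.
prompt = (
    "Generate the SPARQL query over Wikidata for the following question.\n\n"
    f"Question: {question}\nSPARQL:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Print only the newly generated tokens (the predicted SPARQL query).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```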

Model size: 6.74B parameters (Safetensors, FP16)