cs-giung committed b850649 (parent: 232cac4)

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -5,4 +5,4 @@ license: mit
 # CLIP
 
 Contrastive Language-Image Pretraining (CLIP) model pre-trained on LAION-2B at resolution 224x224. It was introduced in the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) and further reproduced in the follow-up paper [Reproducible scaling laws for contrastive language-image learning](https://arxiv.org/abs/2212.07143).
-The weights were converted from the `laion/CLIP-ViT-B-32-laion2B-s34B-b79K` presented in the [https://huggingface.co/collections/laion/openclip-laion-2b-64fcade42d20ced4e9389b30).
+The weights were converted from the `laion/CLIP-ViT-B-32-laion2B-s34B-b79K` presented in the [OpenCLIP LAION-2B collections](https://huggingface.co/collections/laion/openclip-laion-2b-64fcade42d20ced4e9389b30).