mmaaz60 committed
Commit 646d812
Parent: 5d76f88

Update README.md

Files changed (1): README.md (+43 -0)
README.md CHANGED
@@ -1,3 +1,46 @@
---
license: mit
---

[![CODE](https://img.shields.io/badge/GitHub-Repository-<COLOR>)](https://github.com/mbzuai-oryx/LLaVA-pp)

# Phi-3-V: Extending the Visual Capabilities of LLaVA with Phi-3

## Repository Overview

This repository features LLaVA v1.5 trained with the Phi-3-mini-3.8B LLM. The integration aims to combine the strengths of both models to offer advanced vision-language understanding.

## Training Strategy

- **Pretraining:** Only the vision-to-language projector is trained; the rest of the model is frozen.
- **Fine-tuning:** All model parameters, including the LLM, are fine-tuned; only the vision backbone (CLIP) is kept frozen. See the sketch after this list for how the freezing differs between the two stages.

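A minimal PyTorch-style sketch of the two-stage freezing described above, assuming a LLaVA-style model that exposes `vision_tower`, `mm_projector`, and LLM submodules; these attribute names are illustrative assumptions, not the exact LLaVA++ API.

```python
# Illustrative stage-wise freezing for a LLaVA-style model.
# Attribute names (vision_tower, mm_projector) are assumptions,
# not taken verbatim from the LLaVA++ codebase.
def set_trainable(model, stage: str) -> None:
    if stage == "pretrain":
        # Stage 1: train only the vision-to-language projector.
        for p in model.parameters():
            p.requires_grad = False
        for p in model.mm_projector.parameters():
            p.requires_grad = True
    elif stage == "finetune":
        # Stage 2: unfreeze everything, then re-freeze the CLIP backbone.
        for p in model.parameters():
            p.requires_grad = True
        for p in model.vision_tower.parameters():
            p.requires_grad = False
    else:
        raise ValueError(f"unknown stage: {stage}")
```
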
## Key Components

- **Base Large Language Model (LLM):** [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- **Base Large Multimodal Model (LMM):** [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) (see the loading sketch after this list)

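A hedged sketch of loading this checkpoint with the LLaVA-style builder; `load_pretrained_model` and `get_model_name_from_path` follow the upstream LLaVA interface, and the exact behavior for Phi-3 checkpoints should be verified against the LLaVA++ repository.

```python
# Sketch: load the checkpoint with the LLaVA-style builder. The loader
# interface mirrors upstream LLaVA; Phi-3 support comes from the
# LLaVA++ fork, so verify argument handling against that codebase.
from llava.mm_utils import get_model_name_from_path
from llava.model.builder import load_pretrained_model

model_path = "MBZUAI/LLaVA-Phi-3-mini-4k-instruct-FT"
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```
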
## Training Data

- **Pretraining Dataset:** [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain)
- **Fine-tuning Dataset:** [LLaVA-Instruct-665K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json) (a sketch of the record format follows this list)

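Both files use LLaVA's conversation-style JSON schema. The field names below are recalled from the upstream LLaVA datasets and should be verified against the actual files.

```python
# Sketch: inspect one record of a LLaVA-format instruction-tuning file.
# Field names ("image", "conversations", "from", "value") follow the
# upstream LLaVA schema; verify against the downloaded file.
import json

with open("llava_v1_5_mix665k.json") as f:
    samples = json.load(f)  # a list of dicts

sample = samples[0]
print(sample.get("image"))            # relative image path, if present
for turn in sample["conversations"]:  # alternating "human"/"gpt" turns
    print(turn["from"], ":", turn["value"][:80])
```
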
## Download

```bash
git lfs install
git clone https://huggingface.co/MBZUAI/LLaVA-Phi-3-mini-4k-instruct-FT
```

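Alternatively, the same files can be fetched in Python with the `huggingface_hub` library (a sketch, assuming `pip install huggingface_hub`):

```python
# Sketch: download the full repository snapshot without git-lfs.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="MBZUAI/LLaVA-Phi-3-mini-4k-instruct-FT")
print(f"Model files downloaded to: {local_dir}")
```
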
---

## License

This project is available under the MIT License.

## Contributions

Contributions are welcome! Please 🌟 our repository [LLaVA++](https://github.com/mbzuai-oryx/LLaVA-pp) if you find this model useful.

---