Fine-tuning model

#41
by epishchik - opened

Hello!

I have a question: I want to use your model as the text encoder in my CLIP-like pipeline, and I'd like to try unfreezing some layers and fine-tuning them. Are the lora_A and lora_B parameters used when no task is specified? Is it okay to just unfreeze some layers and keep lora_A and lora_B frozen? (As I understand it, they aren't used anyway without a specific task parameter value.)

Thanks in advance!

Hey @epishchik,

Are the lora_A and lora_B parameters used when no task is specified?

No, the LoRA parameters are only used when a task is specified.
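
To illustrate what this means at inference time, here is a minimal sketch. The model ID is a placeholder, and the encode method with a task argument is an assumption based on how adapter-based embedding models loaded with trust_remote_code commonly work, not something confirmed in this thread:

```python
from transformers import AutoModel

# Sketch only: the model ID and the task-based encode API are assumptions.
model = AutoModel.from_pretrained("your-org/your-model", trust_remote_code=True)

# Without a task, no LoRA adapter is applied -- only the main weights run.
plain_emb = model.encode(["some text"])

# With a task, the corresponding lora_A / lora_B adapter is activated.
task_emb = model.encode(["some text"], task="retrieval.query")
```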

Is it okay to just unfreeze some layers and keep lora_A and lora_B frozen?

Yes, go ahead! You can just load the model with lora_main_params_trainable=True; this will unfreeze the main parameters.
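
A minimal sketch of that setup, assuming the adapter weights follow the usual lora_A/lora_B naming (the model ID is a placeholder, and the explicit re-freezing loop is just a belt-and-braces step):

```python
from transformers import AutoModel

# Sketch only: replace the placeholder model ID with the actual repository.
model = AutoModel.from_pretrained(
    "your-org/your-model",
    trust_remote_code=True,
    lora_main_params_trainable=True,  # unfreeze the main (non-LoRA) weights
)

# Keep the LoRA adapters frozen while fine-tuning the main layers.
for name, param in model.named_parameters():
    if "lora_A" in name or "lora_B" in name:
        param.requires_grad = False

# Trainable parameters now exclude the adapters.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```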

epishchik changed discussion status to closed
