You can create a custom parameter initialization function in QLoRA by iterating over LoRA layers and applying tailored weight initializers like Xavier or Kaiming.
Here is a minimal sketch you can refer to; it assumes a Hugging Face PEFT-style model where each LoRA layer exposes `lora_A` / `lora_B` `ModuleDict`s of `nn.Linear` modules (the function name `init_lora_weights` is illustrative):

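```python
import math

import torch.nn as nn


def init_lora_weights(model: nn.Module, strategy: str = "xavier") -> None:
    """Re-initialize LoRA adapter weights with a chosen strategy.

    Assumes a PEFT-style model whose LoRA layers expose `lora_A` /
    `lora_B` ModuleDicts of nn.Linear modules (as in Hugging Face `peft`).
    """
    for module in model.modules():
        # Target only LoRA-specific layers; skip everything else.
        if not (hasattr(module, "lora_A") and hasattr(module, "lora_B")):
            continue
        for adapter_name in module.lora_A.keys():
            weight_a = module.lora_A[adapter_name].weight
            weight_b = module.lora_B[adapter_name].weight
            # Apply the chosen initializer to the A matrix.
            if strategy == "xavier":
                nn.init.xavier_uniform_(weight_a)
            elif strategy == "kaiming":
                nn.init.kaiming_uniform_(weight_a, a=math.sqrt(5))
            else:
                raise ValueError(f"Unknown init strategy: {strategy}")
            # Keep B at zero so the adapter update B @ A starts at zero
            # and fine-tuning begins from the base model's behavior.
            nn.init.zeros_(weight_b)
```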
In the above code, we use the following key strategies:
- Targets only LoRA-specific layers for custom initialization.
- Supports multiple init strategies like Xavier and Kaiming.
- Enhances training stability and convergence in QLoRA.
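
For instance, you could apply the function to a PEFT-wrapped model like this; the tiny `nn.Sequential` below is only a toy stand-in to demonstrate the call, whereas in real QLoRA you would wrap a 4-bit base model loaded with bitsandbytes:

```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model

# Toy stand-in for a transformer block, used only to demonstrate the call;
# in practice this would be a 4-bit quantized base model.
base_model = nn.Sequential()
base_model.add_module("q_proj", nn.Linear(64, 64))
base_module = base_model.add_module("v_proj", nn.Linear(64, 64))

config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base_model, config)

# Swap the default initialization for Kaiming on the LoRA A matrices.
init_lora_weights(model, strategy="kaiming")
```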
Hence, custom initialization in QLoRA fine-tuning gives you finer control over the LoRA parameter distributions, which can lead to more stable and efficient training.