Low-Rank Adaptation (LoRA)
Approach for fine-tuning large-scale, pre-trained models
LoRA (Low-Rank Adaptation) is a technique for fine-tuning large-scale, pre-trained models. These models are typically trained on general-domain data to maximize exposure to diverse information, and can then be adapted to tasks like chat or question answering through further 'fine-tuning' on domain-specific datasets.
The technique injects small, low-rank matrices into targeted parts of a neural network, such as the attention or feed-forward layers. These matrices act as additive adjustments to the model's existing weights: during fine-tuning, only the low-rank matrices are updated while the original weights stay frozen, preserving the knowledge acquired during pre-training. LoRA thus enables task-specific customization without retraining the entire model, making it a resource-efficient way to adapt large-scale models to specific applications.
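The idea above can be sketched in a few lines of NumPy. This is a minimal, illustrative implementation (the class name, shapes, and initialization constants are assumptions, not from any specific library): a frozen weight matrix `W` is augmented with a low-rank product `B @ A`, where `B` starts at zero so the adapted layer initially behaves exactly like the pretrained one, and only `A` and `B` would be trained.

```python
import numpy as np

class LoRALinear:
    """Minimal sketch of a LoRA-adapted linear layer (names/shapes assumed)."""

    def __init__(self, weight, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = weight                              # frozen pretrained weight, shape (out, in)
        out_dim, in_dim = weight.shape
        # Low-rank adapters: A is small random, B starts at zero,
        # so B @ A = 0 and the layer initially matches the pretrained one.
        self.A = rng.normal(0.0, 0.02, (rank, in_dim))
        self.B = np.zeros((out_dim, rank))
        self.scale = alpha / rank                    # common scaling convention

    def __call__(self, x):
        # y = x W^T + (alpha / r) * x (B A)^T  — only A and B are trainable
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)
```

With rank `r`, the adapter adds only `r * (in_dim + out_dim)` trainable parameters instead of the `in_dim * out_dim` in the full weight matrix, which is where the resource efficiency comes from.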