LoRA - low-rank adaptation explained in three minutes
Introduction
LoRA (Low-Rank Adaptation of Large Language Models) is a technique that updates only a small set of low-rank matrices instead of adjusting all the parameters of a deep neural network. This significantly reduces the computational cost of the training process.
LoRA is particularly useful when working with large language models (LLMs), which have a huge number of parameters that need to be fine-tuned.
The Core Concept: Reducing Complexity with Low-Rank Decomposition
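To make the low-rank idea concrete, here is a minimal sketch of a LoRA-style linear layer in PyTorch (my own illustration, not the reference implementation): the pretrained weight stays frozen and only two small rank-r factors A and B are trained, so the effective weight becomes W + (alpha / r) · B A. The class name and the alpha / r scaling are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style layer: frozen pretrained weight plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # the original weights are never updated
            p.requires_grad = False
        in_f, out_f = base.in_features, base.out_features
        # B @ A has the same shape as the frozen weight, but only r * (in_f + out_f) parameters.
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r))  # zero init: training starts from the pretrained model
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

For a 4096 × 4096 layer and r = 8 this means roughly 65k trainable parameters instead of about 16.8 million.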
Understanding the difference between weight decay and L2 regularization
Introduction
Machine learning models are powerful tools for solving complex problems, but they can easily become overly complex themselves, leading to overfitting. Regularization techniques help prevent overfitting by imposing constraints on the model’s parameters. One common regularization technique is L2 regularization, also known as weight decay. In this blog post, we’ll explore the big idea behind L2 regularization and weight decay, their equivalence in stochastic gradient descent (SGD), and why weight decay is preferred over L2 regularization in more advanced optimizers like Adam.
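As a quick teaser (my own sketch, not code from the post): under vanilla SGD the two formulations produce exactly the same update, because the gradient of the penalty (λ/2)·‖w‖² is λ·w, which is precisely the term that decoupled weight decay subtracts from the weights.

```python
import torch

torch.manual_seed(0)
w = torch.randn(5)               # current weights
grad_loss = torch.randn(5)       # gradient of the data loss w.r.t. w
lr, lam = 0.1, 0.01              # learning rate and regularization strength

# L2 regularization: the penalty (lam / 2) * ||w||^2 is added to the loss,
# so its gradient lam * w is folded into the gradient before the step.
w_l2 = w - lr * (grad_loss + lam * w)

# Decoupled weight decay: take the plain gradient step, then shrink the weights.
w_wd = w - lr * grad_loss - lr * lam * w

print(torch.allclose(w_l2, w_wd))  # True: identical for plain SGD
```

With Adam, the λ·w term folded into the gradient gets rescaled by the adaptive moment estimates, so the two are no longer equivalent; that is the motivation behind decoupled weight decay in AdamW.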
DINO - Emerging properties in self-supervised vision transformers
Today’s paper: Emerging properties in self-supervised vision transformers by Mathilde Caron et al. Let’s get the dinosaur out of the room: the name DINO refers to self-distillation with no labels.
The self-distillation part refers to self-supervised learning in a student-teacher setup, as is common in knowledge distillation. The catch is that, in contrast to standard distillation setups where a previously trained teacher network trains a student network, here the authors work without labels and without pre-training the teacher.
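The following is a heavily simplified sketch of what such a label-free student-teacher loop can look like (my own illustration; the actual method adds multi-crop augmentation, projection heads, and temperature and momentum schedules, so the function name and hyperparameters here are placeholders):

```python
import torch
import torch.nn.functional as F

def dino_style_step(student, teacher, center, view1, view2, opt,
                    t_student=0.1, t_teacher=0.04, ema=0.996, center_m=0.9):
    """One simplified self-distillation step on two augmented views of the same images."""
    # The teacher runs without gradients; its output is centered and sharpened.
    with torch.no_grad():
        teacher_out = teacher(view1)
        teacher_probs = F.softmax((teacher_out - center) / t_teacher, dim=-1)

    # The student is trained to match the teacher's distribution on the other view.
    student_log_probs = F.log_softmax(student(view2) / t_student, dim=-1)
    loss = -(teacher_probs * student_log_probs).sum(dim=-1).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

    with torch.no_grad():
        # The teacher is an exponential moving average of the student:
        # no labels and no pre-trained teacher are involved.
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema).add_((1.0 - ema) * p_s)
        new_center = center_m * center + (1.0 - center_m) * teacher_out.mean(dim=0)
    return loss.item(), new_center
```

Even in this toy version the key points are visible: no labels appear anywhere, the gradient only flows through the student, and the teacher is built on the fly as a moving average of the student.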
Rethinking Batch in BatchNorm
Today’s paper: Rethinking ‘Batch’ in BatchNorm by Wu & Johnson

BatchNorm is a critical building block in modern convolutional neural networks. Its unique property of operating on “batches” instead of individual samples introduces significantly different behaviors from most other operations in deep learning. As a result, it leads to many hidden caveats that can negatively impact model’s performance in subtle ways.
The paragraph above is a quotation from the paper’s abstract (the emphasis is mine), and it is what caught my attention.
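To see what that batch dependence means in practice, here is a tiny sketch of mine (not from the paper): in training mode a sample’s normalized output depends on which other samples happen to share its batch, while in evaluation mode the accumulated running statistics are used instead.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(4)
x = torch.randn(8, 4)

# Training mode: each sample is normalized with the statistics of the *current batch*,
# so the very same sample gets a different output when grouped with different samples.
bn.train()
same_sample_full_batch = bn(x)[0]
same_sample_half_batch = bn(x[:4])[0]
print(torch.allclose(same_sample_full_batch, same_sample_half_batch))  # False

# Evaluation mode: the running statistics collected during training are used,
# so the output becomes a pure per-sample function again.
bn.eval()
print(torch.allclose(bn(x)[0], bn(x[:4])[0]))  # True
```

The gap between those two behaviors (batch statistics at training time, running statistics at test time) is one of the places where the hidden caveats the authors discuss come from.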
P-Diff: Learning Classifier with noisy labels based on probability difference distributions
Label noise in digital pathology
In the field of digital pathology and other health-related deep learning applications, label noise is an important challenge to consider during training.
It’s inherent to the medical field: the problems are extremely challenging even for trained experts, so there is high intra- as well as inter-observer variability.
This blog post dives into the idea of the paper P-DIFF: Learning Classifier with Noisy Labels based on Probability Difference Distributions, which was authored by researchers at Microsoft in China.
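To give a flavour of the kind of per-sample statistic the title refers to (this is my reading of it, not code from the paper): one can look at the gap between the probability the model assigns to the provided, possibly noisy, label and the highest probability among the remaining classes; clean and mislabeled samples tend to produce differently shaped distributions of this gap.

```python
import torch
import torch.nn.functional as F

def probability_difference(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Per-sample gap between the probability of the given label and the best competing class.

    Values near +1 mean the model confidently agrees with the label; values near -1 mean
    the label contradicts the model's prediction, which hints that the label may be noisy.
    (Illustrative definition; see the paper for the exact statistic it uses.)
    """
    probs = F.softmax(logits, dim=-1)
    p_label = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Exclude the labeled class before taking the maximum over the remaining classes.
    masked = probs.scatter(1, labels.unsqueeze(1), float("-inf"))
    p_other = masked.max(dim=-1).values
    return p_label - p_other
```

How the resulting distribution is modeled and turned into a sample-weighting scheme is exactly what the paper, and the rest of the post, is about.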