Meta-learning from noisy labels
Label noise introduction
Training machine learning models requires a lot of data, and obtaining enough of it for your problem is often quite costly. Sometimes you even need domain experts, who have little time and are expensive.
One option you can look into is getting cheaper, lower-quality data, i.e. having less experienced people annotate it. This usually has the side effect of making your labels noisier.
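As a toy illustration of what that means in practice (my own sketch, not from any paper), here is one way to simulate such annotation noise by randomly replacing a fraction of labels with a wrong class; the `noise_rate` parameter is a made-up knob for the example:

```python
import numpy as np

def corrupt_labels(labels, num_classes, noise_rate=0.2, seed=0):
    """Randomly replace a fraction of labels with a different class,
    simulating less experienced annotators (toy sketch)."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    flip = rng.random(len(labels)) < noise_rate
    # Draw a uniformly random *wrong* class for each flipped example.
    offsets = rng.integers(1, num_classes, size=flip.sum())
    labels[flip] = (labels[flip] + offsets) % num_classes
    return labels

# Example: roughly 20% of 10-class labels become a random other class.
clean = np.random.default_rng(1).integers(0, 10, size=1000)
noisy = corrupt_labels(clean, num_classes=10, noise_rate=0.2)
print((clean != noisy).mean())  # ≈ 0.2
```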
Pyramidal Convolution: Rethinking Convolutional Neural Networks for Visual Recognition
Today’s paper: Pyramidal Convolution by Duta et al. This is the third paper of the new series Deep Learning Papers visualized, and it’s about using convolutions in a pyramidal style to capture information at different magnifications of an image. The authors show how a pyramidal convolution can be constructed and apply it to several problems in the visual domain. What’s really interesting is that the number of parameters can be kept the same while performance tends to improve.
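A minimal PyTorch sketch of the idea (my own simplified reading, not the authors’ code): several parallel convolutions with growing kernel sizes split the output channels between them, and the larger kernels use more groups so the total parameter count stays roughly flat. The kernel sizes and group counts below are illustrative choices, not the paper’s exact configuration:

```python
import torch
import torch.nn as nn

class PyConv2d(nn.Module):
    """Simplified pyramidal convolution: parallel convs with different
    kernel sizes whose outputs are concatenated along the channel dim."""
    def __init__(self, in_channels, out_channels,
                 kernel_sizes=(3, 5, 7, 9), groups=(1, 4, 8, 16)):
        super().__init__()
        assert out_channels % len(kernel_sizes) == 0
        split = out_channels // len(kernel_sizes)
        self.levels = nn.ModuleList(
            nn.Conv2d(in_channels, split, k, padding=k // 2, groups=g, bias=False)
            for k, g in zip(kernel_sizes, groups)
        )

    def forward(self, x):
        # Each level sees the same input at a different receptive field size.
        return torch.cat([conv(x) for conv in self.levels], dim=1)

# Example: same input/output shape as a plain 3x3 convolution.
x = torch.randn(2, 64, 32, 32)
y = PyConv2d(64, 128)(x)
print(y.shape)  # torch.Size([2, 128, 32, 32])
```

The grouping is what makes the parameter budget work out: a k×k convolution with g groups needs in·out·k²/g weights, so assigning more groups to the larger kernels roughly cancels their quadratic cost in k.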
End-to-End Object Detection with Transformers
Today’s paper: End-to-End Object Detection with Transformers by Carion et al. This is the second paper of the new series Deep Learning Papers visualized, and it’s about applying a transformer approach (the current state of the art in natural language processing) to the domain of vision. More specifically, the paper is concerned with object detection; here is the link to the paper of Carion et al. on arXiv.
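As a rough sketch of the pipeline (a heavily reduced simplification, not the authors’ implementation): a backbone turns the image into a feature map, which is flattened into a sequence and fed through a transformer together with a fixed set of learned object queries; each output embedding is then decoded into a class and a box. The dimensions and query count below are illustrative, and positional encodings as well as the bipartite matching loss of the real model are omitted:

```python
import torch
import torch.nn as nn

class MiniDETR(nn.Module):
    """Very reduced DETR-style model: backbone -> transformer ->
    per-query class/box heads. All sizes are illustrative."""
    def __init__(self, num_classes=91, num_queries=100, d_model=256):
        super().__init__()
        # Stand-in backbone; the real model uses a CNN such as a ResNet.
        self.backbone = nn.Conv2d(3, d_model, kernel_size=16, stride=16)
        self.transformer = nn.Transformer(d_model, nhead=8,
                                          num_encoder_layers=3,
                                          num_decoder_layers=3)
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        self.class_head = nn.Linear(d_model, num_classes + 1)  # +1: "no object"
        self.box_head = nn.Linear(d_model, 4)                  # (cx, cy, w, h)

    def forward(self, images):
        feats = self.backbone(images)            # (B, d, H', W')
        B = feats.shape[0]
        src = feats.flatten(2).permute(2, 0, 1)  # (H'*W', B, d) token sequence
        tgt = self.queries.unsqueeze(1).expand(-1, B, -1)
        hs = self.transformer(src, tgt)          # (num_queries, B, d)
        return self.class_head(hs), self.box_head(hs).sigmoid()

logits, boxes = MiniDETR()(torch.randn(2, 3, 128, 128))
print(logits.shape, boxes.shape)  # (100, 2, 92) (100, 2, 4)
```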
Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
New blog series: Deep Learning Papers visualized
This is the first post of a new series in which I explain the content of a paper in a visual, picture-based way. To me, this helps tremendously in grasping the ideas and remembering them, and I hope it will do the same for many of you.
Today’s paper: Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour by Goyal et al.
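The paper’s central recipe, as I read it: scale the learning rate linearly with the minibatch size and ramp it up gradually over the first few epochs. A small sketch of that schedule; the base values follow the paper’s ImageNet setup (base LR 0.1 at batch size 256, 5 warmup epochs), while the helper function itself is my own:

```python
def learning_rate(epoch, batch_size, base_lr=0.1, base_batch=256, warmup_epochs=5):
    """Linear scaling rule with gradual warmup (Goyal et al.):
    the target LR is base_lr * batch_size / base_batch, reached
    linearly over the first warmup_epochs epochs."""
    target_lr = base_lr * batch_size / base_batch
    if epoch < warmup_epochs:
        # Ramp linearly from base_lr up to the scaled target LR.
        return base_lr + (target_lr - base_lr) * epoch / warmup_epochs
    return target_lr

# Example: with a minibatch of 8192, the LR warms up from 0.1 to 3.2.
for epoch in [0, 1, 5, 30]:
    print(epoch, learning_rate(epoch, batch_size=8192))
```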