Do you know which inputs your neural network likes most?
Recent advances in training deep neural networks have led to a wide range of impressive machine learning models that can tackle very diverse tasks. One notable downside of developing such a model is that it is a “black box”: the model learns from the data you feed it, but you don’t really know what is going on inside.
Shapeshifting PyTorch
An important consideration in machine learning is the shape of your data and your variables. You are often shifting and transforming data and then combining it. Thus, it is essential to know how to do this and what shortcuts are available.
Let’s start with a tensor with a single dimension:
```python
import torch

test = torch.tensor([1, 2, 3])
test.shape  # torch.Size([3])
```

Now assume we have built some machine learning model which takes batches of such single-dimensional tensors as input and returns some output.
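As a quick sketch of the kind of shape shifting this involves, here is one way to add a batch dimension with unsqueeze before passing such a tensor to a model; the nn.Linear model below is just a placeholder for illustration:

```python
import torch
import torch.nn as nn

test = torch.tensor([1.0, 2.0, 3.0])  # shape: torch.Size([3])

# Insert a batch dimension at position 0, turning shape (3,) into (1, 3)
batch = test.unsqueeze(0)
print(batch.shape)  # torch.Size([1, 3])

# Placeholder model that expects batches of 3-element vectors
model = nn.Linear(3, 2)
output = model(batch)
print(output.shape)  # torch.Size([1, 2])
```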
What are embeddings in machine learning?
Every now and then, you need embeddings when training machine learning models. But what exactly is such an embedding and why do we use it?
Basically, an embedding is used when we want to map one representation into a space of a different dimensionality. That doesn’t make things much clearer, does it?
So, let’s consider an example: we want to train a recommender system on a movie database (the typical Netflix use case). We have many movies and information about the ratings users have given to those movies.
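In PyTorch, such a mapping is typically an nn.Embedding layer that turns discrete movie IDs into dense, trainable vectors. A minimal sketch, where the number of movies and the embedding dimension are made-up values for illustration:

```python
import torch
import torch.nn as nn

num_movies = 10_000   # assumed number of distinct movie IDs
embedding_dim = 32    # assumed size of the learned vector space

# Maps each movie ID to a trainable 32-dimensional vector
movie_embedding = nn.Embedding(num_movies, embedding_dim)

movie_ids = torch.tensor([42, 1337, 7])  # a batch of movie IDs
vectors = movie_embedding(movie_ids)
print(vectors.shape)  # torch.Size([3, 32])
```

These vectors start out random and are learned during training, so that, ideally, movies rated similarly by the same users end up close together in the embedding space.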
PyTorch GPU inference with Docker and Flask
GPU inference
In a previous article, I illustrated how to serve a PyTorch model in a serverless manner on AWS Lambda. However, AWS Lambda and other serverless compute offerings currently run on the CPU. But what if you need to serve your machine learning model on the GPU for inference, and the CPU just doesn’t cut it?
In this article, I will show you how to use Docker to serve your PyTorch model for GPU inference and also provide it as a REST API.
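To give a flavour of the serving side, here is a minimal sketch of a Flask endpoint that runs inference on the GPU when the container exposes one; the model file name and the JSON input format are assumptions for illustration:

```python
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)

# Use the GPU if the container exposes one, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# "model.pt" is a placeholder for a TorchScript model baked into the Docker image
model = torch.jit.load("model.pt", map_location=device)
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Assume the client sends JSON like {"input": [[1.0, 2.0, 3.0], ...]}
    data = request.get_json()
    batch = torch.tensor(data["input"], dtype=torch.float32, device=device)
    with torch.no_grad():
        output = model(batch)
    return jsonify({"output": output.cpu().tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

In practice you would run this behind a production WSGI server such as gunicorn inside the container rather than Flask’s development server, but the structure stays the same.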
PyTorch Model in Production as a Serverless REST API
PyTorch is great for quickly prototyping your ideas and getting up and running with deep learning. Since it is very Pythonic, you can debug it in PyCharm just as you would regular Python code.
However, when it comes to serving your model in production, the question arises: how do you do it?
There are many possibilities, but in this post you will learn how to serve it as a Lambda function in a serverless manner on AWS.
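At its core, such a deployment boils down to a Lambda handler that loads the model once per container and runs inference per request. A rough sketch, assuming a TorchScript model file and a JSON request body (the handler name, file name, and event format are illustrative, not the article’s exact setup):

```python
import json

import torch

# Load the model once, outside the handler, so warm invocations reuse it
# ("model.pt" is a placeholder file name)
model = torch.jit.load("model.pt", map_location="cpu")
model.eval()

def handler(event, context):
    # Assume the request body contains JSON like {"input": [[1.0, 2.0, 3.0], ...]}
    body = json.loads(event["body"])
    batch = torch.tensor(body["input"], dtype=torch.float32)
    with torch.no_grad():
        output = model(batch)
    return {
        "statusCode": 200,
        "body": json.dumps({"output": output.tolist()}),
    }
```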