PyTorch Model in Production as a Serverless REST API
PyTorch is great for quickly prototyping your ideas and getting up and running with deep learning. Since it is very Pythonic, you can debug it in PyCharm just as you would regular Python code.
However, when it comes to serving your model in production, the question arises: how do you do it?
There are many ways to do so, but in this post you will learn how to serve it as an AWS Lambda function, i.e. in a serverless manner.
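
To give a flavor of where we are headed, here is a minimal sketch of what such a Lambda handler can look like. The model file name `model.pt`, the handler signature, and the JSON payload shape are illustrative assumptions, not the exact code we will build later:

```python
import json

import torch

# Load the TorchScript model once per container, outside the handler,
# so warm invocations reuse it. "model.pt" is a placeholder path; in
# practice the model ships with the deployment package or is fetched
# from S3 at cold start.
model = torch.jit.load("model.pt")
model.eval()


def handler(event, context):
    # Assumes an API Gateway proxy event whose body is JSON like
    # {"input": [[...]]} -- adjust to your model's expected payload.
    body = json.loads(event.get("body") or "{}")
    x = torch.tensor(body["input"], dtype=torch.float32)

    with torch.no_grad():
        y = model(x)

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"prediction": y.tolist()}),
    }
```

Loading the model at module level rather than inside the handler is the key serverless trick: the expensive deserialization happens once per cold start, and every subsequent request on that container only pays for inference.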