How Azure Functions are opening the door to machine learning for developers


Artificial intelligence in general, and machine learning specifically, are really big right now. Tech giants like Microsoft, Amazon, and Google are investing heavily and hiring top experts to build the best machine learning tools and services. I think machine learning is the future, and it's already here.

Just several years ago it was tough for developers to enter the field or start using ML tools in their projects. You had to have a PhD in ML, or at least read a lot of books and spend lots of time gaining experience with different model types. Another hurdle was data collection and organization. It can still be a time-consuming process, but nowadays your dataset doesn't have to be huge to train a model effectively.

Nowadays there are so many tools available to make our lives easier. You don't have to be a scientist anymore to use the benefits of ML in your projects. Developers who are just starting to learn about the process, or those with really tight time constraints, will benefit from prebuilt models that can be connected to any kind of application through a REST API (for example, Microsoft Cognitive Services, Google Cloud AI building blocks, etc.). Everyone who wants to go to the next level by creating and training their own models with custom data will appreciate the available data management tools, languages and libraries, hosting, and even MLOps options.

Azure Functions (https://azure.microsoft.com/en-us/services/functions/) open up a whole new world of serverless. They let you "run small pieces of code in the cloud" and can be used for all kinds of tasks: exporting and importing data from APIs, databases, and files; accessing applications hosted in the cloud via a trigger; or accessing, updating, and creating ML models.

Azure Functions support some of the most popular languages for ML: Python, Java, and C#. That allows you to use, for example, Python and TensorFlow with a machine learning model for scenarios such as image recognition, sentiment analysis, or price prediction. It's easy to create a function from the Azure portal or right from Visual Studio, so there's no additional setup required.
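To give you an idea of what you get out of the box, here is a minimal sketch of an HTTP-triggered function in C# (the in-process Functions model, roughly what the Visual Studio template scaffolds for you). The function name and route are placeholders; the same trigger-and-bindings pattern is available for Python and Java.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    // An HTTP trigger: the function runs whenever its URL is called.
    [FunctionName("HelloFunction")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("HTTP trigger function processed a request.");

        // Read an optional query-string parameter and echo it back.
        string name = req.Query["name"];
        return new OkObjectResult($"Hello, {(string.IsNullOrEmpty(name) ? "world" : name)}");
    }
}
```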

There are so many options for using Azure Functions in ML tasks:

  • Importing or exporting data for your model from a database or an API endpoint
  • Calling one or more Cognitive Services APIs to analyze data (see the sketch after this list)
  • Accessing custom pre-built models hosted in the cloud or on a server to analyze data, etc.
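As an illustration of the second option, here is a hedged sketch of a function that forwards text to the Cognitive Services Text Analytics sentiment endpoint over REST. The setting names TEXT_ANALYTICS_ENDPOINT and TEXT_ANALYTICS_KEY are placeholders I made up for this example, and the request shape assumes the Text Analytics v3.0 REST contract; check the service documentation for your resource before relying on it.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class AnalyzeSentiment
{
    // Reuse one HttpClient for all invocations of the function.
    private static readonly HttpClient Client = new HttpClient();

    [FunctionName("AnalyzeSentiment")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        // The request body is treated as the raw text to analyze.
        string text = await new StreamReader(req.Body).ReadToEndAsync();

        // Resource URL and key come from application settings (placeholder names).
        string endpoint = Environment.GetEnvironmentVariable("TEXT_ANALYTICS_ENDPOINT");
        string key = Environment.GetEnvironmentVariable("TEXT_ANALYTICS_KEY");

        // Build the v3.0 sentiment request: one document, English, id "1".
        string body = "{\"documents\":[{\"id\":\"1\",\"language\":\"en\",\"text\":\""
                      + text.Replace("\"", "\\\"") + "\"}]}";
        var request = new HttpRequestMessage(HttpMethod.Post, $"{endpoint}/text/analytics/v3.0/sentiment")
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);

        HttpResponseMessage response = await Client.SendAsync(request);
        string json = await response.Content.ReadAsStringAsync();

        log.LogInformation("Sentiment response: {json}", json);
        return new OkObjectResult(json);
    }
}
```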

Let me provide an example of the third option above. I’m currently using ML.NET (https://docs.microsoft.com/en-us/dotnet/machine-learning/how-does-mldotnet-work) for some of my personal projects. It became available for all .NET developers in May 2019, but it has been used inside Microsoft in Office 365, Power BI and other tools for quite some time. It’s very appealing to me because I can continue using C# (or F#) for building and using custom ML models. Recently I decided to use ML.NET in my Xamarin application to work with a custom model I created. Unfortunately currently ML.NET is not supported on ARM processor architecture, so Xamarin apps (iOS, Android) and ARM-based IoT devices are out of luck. There’re several available workarounds, including creating an Azure Function that will call ML.NET application, pass data and get the result from it (there’s currently a bug open on GitHub). The function can be triggered from the Xamarin application, so it’ll get the result without accessing the ML.NET application directly(another workaround is mentioned in my blog post).
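Here is a hedged sketch of what that workaround can look like: an HTTP-triggered function that loads an ML.NET model (a model.zip deployed alongside the function code) and returns a prediction, so the Xamarin client only ever makes an HTTP call. The ModelInput and ModelOutput classes below are hypothetical and must match the schema of the model you actually trained.

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Microsoft.ML;

// Hypothetical input/output classes; their properties must mirror the trained model's schema.
public class ModelInput
{
    public string Text { get; set; }
}

public class ModelOutput
{
    public bool PredictedLabel { get; set; }
    public float Score { get; set; }
}

public static class PredictFunction
{
    [FunctionName("Predict")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log,
        ExecutionContext context)
    {
        // The request body carries the raw input to score.
        string text = await new StreamReader(req.Body).ReadToEndAsync();

        // Load the trained model shipped with the function app.
        var mlContext = new MLContext();
        string modelPath = Path.Combine(context.FunctionAppDirectory, "model.zip");
        ITransformer model = mlContext.Model.Load(modelPath, out _);

        // Create a prediction engine and score the incoming data.
        var engine = mlContext.Model.CreatePredictionEngine<ModelInput, ModelOutput>(model);
        ModelOutput prediction = engine.Predict(new ModelInput { Text = text });

        log.LogInformation("Prediction: {label} ({score})", prediction.PredictedLabel, prediction.Score);
        return new OkObjectResult(prediction);
    }
}
```

From the Xamarin side, calling this is just an ordinary HTTP POST to the function URL (with its function key), which keeps the ML.NET dependency entirely off the ARM device.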

As you can see, the world of ML is endless and exciting. You don't have to be a scientist anymore to use its benefits in your applications. There are lots of tools, including Azure Functions, that make learning and implementation much more accessible. They will open the door to that exciting world for you.

ML.NET + Azure Functions