Infusion of Machine Learning Operations with Internet of Things


With advancements in deep tech, the operationalization of machine learning and deep learning models is growing rapidly. In a typical machine learning or deep learning business case, the data science and IT teams collaborate extensively to scale and push multiple models to production through continuous training, validation, deployment and integration, all under governance. Machine Learning Operations (MLOps) extends the DevOps paradigm into the machine learning/artificial intelligence realm by automating these workflows end to end.

As models are optimized and data processing and analysis move closer to the edge, data scientists and ML engineers are continuously finding new ways to handle the complications of operationalizing models on IoT (Internet of Things) edge devices.

A LinkedIn article states that global AI spending will reach $232 billion by 2025 and $5 trillion by 2050. According to Cognilytica, the global MLOps market will be worth $4 billion by 2025, up from $350 million in 2019. [1]

Image credit: Shikhar Kwatra

Models running on IoT edge devices need to be retrained frequently because of variable environmental parameters; continuous data drift and limited access to the edge devices can degrade model performance over time. The target platforms on which models are deployed also vary, from general-purpose IoT edge hardware to specialized hardware such as FPGAs, which adds considerable complexity and customization to MLOps on these platforms.

Models can be packaged into Docker images for deployment after profiling them to determine the core count, CPU and memory settings required on the target IoT platform. Such devices also carry multiple dependencies that must be packaged with the model so it can execute seamlessly on the platform. Containers make model packaging straightforward because the same image can span both cloud and IoT edge platforms.
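As a minimal sketch, such an image might bundle the serialized model, its pinned dependencies, and a scoring entry point; the file names, base image, and port below are illustrative assumptions, not a prescribed layout:

```dockerfile
# Illustrative sketch only: file names and base image are assumptions.
FROM python:3.10-slim

WORKDIR /app

# Pin the model's runtime dependencies so the edge image is reproducible.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Ship the serialized model and the scoring entry point together.
COPY model.pkl score.py ./

# Cores and memory are applied at deploy time from the profiling results,
# e.g. docker run --cpus=1 --memory=512m <image>
EXPOSE 8080
CMD ["python", "score.py"]
```

Setting the resource limits at run time, rather than baking them into the image, lets the same image serve IoT targets with different profiled settings.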

When running on IoT edge platforms with device-specific dependencies, a decision needs to be made as to which containerized machine learning models should be available offline because of limited connectivity. A scoring script that loads the model, invokes its endpoint and scores incoming requests on the edge device needs to be operational in order to return the respective probabilistic output.
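A scoring script of this kind might look like the sketch below. The `init()`/`run()` layout and the stand-in logistic model are illustrative assumptions (a real script would deserialize the packaged model artifact instead):

```python
# Sketch of an edge scoring script; the init()/run() layout and the
# stand-in model are illustrative assumptions, not a specific SDK contract.
import json
import math

MODEL = None  # loaded once when the container starts

def init():
    """Load the model from local storage so scoring works offline."""
    global MODEL
    # Stand-in for e.g. pickle.load(open("model.pkl", "rb")):
    # a logistic model with a fixed weight and bias.
    MODEL = {"weight": 2.0, "bias": -1.0}

def run(request_body: str) -> str:
    """Score one incoming request and return a probabilistic output."""
    features = json.loads(request_body)
    z = MODEL["weight"] * features["x"] + MODEL["bias"]
    probability = 1.0 / (1.0 + math.exp(-z))  # sigmoid
    return json.dumps({"probability": round(probability, 4)})

init()
print(run('{"x": 1.5}'))  # prints {"probability": 0.8808}
```

Because `init()` reads only from local storage, the device can keep scoring through connectivity outages and sync results upstream later.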

Continuous monitoring and retraining of models deployed on IoT devices need to be handled properly using the model artifact repositories and model versioning features of the MLOps framework. The different images of the deployed models are stored in a shared device repository so that the right image can be fetched quickly and deployed to the IoT device at the right time.
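The versioning side of this can be sketched as a small registry mapping model versions to image tags; the in-memory dictionary and tag names below stand in for a real container/artifact repository and are assumptions for illustration:

```python
# Minimal sketch of a model artifact registry; the in-memory dict stands in
# for a real shared image repository and is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    # Maps a semantic version string to a container image tag.
    images: dict = field(default_factory=dict)

    def register(self, version: str, image_tag: str) -> None:
        """Store a new model image under an explicit version."""
        self.images[version] = image_tag

    def latest(self) -> str:
        """Fetch the image tag of the highest registered version."""
        newest = max(self.images, key=lambda v: tuple(map(int, v.split("."))))
        return self.images[newest]

registry = ModelRegistry()
registry.register("1.0.0", "registry.local/edge-model:1.0.0")
registry.register("1.1.0", "registry.local/edge-model:1.1.0")
print(registry.latest())  # prints registry.local/edge-model:1.1.0
```

Keeping every version registered, rather than overwriting, is what makes quick rollback to a known-good image possible on the device.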

Model retraining can be triggered by a job scheduler running on the edge device, or when new data arrives and invokes the REST endpoint of the machine learning model. Continuous model retraining, versioning and evaluation thus become integral parts of the pipeline.

When the data changes frequently, as is often the case on IoT edge devices and platforms, automating model versioning and model refreshes in response to data drift lets the MLOps engineer automate the retraining process, freeing data scientists to focus on other aspects of feature engineering and model development.
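A very simple form of such a drift check compares incoming edge data against the training baseline. The mean-shift heuristic and the two-standard-deviation threshold below are assumptions for the sketch; production systems typically use formal statistical tests:

```python
# Illustrative drift check: flag drift when the mean of incoming data moves
# too far from the training baseline. Threshold choice is an assumption.
from statistics import mean, pstdev

def drift_detected(baseline: list[float], incoming: list[float],
                   threshold: float = 2.0) -> bool:
    """Flag drift when the incoming mean shifts more than `threshold`
    baseline standard deviations away from the baseline mean."""
    base_mu = mean(baseline)
    base_sigma = pstdev(baseline) or 1.0  # guard against zero variance
    shift = abs(mean(incoming) - base_mu) / base_sigma
    return shift > threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]
print(drift_detected(baseline, [10.2, 9.8, 10.1]))   # False: same regime
print(drift_detected(baseline, [25.0, 26.0, 24.5]))  # True: drifted
```

A positive result from a check like this is exactly the kind of signal that can version the current model and kick off the automated retraining described above.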

The number of such devices continuously collaborating over the internet and integrating with MLOps is poised to grow in the coming years. Multi-factored optimizations will continue to make the lives of data scientists and ML engineers easier and more focused from a model operationalization standpoint, through an end-to-end automated approach.



About the Authors

Shikhar Kwatra: MLOps Architect | Youngest Indian Master Inventor | 400+ Patents | Startup Investor | AI/ML Operationalization Leader | Author | RedHat Cloud Certified

Utpal Mangla: General Manager, Industry EDGE Cloud; IBM Cloud Platform