Machine Learning Model Management


An Overview of Machine Learning Model Management


Machine learning (ML) is one of the most useful tools a business can have. At this point, ML isn’t optional: companies that don’t implement it risk falling far behind competitors who use it for task automation, predictive maintenance, use case personalization, and predictive modeling. 

To truly understand machine learning, you’ll have to master ML model management, which is a subset of MLOps. MLOps is very similar to DevOps for software, except that it’s oriented toward machine learning. If you need a refresher, here are brief descriptions of DevOps and MLOps: 

DevOps is a strategy that software and IT professionals use. In this methodology, professionals use certain practices and tools to deliver software and applications faster and more accurately. It’s an approach that encompasses developing, testing, deploying, and scaling software and applications for a smoother lifecycle that gives the company a competitive advantage. 

MLOps has a similar strategy to DevOps but for machine learning. The goal of MLOps is to simplify ML model management, create cleaner data, streamline processes, and make machine learning faster, more efficient, and more accurate. When companies use MLOps, they can automate parts of machine learning workflows and deployments, which allows them to make fewer errors while increasing deployment speeds. MLOps includes philosophies that encompass the whole machine learning lifecycle, such as data gathering, labeling, versioning, creating and training models, deploying those models, and then using the data from those models for predictive modeling.

First, let’s explore what machine learning models are and how they learn.

What are machine learning models?

Machine learning models are the programs that result from training algorithms on data: as the algorithm processes data and recognizes patterns, the model captures what it has learned. The goal of machine learning is to create a program that can think similarly to the ways humans think. Programmers want the program to be able to make its own decisions. 

ML is built on foundations of data, but what makes ML different from other parts of computer science is the way it processes that data. Much like the way you learn a foreign language, the ML model needs practice to “learn” how to interpret the data it intakes and give accurate outputs. 

To make sure the patterns are accurate, programmers feed data into ML programs many times. The part of machine learning that actually processes the data is called the ML model. To increase model accuracy, programmers may change the way the model processes data; this structure is called the model architecture. Programmers train the model on a set of data, and if the output is correct, they know the model works. If the result is off, though, they know they need to continue the process of adjusting the model architecture, evaluating model inputs, and retraining the model on new data. 
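The train-evaluate-retrain cycle described above can be sketched in a few lines of Python. This is a minimal, illustrative model (a single weight fit by gradient steps), not a real framework; all the names and numbers here are hypothetical:

```python
# A minimal sketch of the train / evaluate / retrain loop described above.
# The "model" is a single weight w in the prediction y = w * x.

def train(data, learning_rate=0.1, epochs=100):
    """Fit a single weight w so that the prediction w * x matches y."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y
            w -= learning_rate * error * x  # gradient step for squared error
    return w

def evaluate(w, data):
    """Mean squared error of the fitted weight on held-out data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Train on data following y = 2x, then check accuracy on unseen points.
train_set = [(1, 2), (2, 4), (3, 6)]
test_set = [(4, 8), (5, 10)]
w = train(train_set)
mse = evaluate(w, test_set)

# If the error is unacceptable, adjust the settings or data and retrain.
if mse > 0.01:
    w = train(train_set, learning_rate=0.05, epochs=500)
```

Real projects replace the toy `train` and `evaluate` with an actual ML framework, but the loop — train, measure, adjust, retrain — is the same.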

There are four kinds of machine learning models: supervised, semi-supervised, unsupervised, and reinforcement.

Supervised learning models are models that are given labeled data. The model is fed a set of data; if the data has been cleaned and labeled correctly, programmers can teach the model to evaluate and recognize patterns in that data. The model learns from the script it is given. 

Semi-supervised learning models may use some labeled data to teach the model, but they also learn from artificial intelligence (AI). These models learn basic frameworks of pattern recognition from the information about the data they’re given and then infer the rest about the unlabeled data. 

Unsupervised learning models need no human interaction. They are unsupervised and free to process unlabeled data on their own. As models process the data, they may notice patterns and create “clusters” of commonalities between data points. 

Reinforcement learning for models is simply a trial-and-error process. Programmers allow the model to process its data with a certain goal in mind, rewarding the program when it succeeds and penalizing it when it doesn’t. The program attempts to process the data in order to meet the goal, and over time it learns which behaviors produce outputs that achieve the goal most effectively. 
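The contrast between supervised and unsupervised learning above can be made concrete with a toy example. Both functions below are illustrative stand-ins, not real library APIs: the first predicts from labeled examples, the second groups unlabeled points on its own.

```python
# A toy contrast between supervised and unsupervised learning.
# Data points are single numbers to keep the example small.

def nearest_neighbor_predict(labeled_data, x):
    """Supervised: predict the label of the closest labeled example."""
    closest = min(labeled_data, key=lambda pair: abs(pair[0] - x))
    return closest[1]

def cluster(points, gap=5.0):
    """Unsupervised: group unlabeled points whose values sit close together."""
    clusters = []
    for p in sorted(points):
        if clusters and p - clusters[-1][-1] <= gap:
            clusters[-1].append(p)  # close to the previous point: same cluster
        else:
            clusters.append([p])    # far away: start a new cluster
    return clusters

labeled = [(1.0, "small"), (2.0, "small"), (10.0, "large"), (11.0, "large")]
prediction = nearest_neighbor_predict(labeled, 1.5)   # learns from labels

unlabeled = [1.0, 2.0, 10.0, 11.0]
groups = cluster(unlabeled)   # finds the two groups with no labels at all
```

The supervised function needs the labels a human provided; the clustering function discovers the same two groups purely from the structure of the data, which is the essence of the unsupervised approach described above.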

Now, let’s consider ML model management, a key part of MLOps.

What is machine learning model management?

ML model management focuses on the part of MLOps that involves creating, evaluating, versioning, scaling, deploying, and monitoring learning models. To make models as quickly and efficiently as possible, there are strategies for model management. ML model management is essentially the process of supervising how models are made, improved, and maintained. 

Depending on the project’s or company’s goal, programmers may use different types of models, varying model architectures, a range of inputs, different data sets, and modified parameters. Programmers keep track of the results of each model to help them evaluate which performs the best. When looking at model results, programmers must keep in mind the accuracy, reliability, and efficiency of the model. Programmers can then create a set of best practices to enable repeatable experiments and create models that give the desired results consistently.

The importance of ML model management

Machine learning model management is vital if you want to fully take advantage of the capabilities of machine learning. Without good model management, you essentially have a fancy calculator that can only complete a few small tasks. To fully implement machine learning, you need to understand the value of model management. With better ML models, companies can:

  • Improve customer experience. In fact, 57% of companies use machine learning to improve consumer experience.
  • Communicate better with customers. Business leaders say chatbots have increased sales by 67% on average.
  • Automate rote tasks, which saves time and money. About 32% of AI-led businesses said they were able to reduce operating costs.

ML model management requires ongoing experimentation with configurations and ongoing tracking of the models. Within ML model management, there are two important parts: models and experiments. These two complementary yet distinct parts comprise model management. 

Models: The model-focused aspect of managing models encompasses model packaging, lineage, deployment, tracking performance, and then—if necessary—retraining the model based on its performance. 

Experiments: The other piece of model management is experimentation. Machine learning is built on trial-and-error processes, and experimentation is no exception. In this realm of ML model management, programmers can experiment with the model: pushing its parameters, tracking performance metrics, gathering data, and versioning pipelines. This data helps programmers and data scientists collaborate on models and look at background info on each model.
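Experiment tracking can be as simple as recording each run’s configuration alongside its results so runs can be compared and reproduced. The sketch below is framework-free and its field names are illustrative, not a real tool’s API:

```python
# A minimal sketch of experiment tracking: each run records its
# configuration and resulting metrics so runs can be compared later.

experiments = []

def log_experiment(run_id, config, metrics):
    """Append one experiment record: what ran, with which settings, and the result."""
    experiments.append({"run_id": run_id, "config": config, "metrics": metrics})

log_experiment("run-1", {"learning_rate": 0.1, "batch_size": 32}, {"accuracy": 0.88})
log_experiment("run-2", {"learning_rate": 0.01, "batch_size": 64}, {"accuracy": 0.91})

# Compare runs and surface the best-performing configuration.
best = max(experiments, key=lambda e: e["metrics"]["accuracy"])
print(best["run_id"], best["config"])
```

Dedicated tools such as MLflow offer the same idea at production scale, but the core record — run, configuration, metrics — is what makes experiments comparable and repeatable.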

5 components of ML model management

ML model management has many components that can vary depending on the architecture, goals, and environment. However, most ML model management strategies offer organizations five main components: 

  1. Versioning: If you’ve ever worked in Google Docs and needed to consult a previous version of your file, you have some experience with versioning. Similarly, versioning in ML model management means keeping track of files, what changes were made to the model and software, when and by whom, and what the results were. 
  2. Code versioning: More specific than just tracking versioning for models or data, code versioning is for the code used to create the models. This helps you know exactly where things went wrong (or right) when training a model. It’s also helpful for collaboration and creating easily repeatable models. 
  3. Experiment tracker: An experiment tracker does just that—keeps track of experiments. This involves saving and organizing models and recording metadata, as well as tracking the input and output of each model. This way, programmers can understand how the model works and why it gives the results that it does. 
  4. Model registry: Once a model is finalized, programmers can store models in a central repository called a model registry. This helps keep your models organized. Additionally, a model registry can serve as a data set for training future models. 
  5. Model monitoring: Also called model observation, model monitoring is when programmers monitor models to make sure that their accuracy score stays at an acceptable level. Sometimes, models can experience “serving skew,” which happens when there is a large discrepancy between the data a model was trained on and the data it’s receiving in real life after being deployed. If the data sets are too different, the model will no longer be as accurate as it was during training. 
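One simple way to watch for the “serving skew” described in the monitoring component is to compare a summary statistic of the live input data against the same statistic from the training data, and flag the model when they drift apart. The 20% tolerance below is an arbitrary illustrative choice:

```python
# A sketch of serving-skew detection: flag the model when live inputs
# drift too far from the data the model was trained on.

def mean(values):
    return sum(values) / len(values)

def detect_skew(training_inputs, live_inputs, tolerance=0.2):
    """Return True when the live input mean drifts beyond the tolerance."""
    train_mean = mean(training_inputs)
    live_mean = mean(live_inputs)
    drift = abs(live_mean - train_mean) / abs(train_mean)
    return drift > tolerance

training_inputs = [10, 12, 11, 13, 12]
similar = detect_skew(training_inputs, [11, 12, 13, 10, 12])   # similar data
shifted = detect_skew(training_inputs, [25, 30, 28, 27, 26])   # shifted data
```

Production monitoring usually compares full distributions rather than a single mean, but the principle is the same: when live data no longer looks like training data, the model’s accuracy can no longer be trusted.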

How to implement ML model management

Implementing ML model management strategies can be overwhelming, but the good news is that you can start small. Here are some of the core best practices to get you started on managing your organization’s machine learning. 

Logging 

Like a captain’s log contains all the details about a ship and its voyage, ML model management logging contains details about the ML models. We recommend logging details about how each model was made, such as model configurations, training parameters and hyperparameters, batch size, sampling techniques, and learning rate. Programmers should also log metadata about the models, such as metrics, loss, configurations, and images. It’s also helpful to log performance data (including confusion matrix, classification reports, and SHAP values) both during training and after deployment.
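The kind of run log recommended above can be emitted with nothing but the standard library. The record’s keys (`hyperparameters`, `metrics`, and so on) are illustrative; real teams pick fields matching their own pipelines:

```python
# A sketch of a structured training-run log using only the standard library.
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ml_run")

run_record = {
    "model": "demo-classifier",
    "hyperparameters": {"learning_rate": 0.01, "batch_size": 32, "epochs": 10},
    "sampling": "stratified",
    "metrics": {"accuracy": 0.91, "loss": 0.27},
}

# Emit the record as one JSON line so it is easy to search and parse later.
log.info(json.dumps(run_record))
```

Writing one JSON object per run keeps the log machine-readable, so later questions like “which hyperparameters produced this accuracy?” become a simple query rather than an archaeology project.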

Training and developing models

Training and developing ML models starts with employees. Consider having peer reviews of training scripts. To simplify things while developing models, programmers should keep metric objectives simple and remove any features that aren’t being used. As far as the models themselves, one best practice is to lean heavily into versioning. This will help team members understand why the outputs are the way they are while providing greater insight into how models can be improved and developed. 

Develop model training metrics

ML models are fun to play around with, but to accomplish their objective, they need to have proper training metrics around model accuracy, deployment speed, and feedback loop efficiency. Programmers need clear, measurable, concrete objectives to make sure the models are working consistently. The metrics will vary depending on the task of the model, but some common ones are mean squared error (MSE), root mean squared error (RMSE), R², confusion matrix, F1 score, and gain and lift charts.
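Two of the metrics named above can be computed from scratch in a few lines: mean squared error and RMSE for regression tasks, and the F1 score (the harmonic mean of precision and recall) for classification. The numbers below are illustrative:

```python
# Computing MSE, RMSE, and F1 score from scratch.
import math

def mse(actual, predicted):
    """Mean squared error: average of squared prediction errors."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def f1_score(actual, predicted):
    """F1: harmonic mean of precision and recall for binary labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

error = mse([3.0, 5.0, 2.0], [2.5, 5.0, 3.0])
rmse = math.sqrt(error)                        # RMSE is just the square root
f1 = f1_score([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

Libraries like scikit-learn provide battle-tested versions of these metrics, but seeing the arithmetic makes it clear what a “good” score actually measures for each kind of task.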

Deployment

Automate model deployment for faster deployment with fewer errors. You can also automate rollbacks for production models to increase efficiency. Even if you’ve done extensive testing, it’s also wise to continue monitoring models after deployment to help you avoid plateaus and iterate faster. 
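The automated-rollback idea above boils down to a monitoring check: deploy a candidate model, watch its live accuracy, and revert to the previous version if accuracy falls below a threshold. The model names and the 0.85 threshold here are illustrative:

```python
# A sketch of automated rollback: keep the candidate model only if it
# holds up in production, otherwise revert to the known-good version.

def deploy_with_rollback(current, candidate, live_accuracy, threshold=0.85):
    """Return which model should be serving after a monitoring check."""
    if live_accuracy(candidate) >= threshold:
        return candidate          # candidate performs well enough to stay
    return current                # automatic rollback to the previous model

accuracies = {"model-v1": 0.90, "model-v2": 0.80}
serving = deploy_with_rollback("model-v1", "model-v2", lambda m: accuracies[m])
```

In a real pipeline this check would run continuously against live traffic rather than once, but automating even this simple decision removes the slowest and most error-prone step: a human noticing the regression.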

Monitor and optimize model training

If you don’t want to train every model from scratch, you need to monitor and optimize model training. Models can learn from other, already-trained models (an approach known as transfer learning), which makes the training process more efficient. You can also optimize model training by experimenting with different model architectures, optimizers, loss functions, parameters, hyperparameters, and data. 
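The experimentation described above is often organized as a grid search: try each combination of settings, score it, and keep the best. In the sketch below, `validate` is a stand-in for a real train-and-validate step, and the "good" settings it rewards are invented for illustration:

```python
# A sketch of hyperparameter grid search: score every combination of
# settings and keep the best-performing one.
from itertools import product

def validate(learning_rate, batch_size):
    """Placeholder for training a model and returning validation accuracy."""
    # Pretend mid-range settings work best for this illustrative task.
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(batch_size - 64) / 1000

learning_rates = [0.1, 0.01, 0.001]
batch_sizes = [32, 64, 128]

best_score, best_config = max(
    (validate(lr, bs), (lr, bs))
    for lr, bs in product(learning_rates, batch_sizes)
)
```

Grid search is the simplest strategy; random search and Bayesian optimization cover large search spaces more efficiently, but all of them depend on the same loop of configure, train, score, and compare.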

How Domo streamlines your ML model management

Domo is here to support you as you invest in machine learning. We believe ML should be accessible to everyone, which is why we created AutoML. Domo partnered with Amazon Web Services to empower Domo users to train and tune ML models. With just a few clicks, AutoML will transform your data to be ready for machine learning and launch hundreds of training jobs on any data set in Domo to find the model that achieves the best performance for your task. Domo.AI offers a host of capabilities and benefits—learn more about them at ai.domo.com.
