MLOps Stacks: How SWE Meets MLE
By Aruna Srivastava, Sep 2024


While there are some improvements to be made in these pipelines regarding model explainability, Databricks MLOps Stacks provides a new assurance that high-quality models are deployed while maintaining the system's robustness.

Source: mlops-guide.github.io

When models unexpectedly fail, version control is critical. Databricks MLOps Stacks builds upon traditional software version-control practices with Git while incorporating ML-specific tools like MLflow and Unity Catalog. These tools track not only code versions but also datasets, model configurations, and trained models. This ensures that every iteration of a model, along with its associated data and parameters, can be tracked, compared, and reproduced.
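As a minimal sketch of what this looks like in practice, assuming a Databricks-backed MLflow environment (the run name, parameters, dataset path, and three-level model name below are hypothetical), a single MLflow run can capture the configuration, the dataset reference, and the resulting model version together:

    import mlflow
    from mlflow.models import infer_signature
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    # Register models in Unity Catalog rather than the legacy workspace registry.
    mlflow.set_registry_uri("databricks-uc")

    X_train, y_train = make_regression(n_samples=200, n_features=5, random_state=0)

    with mlflow.start_run(run_name="house-price-training"):
        # Record the configuration and data that produced this model version.
        mlflow.log_param("n_estimators", 200)
        mlflow.log_param("training_data", "dbfs:/datasets/house_prices/2024-09")

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

        # Log and register the model under a three-level Unity Catalog name,
        # so this exact version can be compared against and reproduced later.
        mlflow.sklearn.log_model(
            model,
            "model",
            signature=infer_signature(X_train, model.predict(X_train)),
            registered_model_name="dev.house_prediction.mlops_model",
        )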

Version control also extends to the Azure DevOps deployment pipelines, which allow regularly scheduled deployments from a code repository artifact that remains independent of the workspaces.

Infrastructure as code (IaC) is the practice of provisioning and supporting computing infrastructure with code instead of manual processes and settings. This abstracts environment configuration away from software engineers, and it translates especially well to machine learning. By making environments easy to duplicate and reducing configuration errors, IaC is essential for managing cloud resources programmatically and keeping ML workflows consistent.

Without IaC, you might manually set up a training environment with specific GPU instances, memory allocation, and data storage configurations. If you forget to replicate these exact settings when deploying the model to a production environment, the model might underperform due to insufficient resources or incompatible configurations. This can lead to longer inference times, increased costs from resource inefficiency, or outright failures if dependencies aren't properly aligned. IaC ensures that every environment, whether for training, testing, or production, has consistent and correct configurations, preventing these discrepancies and ensuring reliable model performance.

IaC has been integrated into MLOps Stacks through Databricks Asset Bundles.
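As a rough sketch of what a bundle looks like (the bundle name, notebook path, and workspace hosts below are illustrative, not taken from the actual MLOps Stacks template), a databricks.yml file declares jobs once and maps them onto each target workspace, so dev and prod are provisioned from the same definition:

    bundle:
      name: house_prediction

    resources:
      jobs:
        model_training_job:
          name: model-training
          tasks:
            - task_key: train
              notebook_task:
                notebook_path: ./training/train_model.py

    targets:
      dev:
        workspace:
          host: https://dev-workspace.cloud.databricks.com
      prod:
        workspace:
          host: https://prod-workspace.cloud.databricks.com

Running databricks bundle deploy -t dev (or -t prod) then provisions the same job in the chosen workspace, which is exactly the reproducibility guarantee described above.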

Databricks MLOps Stacks is making strides in how ML models are productionized, but let's look at what has yet to be covered:

End-to-End Integration Testing for Entire Pipelines:

  • While MLOps Stacks does support integration testing, comprehensive end-to-end integration testing for entire ML pipelines remains complex and less standardized than for traditional software applications. The dynamic nature of ML models, including variations in data, model updates, and external dependencies, makes full end-to-end testing difficult to implement and maintain consistently.
  • We often ran into instances where pipelines reported successful jobs, yet the jobs themselves contained hidden errors that went undetected. Testing that improves visibility into job and model health remains preliminary (see the sketch below).
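The sketch below, using a hypothetical model name, alias, and feature schema, shows one partial mitigation: a smoke test that runs after a "successful" job and asserts on the artifacts the job actually produced, rather than trusting the job status alone:

    import math

    import mlflow
    import pandas as pd

    # Hypothetical Unity Catalog model name and alias set by the training job.
    MODEL_URI = "models:/dev.house_prediction.mlops_model@champion"

    def smoke_test_model(model_uri: str = MODEL_URI) -> None:
        """Fail loudly if a 'successful' training job produced an unusable model."""
        mlflow.set_registry_uri("databricks-uc")
        model = mlflow.pyfunc.load_model(model_uri)

        # A tiny fixed batch matching the schema the model was trained on.
        sample = pd.DataFrame({"sqft": [850, 2400], "bedrooms": [2, 4]})
        preds = model.predict(sample)

        assert len(preds) == len(sample), "wrong number of predictions"
        assert all(not math.isnan(p) for p in preds), "model returned NaN"
        assert all(p > 0 for p in preds), "house prices should be positive"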

Environment-Agnostic Variables and Functions:

  • Several instances of poorly generalized code throughout MLOps Stacks cause friction between the code infrastructure and the pipelines that move code between workspaces. In short, the code base is not oriented toward development pipelines.
  • For example, model references in Unity Catalog follow a three-level naming scheme: catalog, schema, and model name, where the catalog typically encodes the target environment.
model_name: dev.house_prediction.mlops_model
  • References like these make it difficult to transition seamlessly between workspaces via the pipeline, because the variable itself encodes the environment (see the sketch after this list).
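One way to remove this coupling, sketched here with hypothetical names, is to build the three-level reference from an environment variable that the deployment pipeline sets per target workspace, so the code itself stays environment-agnostic:

    import os

    # The deployment pipeline sets ENV per target workspace (dev / staging / prod);
    # only the catalog segment changes, so the code never hard-codes a workspace.
    ENV = os.environ.get("ENV", "dev")

    SCHEMA = "house_prediction"
    MODEL = "mlops_model"

    def model_full_name(env: str = ENV) -> str:
        """Three-level Unity Catalog name: <catalog>.<schema>.<model>."""
        return f"{env}.{SCHEMA}.{MODEL}"

    print(model_full_name())  # dev.house_prediction.mlops_model in the dev workspace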

Model-Agnostic Interpretation Methods:

  • Ideally, once infrastructure for machine learning models is set, it should generalize to more complex models.
  • The tricky part is creating model-agnostic interpretation methods so that models are evaluated in a generalizable way.
  • For example, tools like SHAP (SHapley Additive exPlanations) can be integrated into the MLOps pipeline to provide consistent interpretability across models, from simple decision trees to deep neural networks. SHAP works by assigning importance values to each feature, explaining how the features influence a model's predictions regardless of the model type. A standardized approach could be the cornerstone for allowing companies to push out models at incredible speed (a minimal example follows the figure below).
Example of SHAP; Source: data4thought
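A minimal sketch of that idea, using scikit-learn and the shap package on synthetic data (the model and features are illustrative), shows how the same explainer call can wrap very different model types:

    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    X, y = make_regression(n_samples=500, n_features=4, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # shap.Explainer dispatches to an appropriate algorithm for the model type,
    # so the same evaluation step works for trees, linear models, or neural nets.
    explainer = shap.Explainer(model)
    shap_values = explainer(X[:50])

    # Per-feature importance values for each prediction in the batch.
    print(shap_values.values.shape)  # (50, 4)
    shap.plots.beeswarm(shap_values)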

Overall, MLOps Stacks is moving in the right direction for how machine learning should be utilized at scale. As machine learning models evolve, the infrastructure must also adapt to handle increasingly complex workflows, from data preprocessing to conditional retraining and deployment. This balance is key to making machine learning a reliable, scalable, and understandable part of production systems. To help more engineers adapt MLOps Stacks to their use cases, I wrote extensive internal documentation to smooth the learning curve for new users.

As the excitement around machine learning continues to grow, it's crucial to remember that solid software engineering practices are the backbone of successful machine learning systems. It's these foundational principles that ensure advanced models are reliable, maintainable, and capable of delivering consistent value in real-world applications.
