Machine Learning in Production: From Experimented ML Model to System
Abstract
Pritom Bhowmik
A production ML pipeline refers to the complete end-to-end workflow of a machine learning product that is ready for deployment. In recent years, companies have invested heavily in machine learning research, and developers keep building new tools and technologies to make ML more flexible. We now experience AI in most of the devices around us, from home appliances to cars. When developing an AI-powered product, it is vital to understand the crucial workflows of ML. Developing an ML model in academic research and building a production ML pipeline are entirely different scenarios. From framing the business problem and collecting data to deploying the model, the process is highly iterative. Most of the time, data scientists and machine learning engineers must deal with issues such as data shift, concept shift, and model decay.
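As a concrete illustration (not taken from the paper), the sketch below shows one way data shift could be detected in practice: comparing the training-time distribution of each feature against live serving data with a two-sample Kolmogorov-Smirnov test. The column names, the 0.05 threshold, and the detect_data_shift helper are hypothetical choices for this example.

```python
# Minimal sketch, assuming pandas-style DataFrames and SciPy are available.
# Flags features whose live (serving) distribution has drifted away from
# the distribution seen at training time.
from scipy.stats import ks_2samp

def detect_data_shift(train_df, live_df, columns, alpha=0.05):
    """Return (column, statistic, p_value) for columns that appear to have shifted."""
    drifted = []
    for col in columns:
        stat, p_value = ks_2samp(train_df[col].values, live_df[col].values)
        if p_value < alpha:  # distributions differ significantly
            drifted.append((col, stat, p_value))
    return drifted

# Hypothetical usage: alert or trigger retraining when drift is detected.
# shifted = detect_data_shift(train_df, live_df, ["age", "income"])
# if shifted:
#     print("Data shift detected:", shifted)
```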
Sometimes the complete ML architecture, or the way features are engineered in the dataset, needs to change. Working in such an environment becomes tedious without an understanding of the entire ML pipeline workflow. Although every ML project is different, a data scientist, ML engineer, or data engineer must understand the end-to-end workflow of the pipeline for the product they are developing. The challenge starts with a business problem: problem statements from different domains need to be solved with machine learning, and how the data will be collected is also a major concern. Data pre-processing, data validation, data monitoring, feature engineering, model selection, hyperparameter tuning, model optimization, model performance analysis and evaluation, bias detection, model deployment, and post-deployment analysis and monitoring are the crucial processes that make a model production-ready.
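As an illustrative sketch rather than the paper's own implementation, several of these steps (pre-processing, feature engineering, model selection, and hyperparameter tuning) can be chained into a single reusable object. The example below assumes tabular data and scikit-learn; the column names, the random forest choice, and the parameter grid are hypothetical.

```python
# Minimal sketch, assuming scikit-learn and a tabular dataset.
# Column names and hyperparameter values are illustrative only.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

num_features = ["age", "income"]        # hypothetical numeric columns
cat_features = ["country", "device"]    # hypothetical categorical columns

# Pre-processing and feature engineering as one reusable transformer
preprocess = ColumnTransformer([
    ("num", StandardScaler(), num_features),
    ("cat", OneHotEncoder(handle_unknown="ignore"), cat_features),
])

# Chain pre-processing and the model so the whole workflow is one object
pipeline = Pipeline([
    ("preprocess", preprocess),
    ("model", RandomForestClassifier(random_state=42)),
])

# Hyperparameter tuning over the full pipeline (grid values are illustrative)
search = GridSearchCV(
    pipeline,
    param_grid={"model__n_estimators": [100, 300],
                "model__max_depth": [None, 10]},
    cv=5,
    scoring="f1",
)
# search.fit(X_train, y_train)   # X_train/y_train come from the data step
# print(search.best_params_, search.best_score_)
```

Because the pre-processing is fitted inside the cross-validation loop, the same object can later transform live data at serving time without re-implementing the feature logic.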
The main contribution of this research paper is to present a complete picture of the end-to-end workflows of a production-ready ML pipeline. The process applies to any production ML project, although some workflows or steps may differ depending on the domain or the use case's demands. A proper ML pipeline architecture should be easily maintainable, scalable, and reusable, because a machine learning project becomes more and more complex as it grows. If the pipeline is well designed and automated, regular updates and scaling become easy for data scientists and ML engineers.
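To make the maintainability and reuse point concrete, the following is a minimal sketch (not the paper's architecture) of expressing pipeline stages as a small directed acyclic graph of named steps in plain Python; a real project would typically hand an equivalent structure to a data orchestrator such as Airflow or Kubeflow Pipelines. The step names and placeholder bodies are hypothetical.

```python
# Minimal sketch, no specific orchestrator assumed: each pipeline stage is a
# named function, and the DAG records which stages feed which, so individual
# steps can be updated or reused independently. A step receives the outputs
# of its dependencies as keyword arguments named after them.
from collections import defaultdict

STEPS = {}                        # step name -> callable
DEPENDS_ON = defaultdict(list)    # step name -> upstream step names

def step(name, depends_on=()):
    """Register a function as a named pipeline step with its dependencies."""
    def register(fn):
        STEPS[name] = fn
        DEPENDS_ON[name] = list(depends_on)
        return fn
    return register

@step("ingest")
def ingest():
    return {"raw": "rows pulled from the source system"}      # placeholder

@step("validate", depends_on=["ingest"])
def validate(ingest):
    return {"clean": ingest["raw"]}                            # placeholder

@step("train", depends_on=["validate"])
def train(validate):
    return {"model": "model fit on " + validate["clean"]}      # placeholder

def run(target):
    """Run a step after recursively running its upstream dependencies."""
    upstream = {dep: run(dep) for dep in DEPENDS_ON[target]}
    return STEPS[target](**upstream)

print(run("train"))   # executes ingest -> validate -> train in order
```

Because each stage is an isolated, named unit, changing how features are validated or how the model is retrained only touches one node of the graph, which is what keeps a growing pipeline maintainable.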
Index Terms: Machine Learning Pipeline, Neural Architecture Search, Principal Component Analysis, Model Optimization, Dimensionality Reduction, Directed Acyclic Graph, Data Orchestrators, Model Decay