Human Capital
The Human Capital Offering Portfolio focuses on helping organizations manage and sustain their performance through their most important asset: their people. Centered on five core issues, this Portfolio signals to the market that we see Human Capital as a topic critical to the C-Suite. As we go to market, we will show our clients that we serve more than HR organizations, from the CEO to the CFO and from the Risk Manager to the Business Unit leader, and that we deliver on our issues and help create value for our clients.
Position Summary
Level: Manager
As a Manager at Deloitte Consulting, you will oversee the technical delivery of enterprise-scale software solutions, lead cross-functional and global teams, and mentor junior team members. You will collaborate with stakeholders to understand functional requirements, support sales and proposal efforts, and drive end-to-end project delivery, including estimation and planning, to ensure successful outcomes.
Work you’ll do:
HC Forward is Deloitte’s innovation engine for Human Capital, integrating technology, data, and industry expertise to create scalable solutions and assets that extend client capabilities and drive ongoing value across all Human Capital offerings.
In this role, you will:
Design and implement end-to-end ML platforms enabling scalable model development, training, deployment, and monitoring.
Architect robust MLOps infrastructure supporting classical ML, deep learning, and GenAI workloads.
Build multi-tenant ML serving systems capable of handling thousands of predictions per second.
Define and maintain reference architectures for domain-specific ML use cases in Human Capital (HC).
Enable solutions across key domains: attrition prediction, talent optimization, skills inference, and payroll forecasting.
Standardize core ML platform components such as feature stores, model registries, and experiment tracking.
Establish and enforce ML engineering best practices including versioning, reproducibility, monitoring, and CI/CD.
Define and oversee model governance frameworks addressing explainability, bias detection, and regulatory compliance.
Guide system design for ML serving patterns (batch/online), A/B testing, and model lifecycle strategies.
Drive architecture reviews and technical decisions balancing performance, latency, cost, and maintainability.
Build POCs and prototypes to test new ML architectures, serving methods, and deployment approaches.
Develop reusable ML pipeline components and scalable model serving frameworks.
Debug production ML issues including model drift, performance degradation, and scaling bottlenecks.
Collaborate with cross-functional teams (Data Science, Product, Engineering) to deliver ML solutions.
Mentor engineers and contribute to thought leadership through strategy presentations, whitepapers, and conference talks.
The team:
Our Insights, Innovation & Operate Offering is designed to enhance key aspects of our clients' businesses by leveraging cutting-edge technology, data, and a blend of deep technical and human expertise. We innovate and deliver creative, industry-specific solutions that streamline operations and accelerate speed-to-value.
Qualifications:
Must Have Skills/Project Experience/Certifications:
12–15+ years in software engineering with significant hands-on experience in ML/AI systems.
4–5+ years in ML engineering or MLOps roles, with proven experience deploying models to production.
Strong foundation in machine learning algorithms, statistical methods, and end-to-end model development.
Demonstrated technical leadership, team mentoring, and ability to communicate with senior executives.
Proven record of deploying ML solutions in enterprise domains (e.g., Human Capital, Finance, Healthcare, Retail).
Hands-on experience with ML orchestration frameworks: Kubeflow, MLflow, or Metaflow.
Proficient with experiment tracking tools such as MLflow, Weights & Biases, or Neptune.
Experience in model serving using TorchServe, TensorFlow Serving, Triton, or BentoML.
Familiar with feature stores like Feast, Tecton, or cloud-native equivalents.
Expertise in deep learning frameworks (PyTorch, TensorFlow/Keras) for production-grade deployments.
Strong skills in classical ML libraries: Scikit-learn, XGBoost, LightGBM with model tuning and optimization.
Experience with forecasting libraries such as StatsForecast (Nixtla) and Darts, and with distributed computing tools (Spark, Ray, Dask).
Solid grasp of MLOps best practices: CI/CD, model monitoring, drift detection, and A/B testing frameworks.
Hands-on with LLM deployment: vLLM, TGI, RAG architecture, embedding models, prompt engineering, and token optimization.
Expert in Python with working knowledge of Kubernetes, Docker, cloud-native ML services, and distributed training.
Good to Have Skills/Project Experience/Certifications:
Experience with specialized domains: NLP, Computer Vision, Time Series, or Recommender Systems.
Exposure to advanced ML techniques: Reinforcement Learning, Federated Learning, Online Learning.
Familiarity with AutoML tools for model selection and hyperparameter tuning.
Hands-on experience with cloud ML platforms (SageMaker, Vertex AI, Azure ML) and relevant certifications (e.g., AWS ML Specialty).
Contributions to open-source ML projects or speaking engagements/publications in ML engineering.
Education:
BE/B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university
Location:
Bengaluru/Hyderabad/Pune/Chennai/Kolkata