Our Enterprise Performance team is at the forefront of enterprise technology, working across finance, supply chain, and IT operations to deliver holistic performance improvement and digital transformation. We support Deloitte client service teams of strategic advisors and architects, differentiated by our industry depth. We collaborate with leading insight providers and draw on experience in strategy, process design, technology enablement, and operational services to deliver solutions at the heart of the business.
Position Summary
Level: Consultant or equivalent
As an experienced Consultant at Deloitte Consulting Services, you will be responsible for independently delivering high-quality work products within agreed timelines. On a need basis, you will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements.
Work you’ll do:
As a Data Engineer, you will architect, develop, and maintain data pipelines and backend systems across cloud and on-premises platforms, supporting analytics, reporting, and advanced GenAI/LLM applications. Leveraging skills in Python (including AWS Lambda and Glue jobs), PL/SQL, database administration, and GenAI-specific data processing, you will be responsible for delivering clean, reliable, and well-governed data flows. Your work embeds standard validation, safety, and quality checks throughout pipelines, enabling enterprise-ready data at scale.
The team:
The Enterprise Performance team leverages deep industry knowledge, strong analytical skills, and practical approaches to address clients’ toughest business challenges. Our professionals, as part of the Supply Chain Network Operations (SCNO) service line within Enterprise Performance, focus on helping organizations achieve sustainable competitive advantage throughout their operations, spanning product development, planning, sourcing, manufacturing, logistics, and distribution. We excel at translating strategic objectives into tangible, measurable outcomes at the operational level. By aligning high-level goals with frontline execution, we ensure our clients realize real value and improved performance across every stage of their supply chain and operations.
Qualifications
Must Have Skills/Project Experience/Certifications:
- 4–7 years of experience in data engineering, Python (including Lambda/Glue), and database administration (with PL/SQL/SQL)
- Experience designing backend data flows and processing for GenAI or LLM applications (pipeline construction, cleaning, embedding/vectorization)
- Strong track record of building resilient, automated data systems with day-to-day operational safety built in
- Proven ability to troubleshoot and optimize across the data stack and collaborate cross-functionally
- Python: AWS Lambda, Glue jobs, custom parsing/transformation, GenAI data preparation
- GenAI Data Ops: data cleaning, transformation, embedding & vector store ingestion (Pinecone, FAISS, OpenSearch, etc.), RAG pipeline enablement
- ETL/ELT & Orchestration: Apache Airflow, AWS Glue, batch/streaming
- DBA & PL/SQL: Oracle, PostgreSQL, MySQL, SQL Server, Redshift, BigQuery, Snowflake (schema, tuning, PL/SQL, migration, admin)
- Programming/Scripting: Python, SQL, PL/SQL, Bash/PowerShell, PySpark
- Best Practices: input validation, parameterized queries, logging, anomaly detection, documentation
- Cloud: AWS, GCP, Azure (core admin for pipeline/data processing/GenAI backend)
- BE/B.Tech/MCA/M.Tech degree or equivalent from an accredited university
Location:
- Bengaluru/Hyderabad/Pune/Chennai/Kolkata
Shift Timings:
- 11 AM to 8 PM or 2 PM to 11 PM IST, as per business requirements