Artificial Intelligence & Engineering
AI & Engineering leverages cutting-edge engineering capabilities to help build, deploy, and operate integrated, verticalized sector solutions across software, data, AI, network, and hybrid cloud infrastructure. These solutions and insights are powered by engineering for business advantage, helping transform mission-critical operations.
Join our AI & Engineering team to help transform technology platforms, drive innovation, and make a significant impact on our clients’ success. You’ll work alongside talented professionals reimagining and re-engineering operations and processes that are critical to businesses.
Position Summary
Data Engineer – Data & AI
Level: Consultant (4–6 years)
In this critical role, you will design, develop, and optimize robust ETL/ELT pipelines that enable the seamless migration of on-premises data to AWS Cloud or Hadoop/Databricks ecosystems. We're seeking professionals passionate about the complete data journey, from ingestion and transformation to governance and the delivery of business value through efficient, modern data solutions.
What You’ll Do
- Design and implement scalable data pipelines leveraging AWS cloud-based services (see the PySpark sketch following this list).
- Migrate and modernize on-premises data into cloud platforms (AWS/Databricks/Hadoop).
- Analyze data sets for quality, governance, and business value, aligning deliverables with client requirements.
- Translate business needs into technical requirements and architecture designs.
- Lead, mentor, and coordinate junior team members, and collaborate with onsite/offshore teams.
- Participate actively in Agile ceremonies such as sprint planning and retrospectives.
- Conduct code reviews and engage in defect resolution, deployments, and process improvements.
- Collaborate with DevOps teams to integrate and automate data engineering workflows.
- Support knowledge sharing and team training initiatives.
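To give a concrete flavor of the pipeline and migration work above, here is a minimal PySpark sketch of an S3-to-S3 curation step. The bucket paths, column names, and casts are illustrative assumptions, not a prescribed implementation.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative sketch only: bucket paths and column names are hypothetical.
spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Ingest raw on-prem extracts that have been landed in S3 as CSV.
raw = spark.read.option("header", "true").csv("s3://example-landing-zone/orders/")

# Cleanse and type the data before it reaches the curated layer.
curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("order_id").isNotNull())
       .dropDuplicates(["order_id"])
)

# Partitioned Parquet keeps the output queryable by Athena or Redshift Spectrum.
curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-zone/orders/"
)
```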
Key Responsibilities
- Develop enterprise-grade data ingestion solutions with Python/Java.
- Design and modernize large-scale data platforms, integrating structured and unstructured data.
- Utilize Spark SQL, Databricks, and Python for data transformation tasks, creating scalable frameworks (a minimal, unit-testable example follows this list).
- Perform data loads, including historical and streaming data, into cloud data warehouses (e.g., Redshift).
- Implement and enhance data governance frameworks for robust, compliant data operations.
- Conduct data profiling, modernization, and migration activities by analyzing existing code and optimizing tables.
- Write maintainable, efficient code and achieve comprehensive unit test coverage.
- Foster stakeholder relationships, ensuring alignment on expectations and project goals.
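As one way the "scalable frameworks" and unit-test responsibilities above fit together, the sketch below wraps a common deduplication step in a reusable function with a pytest-style test. The function, column, and key names are hypothetical.

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def deduplicate_latest(df: DataFrame, key: str, ts_col: str) -> DataFrame:
    """Keep only the most recent record per key, a common step when
    reconciling historical loads with incremental feeds."""
    w = Window.partitionBy(key).orderBy(F.col(ts_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)
          .drop("_rn")
    )

# A pytest-style unit test keeps the transformation honest.
def test_deduplicate_latest():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame(
        [("a", 1, "old"), ("a", 2, "new"), ("b", 1, "only")],
        ["id", "version", "payload"],
    )
    out = deduplicate_latest(df, key="id", ts_col="version")
    assert {row["payload"] for row in out.collect()} == {"new", "only"}
```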
Must-Have Skills & Experience
- Demonstrated experience (2–3+ projects) with AWS Glue, Redshift, Spark, and related technologies (a Glue job skeleton appears after this list).
- Strong programming skills in PySpark and SQL.
- Proficient in AWS S3, Athena, and Git version control.
- In-depth knowledge of data engineering best practices: pipeline development, ETL/ELT, governance, and automation.
- Strong problem-solving, organizational, and communication skills.
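For context on the Glue experience expected above, the skeleton below shows the standard shape of a Glue PySpark job: read from the Data Catalog, remap columns, write Parquet to S3. The database, table, and path names are assumptions for illustration.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue boilerplate; only JOB_NAME is resolved here.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read via the Glue Data Catalog rather than hard-coded paths
# (database and table names are hypothetical).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="example_orders"
)

# Rename and retype columns on the way to the curated layer.
mapped = ApplyMapping.apply(
    frame=dyf,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Land the result as Parquet in S3 for downstream consumers.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-zone/orders/"},
    format="parquet",
)
job.commit()
```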
Nice-to-Have
- Experience with AWS SageMaker, Scala, or similar data science/ML tools.
Education
- Bachelor’s or Master’s in Computer Science, Engineering, or related field (BE/B.Tech/MCA/MSc (CS) or equivalent)
Location
Bengaluru or Hyderabad
About Our Team
Deloitte’s Data Engineering team delivers end-to-end solutions for cutting-edge Data and AI platforms. We empower clients to innovate, modernize, and scale their data and analytics capabilities, leveraging AI and GenAI technology to drive strategic business value.