Job Role: Data Engineer – Consultant
Offering customer-tailored services and deep industry insights, Deloitte Consulting LLP helps clients tackle their most complex challenges, enabling them to seize new growth opportunities, reduce costs, improve efficiencies, and stay ahead of customer demand. By developing and executing our clients’ strategic vision, we help them dramatically improve their business performance across a broad range of specialties – enterprise model design, global business services, outsourcing, real estate, and location strategy.
Our Deloitte Innovations and Platforms teams deliver innovative cloud-based solutions across a range of domains and industries (e.g., supply chain management, banking/insurance, CPG, retail). It is a fast-paced, innovative, and exciting environment. Our teams follow an agile development approach and work with the latest technologies across a wide range of cloud platforms, both commercial and open source. We build and bring to market solutions that we host and operate for our clients.
Data Engineer
As a Data Engineer, you will be responsible for designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for performance.
Work you’ll do
· Design, build, and support scalable data pipelines, systems, and APIs using the Python, Spark, and Snowflake ecosystems
· Use distributed computing frameworks (primarily PySpark and Snowpark), graph-based, and other cutting-edge technologies to resolve identities at scale
· Lead cross-functional initiatives and collaborate with multiple, distributed teams
· Produce high-quality code that is robust, efficient, testable, and easy to maintain
· Deliver operational automation and tooling to minimize repeated manual tasks
· Participate in code reviews and architectural decisions, give actionable feedback, and mentor junior team members
· Influence the product roadmap and help cross-functional teams identify data opportunities to drive impact
Team
Converge’s cloud-based suite of software solutions, combined with Deloitte’s integrated technology ecosystem, enables financial institutions to deliver the security, digital convenience, and personalization customers expect today. With regulatory experience in financial services, strategy, and implementation, we help our clients offer an exceptional customer experience, expand product offerings, acquire new customers, reduce customer acquisition costs, and deliver strong ROI on their technology investments. For more information visit: https://www2.deloitte.com/us/en/pages/consulting/solutions/converge/converge-prosperity.html
Prior Experience:
5 to 8 years of experience in data engineering.
Skills/Project Experience - Required:
· 2+ years of software development or data engineering experience in Python (preferred), Spark (preferred), Snowpark, Snowflake, or equivalent technologies
· Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.)
· Knowledge of and experience working with large datasets
· Proven track record of working with cloud technologies (GCP, Azure, AWS, etc.)
· Experience with developing or consuming web interfaces (REST API)
· Experience with modern software development practices, leveraging CI/CD and containerization technologies such as Docker
· Self-driven with a passion for learning and implementing new technologies
· A history of working collaboratively with cross-functional teams of engineers, data scientists, and product managers
Good to Have
· Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.)
· Experience with or interest in implementing graph-based technologies
· Knowledge of or interest in data science & machine learning
· Experience with backend infrastructure and how to architect data pipelines
· Knowledge of system design and distributed systems
· Experience working in a product engineering environment
· Experience with data warehouses (BigQuery, Redshift, etc.)
Location:
Hyderabad/Bengaluru/Gurgaon/Kolkata/Pune