Lead Data Engineer

Job Title: Lead Data Engineer
Location: Bangalore
Experience Level: 5+ Years
Employment Type: Full-Time

 

Job Summary

We are seeking an experienced and innovative Lead Data Engineer to drive our data systems and pipelines to the next level. You will play a pivotal role in designing, implementing, and optimizing end-to-end data solutions to support scalable and efficient data-driven decision-making processes. This position is ideal for someone passionate about large-scale data systems, cutting-edge cloud technology, and mentoring a high-performing data engineering team.

Key Responsibilities

  • Design, develop, and maintain scalable and performant ETL/ELT pipelines and workflows for structured and unstructured data.
  • Lead the architecture and implementation of data engineering solutions using Python, Airflow, PySpark, and SQL.
  • Build and manage data pipelines integrating with AWS services such as Athena, S3, EC2, and other cloud-native tools.
  • Work closely with data scientists, analysts, and cross-functional teams to ensure data quality, availability, and accessibility.
  • Optimize existing data architectures and query performance on massive data sets for analytical use cases.
  • Oversee data governance through documentation, monitoring, and automation of data workflows.
  • Mentor and guide junior data engineers in best practices, performance optimization, and cutting-edge tools.
  • Ensure compliance with data security, privacy regulations, and organizational policies.

 

Key Qualifications

  • Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, Data Science, or a related field.
  • Experience: 5+ years of experience in data engineering, including at least one year in a lead role.

 

Required Technical Skills

  • Proficiency in Python for data pipeline and systems development.
  • Strong experience with data orchestration tools, particularly Apache Airflow.
  • Advanced SQL expertise for data transformation, analysis, and query optimization.
  • Hands-on experience with PySpark for processing large-scale data.
  • Expertise in cloud technologies, specifically AWS, including Athena, S3, EC2, and related managed and serverless services.

 

Additional Skills & Tools

  • Familiarity with big data tools like Hadoop, EMR, or equivalent cloud-supported ecosystems.
  • Knowledge of modern data warehousing tools and architectures (e.g., Redshift, Snowflake, BigQuery).
  • Proficiency in CI/CD pipelines and DevOps practices for data engineering.
  • Experience with monitoring and debugging solutions for data workflows.
  • Strong problem-solving skills with an ability to work under tight deadlines.

 

Preferred Attributes

  • Previous experience in e-commerce, fintech, or related industries.
  • Leadership or mentoring experience, with the ability to collaborate effectively with cross-functional teams.
  • Strong communication skills with a track record of translating business needs into technical solutions.
  • Understanding of data security and privacy best practices.

 

What We Offer

  • A dynamic and inclusive work environment.
  • Competitive compensation and benefits package.
  • Opportunities for continuous learning and professional development.
  • The chance to work on exciting, large-scale projects leveraging state-of-the-art technologies.

If you’re passionate about building cutting-edge data solutions and enjoy working in a collaborative, fast-paced environment, we’d love to hear from you!

Apply today and become an integral part of our data journey.