Description:
We are looking for a Data Engineer with 1-3 years of experience and a strong command of Python, SQL, and dbt (Data Build Tool). The ideal candidate is passionate about data modeling, transformation, and building scalable data pipelines. You will play a critical role in designing and optimizing our data workflows, ensuring high performance, and enabling data-driven decision-making.
Key Responsibilities:
- Design, develop, and maintain efficient data pipelines using dbt, Python, and SQL.
- Work closely with data analysts and business teams to understand requirements and build optimized data models.
- Ensure data quality, integrity, and consistency across different environments.
- Optimize SQL queries and database performance for large-scale datasets.
- Collaborate with cross-functional teams to integrate data from multiple sources into a structured data warehouse.
- Implement ETL/ELT processes, transformation logic, and automation for reliable, repeatable data processing.
- Monitor and troubleshoot data pipelines, ensuring timely and accurate data delivery.
- Stay updated with the latest best practices and tools in data engineering and analytics.
Required Skills & Qualifications:
- 1-3 years of experience with Python, SQL, and dbt.
- Strong expertise in writing complex SQL queries, optimizing database performance, and handling large datasets.
- Hands-on experience with dbt for data transformation and modeling.
- Experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift is a plus.
- Familiarity with version control (Git) and CI/CD for data pipeline deployment.
- Strong analytical and problem-solving skills with a keen eye for detail.
- Good communication skills to work with technical and non-technical stakeholders.