Description:
We’re looking for a Data Engineer with strong ETL expertise to support enterprise data pipelines and batch-processing workflows. The role focuses on building, optimizing, and maintaining scalable data integration solutions using IBM DataStage, Control-M, and SQL within a structured governance framework.
Responsibilities:
- Design, develop, and maintain ETL pipelines using DataStage
- Manage and schedule workflows using Control-M
- Write and optimize complex SQL queries for large datasets
- Ensure data quality, integrity, and reliability across pipelines
- Work within the GCFR framework (or similar governance/compliance structures)
- Troubleshoot production issues and optimize performance
- Collaborate with data, analytics, and business teams to deliver reliable data solutions
Must-Have Requirements:
- Hands-on experience with IBM DataStage (ETL development)
- Strong experience with Control-M (job scheduling / orchestration)
- Advanced SQL skills (query optimization, joins, performance tuning)
- Experience working within ETL frameworks and data pipeline architecture
- Familiarity with GCFR framework (or similar regulatory/compliance environments)
- Experience supporting production data environments
- Experience with cloud data platforms (AWS, Azure, GCP)
- Exposure to modern data tools (Python, Spark, etc.)
- Background in regulated industries (e.g., finance, healthcare)