Job Summary
We are seeking a hands-on Senior Data Engineer who combines deep technical expertise in data pipelines, data modelling, and integration with the ability to guide standards and uplift delivery across the data ecosystem.
This role is accountable for designing and delivering scalable, reliable, and high-performance data solutions that support both analytics and application use cases. The successful candidate will play a key role in shaping the organization’s data architecture, ensuring that data is structured, governed, and accessible in a way that supports business decision-making and product capabilities.
What you'll do:
- Data Engineering Leadership, Standards & Quality
  - Act as a senior technical authority for data engineering, defining and enforcing best practices for pipeline design, data modelling, and data quality.
  - Contribute to and lead code reviews for SQL, ETL pipelines, and data transformations to ensure maintainability, performance, and consistency.
  - Establish standards for naming conventions, data structures, documentation, and version control across the data platform.
  - Mentor and guide team members on data engineering techniques, performance tuning, and design patterns.
  - Promote a culture of high-quality, testable, and observable data solutions.
- Data Pipeline Development & Integration
  - Design, develop, and maintain scalable data pipelines using Azure Data Factory (ADF), including pipelines, data flows, triggers, and parameterization.
  - Integrate data from APIs, flat files, databases, and cloud/on-prem systems.
  - Implement robust ingestion patterns for structured and semi-structured data (JSON, XML, CSV).
  - Ensure reliable, efficient, and secure movement of data across systems.
- Data Modelling & Transformation
  - Design and maintain both normalized (OLTP-aligned) and denormalized (analytical/reporting) data models.
  - Apply best practices in dimensional modelling (fact/dimension tables) as well as normalized relational design.
  - Implement transformations using SQL (T-SQL), stored procedures, and data flows to prepare analytics-ready datasets.
  - Ensure data models are scalable, reusable, and aligned with business requirements.
  - Manage historical data tracking, including slowly changing dimensions and auditability.
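Historical data tracking via slowly changing dimensions (the last bullet above) is most often implemented as SCD Type 2: changed rows are end-dated rather than overwritten, preserving an audit trail. A minimal sketch of the pattern, using Python with SQLite purely for illustration (the `dim_customer` table and its columns are hypothetical; in this role the equivalent would typically be T-SQL or an ADF data flow):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_id INTEGER,   -- business key
        city        TEXT,      -- tracked attribute
        valid_from  TEXT,
        valid_to    TEXT,      -- NULL while the row is current
        is_current  INTEGER
    )""")

def upsert_scd2(conn, customer_id, city, load_date):
    """Apply an SCD Type 2 change: end-date the current row if the
    tracked attribute changed, then insert the new version."""
    cur = conn.execute(
        "SELECT city FROM dim_customer "
        "WHERE customer_id = ? AND is_current = 1", (customer_id,))
    row = cur.fetchone()
    if row is not None and row[0] == city:
        return  # no change, nothing to do
    if row is not None:
        conn.execute(
            "UPDATE dim_customer SET valid_to = ?, is_current = 0 "
            "WHERE customer_id = ? AND is_current = 1",
            (load_date, customer_id))
    conn.execute(
        "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
        (customer_id, city, load_date))
    conn.commit()

upsert_scd2(conn, 42, "Cape Town", "2024-01-01")
upsert_scd2(conn, 42, "Johannesburg", "2024-06-01")  # change -> new version
```

After the second call, both versions of customer 42 exist, with only the latest flagged current.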
- Performance, Reliability & Scalability
  - Optimize SQL queries, ETL pipelines, and data storage for large datasets (millions of rows and beyond).
  - Implement indexing strategies, partitioning, and efficient data access patterns.
  - Ensure pipelines are resilient with proper error handling, retry logic, and monitoring.
  - Design solutions that minimize impact on transactional systems (clear separation of OLTP and reporting workloads).
  - Proactively identify and resolve performance bottlenecks.
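The retry logic mentioned above is commonly implemented as exponential backoff around transient failures. ADF activities have built-in retry settings; the sketch below illustrates the same pattern for custom pipeline code, with `flaky_extract` as a hypothetical stand-in for a flaky source:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=0.01):
    """Run fn, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the error to monitoring
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1x, 2x, 4x, ...

calls = {"n": 0}
def flaky_extract():
    # Hypothetical source that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return "batch-001"

result = with_retries(flaky_extract)
```

For this to be safe, the wrapped step must be idempotent, so a retried run does not double-load data.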
- Application & API Integration
  - Collaborate closely with backend (.NET) teams to support data access patterns and integration with application services.
  - Design and deliver aggregated datasets and data structures optimized for API consumption.
  - Support frontend (e.g., Vue.js) data requirements by enabling efficient querying, filtering, and pagination.
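Efficient pagination for API consumption (above) usually means keyset pagination, seeking past the last seen key instead of using OFFSET, so deep pages stay fast. A minimal sketch over SQLite (the `orders` table and page size are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 10.0) for i in range(1, 8)])

def fetch_page(conn, after_id=0, page_size=3):
    """Keyset pagination: seek past the last seen id instead of OFFSET,
    so query cost does not grow with page depth."""
    return conn.execute(
        "SELECT id, amount FROM orders WHERE id > ? "
        "ORDER BY id LIMIT ?", (after_id, page_size)).fetchall()

page1 = fetch_page(conn)                         # ids 1..3
page2 = fetch_page(conn, after_id=page1[-1][0])  # ids 4..6
```

The API would hand the last id of each page back to the client as an opaque cursor for the next request.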
- Collaboration & Continuous Improvement
  - Work closely with BI developers, analysts, and stakeholders to translate data requirements into scalable solutions.
  - Support CI/CD practices for data pipelines and deployments.
  - Stay current with evolving data engineering tools, patterns, and best practices.
Your Expertise:
- 10+ years in data engineering, ETL development, or related roles.
- Azure Data Factory (ADF): Strong hands-on experience with pipeline orchestration, data flows, triggers, parameterization, and monitoring.
- SQL / T-SQL: Advanced querying, performance tuning, indexing strategies, and stored procedure development.
- Data Modelling:
  - Strong experience with both normalized (3NF) and denormalized (star/snowflake) data models.
  - Experience designing scalable and maintainable data schemas.
- Data Platforms: Experience with Azure SQL, Synapse Analytics, or Data Lake architectures.
- ETL / ELT:
  - Strong understanding of data pipeline design, incremental loading, and transformation strategies.
  - Exposure to SSIS, Informatica, Talend, dbt, or similar tools.
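Incremental loading (above) typically relies on a high-watermark: each run extracts only rows modified since the last recorded watermark, then advances it. A minimal sketch of the pattern in Python over SQLite (the `src`, `tgt`, and `watermark` tables are hypothetical; ADF's copy activity supports the same pattern natively):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, updated_at TEXT)")
conn.execute("CREATE TABLE tgt (id INTEGER, updated_at TEXT)")
conn.execute("CREATE TABLE watermark (last_loaded TEXT)")
conn.execute("INSERT INTO watermark VALUES ('1970-01-01')")

def incremental_load(conn):
    """Copy only rows newer than the stored watermark, then advance it."""
    (wm,) = conn.execute("SELECT last_loaded FROM watermark").fetchone()
    rows = conn.execute(
        "SELECT id, updated_at FROM src WHERE updated_at > ?",
        (wm,)).fetchall()
    conn.executemany("INSERT INTO tgt VALUES (?, ?)", rows)
    if rows:
        conn.execute("UPDATE watermark SET last_loaded = ?",
                     (max(r[1] for r in rows),))
    conn.commit()
    return len(rows)

conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(1, "2024-01-01"), (2, "2024-02-01")])
first = incremental_load(conn)   # initial run loads both rows
conn.execute("INSERT INTO src VALUES (3, '2024-03-01')")
second = incremental_load(conn)  # next run loads only the new row
```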
- Data Warehousing: Solid knowledge of dimensional modelling (star/snowflake schemas) and data lifecycle management.
- Performance & Scalability:
  - Experience working with large-scale datasets and high-volume data pipelines.
  - Strong understanding of indexing, partitioning, and query optimization techniques.
  - Experience designing solutions that separate transactional and analytical workloads.
- Data Governance & Quality:
  - Experience implementing data validation, reconciliation, and quality controls.
  - Ability to enforce standards for data definitions, naming conventions, and documentation.
- Integration: Experience working with APIs and handling JSON/XML data formats.
- Familiarity with Power BI, Tableau, or similar platforms.
- Experience with Azure data services (e.g., Synapse, Fabric) is advantageous.
- Demonstrated experience delivering end-to-end data engineering solutions in production environments.
- Proven experience contributing to code reviews, enforcing standards, and improving engineering practices.
Education & Qualifications:
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent practical experience.