Key Responsibilities
Design and maintain scalable ETL/ELT pipelines using Python, SQL and Spark
Build modern data architectures (Data Lake / Lakehouse, Medallion); a brief illustrative sketch follows this list
Optimise cloud-based data platforms on AWS and/or Azure
Implement data quality, governance and security standards
Collaborate with Data Scientists, Analysts and Engineers to deliver reliable datasets
Support CI/CD, automation and performance monitoring of data pipelines
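For illustration only, a minimal PySpark sketch of the kind of bronze-to-silver Medallion step this role involves; the table names, columns and quality rule are hypothetical placeholders, not part of the actual stack.

```python
# Minimal bronze -> silver step on Databricks/Delta (illustrative names only).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-example").getOrCreate()

# Read raw events from a hypothetical bronze Delta table.
bronze = spark.read.table("bronze.raw_orders")

# Basic cleansing and a simple data quality filter before promoting to silver.
silver = (
    bronze
    .dropDuplicates(["order_id"])                # remove duplicate events
    .filter(F.col("order_ts").isNotNull())       # drop records failing a basic quality rule
    .withColumn("ingested_date", F.to_date("order_ts"))
)

# Persist the curated table in the silver layer.
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```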
Skills & Experience
5+ years’ experience in data engineering (Databricks experience required)
Strong Python and SQL skills
Advanced experience with Apache Spark (PySpark)
Workflow orchestration using Airflow, ADF, Prefect or similar (see the sketch after this list)
Experience with cloud data warehouses (e.g. Snowflake)
Hands-on experience with streaming technologies (Kafka, Kinesis or Event Hubs)
Familiarity with data quality frameworks and governance principles
Experience delivering data to BI tools (Power BI, Tableau, Looker)
Exposure to AI/ML or GenAI data use cases is advantageous
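Likewise for illustration, a minimal Airflow DAG sketch of the orchestration work referenced above; the DAG ID, schedule and task callables are hypothetical placeholders.

```python
# Minimal Airflow 2.x DAG: two dependent tasks on a daily schedule (illustrative only).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extract raw data")


def transform():
    print("transform to curated tables")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Extraction must complete before transformation runs.
    extract_task >> transform_task
```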
Tech Stack
Cloud: AWS and/or Azure
Platforms: Databricks, Delta Lake, Unity Catalog
Tools: Airflow/ADF, Git, CI/CD pipelines
Certifications (Advantageous)
AWS and/or Azure Data Engineering certifications
Databricks Data Engineer certifications