Job Summary
PBT Group is seeking a highly experienced Senior Data Engineer to lead the design, development, and optimisation of scalable data solutions. This role will play a key part in modernising legacy data environments and driving the transition toward a cloud-native, big data architecture.
The successful candidate will bring deep technical expertise across both traditional Microsoft-based data stacks and modern data engineering technologies, with a strong focus on building robust, high-performance data pipelines and enabling advanced analytics capabilities.
Key Responsibilities
Data Architecture & Engineering
- Design and implement scalable, high-performance data architectures.
- Lead the evolution from legacy data platforms to modern cloud-based solutions.
- Build and maintain robust ETL/ELT pipelines for large-scale data processing.
Data Modelling & Performance Optimisation
- Develop and optimise advanced data models aligned to business requirements.
- Improve data storage, access, and retrieval performance across platforms.
- Conduct performance tuning and troubleshooting of complex data pipelines.
Modern Data Platform Enablement
- Drive adoption of modern technologies including Python, PySpark, and Databricks.
- Support the transition from legacy tools (SSIS, SSRS, SSAS) to scalable big data frameworks.
- Contribute to the development of analytics-ready data environments.
Advanced Analytics & Big Data
- Enable advanced analytics use cases including machine learning pipelines and predictive modelling.
- Work closely with data scientists and analysts to deliver high-quality datasets.
Leadership & Mentorship
- Provide technical leadership and mentorship to data engineering team members.
- Promote best practices in data engineering, coding standards, and solution design.
Stakeholder Collaboration
- Engage with cross-functional teams to understand business needs and translate them into technical solutions.
- Act as a key interface between technical teams and business stakeholders.
Data Governance & Quality
- Ensure data quality, integrity, and compliance with governance standards.
- Implement best practices in data management, security, and regulatory compliance.
Technology Environment
Legacy Stack
- VBA
- MS SQL Server
- SSIS
- SSAS
- SSRS
Target / Modern Stack
- SQL
- Python
- PySpark
- Databricks
Core Technical Skills
- Advanced SQL development
- Python programming
- Spark (PySpark & Spark SQL)
- Data warehousing and data modelling
- Big data processing frameworks
Cloud & Data Platform Technologies
- AWS (S3, Lambda, Redshift, EMR, Glue, Athena)
- Event-driven architecture (SQS, SNS, EventBridge)
- API integrations (API Gateway)
- Data governance tools (Unity Catalog, Delta Lake)
- Security & networking (VPC, KMS, Secrets Manager)
Development & Tooling
- AWS CDK
- Docker
- Azure DevOps
- JIRA, Confluence
- Draw.io
Qualifications & Experience
- Bachelor’s or Master’s degree in Computer Science, IT, or a related field
- 10+ years’ experience in data engineering
- Proven experience designing and implementing complex data solutions
- Strong experience in both legacy and modern data environments
- Demonstrated leadership and mentoring capability
Preferred Experience
- Experience with real-time/streaming data pipelines
- Exposure to containerisation and orchestration (Docker, Kubernetes)
- Knowledge of data security and privacy best practices
- Relevant certifications in cloud or data engineering technologies
Ideal Candidate Profile
- Strong problem-solver with a strategic mindset
- Able to bridge legacy and modern data platforms effectively
- Excellent communication and stakeholder engagement skills
- Passionate about data, innovation, and continuous improvement
* To comply with the POPI Act, we require your permission to retain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT Group your consent to do so.
* If you have not received any feedback within 2 weeks, please consider your application unsuccessful.