Our client in the Banking Sector requires a Senior Data Analyst for a 12-month contract who will be responsible for overseeing junior data engineering activities and aiding in building the organization's data collection systems and processing pipelines.
Oversee the infrastructure, tools, and frameworks used to deliver end-to-end solutions to business problems through high-performing data infrastructure. Responsible for expanding and optimising the organization's data and data pipeline architecture, and for optimising data flow and collection to support data initiatives.
- Own and extend the business’s data pipeline through the collection, storage, processing, and transformation of large data sets. Oversee the process for creating and maintaining optimal data pipeline architecture, creating databases optimised for performance, implementing schema changes, and maintaining data architecture standards across the required databases.
- Data: Oversee the assembly of large, complex data sets that meet functional and non-functional business requirements, and align data architecture with business requirements.
- Product: Build analytics tools that utilise the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Create data tools that assist analytics and data science team members in building and optimising the Client’s position as an innovative industry leader.
- Product: Monitor existing metrics, analyse data, and lead partnerships with other Data and Analytics teams to identify and implement system and process improvements.
- Utilise data to discover tasks that can be automated, and identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, redesigning infrastructure for greater scalability, etc.
- Product: Develop ETL processes that convert raw data into formats usable by the team of data analysts and in dashboards and charts.
- Oversee large-scale Hadoop data platforms to support the fast-growing data within the business.
- Data: Responsible for overseeing the process for enabling and running data migrations across different databases and servers, and for defining and implementing data stores based on system and consumer requirements.
- Risk, Regulatory, Prudential and Compliance: Responsible for performing thorough testing and validation to ensure proper data governance and quality across EDO and the business.
- Data: Oversee, design, and develop algorithms for real-time data processing within the business, and create the frameworks that enable quick and efficient data acquisition.
- Deploy sophisticated analytics programs, machine learning, and statistical methods.
- Data: Manage the analysis of complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
- People: Liaise and collaborate with data analysts, data warehousing engineers, and data scientists to find and apply best practices within the Data and Analytics department and to define the business’s data requirements, ensuring that collected data is of high quality and optimal for use across the department and the business at large.
- Provide guidance in terms of setting governance standards.
- Strategy: Responsible for contributing to the continual improvement of the business’s data platforms through thorough observation and well-researched knowledge.
- Strategy: Oversee the activities of the junior data engineering teams, ensuring proper execution of their duties and alignment with the Client’s vision and objectives.
- Provide oversight and expertise to the Data Engineering team responsible for the design, deployment, and maintenance of the business’s data platforms.
- Required to draw up performance reports and strategic proposals from gathered knowledge and analysis results for senior EDO leadership.
- 8-10 years’ experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Postgraduate or Master’s degree in Information Technology or Information Studies.