Role Description This role provides an exciting opportunity to roll out a new strategic initiative within the firm: the Enterprise Infrastructure Big Data Service. The Big Data Developer serves as a development and support expert responsible for the design, development, automation, testing, support, and administration of the Enterprise Infrastructure Big Data Service. The role requires experience with both Hadoop and Kafka, and involves building and supporting a real-time streaming platform utilized by the data engineering community. The incumbent will be responsible for developing features, providing ongoing support and administration, and maintaining documentation for the service. The platform provides a messaging queue and a blueprint for integrating with existing upstream and downstream technology solutions.
Candidate Description The incumbent will have the opportunity to work directly across the firm with developers, operations staff, data scientists, architects, and business constituents to develop and enhance the big data service. Responsibilities include:
Development and deployment of data applications
Design and implementation of infrastructure tooling, including work on horizontal frameworks and libraries
Creation of data ingestion pipelines between legacy data warehouses and the big data stack
Automation of application back-end workflows
Building and maintaining back-end services built on a multi-service framework
Maintaining and enhancing applications backed by Big Data computation engines
Eagerness to learn new approaches and technologies
Strong problem-solving skills
Strong programming skills
Background in computer science, engineering, physics, mathematics, or an equivalent field
Experience with Big Data platforms (vanilla Hadoop, Cloudera, or Hortonworks)
Preferred: Experience with Scala or other functional languages (Haskell, Clojure, Kotlin, Clean)
Preferred: Experience with some of the following: Apache Hadoop, Spark, Hive, Pig, Oozie, ZooKeeper, MongoDB, Couchbase, Impala, Kudu, Linux, Bash, version control tools, continuous integration tools