Project Description
We are looking for a data engineer with experience developing data pipelines to join our team working on a new transaction monitoring application used across multiple businesses.
As a Data Engineer, you will be responsible for the ingestion of data into the system. You will start by extracting the existing ETL layer from legacy R code onto a robust, modern platform capable of serving the multiple data models being developed. You will then scale this up to support additional data sources and pipelines. You will work with the data scientists, the end consumers of the data, to ensure we are meeting their needs, and you will contribute to the team's strategy around deployment best practices.
This is an exciting opportunity to work on an important project, which will have a huge impact on our future architecture.
Responsibilities
• Working closely with a data-centric application that hosts algorithms to detect possible market abuse.
• Designing the ETL architecture as we extract it from an existing legacy application, then building out additional ETL layers to support the onboarding of new data sources.
• Working closely with quants/data scientists to ensure that they have the data necessary to add new algorithms, and that the data is of the quality and timeliness needed to support them.
• Acting as the subject-matter expert on data pipelines for the DevOps-focused team and for external stakeholders.
• Building a close relationship with clients and stakeholders to understand the use cases for the platform, and prioritising work accordingly.
• Working well in a multidisciplinary, DevOps-focused team, building close relationships with other developers, quants/data scientists and production support teams.
Skills
Must have
• You have experience building data pipelines with Python. You understand how these should be hosted and how to take them into production in a supportable way.
• You have experience working with message queues, traditional databases (SQL) and NoSQL databases.
• You have worked closely with data scientists before and may have experience creating pipelines that can serve ML/statistical algorithms.
• You have high development standards, especially for code quality, code reviews, unit testing, and continuous integration and deployment.
• You have a proven ability to interact with clients and deliver results, taking ideas through to production.
• You have experience working in fast-paced development environments.
• You have strong verbal and written communication skills.
Nice to have
• Experience with Spark or Scala
• Experience with Kafka or Solace
• Experience with kdb+
• Experience working with R code
Languages
English: B2 Upper Intermediate
Seniority
Senior
Relocation package
If needed, we can help you with the relocation process.