Lublin,  Wrocław,  Remote.PL
Remote work

About the vacancy

Our client is a successful fintech start-up based in London. The client provides a software platform and an open API for the banking industry. The open API allows fintech enterprises to access banking services, create digital wallets, connect them, receive, send, and convert funds, launch new cards, and administer loans.

The range of services and geographical presence of the client are growing fast.

We are looking for a data engineering specialist who will join the client’s data and analytics team and will work on the implementation of data warehouses/lakes, data pipelines, and data analytics models.

Building on the client's strong growth, there are exciting plans to further expand the Data Team, building out its capabilities for handling data at scale, both to run the business and to power data-driven insights and products for an expanding customer base.

You will be a key contributor to the design, delivery, continual assessment, and improvement of strategic, business-enabling components and customer-focused data products.

You will be part of a virtual team of Data Analysts, Engineers, and Scientists who collaborate on cross-functional initiatives to deliver insights and analysis across the company.

Technology stack: the client uses the AWS data and analytics stack, Databricks/Spark (PySpark), Fivetran, Looker, and Snowflake.

Responsibilities

  • Design and maintain data ingestions, integrations, calculations, pipelines, and feeds
  • Partner with Data Analysts and Data Scientists, Engineering, Operations, Sales, and Customer Success teams to make sure that the data is accurate and available to them
  • Examine and upgrade the client's change-management system, as well as their peer code review and automated numbers-testing capabilities
  • Follow the Information Security Management System (ISMS) recommendations to meet the client's standards
  • Build batch and streaming data pipelines and calculations in Databricks, coding in PySpark to create scalable, testable, and reusable components

Must have

  • Experience with Python and its package ecosystem; knowledge of Databricks, Notebooks, DBFS, and Databricks Connect
  • Ability to set up data pipelines that process data in different formats
  • Proven ability to create dashboards for internal and embedded external visualisation in tools like Looker, or knowledge of similar tools such as Tableau, Power BI, Sisense, Dash, etc.
  • Proficiency in SQL (Structured Query Language)
  • Logical problem solver and critical thinker
  • Strong focus on accuracy, validation, quality, integrity, and timeliness of data
  • Excellent communications and team skills
  • Spoken English

Would be a plus

  • Bachelor's degree in computer science, mathematics, or a similar field, or strong professional experience in this area
  • Interest in developing data-based UX/CX products
  • Ability to work with code reviews, GitHub/GitLab integration, automated regression/numbers testing for calculations, and change management
  • Experience with Snowflake data modeling and administration, and Fivetran ETL
  • Excellent knowledge of Google Sheets, Microsoft Excel, and LucidChart

Learn more about our equal employment opportunity policy

Work at DataArt is

People first

Our relationships with clients and colleagues are based on mutual respect, no matter what differences we may have.

  • Long-term partnership
  • Respect for individuality and freedom of expression
  • Flexible schedule, comfortable offices, and the ability to work from home
  • Market-driven compensation and health care
  • High-quality internal administrative services

Expertise

Get the opportunity to unleash your potential in DataArt's ecosystem

  • Highly qualified team
  • Communities and knowledge sharing
  • English classes
  • Internal educational system

Flexibility

Freedom to explore, opportunities to gain new experience and knowledge, and a constant willingness to change

  • Work contract with DataArt, not project-based employment
  • Flat structure
  • Minimum rules
  • Rules and policies change with context, while values stay the same
  • Easy movement among offices and opportunities for relocation

Trust

The ability to count on each other and the willingness to trust people lie at the heart of relationships at DataArt

  • Management via context, bottom-up decision making. We avoid micromanagement
  • Clear equal rules and policies
  • Fair management
  • No ranking against others, no regular reassessments; fair seniority assessment

Haven't found the right job offer?

Send us your CV and we will find one that is attractive and a good fit for you

Send CV
