Senior Data Engineer for Just Eat

Kyiv, Amosova, Ukraine

JUST EAT is the world leader in online takeaway ordering, processing millions of orders across 12 major markets and holding a clear #1 position in all of them. We have achieved this by building a growing network of over 63,000 restaurant partners and by continuing to invest heavily in our technology platform, our brand and our people. Just Eat is now firmly established as one of the UK’s leading consumer brands: we are all about choice, and we are helping the nation ‘find your flavour’ as we embark on creating the world's greatest food community.
Following our IPO in 2014, Just Eat is well established as one of the most successful, innovative and high-growth technology companies in Europe, with year-on-year order growth of over 50%.
Our people are at the heart of everything we do. Globally we have 2,500 Just Eaters. They embody our values: Make Happy, Razor Sharp and Big Hearted. We truly believe it’s our people who make Just Eat the great company it is. We have an incredibly open culture: we’re about making everyone feel comfortable sharing ideas and trying out new things.

Description

On behalf of Just Eat, Ciklum is looking for a Senior Data Engineer to join our team in Kyiv on a full-time basis.

The ideal candidate will be passionate about modern big data technologies and engineering practices, and will relish the challenge of building scalable, reliable solutions that support real-time analytics, advanced data science and critical operational projects reliant on data.

Responsibilities

The data engineering team’s role is to build a transformational data platform that democratizes data across Just Eat. Our team is built on the following principles:

  • Open Data: We ingest all data produced across Just Eat through batch and real-time pipelines and make it available to every employee in Just Eat. This data then drives analytics, business intelligence, data science and critical business operations (see the pipeline sketch after this list).
  • Self Service: We build tools, frameworks and processes to support self-service modeling and activation of data. Our goal is to empower our users to find, process and consume our data without barriers.
  • Single Truth: We build services that host all metadata about Just Eat’s data in a single store, promoting governance, a strong data culture and a single source of truth.
  • Intelligent Personalization: We build and maintain a machine learning platform which supports data scientists in developing and deploying ML models at production scale. This allows us to deliver insights, personalization and predictions to our customers.
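
To make the Open Data principle concrete, here is a minimal sketch of a streaming ingestion pipeline in Apache Beam (the SDK behind Dataflow), reading events from Pub/Sub and landing them in BigQuery. The project, topic, table and field names are purely illustrative, not our real ones.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # streaming=True because we consume continuously from Pub/Sub.
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            # Read raw order events from a (hypothetical) Pub/Sub topic.
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/orders"
            )
            # Decode each message from JSON bytes into a Python dict.
            | "ParseJson" >> beam.Map(lambda message: json.loads(message.decode("utf-8")))
            # Land the events in a BigQuery table for downstream analytics.
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.orders",
                schema="order_id:STRING,restaurant_id:STRING,total:FLOAT,created_at:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```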

Requirements

  • Great coding ability – We expect you to write well-tested, readable and performant production code to process large volumes of data (a sketch of what we mean follows this list). Our code is currently a mix of Scala and Python, and we love polyglots.
  • Experience working with the cloud – AWS, Azure or Google Cloud. We use Google Cloud for all our deployments, with a mix of services – Kubernetes, Dataflow, Pub/Sub etc.
  • Ability to contribute to architecture discussions and influence peers and stakeholders in making better decisions.
  • Inclination to collaborate and ability to communicate technical ideas clearly.
  • Someone who understands systems end to end, beyond the code itself – e.g. infrastructure, CI, deployment, monitoring and alerting – and is willing to take ownership of them.
  • Knowledge and understanding of the fundamentals of computing and distributed systems.
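
As a flavour of what “well-tested, readable” code means to us, here is a tiny, purely illustrative Python example – a small transformation with its test alongside. The function and field names are hypothetical, not taken from our codebase.

```python
from typing import Iterable, Iterator


def valid_orders(records: Iterable[dict]) -> Iterator[dict]:
    """Yield only records that have an order_id and a non-negative total."""
    for record in records:
        if record.get("order_id") and record.get("total", -1) >= 0:
            yield record


def test_valid_orders_filters_bad_records():
    records = [
        {"order_id": "a1", "total": 12.5},  # kept
        {"order_id": None, "total": 3.0},   # dropped: missing order_id
        {"order_id": "b2", "total": -1.0},  # dropped: negative total
    ]
    assert list(valid_orders(records)) == [{"order_id": "a1", "total": 12.5}]
```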

Desirable

  • We don’t expect people to already know our tech stack top to bottom, but knowledge of some of it, or of similar technologies, is beneficial.

What's in it for you

Our team is built on the following tenets:

  • Innovate: We are always on the lookout for new technologies to help us achieve our goals. Our team is always learning and growing; we’re inquisitive, and we’re not afraid of new tech or open-source tooling. We’re looking for like-minded engineers with a passion for keeping our codebase and infrastructure best in class.
  • Build for Scale: All our tools and components are built for scale and we use Kubernetes and other tools to help us scale automatically based on usage.
  • Serverless: We don’t manage servers where we can avoid it, and where we can’t, we treat them as cattle, not pets. We have multiple Kubernetes clusters hosting our Airflow infrastructure as well as numerous microservices and surrounding tooling (see the DAG sketch after this list). In addition, we take advantage of the great serverless products available in GCP, including BigQuery, Dataflow (Apache Beam), Pub/Sub, Datastore etc.
  • Infrastructure as Code: We practice a DevOps-first culture, with everyone in the team helping to deploy our infrastructure using Terraform, with CI/CD pipelines built in Jenkins.
  • Collaboration & Ownership: All code is owned by the team, and we have multiple avenues for collaboration – rotation, pairing and technical showcases. We also encourage team members to take ownership of their code and promote self-governance.
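
For a taste of the Airflow side mentioned above, below is a minimal sketch of the kind of DAG that might run on those clusters: a daily BigQuery rollup job. The DAG id, SQL, project and table names are all invented for illustration.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_order_rollup",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Aggregate yesterday's orders into a reporting table (hypothetical SQL and tables).
    rollup_orders = BigQueryInsertJobOperator(
        task_id="rollup_orders",
        configuration={
            "query": {
                "query": """
                    SELECT restaurant_id, COUNT(*) AS orders, SUM(total) AS revenue
                    FROM `example-project.analytics.orders`
                    WHERE DATE(created_at) = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
                    GROUP BY restaurant_id
                """,
                "destinationTable": {
                    "projectId": "example-project",
                    "datasetId": "reporting",
                    "tableId": "daily_order_rollup",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )
```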