This position is no longer open for applications

Data engineer II - PySpark / Kubernetes

Data engineer II - PySpark / Kubernetes (CR/516072) Diemen, Netherlands

Salary: EUR90 - EUR95 per hour

Senior Data engineer - contractor role 

For a client active in the tourism sector, we are currently looking for two experienced data engineers to support a migration program. Preferably, you have a background in Big Data and experience with both on-premise infrastructure and at least one cloud provider. The first role requires Elasticsearch; the second does not require Elasticsearch but does require knowledge of GCP.

Key Responsibilities:

  • Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
  • Solving issues with data and data pipelines, prioritizing based on customer impact, and building solutions that prevent them from happening again (root cause).
  • End-to-end ownership of data quality in our core datasets and data pipelines.
  • Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
  • Providing tools that enhance Data Quality company-wide.
  • Developing integrations between multiple applications and services, both on-premise and in the cloud.
  • Contributing to self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
  • Building effective monitoring of data and jumping in to handle outages.
  • Responsible for technical implementation and maintenance of data processing services and storage systems in line with the Data Governance Framework.

Tech Stack

  • PySpark
  • Kubernetes
  • Migrating from: Oozie, Spark, Hadoop 
  • AWS
  • Snowflake
  • Elasticsearch or Opensearch 

Practicalities

  • Start: Mid November
  • 40 hours per week
  • 12 month duration
  • Freelance or payroll through Huxley (HSM sponsorship can be provided if already eligible)
  • Hybrid working: Friday onsite 
