
Senior Data Platform Engineer

Europe
Data Science
Remote

We are looking for a Senior Data Platform Engineer to join our team and take part in the core development of a high-performance computational data platform for our US-based client. This role will suit an experienced data engineer or former DBA/DB developer with strong algorithmic thinking, deep knowledge of database internals, and a passion for building scalable systems for research-driven environments. 

What you’ll do: 

  • Manage release cycles for the platform’s next-generation version 
  • Implement and maintain robust CI/CD pipelines (Docker, Kubernetes, GitHub Actions, etc.) 
  • Extend the data framework to support multiple database engines such as PostgreSQL and MySQL (see the sketch after this list)
  • Optimize database interactions and query execution layers 
  • Improve core computational database functionality using Python and SQL 
  • Package production-ready releases and contribute to infrastructure automation 
  • Collaborate closely with backend and data teams to propose and implement innovative solutions 
  • Support integration of complex datasets, including development of efficient data translation mechanisms 
  • Contribute to documentation and knowledge sharing (internal docs and potential textbook chapters) 
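
To give a flavor of the multi-engine work, here is a minimal, purely illustrative sketch. It assumes SQLAlchemy as one possible abstraction layer; the connection URLs, table names, and the row_count helper are hypothetical, not the platform's actual API.

from sqlalchemy import create_engine, text

# Hypothetical connection URLs; real hosts and credentials would come from config.
ENGINE_URLS = {
    "postgresql": "postgresql+psycopg2://user:pass@localhost/demo",
    "mysql": "mysql+pymysql://user:pass@localhost/demo",
}

def row_count(engine_name: str, table: str) -> int:
    # One code path serves either engine; SQLAlchemy resolves dialect details.
    # (The table name is interpolated for brevity only; production code would validate it.)
    engine = create_engine(ENGINE_URLS[engine_name])
    with engine.connect() as conn:
        return conn.execute(text(f"SELECT COUNT(*) FROM {table}")).scalar_one()

The point is the shape of the work: one query layer, several engines, with dialect differences isolated behind the framework.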

What we’re looking for: 

  • 10+ years of experience as a Data Engineer, DB Developer, or DBA 
  • Strong theoretical background in Computer Science or Applied Mathematics 
  • Proficiency in Python and SQL 
  • Deep understanding of relational databases, query optimization, and recursive algorithms (see the sketch after this list)
  • Hands-on experience with PostgreSQL, MySQL, Oracle, or similar systems 
  • Experience building or optimizing large-scale ETL pipelines and data frameworks 
  • Familiarity with CI/CD, Docker/Kubernetes, and infrastructure-as-code (Terraform, etc.) 
  • Solid experience working with cloud services (AWS, GCP, etc.)
  • Upper-Intermediate English or higher — you'll work directly with an international client team 
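
If "recursive algorithms" sounds abstract, the sketch below shows the sort of thing we mean. It is self-contained (standard-library sqlite3), and the dependency table is made up purely for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dependency (parent TEXT, child TEXT);
    INSERT INTO dependency VALUES ('raw', 'staged'), ('staged', 'mart');
""")

# Recursive CTE: walk the pipeline's dependency chain starting from 'raw'.
rows = conn.execute("""
    WITH RECURSIVE downstream(name) AS (
        SELECT 'raw'
        UNION ALL
        SELECT d.child FROM dependency d
        JOIN downstream w ON d.parent = w.name
    )
    SELECT name FROM downstream
""").fetchall()
print([r[0] for r in rows])  # prints ['raw', 'staged', 'mart']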

Nice to have: 

  • Familiarity with tools like Apache Airflow, dbt, Apache Beam, or Spark 
  • Certification in GCP, AWS, or Databricks 
  • Contributions to open-source or academic data processing projects 
  • Experience with data modeling and versioned data warehouse patterns (e.g., slowly changing dimensions, snapshots) 
  • Understanding of scientific data workflows or research-driven domains 

Why you’ll enjoy working with us: 

  • Join a mission-driven team building tools that support world-class scientific research 
  • Work with modern data platforms, petabyte-scale systems, and high-throughput architectures 
  • Collaborate with engineers, data scientists, and researchers on meaningful challenges 
  • Flexible schedule and remote-friendly environment 
  • Competitive compensation and transparent, feedback-driven culture 

If you are looking for new opportunities, let's talk!

Honeycomb Software
Type: Outsource
Company size: < 10
Industry: IoT, E-commerce, Edtech/Education, Fintech/Banking, Embedded
Founded: 2015
