
Middle Big Data Engineer

Locations: Warszawa, Wrocław, Kraków, Białystok, Bydgoszcz, Gdańsk, Katowice, Kielce, Gdynia, Koszalin, Lublin, Łódź, Poznań, Sopot, Szczecin, Zielona Góra
Archived
PLN 18,000 – 22,000
Gross / month / B2B
Scala
remote

Required skills

— 3+ years of experience in data engineering, creating or managing end-to-end data pipelines on large, complex datasets
— Experience developing scalable software using big data technologies (e.g., Hadoop, Spark, Hive, Flink, Samza, Storm, Elasticsearch, Druid, Cassandra)
— Expertise in Scala
— Fluency with at least one dialect of SQL
— Level of English: Upper-Intermediate

Nice to have

— Experience with streaming platforms, typically built around Kafka
— Strong grasp of AWS data platform services and their strengths/weaknesses
— Strong experience using Jira, Slack, JetBrains IDEs, Git, GitLab, GitHub, Docker, Jenkins, Terraform
— Experience using Databricks

We offer

— Competitive compensation matched to your technical skills
— Long-term projects (12+ months) with great customers
— 5-day working week, 8-hour working day, flexible schedule
— Democratic management style & friendly environment
— WFH mode
— Annual paid vacation: 20 business days, plus unpaid vacation
— Paid sick leave: 6 business days per year
— 12 national holidays
— Corporate Perks (external training, English courses, corporate events/team buildings)
— Cozy office in the center of the city
— Professional and personal growth

Responsibilities

— Manage data quality and integrity
— Help build tools and technology that ensure downstream customers can trust the data they are consuming
— Work cross-functionally with the Data Science and Content Engineering teams to troubleshoot, process, or optimize business-critical pipelines
— Work with Core Platform to implement better processing jobs for scaling the consumption of streaming data sets

Project description

The client is an American e-book and audiobook subscription service with a catalog of one million titles; the platform hosts 60 million documents on its open publishing platform.
The platform:
— lets anyone share their ideas with the world;
— provides access to audiobooks;
— gives access to composers around the world who publish their music;
— incorporates articles from private publishers and international magazines;
— offers access to exclusive content.

Core Platform provides robust, foundational software that increases operational excellence in scaling apps and data. We focus on building, testing, and deploying apps and infrastructure that help other teams rapidly scale, interoperate, integrate with real-time data, and incorporate machine learning into their products. Working with our customers in Data Science and Content Engineering, and with our peers on the Internal Tools and Infrastructure teams, we bring systems-level visibility and focus to our projects.

The client’s goal is not total architectural or design perfection, but rather choosing the right trade-offs to strike a balance between speed, quality, and cost.


KITRUM
Outsource
50–100
Industry
Big Data, Cloud Computing, Fintech/Banking, Machine Learning, Mobile
Founded
2014
