We’re expanding our team and looking for a skilled Data Engineer to join a team working with our American client.
We're looking for a specialist with solid experience and understanding of big data technologies. In this role, you will be responsible for ingesting a high volume of data and transforming it into outputs to accelerate decision-making.
Requirements:
- 3 years of experience in Data Engineering
- 3 years of experience with Python
- Strong experience with AWS stack: Glue, Athena, EMR Serverless, Kinesis, Redshift, Lambda, Step Functions, Data Migration Service (DMS)
- Experience with Spark, PySpark, Iceberg, Delta Lake, and Aurora DB
Nice to have:
- BS+ in computer science or equivalent experience
- Experience with data modelling and with managing high-volume data transformation jobs under strict timing requirements
- AWS CodePipeline, Elastic Beanstalk, Azure DevOps, CloudFormation
- Readiness to learn new technologies
Responsibilities:
- Setting up data imports from external data sources (databases, CSV files, APIs)
- Building highly scalable pipelines to process high-volume data for reporting and analytics consumption;
- Designing data assets that support experimental and organizational processes, and are efficient and easy to work with.
What we offer:
Paid training programs, language courses;
Medical insurance, gym, lawyer support, etc.;
Jam-free roads to the office and back;
Comfortable working hours;
Great team events;
Variety of knowledge sharing opportunities.
Who we are:
We are Trinetix, one of the fastest-growing technology companies on the market and a community of people with a passion for finding solutions and fulfilling business needs. If you share this passion, we invite you to join our team and tackle those challenges together. We promise you a productive working environment, a stylish office with a creative and inspiring atmosphere, smart and friendly colleagues, and plenty of tricky challenges.