We're growing rapidly and are looking for a Data Engineer to join our team.
Job Responsibilities:
• Develop and maintain ETL pipelines using AWS services such as S3, Glue, Lambda, and Athena
• Write efficient Python and PySpark code for data processing and transformation
• Manage database systems and optimize SQL queries for performance
• Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions
• Implement best practices for data management, security, and compliance
• Monitor and troubleshoot data pipelines to ensure reliability and performance
Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, Operations Research, Mathematics, Statistics, or related field
• Strong proficiency in AWS services, including S3, Glue, Lambda, and Athena
• Advanced programming skills in Python and experience with PySpark
• Proficiency in SQL and database management
• Experience building and optimizing ETL pipelines in a production environment
Nice to Have:
• Experience with Tableau
• Experience with Airflow and Kafka
• Experience implementing machine learning tools in production environments
We offer:
Care for your health and well-being
• 100% paid sick leave;
• 20 working days of paid vacation;
• Medical support;
• Benefits cafeteria (budget for gym, dental care, psychological services, etc.);
• Corporate gifts & events.
Professional growth & development
• Competitive salary with annual raises;
• An annual budget for professional courses, conferences, workshops, and books;
• Internal training courses;
• Work with a team of professionals, with opportunities to share knowledge.
Corporate Culture
• A dynamic, results-oriented work environment;
• The ability to influence product development at an early stage;
• Openness to new ideas and approaches, healthy team discussions;
• No “red tape” culture.