SD Solutions is seeking an experienced Apache Flink Contributor/Developer to join our remote offshore team. In this role, you will be responsible for developing and optimizing real-time data processing pipelines using Apache Flink. You will collaborate with our engineering team to implement scalable solutions, contribute to the Apache Flink open-source community, and ensure the high performance and reliability of our data streaming applications.
- Development and Implementation:
- Investigate the Flink core project to meet Datorios’ product needs.
- Adapt the Datorios ecosystem to align with the Flink roadmap.
- Design, develop, and maintain Apache Flink-based data processing pipelines tailored to business needs.
- Collaborate with remote teams to gather requirements, design solutions, and deliver robust streaming applications.
- Open Source Contributions:
- Actively participate in the Apache Flink open-source project by developing new features, fixing bugs, and contributing to the community.
- Engage with the Flink community, attend virtual meetups, and stay updated on the latest advancements.
- Propose and implement enhancements to improve the efficiency and scalability of Flink-based applications.
- Performance and Reliability:
- Optimize Flink jobs for performance, ensuring low-latency processing and high availability in a distributed environment.
- Monitor and troubleshoot issues, providing quick resolutions to maintain system reliability.
- Work closely with the DevOps team to ensure smooth deployment and operation of Flink applications.
- Documentation and Support:
- Create and maintain comprehensive documentation for Flink applications and best practices.
- Provide technical support and mentorship to other team members working on Flink-based solutions.
- Contribute to knowledge sharing sessions and internal training programs.
Requirements:
- Deep knowledge of the Apache Flink project and its roadmap.
- Proven experience with Apache Flink, including development, deployment, and optimization in production environments.
- Strong programming skills in Java; proficiency in Python is a plus.
- Deep understanding of real-time data processing concepts, stream processing, and distributed systems.
- Experience integrating Flink with Apache Kafka, Hadoop, and cloud platforms (AWS, GCP, Azure).
- Minimum of 3 years of experience in software development focused on real-time data processing and analytics, including 3 years with Flink.
- Previous contributions to the Apache Flink project or other open-source projects are highly desirable.
- Experience with containerization (Docker) and orchestration tools (Kubernetes) is preferred.
- Strong problem-solving skills and ability to work independently in a remote environment.
- Excellent communication skills, with the ability to collaborate effectively with a distributed team.
- A proactive approach to learning and staying current with new technologies and industry trends.
Preferred Qualifications:
- Experience with Flink’s Table API and SQL API.
- Familiarity with other big data frameworks like Apache Spark.
- Understanding of CI/CD pipelines and automated testing in the context of Flink applications.