Responsibilities
- Build components of a large-scale data platform for real-time and batch processing, and own features of big data applications to fit evolving business needs
- Build next-generation, cloud-based big data infrastructure for batch and streaming data applications, and continuously improve its performance, scalability, and availability
- Contribute to best engineering practices, including the use of design patterns, CI/CD, code reviews, and automated testing
- Contribute to ground-breaking innovation and apply state-of-the-art technologies
- As a key member of the team, contribute to all aspects of the software lifecycle: design, experimentation, implementation, and testing
- Collaborate with program managers, product managers, SDETs, and researchers in an open and innovative environment
Skills Required
- 4+ years of professional programming experience in Java, Scala, Python, or similar languages
- 3+ years of big data development experience with technology stacks such as Spark, Hive, SingleStore, Kafka, and AWS big data technologies
- Knowledge of system and application design and architecture
- Experience building industry-grade, highly available, and scalable services
- Passion for technology, and openness to interdisciplinary work
Preferred Skills
- Experience processing large amounts of data at the petabyte level
- Demonstrated ability with cloud infrastructure technologies, including Terraform, Kubernetes (K8s), Spinnaker, IAM, etc.
- Experience with the widely used Spring framework and web frameworks (React.js, Vue.js, Angular, etc.), and good knowledge of the web stack: HTML, CSS, and Webpack
Education & Work Experience
- Bachelor's degree in a STEM field plus 5 years of relevant experience
About Korn Ferry
Korn Ferry unleashes potential in people, teams, and organizations. We work with our clients to design optimal organization structures, roles, and responsibilities. We help them hire the right people, and we advise them on how to reward and motivate their workforce while developing professionals as they navigate and advance their careers. To learn more, please visit Korn Ferry at www.Kornferry.com.