Are algorithms and high-quality code in your blood? Are Spark and Hadoop clusters your daily business? We have new challenges for you!
Your Responsibilities:
- Solve Big Data problems for our customers in all phases of the project life cycle
- Write code, test it, and deploy it to various environments (Cloudera, Hortonworks, etc.)
- Enjoy being challenged and solve complex data problems on a daily basis
- Be part of our newly formed team in Berlin and help drive its culture and ways of working
Job Requirements:
- Strong experience developing software using Java or a comparable language
- At least 2 years of experience with data ingestion, analysis, integration, and design of Big Data applications using Apache open-source technologies
- Strong background in developing on Linux
- Solid computer science fundamentals (algorithms, data structures and programming skills in distributed systems)
- Sound knowledge of SQL, relational concepts, and RDBMSs is a plus
- A degree in Computer Science (or an equivalent field) preferred, or comparable years of experience
- Ability to work in an English-speaking, international environment
We offer:
- Fascinating tasks and unique Big Data challenges in various industries
- Benefit from 10 years of delivering excellence to our customers
- Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager
- Work within the open-source community and become a contributor
- Fair pay and bonuses
- Work with cutting-edge equipment and tools