Job Description
Technical/Functional Skills:
1. Hortonworks Data Platform (HDP), based on Apache Hadoop, Apache Hive, and Apache Spark
2. Hortonworks DataFlow (HDF), based on Apache NiFi, Apache Storm, and Apache Kafka
3. Experience with high-scale, distributed development (hands-on experience with REST/JSON- or XML/SOAP-based APIs/web services)
4. Apache Phoenix, MariaDB
Experience Required:
• 8+ years of relevant work experience as a Hadoop Big Data Engineer/Developer
• Strong understanding of Java development, debugging, and profiling
• Experience configuring, administering, and working with Hortonworks Data Platform (HDP), based on Apache Hadoop, Apache Hive, and Apache Spark
• Experience configuring, administering, and working with Hortonworks DataFlow (HDF), based on Apache NiFi, Apache Storm, and Apache Kafka
• Experience designing and deploying large-scale production Hadoop solutions
• Ability to understand and translate customer requirements into technical requirements
• Experience installing and administering multi-node Hadoop clusters
• Strong experience implementing software and/or solutions in enterprise Linux or Unix environments
• Experience designing queries against data in a Hadoop environment using tools such as Apache Hive, Apache Phoenix, MariaDB, or others
• Experience with DevOps processes using Git and Maven, and with job scheduling using Control-M or Oozie
• Ability to develop architecture standards, best practices, and design patterns
• Significant previous work writing to network-based APIs, preferably REST/JSON or XML/SOAP
• Solid background in database administration and design, along with data modeling using star schemas, slowly changing dimensions, and/or change data capture
• Demonstrated experience implementing big data use cases and an understanding of standard design patterns commonly used in Hadoop-based deployments
• Excellent verbal and written communication skills
• Ability to drive projects with customers to successful completion
• Skill in writing and producing technical documentation and knowledge base articles
• Experience contributing to pre- and post-sales processes, helping sales and product teams interpret customers' requirements
• Ability to keep current with Hadoop Big Data ecosystem technologies
Roles & Responsibilities:
• Contribute to pre- and post-sales processes, helping sales and product teams interpret customers' requirements
• Implement big data use cases, applying standard design patterns commonly used in Hadoop-based deployments
Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company.