Placement papers | Freshers Walkin | Jobs daily: Big Data Lead Engineer - Hadoop/Kafka (5-12 yrs)



Big Data Lead Engineer - Hadoop/Kafka (5-12 yrs)

Hello Mohamed Bilal,

Here's an interesting job that we think might be relevant for you.

click here to apply

Our Big Data capability team is looking for a sound technologist who can lead the team in solving complex analytics problems. If you are an exceptional developer who loves to push the boundaries to solve complex business problems with innovative solutions, we would like to talk with you.

You will be challenged to:

- Understand the product and provide end to end design and implementation

- Own a module, fully or in part, and take it from design and development through deployment, production release, and support

- Perform product and technology assessments whenever needed

- Develop analytics tools in Big Data and distributed environments; scalability will be key

- Liaise with other team members to conduct load and performance testing on modules

- Visualize and evangelize next-generation infrastructure in the Big Data space (batch, near-real-time, and real-time technologies)

- Provide strong technical expertise (performance, application design, stack upgrades) to lead Platform Engineering

- Provide technical leadership and be a role model to data engineers pursuing technical career path in engineering

- Continuously learn, experiment, apply and contribute towards cutting edge open source technologies and software paradigms

- Provide/inspire innovations that fuel the growth of Flutura

The core competencies you should possess:

Technical/Functional Skills:

- 5+ years of experience in Big Data platform implementation, including 3+ years of hands-on experience implementing and performance-tuning Hadoop/Spark
- 5+ years of experience delivering technology solutions to various customers
- Experience creating Data Architecture blueprints for Big Data, NoSQL, and Streaming Analytics
- Metadata management, data lineage, data governance, especially as related to Big Data
- Experience with Large Scale distributed computing and Big Data methods, such as Spark, Spark Streaming
- Experience in Object-Oriented Design and Development with Java, Scala, or C++
- Deep understanding of Apache Hadoop 1/2 and the Hadoop ecosystem. Experience with one or more relevant tools (Kafka, Oozie, Zookeeper, HCatalog, Solr, Avro)
- Familiarity with one or more SQL-on-Hadoop technologies (Hive, Impala, Spark SQL, Presto)
- Knowledge and experience working with database platforms including as many of these as possible: MongoDB, Redis, Redshift, Postgres, Neo4J, Cassandra
- Current hands-on implementation experience required
- IoT and Time Series experience would be valued
- Ability to travel to client locations to deliver professional services when needed

Desired Characteristics:

- Effective problem identification and solution skills
- Proven analytical and organizational ability
- Strong coordination and interpersonal skills to handle complex projects
- Ability to lead all requirement gathering sessions with the Customer

- Strong oral and written communication skills
- Strong leadership skills

Education:

- Qualification: BE/B.Tech, ME/M.Tech, MS, MCA (with an aggregate of 75% and above)
- Stream (preferable): Computer Science, Information Technology, Information Science



click here to apply

PS: Please ignore this email if you have already applied or are not interested in this job.

Best regards,
Team hirist.com
info@hirist.com

_________________________________________________________
Copyright © 2017 hirist.com. All rights reserved.
Sent by hirist.com | 6th Floor, Kings Mall, Sector - 10, Rohini, Delhi-85

You are receiving this email because you are registered to hirist.com.
If you don't want to receive emails like these anymore, you can unsubscribe.