Here's an interesting job that we think might be relevant for you:
Click here to apply
Job Description - Hadoop Developer
1. Advanced working SQL knowledge and experience with relational databases and query authoring.
2. Experience building and optimizing big data pipelines, architectures, and data sets.
3. Strong analytical skills related to working with unstructured datasets.
4. Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
5. A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
6. Working knowledge of message queuing, stream processing, and highly scalable "big data" data stores.
7. Experience with big data tools: Hadoop, Spark, Kafka, etc.
8. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
9. Experience with stream-processing systems: Storm, Spark Streaming, etc.
10. Experience with object-oriented/functional scripting languages: Python, Scala, etc.
Roles & Responsibilities:
- Handled importing of data from various sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
- Created Hadoop MapReduce programs to process data in a distributed way.
- Developed Sqoop scripts to import all database objects and data from existing MySQL databases.
- Analyzed data by writing Hive queries (HiveQL) and running Pig scripts (Pig Latin) for data ingestion and egress.
- Good experience in Hive partitioning and bucketing, performing different types of joins on Hive tables, and implementing Hive SerDes such as JSON and ORC.
- Expertise in working with the Hive data warehouse ecosystem: creating tables, performing DML operations, and writing HiveQL queries to meet requirements.
- Monitored the production environment and resolved abended (abnormally terminated) jobs.
- Performed unit testing using MRUnit and prepared unit test cases and test results.
- Created technical specifications for new/CRQ applications.
- Created mapping documents for new applications and for CRQs (Change Requests).
- Updated the deployment template/guide for code promotion to QA.
- Handled data rejection during data loads, capturing exception and rejection records.
Click here to apply
PS: Please ignore this email if you have already applied or are not interested in this job.
Best regards,
Team hirist.com
info@hirist.com
_________________________________________________________________________________________________________
You are receiving this email because you are registered to hirist.com as a Jobseeker.
If you don't want to receive emails like these anymore, you can unsubscribe.
Copyright © 2020 hirist.com. All rights reserved.
Sent by hirist.com | 6th Floor, Kings Mall, Sector - 10, Rohini, Delhi-85