Job Description
Do you want to be part of a cutting-edge solutions team that brings together IoT and data? Would you like to join a team focused on increasing client satisfaction by delivering results alongside high-performing team members? As a Big Data Engineer, you will design and develop big data utilities that automate data acquisition, ingestion, storage, access, and transformation at volumes that scale to petabytes. You will be part of a high-octane, multi-disciplinary group working across Data Acquisition, Ingestion, Curation, and Storage teams. You will be a hands-on developer partnering with team leads, technical leads, solution architects, and data architects.
Responsibilities
- Design and build data services that auto-create Hive structures, HDFS directory structures, and partitions from source definitions using a configuration- and metadata-driven framework (see the first sketch after this list)
- Design and build custom code for audit, balance, and controls, data reconciliation, and entity resolution; create custom UDFs as needed for complex data transformations (see the second sketch after this list)
- Create technical specifications and unit test plans/cases, and document unit test results
- Perform integration testing for end-to-end data pipelines
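The metadata-driven framework in the first bullet can be pictured as code that turns a source definition into Hive DDL and HDFS paths. Below is a minimal Python sketch of that idea; the source-definition layout and every database, table, column, and path name in it are hypothetical, invented for illustration only.

```python
# Minimal sketch of a metadata-driven Hive DDL generator.
# The source-definition layout and all names below are hypothetical.

SOURCE_DEFINITION = {
    "database": "sales",
    "table": "orders",
    "hdfs_root": "/data/lake/sales",
    "columns": [("order_id", "BIGINT"), ("amount", "DECIMAL(18,2)")],
    "partitions": [("load_date", "STRING")],
}

def build_hive_ddl(defn):
    """Render a CREATE EXTERNAL TABLE statement from a source definition."""
    cols = ",\n  ".join(f"{n} {t}" for n, t in defn["columns"])
    parts = ", ".join(f"{n} {t}" for n, t in defn["partitions"])
    location = f"{defn['hdfs_root']}/{defn['table']}"
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS "
        f"{defn['database']}.{defn['table']} (\n  {cols}\n)\n"
        f"PARTITIONED BY ({parts})\n"
        f"STORED AS PARQUET\n"
        f"LOCATION '{location}';"
    )

if __name__ == "__main__":
    # A real framework would run this DDL through beeline or a
    # HiveServer2 client and create the HDFS directories beforehand.
    print(build_hive_ddl(SOURCE_DEFINITION))
```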
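The custom UDFs in the second bullet are most commonly written in Java for Hive; since the posting also calls for Python, one alternative is a Hive TRANSFORM streaming script. The sketch below normalizes a name column so duplicate entities compare equal; the table, column names, and normalization rule are all hypothetical.

```python
#!/usr/bin/env python3
# Minimal sketch of a Hive streaming transform for entity resolution.
# Hypothetical usage from Hive (all names illustrative):
#   ADD FILE normalize_names.py;
#   SELECT TRANSFORM(customer_id, raw_name)
#   USING 'python3 normalize_names.py' AS (customer_id, clean_name)
#   FROM staging.customers;
# Hive streams rows to stdin as tab-separated fields; we emit the same.
import sys

def normalize(name):
    """Collapse whitespace and casing so duplicate entities match."""
    return " ".join(name.split()).upper()

for line in sys.stdin:
    customer_id, raw_name = line.rstrip("\n").split("\t", 1)
    print(f"{customer_id}\t{normalize(raw_name)}")
```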
Required Technical and Professional Expertise
- 2+ years of deep working knowledge of open-source tools such as Hive and Sqoop
- 2+ years of coding experience with SQL and Linux Bash/shell scripting
- 2+ years of coding experience with Java, MapReduce, Pig, and Python
- 4+ years of hands-on experience with and a deep understanding of Linux scripting
- Familiarity with Big Data concepts, Data Lake development, and Business Intelligence and data warehousing development processes and techniques
- Experience working with Cloudera Manager, Navigator, Impala and security tools (Sentry, Gazzang)
Preferred Technical and Professional Experience
- Experience or knowledge of the following: Spark, Kafka, Flume, MySQL, NoSQL databases (Cassandra, Neo4j, MongoDB), Hortonworks, BigInsights, Cloudera
EO Statement
IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.