Placement papers | Freshers Walkin | Jobs daily: Data Engineer - Data Platform Department at Rakuten (Tokyo, Japan)




Position Summary:


The Data Platform Department (DPD) is building the next-generation enterprise data platform that will change the way Rakuten users find, query and analyse data at massive scale, and become the hub of analytical innovation. The next-generation platform aims not only to automate the processes involved in ingesting, discovering, governing and querying data, but also to support Rakuten's drive to become completely data-driven across more than 70 services.


We are looking for highly technical Data Engineers to support the development of a data platform that handles both batch and streaming data sets at petabyte scale. As a Data Engineer, you will create the building blocks of the data platform that support developers in building highly performant data pipelines using technologies such as Apache Hive, Apache Atlas, Apache Oozie, Apache Flink, Apache Kafka and many more.



Responsibilities:




  • Work closely with the delivery team across the development lifecycle to understand the requirements, support the design of the high-level architecture and take responsibility for the low-level architecture.


  • Create the building blocks (e.g. JDBC/File Consumers, Stream Consumers) to streamline development of larger data pipelines on the multi-tenant platform.


  • Contribute to the technical strategy and architecture for key product modules, assembly and testing.


  • Work with Testers to troubleshoot and debug code.


  • Operate in an Agile/Scrum environment to deliver high-quality software against aggressive schedules.
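To illustrate the "building blocks" responsibility above: such components are typically small, reusable pieces that larger pipelines compose. The sketch below is purely hypothetical (the class name, interface and batching policy are assumptions, not Rakuten's actual code); it shows the general shape of a generic batch-consumer building block in Java, the language the role requires.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// Hypothetical sketch of a reusable pipeline building block: reads records
// from any source, applies a per-record transform, and hands fixed-size
// batches to a downstream sink (e.g. a file writer or a Kafka producer).
public class BatchConsumer<I, O> {
    private final Function<I, O> transform;
    private final int batchSize;

    public BatchConsumer(Function<I, O> transform, int batchSize) {
        this.transform = transform;
        this.batchSize = batchSize;
    }

    // Consume every input record, emitting transformed batches to the sink.
    public void run(Iterable<I> source, Consumer<List<O>> sink) {
        List<O> batch = new ArrayList<>();
        for (I record : source) {
            batch.add(transform.apply(record));
            if (batch.size() >= batchSize) {
                sink.accept(new ArrayList<>(batch)); // hand off a copy
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            sink.accept(batch); // flush the final partial batch
        }
    }
}
```

A real platform component would replace the in-memory `Iterable` with a JDBC cursor, file reader or Kafka consumer and add error handling, but the compose-transform-batch shape stays the same.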



Minimum Qualifications:




  • Bachelor's degree in Computer Science, Electrical Engineering, or related technical field

  • 5+ years of experience developing software in Java

  • Experience in parallel programming, scheduling, SQL and RDBMS concepts and best practices

  • Extensive experience with distributed data storage and large-scale data processing using Hadoop, Hive, Oozie, Spark

  • Extensive experience in ETL/ELT and business intelligence

  • Experience with data warehousing systems and techniques for large enterprises

  • Experience with streaming technologies such as Kafka, Flume and Kinesis

  • Experience with web service technologies and integration patterns




Preferred Qualifications:


  • Experience in using Hortonworks HDP and/or HDF platforms




  • Experience in Data Governance




via developer jobs - Stack Overflow
 
