Lalamove is disrupting the logistics industry by connecting customers and drivers directly through our technology. We offer customers a lightning-fast and convenient way to book delivery and moving services whether they are at home, at work or on the go. People talk about O2O; we live it. Now in our fourth year as a start-up, operating in Hong Kong, China, Taiwan, Thailand, Singapore, the Philippines and Vietnam, our aspirations don't stop there, as our model has the ability to transform how goods move in any city worldwide.
As a Data Engineer at Lalamove, you will be part of the growing data team, which supports the functional departments at headquarters as well as over 100 cities in this highly data-driven company. You will be responsible for building the technology that moves and transforms the data used for business, operational and strategic decisions. The ideal candidate will have a passion for optimizing data systems and building them from the ground up.
What we seek:
- Quick learner: you enjoy working with data systems and can pick up new technologies and frameworks quickly
- Problem solver: you think critically and are willing to find creative solutions to difficult problems
- High autonomy: you are self-organized and passionate, with a can-do attitude, and you own projects end to end
What you'll need:
- At least 2 years of work experience in a data engineering or data infrastructure role.
- Experience with message queuing and stream processing using tools such as Amazon Kinesis, Apache Kafka or Spark Streaming (see the streaming sketch after this list).
- Experience working with the Hadoop ecosystem.
- Experience with MPP databases such as Redshift, and with both normalized and denormalized data models (see the denormalization sketch after this list).
- Comfortable in a Linux environment.
- Good command of English; fluency in Mandarin is a plus.
- Proficiency in SQL and Python.
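
To give a flavour of the stream-processing work named above, here is a minimal sketch in Python: a Kafka consumer that micro-batches delivery events into a warehouse staging table. This is not Lalamove's actual pipeline; the topic, consumer group, table and connection string are hypothetical, and kafka-python plus psycopg2 are just one plausible stack.

    # A minimal sketch, not Lalamove's actual pipeline: consume delivery
    # events from a Kafka topic and micro-batch them into a warehouse
    # staging table. Topic, group, table and DSN are all hypothetical.
    import json

    import psycopg2
    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "order-events",                        # hypothetical topic
        group_id="etl-loader",                 # hypothetical consumer group
        bootstrap_servers=["localhost:9092"],
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
        enable_auto_commit=False,              # commit only after a batch lands
    )

    conn = psycopg2.connect("dbname=warehouse user=etl")  # placeholder DSN

    BATCH_SIZE = 500
    batch = []

    for message in consumer:
        event = message.value
        batch.append((event["order_id"], event["city"], event["status"]))
        if len(batch) >= BATCH_SIZE:
            with conn.cursor() as cur:
                cur.executemany(
                    "INSERT INTO staging.order_events (order_id, city, status) "
                    "VALUES (%s, %s, %s)",
                    batch,
                )
            conn.commit()
            consumer.commit()  # offsets advance only once the rows are safe
            batch.clear()

In practice, bulk loads into Redshift normally go through COPY from S3 rather than row-by-row inserts; the sketch is only meant to show the consume-batch-commit pattern.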
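
Likewise, the normalized-versus-denormalized point is easiest to see with a small example. On a columnar MPP warehouse such as Redshift, analysts typically query a flat table pre-joined from normalized source tables; every table and column name below is invented for illustration.

    # Illustration only: materialize a denormalized reporting table from
    # normalized source tables. All identifiers here are hypothetical.
    import psycopg2

    DENORMALIZE_SQL = """
    CREATE TABLE reporting.deliveries_flat AS
    SELECT
        o.order_id,
        o.created_at,
        c.city_name,     -- folded in from the cities dimension
        d.driver_name,   -- folded in from the drivers dimension
        o.status
    FROM orders o
    JOIN cities  c ON c.city_id   = o.city_id
    JOIN drivers d ON d.driver_id = o.driver_id;
    """

    conn = psycopg2.connect("dbname=warehouse user=etl")  # placeholder DSN
    with conn.cursor() as cur:
        cur.execute(DENORMALIZE_SQL)
    conn.commit()

Wide, pre-joined tables like this trade storage for scan speed, which is usually the right trade-off on columnar MPP systems.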
via developer jobs - Stack Overflow