


Data Engineer/Architect at moovel Group GmbH (Berlin, Germany)

Hi,


We are Nataliia, Ayoub, Hasan, Sabbir and Philipp, the BI team at moovel, consisting of an Agile Coach, a Data Analyst, a Data Architect and a BI Specialist. We build business value for our finance, marketing and product teams, as well as for the C-level of moovel. In the past months we have set up a data warehouse on Snowflake and visualized data in Tableau. Going forward, we will also be responsible for the data lake, and we will perform day-to-day production Redshift data warehouse tuning, troubleshooting, data migration, and backup & recovery. To support and strengthen our team we are looking for YOU - a Data Engineer / Architect for BI who can take BI to the next level.



But first, who are we and what is our goal?


The company’s goal is to radically simplify individual mobility and to build products and services which enable true flexibility. moovel is part of a large global corporation (Daimler AG), but at the heart of it we are a start-up. We act as a partner for cities, transport networks and customers with the goal of making cities smarter and creating an operating system for urban mobility that provides access to appropriate mobility options in urban areas and paves the way for the future with autonomous vehicles. Currently, 250 employees work in small creative teams in Germany and the US at four locations - Hamburg, Berlin, Stuttgart and Portland.



What the new role will involve


From this year onwards, we will concentrate strongly on internationalisation and on working with our US colleagues, which will of course involve many strategic decisions.

We are looking for YOU (based in Berlin), a Data Engineer / Architect who will be responsible for building and maintaining our data infrastructure so that stakeholders across the company can gain insights from the data we collect. What exactly does this mean?

Defining data schemas, handling and processing large volumes of semi-structured and structured data, and growing our analytics capabilities with faster, more reliable data pipelines will be central tasks. Further important tasks are: developing a real-time pipeline for events; working with data engineers, analysts, backend engineers and product owners to find the best way to leverage the data in our data warehouse; and transforming multiple data sources into valuable, reliable components of our data lake.
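
To give a flavour of the real-time event work, here is a minimal sketch (in Python, with boto3) of a Kinesis consumer that lands event batches in S3. The stream name, bucket name and record format are illustrative assumptions, not our actual setup:

    # Minimal sketch: poll one Kinesis shard and flush its events to S3
    # in small batches. All names below are placeholders.
    import json
    import time

    import boto3

    kinesis = boto3.client("kinesis")
    s3 = boto3.client("s3")

    STREAM = "mobility-events"      # hypothetical stream name
    BUCKET = "datalake-raw-events"  # hypothetical bucket name

    def consume(shard_id: str) -> None:
        """Read records from one shard and write them to S3 as JSON lines."""
        iterator = kinesis.get_shard_iterator(
            StreamName=STREAM, ShardId=shard_id, ShardIteratorType="LATEST"
        )["ShardIterator"]
        while iterator:
            resp = kinesis.get_records(ShardIterator=iterator, Limit=500)
            events = [json.loads(r["Data"]) for r in resp["Records"]]
            if events:
                body = "\n".join(json.dumps(e) for e in events)
                s3.put_object(
                    Bucket=BUCKET,
                    Key=f"events/{int(time.time())}.json",
                    Body=body,
                )
            iterator = resp["NextShardIterator"]
            time.sleep(1)  # stay well under the per-shard read limits

A production version would of course handle resharding, checkpointing and retries - for example via the Kinesis Client Library or Kinesis Firehose - but the shape of the problem is the same.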
 


What you need to bring 


We are looking for a motivated, proactive and results-driven colleague who is highly technical and passionate about building systems that handle massive amounts of data.

Must-have skills:

  • You are fluent in Python and you write efficient, well-tested code with a keen eye on scalability and maintainability, promoting best practices around data representation, sanitization, retrieval, storage and processing at scale.

  • You have experience with the AWS big data stack (Kinesis, S3, Glue, EMR, Athena, etc.) as well as with virtualization and container technologies (e.g. Docker).

  • You have excellent SQL and relational database knowledge - we use Snowflake, Redshift and Postgres - as well as NoSQL knowledge.

  • You are experienced in implementing and maintaining ETL processes and complex workflows - we use Talend & Python - and we document all our processes (see the sketch after this list).
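
As a rough illustration of the batch ETL side, the sketch below (Python again, with psycopg2 and boto3) extracts a table from Postgres, stages it in S3 as CSV, and COPYs it into Redshift. All hosts, credentials, table and bucket names are placeholders, not our actual setup:

    # Sketch of a simple batch ETL step: extract from Postgres, stage in
    # S3 as CSV, then COPY into Redshift. Every name here is a placeholder.
    import csv
    import io

    import boto3
    import psycopg2  # Redshift speaks the Postgres wire protocol, too

    def extract_to_s3(bucket: str, key: str) -> None:
        """Dump a source table into a CSV object in S3."""
        src = psycopg2.connect("dbname=app host=source-db user=etl")
        buf = io.StringIO()
        with src, src.cursor() as cur:
            cur.execute("SELECT id, user_id, amount, created_at FROM bookings")
            csv.writer(buf).writerows(cur)
        boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buf.getvalue())

    def load_into_redshift(bucket: str, key: str) -> None:
        """COPY the staged CSV into the warehouse table."""
        dwh = psycopg2.connect("dbname=dwh host=redshift-cluster user=etl")
        with dwh, dwh.cursor() as cur:
            cur.execute(f"""
                COPY staging.bookings
                FROM 's3://{bucket}/{key}'
                IAM_ROLE 'arn:aws:iam::123456789012:role/etl-role'
                CSV
            """)  # the IAM role ARN is a placeholder

    if __name__ == "__main__":
        extract_to_s3("etl-staging", "bookings/latest.csv")
        load_into_redshift("etl-staging", "bookings/latest.csv")

In practice a workflow tool (for us, Talend) schedules steps like these and handles dependencies and retries.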



The following skills would be nice to have:

  • Experience in a data-intensive industry, e.g. ad-tech, social media or mobility




Do you think you are the right fit?


We would be more than happy to meet you soon. Just click the “Apply Now” button.


Your Business Intelligence Team


Nataliia, Ayoub, Hasan, Sabbir and Philipp



If you have any questions or need further information, feel free to contact Julia Rosenblau at +49 711 21953940.

