Synthace is looking for Software Engineers who want to help us solve complex Data Processing and Modelling problems, while also getting stuck into a variety of challenges relating to scaling our products and infrastructure.
We're working in a sector (biology) that generates huge amounts of data that is essential for important research into areas like drug discovery. This data is often unstructured, fragmented and currently poorly served by software products - we're trying to change this, and you can help us!
You may have built a data processing stack - your colleagues may describe you as a 'Data Integration Engineer' or a 'Data Architect' - but when it boils down to it you're a problem-solving software engineer with an adaptable mindset and a love for taking challenging requirements and turning them into solutions.
Just to be clear, this is not a 'Big Data' role, and we don’t have 'Big Data' on our roadmap - our value to the client is in iterating complex workflows, rather than scaling to 'Big Data' throughput. So you shouldn't expect to be working on large compute clusters immediately.
If, however, you have an appetite for variety and for applying your problem-solving abilities to some seriously complex workflows, we should talk!
The Project
Named by the World Economic Forum as one of the world's 30 Technology Pioneers 2016, Synthace is re-imagining how we work with biology, massively improving the speed and quality of the final results.
This is made possible through our revolutionary platform for designing biological experiments, simulating them, translating instructions for automated lab equipment, and visualising complex data sets from the results.
All of this is done by 'Antha', which is already impacting how scientists work with biology in major companies like Dow, Merck and GSK.
You'll be working within a tight-knit, friendly and collaborative development team on exciting projects with plenty of technical challenges to get your teeth into.
What you'd be working on:
- The best way to summarise the challenge: you would be building infrastructure, and also building data applications on top of it - it's a wide-ranging and varied role full of interesting challenges!
- You will need to understand our existing clients' needs around data processing support, and to take ownership of features and projects relating to data modelling, data processing, data integration, etc.
- In the short to mid term the role will involve quite a bit of refactoring and re-engineering of the existing codebase. We're open about the fact that this isn't a greenfield project - you'll initially need to pick up our existing stack and help us reverse engineer some really tricky workflows.
- You'll also be taking ownership of the architecture of particular components, so it would help if you've previously owned larger pieces of a data processing suite, high-performance computing, or grid computing solution, especially if you were personally responsible for decisions about how to architect a complete component.
About you:
- Primarily, this is the sort of challenge that will suit you if you enjoy creating order out of mess, appreciate clean, well-designed models, know what it takes to build robust, reliable, resilient systems, and take professional pride in building hardened applications
- You have a broad range of experience as a software engineer - you've probably worked in different sectors, or had to adapt to different types of challenges
- You need to be seriously well versed in at least one of the following programming languages: Python, Go, Java, or Scala
- You have an appreciation for engineering best-practices (documentation, code reviews, test automation, etc.)
- You have a strong understanding of data structures and algorithms
- Excellent communication skills are also a must - we move fast but we talk to each other to make sure we don't break things
- You enjoy working in collaborative teams and sharing knowledge - there are no lone wolves here!
Bonus points for:
- Hands-on use of workflow management frameworks (e.g. Airflow, Luigi, Oozie, Apache NiFi)
- Deep knowledge of data modelling, data access, and data storage techniques
- Hands-on experience with any of the following (or similar): Spark, Hive, Drill, Vertica, Paraccel, SybaseIQ/SAP IQ, Parquet, Impala, Bigtable, Singularity
Why join Synthace:
- Work with genuinely extraordinary people
- Open, collaborative, and friendly culture
- Challenging, groundbreaking and exciting work
- Chance to be a part of the 'fourth industrial revolution', helping us to create tools and systems that allow scientists to do things like cure cancer faster!
Salary: £50k-110k depending on experience
Location: West London