At Datadog, we’re on a mission to bring sanity to cloud development and operations. We need to make the heterogeneous, complex data we collect easy for users to comprehend and act upon.
As an engineer working on our distributed systems, you will build the high-throughput, low-latency systems that power our product. Your data pipelines will ingest, store, analyze, and query tens of millions of events per second from companies all over the globe.
Our engineering culture values pragmatism, honesty, and simplicity to solve hard problems the right way. Join us to own significant chunks of our architecture, design and build resilient systems, and ship to production every day for customers who care deeply about what you build.
In this role, you will:
- Build distributed, high-throughput, real-time data pipelines
- Do it in Go and Python, with bits of C or other languages
- Use Kafka, Redis, Cassandra, Elasticsearch, and other open-source components
- Own meaningful parts of our service, have an impact, grow with the company
Requirements
- You have a BS/MS/PhD in a scientific field or equivalent experience
- You have significant backend programming experience in one or more languages
- You can get down to the low level when needed
- You care about code simplicity and performance
- You want to work in a fast, high-growth startup environment that respects its engineers and customers
Bonus points
- You wrote your own data pipelines once or twice before (and know what you'd like to change)
- You've built high-scale systems with Cassandra, Redis, Kafka, or NumPy
- You have significant experience with Go, C, or Python
- You have a strong background in statistics
via developer jobs - Stack Overflow