We’re looking for outstanding Data Developers in Argentina from all backgrounds, skilled in and passionate about technology.
You'll be in charge of all aspects of your project: you'll figure out how it should be architected, pick the best tools and frameworks, write the code, and make sure that loose ends are tied up.
Your job will involve developing data processing systems that manage the data generated by our Real-Time Bidding, tracking, and analytics platforms, which together produce more than 20 TB per day.
If you enjoy analyzing enterprise system and business requirements and incorporating them into the reference architecture’s design and implementation, leveraging the newest technologies to delight users, and executing like an entrepreneur, then you’ve found the right place. We’re highly collaborative, make decisions in minutes, and ship features in hours.
WHAT YOU'LL DO
- Manage the full life cycle of a Big Data enterprise solution: requirements analysis, platform selection, technical architecture design, application design and development, testing, and deployment
- Work closely with data scientists and engineers to design and maintain scalable data models and pipelines
- Design and implement processes to extract, transform, and load (ETL) data from a wide variety of sources (relational, NoSQL, web services, flat files)
- Work with data scientists to develop a scalable data platform
- Keep our data structured in a way that makes regular analysis easy and flexible
- Benchmark systems, analyze bottlenecks, and track data quality and consistency, proposing solutions to improve them
- Work creatively and analytically in a problem-solving environment
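To give candidates a concrete feel for the ETL work described above, here is a minimal, hedged sketch in Python. Everything in it (the sample CSV, the in-memory "warehouse" list) is hypothetical and for illustration only; real pipelines here would pull from relational stores, NoSQL, web services, and flat files, and load into a warehouse such as Redshift.

```python
import csv
import io

# Hypothetical raw input; a real source would be a database, API, or flat file.
RAW_CSV = """event,bid_price_usd
impression,0.25
click,1.10
"""

def extract(source: str):
    """Extract: parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows):
    """Transform: cast prices to float, keeping only the fields we need."""
    return [
        {"event": r["event"], "bid_price_usd": float(r["bid_price_usd"])}
        for r in rows
    ]

def load(rows, warehouse):
    """Load: append transformed rows to the target store; return row count."""
    warehouse.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW_CSV)), warehouse)
print(f"loaded {loaded} rows")
```

The three-function shape (extract, transform, load kept separate and composable) is the pattern that scales from a toy like this to a production pipeline, whatever framework ends up orchestrating it.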
REQUIREMENTS
- 2-5 years’ experience participating in projects that involved large-scale data processing
- Strong SQL programming
- Good knowledge of Linux administration (grep/sed/find, cron, etc.)
- Experience with Hadoop, Java/Scala, and large-scale data processing technologies
- Experience with AWS (Elastic Map Reduce, Redshift, Kinesis, Lambda)
- Extreme attention to detail
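As a hedged sketch of what the Linux-administration requirement looks like day to day, here is the kind of grep/sed one-liner work involved. The log file and its format are hypothetical, purely for illustration.

```shell
# Hypothetical bid log; real work would target the platform's actual logs.
printf 'bid ok 0.25\nbid fail -\nbid ok 0.40\n' > /tmp/bids.log

# Count successful bids with grep.
grep -c 'ok' /tmp/bids.log

# Strip the prefix with sed to extract just the prices.
grep 'ok' /tmp/bids.log | sed 's/^bid ok //'
```

In practice, one-liners like these get wrapped in scripts and scheduled with cron for recurring checks.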
BONUS
- Love to use and develop open source technologies like Hadoop, Hive, Presto, and Spark
- Experience with stream processing technologies like Kafka, Kinesis, Flume, Storm, and Spark Streaming
- Experience implementing Lambda Architecture style systems
- Comfortable with basic concepts related to data warehousing and OLAP
- Strong scripting abilities
- JVM experience preferred
- Rigor in A/B testing, automated testing, and other engineering best practices
- Ultra-passionate about startups; thrive in a fast-paced environment
WHAT WE OFFER
- Learn a ton about the hottest area of growth in Internet advertising: mobile!
- A great level of responsibility from day one and the chance to develop your potential without any limitations
- Working in a fast-paced, fun environment. Your work will be seen daily by thousands of people
- An entrepreneurial environment with a competitive salary
- The opportunity to work on projects involving cutting-edge data processing technologies over large data sets
- Coffee, loads of snacks and a fridge full of drinks!
via developer jobs - Stack Overflow