Data Engineer at TrueData (Los Angeles, CA)

Job Description

TrueData is a leading mobile data platform that works with app publishers who generate mobile data and the apps, agencies, brands, and ad tech companies who need that data for mobile ad targeting, optimization, and measurement. We provide app publishers with a platform to safely generate user insights and an incremental revenue stream while delivering mobile marketers high quality mobile data that boosts campaign ROI, at scale.

We are experiencing great momentum, and on the back of our Series B we are aggressively expanding our team. Our Data & Analytics Engineer will be the next addition to TrueData, reporting directly to our CTO and based in Downtown LA.

Position

As a member of the engineering team, you will be responsible for designing and developing scalable, high-performance applications for processing, transforming, and analyzing multi-TB volumes of data generated by mobile applications, including location (GIS), identity, and audience data.

You will also be involved in prototyping and testing new products and services for prospective clients such as Google, and in building next-generation analytics, visualization, and BI tools for internal and external stakeholders.


As an early member of our growing engineering team, you will receive enormous exposure to new technologies and methods, and have the opportunity to grow immensely within the TrueData organization.

What you'll be doing:

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Execute custom match analytics, retrieve custom data samples from multiple datastores, and work with external engineers at our clients (including Google, Oracle, and Verizon) to ensure successful pilots.
  • Build the next generation of data visualization and analytics tools at TrueData, including data density analysis for location data (clustering, univariate analyses, etc.) and time-series analytics for API/S2S/SDK health monitoring.
  • Manage and automate data ingestion and data delivery processes for new clients, products, and services.
  • Administer cloud server architectures (databases and application servers).
  • Deploy and configure our monitoring, security, deployment, reporting, and automation tools as required.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies (a minimal sketch follows this list).
  • Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Keep our data separated and secure across national boundaries through multiple AWS regions.
  • Create data tools that help our analytics and data science team members build and optimize our product into an innovative industry leader.
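To make the pipeline work above concrete, here is a minimal PySpark sketch of the kind of batch transformation described: reading raw location events, filtering malformed coordinates, and aggregating device density on a coarse grid. The S3 paths and column names (event_date, device_id, lat, lon) are illustrative assumptions, not TrueData's actual schema.

```python
# Minimal PySpark batch ETL sketch. Paths and column names
# (event_date, device_id, lat, lon) are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("location-density").getOrCreate()

# Read raw location events (assumed Parquet on S3).
events = spark.read.parquet("s3://example-bucket/raw/location-events/")

density = (
    events
    # Drop rows whose coordinates fall outside valid lat/lon ranges.
    .where(F.col("lat").between(-90.0, 90.0) & F.col("lon").between(-180.0, 180.0))
    # Round lat/lon to 2 decimals (~1 km cells) as a crude spatial bucket.
    .withColumn("cell_lat", F.round("lat", 2))
    .withColumn("cell_lon", F.round("lon", 2))
    # Count distinct devices seen per cell per day.
    .groupBy("event_date", "cell_lat", "cell_lon")
    .agg(F.countDistinct("device_id").alias("unique_devices"))
)

# Write the daily density table back to S3, partitioned by date.
density.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/derived/location-density/"
)
```

In production, a job like this would typically run on EMR and feed the visualization and monitoring tools mentioned above.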


What we're looking for in an ideal candidate:

  • You are passionate about coding and solving problems with software and automation.
  • You're eager to work with the rest of the TrueData team and our clients to solve problems, execute on pilots and integrations, and grow the TrueData business.
  • You plow through multiple obstacles in a day, using a combination of persistence, research, problem-solving skills, and your own experience.
  • You take pride in your work, especially work that is completed, tested, and delivered.
  • You are a sink for problems rather than a source: you make your co-workers' jobs easier, not harder.
  • You are available for and responsive to questions, and professional and collegial in your communications.
  • You like being the person that others rely on.
  • You quickly learn new technologies as needed and recognize that you are engaged in timely, business-critical tasks.
  • You are transparent in what you do: you discuss, document, and commit your work as needed.
  • You recognize that technology is a means to an end, not an end in itself; tech is always for some end user, not for the engineer.
  • You are excited to work in DTLA.

Requirements

Qualifications & experience

  • We are looking for a candidate with 4+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience with the following software and tools:
    • Big data tools: Spark, Kafka, Hadoop, etc.
    • Relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow sketch follows this section).
    • AWS cloud services: EC2, EMR, RDS, Redshift, Kinesis, SQS, and S3.
    • Stream-processing systems: Spark Streaming, etc.
    • Object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.
    • Managing AWS cost effectively by refining pipelines and execution strategies.
  • Advanced working SQL knowledge, including query authoring and working familiarity with a variety of relational databases.
  • Experience building and optimizing big data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores.
  • Experience unit testing code.
  • Experience with the following data types, structures, and techniques is a big plus:
    • Time-series data
    • Location data (GIS)
    • ML/AI techniques such as neural nets, SVMs, linear/logistic regression, and association rule mining, for both out-of-sample prediction and inference
  • Ability and willingness to understand, learn, and use new programming languages quickly.
  • Must be familiar with the Linux shell, shell scripting, process monitoring and management, and SSH.
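Since workflow tools such as Airflow are called out above, the following is a minimal, hypothetical Airflow DAG showing how a daily ingest → transform → deliver pipeline might be wired together. The DAG id, task names, and schedule are assumptions for illustration, and the task bodies are stubs.

```python
# Minimal Airflow 2.x DAG sketch: a daily ingest -> transform -> deliver
# chain. DAG id, task names, and schedule are assumptions; bodies are stubs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest(**context):
    # Placeholder: land the previous day's raw events in staging.
    print("ingesting raw events for", context["ds"])


def transform(**context):
    # Placeholder: kick off the Spark job that builds derived tables.
    print("transforming events for", context["ds"])


def deliver(**context):
    # Placeholder: push client-specific extracts to delivery endpoints.
    print("delivering extracts for", context["ds"])


with DAG(
    dag_id="daily_location_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_deliver = PythonOperator(task_id="deliver", python_callable=deliver)

    # The >> operator declares task dependencies (the DAG edges).
    t_ingest >> t_transform >> t_deliver
```

Chaining the operators with `>>` is what gives Airflow the dependency graph it uses for scheduling, retries, and monitoring.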

Benefits

Salary is based on experience & skill level.


  • Health Care Plan (Medical, Dental & Vision)
  • Retirement Plan (401k, IRA)
  • Paid Time Off (Vacation, Sick & Public Holidays)
  • Family Leave (Maternity, Paternity)
  • Work From Home
  • Free Food & Snacks
  • Stock Option Plan
  • Transportation Stipend
