Data Engineer at Oliver Wyman Labs (Boston, MA)

Team description


A little bit about us


Oliver Wyman’s Data Science and Engineering team works to solve our clients’ toughest analytical problems, continually pushing forward the state of the art in quantitative problem solving and raising the capabilities of the firm globally. Our team works hand-in-hand with strategy consulting teams, adding expertise where a good solution requires wrangling a wide variety of data sources, including high-volume, high-velocity, and unstructured data; applying specialized data science and machine learning techniques; and developing reusable codebases to accelerate delivery.


Our work is fast-paced and expansive. We build models, coalesce data sources, interpret results, and develop services (and occasionally products) that enhance our clients’ ability to derive value from data and upgrade their decision-making capabilities. Our solutions feature the latest in data science tools, machine learning algorithms, AI approaches, software engineering disciplines, and analytical techniques to make an extraordinary impact on clients and societies. We operate at the intersection of exciting, progressive tech and real-world problems faced by some of the world's leading companies. We hire smart, driven people and equip them with the tools and support that they need to get their jobs done.


Our Values and Our Proposition


We believe that our culture is a key pillar of our success and our identity. We take our work seriously, but not ourselves.  We believe happiness, health, and a life outside of work are more important than work itself and are essential ingredients in professional success – no matter what the profession. Ours is a team whose members teach and take care of each other. We want not simply to continue learning and growing but to fundamentally redefine what it means to do consulting and to stretch the boundaries of what we, as a firm, are capable of doing.


Our proposition is simple:



  • You will work with people as passionate and awesome as yourself.

  • You will encounter a variety of technology, industries, projects, and clients.

  • You will deliver work that has real impact in how our clients do business.

  • We will invest in you.

  • We will help you grow your career while remaining hands-on and technical.

  • You will work in smaller, more agile, flatter teams than is the norm elsewhere.

  • You will be empowered and have more autonomy and responsibilities than almost anywhere else.

  • You will help recruit your future colleagues.

  • We offer competitive compensation and benefits.

  • You will work with peers who can learn from you and from whom you can learn.

  • You will work with people who leave egos at the door and encourage an environment of collaboration, fun, and bringing new ideas to the group.


Data Engineer


The Data Engineer is the universal translator between IT, business, software engineers, and data scientists, working directly with clients and project teams. You will work to understand the business problem being solved and provide the data required to solve it, delivering at the pace of the consulting teams and iterating on the data to ensure quality as understanding crystallizes.


Our historical focus has been on high-performance SQL data marts for batch analytics, but we are now driving toward new data stores and cluster-based architectures to enable streaming analytics and scaling beyond our current terabyte-level capabilities. Your ability to tune high-performance data pipelines will help us to rapidly deploy some of the latest machine learning algorithms/frameworks and other advanced analytical techniques at scale.
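
To give a concrete flavor of that shift, below is a minimal sketch (illustrative only, not part of the role description) of a streaming pipeline in PySpark Structured Streaming: it reads JSON events from a Kafka topic, aggregates them in time windows, and writes the results to Parquet. The broker address, topic name, event schema, and file paths are assumptions made for the example.

    # Illustrative sketch of a streaming aggregation with PySpark Structured Streaming.
    # Broker, topic, schema, and paths below are assumptions for the example; the
    # Kafka connector package must also be available on the Spark classpath.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col, window
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = SparkSession.builder.appName("streaming-aggregation-sketch").getOrCreate()

    event_schema = StructType([
        StructField("event_type", StringType()),
        StructField("event_time", TimestampType()),
    ])

    # Read a stream of JSON events from Kafka and parse them against the schema.
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
        .option("subscribe", "events")                         # assumed topic
        .load()
        .select(from_json(col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    # Count events per type in 5-minute windows, tolerating 10 minutes of late data.
    counts = (
        events
        .withWatermark("event_time", "10 minutes")
        .groupBy(window(col("event_time"), "5 minutes"), col("event_type"))
        .count()
    )

    # Write the windowed counts to Parquet as each window closes.
    query = (
        counts.writeStream
        .outputMode("append")
        .format("parquet")
        .option("path", "/data/event_counts")        # assumed output path
        .option("checkpointLocation", "/data/_chk")  # assumed checkpoint path
        .start()
    )
    query.awaitTermination()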


You will serve as a keystone on our larger projects, enabling us to deliver solutions hand-in-hand with consultants, data scientists, and software engineers.


A good candidate will have:



  • Excellent communication skills (verbal and written)

  • Empathy for their colleagues and their clients

  • Signs of initiative and ability to drive things forward

  • Understanding of the overall problem being solved and what flows into it

  • Ability to create and implement data engineering solutions using modern software engineering practices

  • Ability to scale up from “laptop-scale” to “cluster scale” problems, in terms of both infrastructure and problem structure and technique

  • Ability to deliver tangible value very rapidly, working with diverse teams of varying backgrounds

  • Ability to codify best practices for future reuse in the form of accessible, reusable patterns, templates, and codebases

  • A pragmatic approach to software and technology decisions as well as prioritization and delivery

  • Ability to handle multiple workstreams and prioritize accordingly

  • Commitment to delivering value and helping clients succeed

  • Comfort working with both collocated and distributed team members across time zones

  • Comfort working with and developing coding standards

  • Willingness to travel as required for cases (0–40%)


Some things that make our Data Engineers effective:



  • A technical background in computer science, data science, machine learning, artificial intelligence, statistics, or other quantitative and computational sciences

  • A compelling track record of designing and deploying large-scale technical solutions that deliver tangible, ongoing value

    • Direct experience building and deploying complex production systems that implement modern data science methods at scale, and do so robustly

    • Comfort in environments where large projects are time-boxed and therefore consequential design decisions may need to be made and acted upon rapidly

    • Fluency with cluster computing environments and their associated technologies, and a deep understanding of how to balance computational considerations with theoretical properties of potential solutions

    • Ability to context-switch, to provide support to dispersed teams which may need an “expert hacker” to unblock an especially challenging technical obstacle

    • Demonstrated ability to deliver technical projects with a team, often working under tight time constraints to deliver value

    • An ‘engineering’ mindset, willing to make rapid, pragmatic decisions to improve performance, accelerate progress, or magnify impact; recognizing that the ‘perfect’ should not be the enemy of the ‘good’

    • Comfort working with distributed teams on code-based deliverables, using version control systems and code reviews



  • Demonstrated expertise working with and maintaining open source data analysis platforms, including but not limited to:

    • Pandas, Scikit-Learn, Matplotlib, TensorFlow, Jupyter and other Python data tools

    • Spark (Scala and PySpark), HDFS, Hive, Kafka and other high-volume data tools

    • Relational databases such as SQL Server, Oracle, Postgres

    • NoSQL storage tools, such as MongoDB, Cassandra, ElasticSearch, and Neo4j



  • Demonstrated fluency in modern programming languages for data science, covering a wide gamut from data storage and engineering frameworks through to machine learning libraries
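
For concreteness, here is a minimal, illustrative sketch of the kind of Python data tooling listed above: pandas for data handling and scikit-learn for a simple model. The input file and column names are assumptions made for the example.

    # Illustrative sketch using pandas and scikit-learn from the stack listed above.
    # The CSV path and column names are assumptions for the example.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("transactions.csv")         # assumed input file
    X = df[["amount", "tenure_months"]]          # assumed feature columns
    y = df["churned"]                            # assumed binary label

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    # Standardize features, then fit a logistic regression classifier.
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X_train, y_train)

    print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))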


via developer jobs - Stack Overflow
