
Data Engineer, IdentityAI at SailPoint (Austin, TX)

At SailPoint, we do things differently. We understand that a fun-loving work environment can be highly motivating and productive. When smart people work on intriguing problems and enjoy coming to work each day, they accomplish great things together. With that philosophy, we've assembled the best identity team in the world, one that is passionate about the power of identity.

As the fastest-growing independent identity and access management (IAM) provider, SailPoint helps hundreds of global organizations securely and effectively deliver and manage user access from any device to data and applications residing in the data center, on mobile devices, and in the cloud. The company's innovative product portfolio offers customers an integrated set of core services, including identity governance, provisioning, and access management, delivered on-premises or from the cloud (IAM-as-a-service).

SailPoint has been voted a best place to work in Austin 8 years in a row.

SailPoint is looking for a Data Engineer to build, maintain, monitor, and improve a real-time, scalable, fault-tolerant data processing pipeline, and to productize machine learning algorithms, for a new cloud-based, multi-tenant SaaS analytics product.

You will be integral to building this product and will be part of an agile team operating in startup mode. This is a unique opportunity to build something from scratch while having the backing of an organization that has the muscle to take it to market quickly, along with a very satisfied customer base.

Responsibilities

  • Implement ETL processes (a brief illustrative sketch follows this list)
  • Monitor performance and advise on any necessary infrastructure changes
  • Define data retention policies
  • Productize and operationalize machine learning algorithms
  • Be part of a team that is creating a brand-new product line
  • Collaborate with team members to help shape requirements
  • Actively engage in technology discovery that can be applied to the product
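
To illustrate the ETL responsibility above, here is a minimal batch ETL sketch in Spark, written in Scala (one of the languages named in the requirements). The bucket paths, column names, and schema are hypothetical and chosen only for illustration; this is not SailPoint's actual pipeline.

```scala
// Minimal batch ETL sketch in Spark: extract raw CSV, clean it, load curated Parquet.
// Paths and column names (user_id, application, accessed_at) are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object AccessLogEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("access-log-etl")
      .getOrCreate()

    // Extract: read raw access records exported by an upstream system.
    val raw = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/raw/access_logs/")

    // Transform: drop malformed rows, normalize casing, parse timestamps, deduplicate.
    val cleaned = raw
      .filter(col("user_id").isNotNull && col("accessed_at").isNotNull)
      .withColumn("application", lower(trim(col("application"))))
      .withColumn("accessed_at", to_timestamp(col("accessed_at")))
      .dropDuplicates("user_id", "application", "accessed_at")

    // Load: write the curated dataset, partitioned by date for downstream queries.
    cleaned
      .withColumn("access_date", to_date(col("accessed_at")))
      .write
      .mode("overwrite")
      .partitionBy("access_date")
      .parquet("s3://example-bucket/curated/access_logs/")

    spark.stop()
  }
}
```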

Requirements

  • 1+ year of data engineering or related experience
  • Strong Java and/or Scala experience
  • Proficient understanding of distributed computing principles
  • Ability to troubleshoot and resolve ongoing issues with operating data processing clusters
  • Experience with integration of data from multiple data sources
  • Strong knowledge of data cleaning and various ETL techniques and frameworks
  • Great communication skills
  • BS in Computer Science or a related field

Preferred

  • Proficiency with Spark
  • Experience with Machine Learning using Mahout/Deeplearning4j/Spark ML
  • Experience with stream processing using Spark Streaming/Storm/Beam/Flink (see the sketch following this list)
  • Experience with Elasticsearch
  • Experience with messaging systems, such as Kafka or Kinesis
  • Experience with NoSQL databases such as Cassandra or DynamoDB, or data warehouses such as Redshift
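
As a companion to the stream processing item above, here is a minimal sketch of a Spark Structured Streaming job that consumes JSON events from a Kafka topic and maintains windowed per-user counts. The broker address, topic name, event schema, and console sink are assumptions made for illustration only; they do not describe SailPoint's product.

```scala
// Minimal stream processing sketch: Kafka -> Spark Structured Streaming -> windowed counts.
// Requires the spark-sql-kafka connector on the classpath; broker, topic, and schema
// below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object IdentityEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("identity-event-stream")
      .getOrCreate()
    import spark.implicits._

    // Assumed JSON event schema, for illustration only.
    val eventSchema = new StructType()
      .add("userId", StringType)
      .add("eventType", StringType)
      .add("timestamp", TimestampType)

    // Consume raw JSON events from a Kafka topic.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "identity-events")
      .load()
      .select(from_json($"value".cast("string"), eventSchema).as("event"))
      .select("event.*")

    // Count events per user over 5-minute windows, tolerating 10 minutes of late data.
    val counts = events
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window($"timestamp", "5 minutes"), $"userId")
      .count()

    // Write the running aggregates out; a console sink keeps the sketch self-contained.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .option("truncate", "false")
      .start()
      .awaitTermination()
  }
}
```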

Compensation and benefits

  • Experience a small-company atmosphere with big-company benefits
  • Competitive pay, 401(k), and comprehensive medical, dental, and vision plans
  • Recharge your batteries with a flexible vacation policy and paid holidays
  • Grow with us through both technical and career growth opportunities
  • Enjoy a healthy work-life balance with flexible hours, family-friendly company events, and charitable work

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.


via developer jobs - Stack Overflow
