As part of the Avail Data Engineering team at Allstate, you will help manage our ever-growing collection of vehicle-sharing and usage data from across the USA. We support our business analytics and marketing teams by performing data integration and ETL between an increasingly diverse set of data sources and our data warehouse. We also build big-data applications that leverage techniques from operations research and machine learning to directly enrich, personalize, and optimize the Avail car-sharing product. This is a greenfield opportunity to contribute to the design and implementation of a flexible, scalable data framework for an exciting new sector of the sharing economy.

This position can be based at either Avail HQ office, in San Francisco or Chicago, or be fully remote with regular travel to the San Francisco HQ.

Key Responsibilities:
  • Design and implement scalable data workflows and pipelines, and integrate diverse data sources and sinks (see the illustrative sketch after this list)
  • Design appropriate database schemas and optimize database deployment architectures for analytics query loads
  • Implement data transforms and organization for various data stores (data lakes and warehouses)
  • Design and implement new platform architectures for building and serving machine learning models
  • Work with the platform operations team to monitor and maintain live production systems
  • Provide tooling and automation for infrastructure, continuous testing, and continuous deployment of data systems
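
By way of illustration, a minimal sketch of the kind of pipeline step described above, in Python (one of the languages our stack may include); the file, table, and column names here are hypothetical, not Avail's real schema:

    import csv
    import sqlite3
    from datetime import datetime, timezone

    # Hypothetical example: extract raw trip records from a CSV export,
    # normalize timestamps to UTC, and upsert them into a warehouse table.

    def extract(path):
        # Stream rows from the source file as dictionaries.
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(row):
        # Normalize the pickup timestamp to UTC ISO-8601 and cast miles to float.
        ts = datetime.fromisoformat(row["pickup_time"]).astimezone(timezone.utc)
        return (row["trip_id"], ts.isoformat(), float(row["miles"]))

    def load(rows, db="warehouse.db"):
        # SQLite stands in here for the warehouse; upsert keyed on trip_id.
        con = sqlite3.connect(db)
        con.execute(
            "CREATE TABLE IF NOT EXISTS trips "
            "(trip_id TEXT PRIMARY KEY, pickup_time_utc TEXT, miles REAL)"
        )
        con.executemany("INSERT OR REPLACE INTO trips VALUES (?, ?, ?)", rows)
        con.commit()
        con.close()

    if __name__ == "__main__":
        load(transform(r) for r in extract("trips.csv"))

In practice, steps like this run inside an orchestrated workflow against production data stores rather than local files.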

Job Qualifications:

  • Various experience levels considered. Junior candidates must have a strong background in coursework or academic projects on data engineering or machine learning at scale, or equivalent industry experience contributing to such projects. Senior candidates must demonstrate a track record of successful technical leadership in the execution of large-scale data projects.
  • Software Engineering - Level-appropriate experience in software engineering and the SDLC (our stack may include Python, Golang, and Scala). Must treat code readability, reuse, and extensibility as priorities when developing solutions.
  • Big Data Engineering - Experience building scalable data pipelines involving machine learning, optimization, or prediction
  • Big Data DevOps - Experience operating and automating big-data ecosystems in production environments on AWS or a comparable cloud service
  • Ability to thrive in a fast-paced, cross-regional, diverse, and dynamic work environment

Nice to have:

  • Experience with the AWS data stack - Redshift, Athena, EMR, Kinesis, DocumentDB, DynamoDB
  • Experience with establishing well-organized data lakes
  • Experience setting up and optimizing data warehouses
  • Background in data modeling and performance tuning in relational and NoSQL databases
  • Experience with data practices (security, data management and governance)
  • Experience in operations research, machine learning or optimization