Please note you would need to be based in Eastern or Central time zone for this position.
Who is Current Health?
Current Health is a global healthcare technology company focused on predicting illness and delivering earlier intervention so that every human can live a healthier, longer life. In 2020, Current Health's revenue grew 3,500%. We partner with some of the world's leading healthcare institutions and pharmaceutical organizations, improving health outcomes for patients across the world.
What does a Data Engineer at Current Health do?
As a specialist in data engineering, you will help us scale our data pipelines to meet new challenges as we grow as a business and gain increasing numbers of customers and use cases. We are currently improving our event-driven, message-based microservice platform to embrace real-time, highly available distributed streaming technology, which will enable our engineers and data scientists to meet our ambitious product goals over the coming months.
Requirements
Sounds great, what experience do I need?
- Have a degree in Computer Science or a related field, or equivalent training or work experience
- Have commercial experience in distributed real-time stream processing and complex event processing technology
- Have experience working with large amounts of data
- Have a deep knowledge of at least one modern programming language and a willingness to learn new ones as required
- Have experience writing tests and testable code
- Be comfortable reviewing, releasing, deploying and troubleshooting your own and other people's code
- Have previous success in engineering at scale in a distributed systems environment
- Have a practical understanding of cloud computing and networking - we use AWS with Nomad for micro-service management
- Have experience collaborating with data scientists, product teams and other consumers of data assets
Bonus points for...
- Familiarity with key big data technologies such as Hadoop, MapReduce and Apache Spark
- A background involving Apache Kafka or other distributed data streaming platforms
- Experience with API design/development
Technologies we use
- Backend: Java (Spring), Python, .NET
- Frontend: JavaScript (TypeScript), Angular, Ionic, npm
- Databases: PostgreSQL (RDS), Couchbase and others
- Infrastructure: Linux, RabbitMQ, AWS via Terraform, Chef, Nomad, Consul and Fabio
- Data Science and ML: H2O, Jupyter, TensorFlow, Keras and Spark
- Monitoring: DataDog and ELK
Benefits
- 401k contribution up to a maximum of 3% on base salary
- 70% contribution towards health, optical and dental plans, including partner and family coverage
- Life & AD&D insurance at 1x base salary
- Holidays: 33 days per year inclusive of public holidays
- Flexible, autonomous working environment
- Travel expenses covered
- Spec your own environment
- Employee Assistance Program
- Team events
- Monthly snack box