
Posted 11 months ago

About Us

MetaRouter is a customer-data streaming platform for large, security-conscious enterprises. The MetaRouter platform dramatically reduces latency and bloat by providing server-side integration with third-party marketing, analytics, and data storage/transport tools. By accessing our SaaS tool, using a private PaaS architecture, or deploying fully within their own private cloud, customers can dramatically simplify and centralize their customer-data pipelines while maintaining full control over security and compliance.

The MetaRouter platform serves as a full-spectrum customer data collection, modification, and delivery platform and is composed of many varied microservices. Our client- and server-side ingestion and identity libraries, ETL applications, and configuration and monitoring UIs give data teams control over the shape and substance of their customer-behavior data, all while providing the flexibility and freedom to be creative with their architecture.

The Role: Senior Product Engineer

A key value of the MetaRouter Enterprise platform is the control and flexibility it gives data teams over the shape and substance of their customer-behavior data as it streams through the data routing system. MetaRouter is looking to grow and improve the toolbox of product features that offer this flexibility to Enterprise customers.

In this role, you will serve primarily as a product engineer on the Enterprise platform, helping the team architect, plan, develop, deploy, and maintain new and existing data-engineering-related features. As features are shipped and adopted by customers, you will also be expected to provide implementation support.

You will be primarily operating on Dockerized Golang projects within Kubernetes environments running on GCP services, but some services may require knowledge of Node, Python, and SQL. Our platform deals in unbounded data sets, and strong experience with common tools in the streaming and micro-batching context is important, particularly common transport layers and message queues.
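To give a flavor of the streaming work described above, here is a minimal, self-contained Go sketch of a pipeline stage that transforms an unbounded stream of events. The `Event` type and the enrichment applied are purely illustrative assumptions, not MetaRouter's actual schema or APIs; in production this shape would typically sit behind a message queue such as Kafka or Pub/Sub rather than an in-memory channel.

```go
package main

import (
	"fmt"
	"strings"
)

// Event is a hypothetical customer-behavior event; the field names
// are illustrative, not MetaRouter's real schema.
type Event struct {
	UserID string
	Name   string
}

// transform consumes an unbounded stream of events from `in`, applies
// a simple enrichment (normalizing the event name to lower case), and
// emits results on the returned channel -- the basic shape of a
// streaming ETL stage.
func transform(in <-chan Event) <-chan Event {
	out := make(chan Event)
	go func() {
		defer close(out)
		for ev := range in {
			ev.Name = strings.ToLower(ev.Name)
			out <- ev
		}
	}()
	return out
}

func main() {
	in := make(chan Event)
	go func() {
		defer close(in)
		in <- Event{UserID: "u1", Name: "Page View"}
		in <- Event{UserID: "u2", Name: "Add To Cart"}
	}()
	// Drain the stage; with a real queue this loop would run forever.
	for ev := range transform(in) {
		fmt.Printf("%s: %s\n", ev.UserID, ev.Name)
	}
}
```

The channel-per-stage structure mirrors how such services are usually composed: each stage owns its output channel and closes it when its input is exhausted, so backpressure and shutdown propagate naturally.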


  • Engage in technical leadership and strategy to improve the whole lifecycle of the Enterprise product.

  • Serve as a primary product developer and data-engineer on architecture and ETL-related features.

  • Maintain live features by measuring and monitoring availability, latency, scalability, and health.

  • Participate in Agile development activities such as system design consultation, sprint planning, requirements gathering, capacity planning, and launch reviews.

  • Write and maintain excellent documentation, both internal and client-facing.

Required Background & Skills

  • A friendly attitude and strong motivation to see this product succeed and mature.

  • 3+ years of experience building and maintaining large-scale data systems with open-source tooling.

  • 2+ years of experience with Golang and Python.

  • 2+ years of experience with Kubernetes and containerization.

  • 2+ years of experience with one or more data transport layers or message queues (e.g., Kafka, Kinesis, Pub/Sub).

  • Experience with the GCP ecosystem.

  • Experience working with unbounded data sets (streaming data), inserting data from multiple schemas into a centralized system, and connecting, cleaning, and maintaining complex data sets in transit/at rest.

  • Experience with high-performance SQL reading and writing.

Bonus Round!

  • Experience with AWS, Azure, or other cloud providers, as well as Helm and Terraform.

  • Experience working with data scientists and data engineers to create data applications.

  • Experience with JavaScript web applications and Node.js.

  • Experience creating CLIs and APIs.

  • Experience working on applications with interactions between a user and backend systems/architectures.

  • Experience with ETL tools such as Spark, Beam, Dataflow, Flink, and Hadoop.

  • Experience working with Analytics.js or similar client-side user-behavior-tracking systems.

  • Experience crafting, maintaining, and scaling machine learning algorithms — or, failing practical experience, at least a strong understanding of the field.