Senior Data Infrastructure Engineer

Posted 67 days ago



Job Description

We are seeking a Senior Data Infrastructure Engineer to design data systems for Cobot’s robotics products. You will collaborate with teams across the company to architect scalable pipelines that power AI insights and uphold data quality.

Responsibilities:

  • Own the full ingestion path from edge to cloud, ensuring robot telemetry, sensor data, and warehouse events are reliably captured, transported, and made available for downstream systems.
  • Design, build, and operate scalable pipelines and foundational data layers (streaming and batch) that deliver low-latency, reliable data for analytics, AI/ML, and product features.
  • Build and maintain ingestion pipelines from object storage (e.g., S3) into Databricks, including raw → staged → analytics-ready layers, supporting both streaming and batch workloads.
  • Own the reliability and CI/CD of the data warehouse and foundational data layers, enabling safe, repeatable deployment of schema changes, transformations, and infrastructure that analytics engineers depend on.
  • Implement observability, monitoring, and data quality checks to ensure pipeline correctness, detect failures or drift, and maintain trust in data used by Vista, Portal, and Scoutmap.
  • Scale and optimize multi-tenant data infrastructure, balancing performance, reliability, and cost-efficiency as Cobot’s customer base and data volume grow.
  • Collaborate directly with robotics, AI/ML, product, and analytics teams to translate product requirements into resilient data systems that unlock customer-facing features.
  • Establish and enforce best practices for data engineering, reliability, security, and CI/CD across ingestion, staging, and warehouse layers—owning the foundations while enabling analytics engineers to ship metrics, marts, and dashboards efficiently.

Requirements:

  • 5+ years of professional experience in data engineering or data infrastructure roles.
  • Strong proficiency in Python and SQL, with the ability to write production-quality, scalable, and well-tested code.
  • Proven experience designing and operating ingestion pipelines and staging layers (streaming and batch) that support downstream analytics and product use cases.
  • Experience deploying and managing cloud data infrastructure in AWS using infrastructure-as-code and container tooling (e.g., Terraform, Kubernetes, Docker).
  • Hands-on experience with cloud-based data platforms, storage systems, and infrastructure.
  • Familiarity with data quality practices, testing frameworks, and CI/CD for data pipelines.
  • Highly motivated teammate with excellent oral and written communication skills.
  • Enjoy working in a fast-paced, collaborative, and dynamic start-up environment as part of a small team.
  • Willingness to travel occasionally for on-site support or testing, as needed.
  • Must have and maintain US work authorization.

Benefits:

  • Equity
  • Comprehensive benefits