Senior Data Operations Engineer – DataOps

Posted 98 days ago



Job Description

Lead the evolution of DataOps practices at global scale, designing automated, resilient, and scalable data platforms on GCP that power self-service data infrastructure.

Responsibilities:

  • Lead the design and implementation of enterprise-scale DataOps platforms and automation frameworks.
  • Architect and evolve GCP-native data platforms supporting high-throughput batch and real-time workloads.
  • Design and implement microservices-based data architectures using containerization technologies.
  • Build and maintain CI/CD pipelines for data workflows, including automated testing and deployment.
  • Develop Infrastructure as Code (IaC) solutions to standardize and automate platform provisioning.
  • Implement robust data orchestration, monitoring, and observability capabilities.
  • Establish and enforce data quality frameworks to ensure reliability and trust in data products.
  • Support real-time data platforms operating at extreme scale.
  • Partner with platform squads to deliver self-service data infrastructure products.
  • Drive best practices for automation, resiliency, scalability, and operational excellence.
  • Influence technical direction, mentor senior engineers, and lead through ambiguity.

Requirements:

  • 8+ years of progressive experience in DataOps, Data Engineering, or Platform Engineering roles.
  • Strong expertise in data warehousing, data lakes, and distributed processing technologies (Spark, Hadoop, Kafka).
  • Advanced proficiency in SQL and Python; working knowledge of Java or Scala.
  • Deep experience with Google Cloud Platform (GCP) data and infrastructure services.
  • Expert understanding of microservices architecture and containerization (Docker, Kubernetes).
  • Proven hands-on experience with Infrastructure as Code tools (Terraform preferred).
  • Strong background in CI/CD methodologies applied to data pipelines.
  • Experience designing and implementing data automation frameworks.
  • Advanced knowledge of data orchestration, monitoring, and observability tooling.
  • Ability to architect highly scalable, resilient, and fault-tolerant data systems.
  • Strong problem-solving skills and ability to operate independently in ambiguous environments.

Benefits:

  • Flexible work arrangements
  • Professional development opportunities