Senior Platform Engineer – Cloud Architecture, AI Platforms

Posted 53 days ago

Job Description

We are seeking a Senior AI Platform Engineer to design scalable cloud infrastructure for AI initiatives at Derivative Path, collaborating with teams across the company to streamline AI operations and secure AI workloads.

Responsibilities:

  • Design and own AWS multi-account architectures, including VPC networking, security controls, and shared-services patterns to support AI workloads.
  • Build and maintain scalable compute environments using ECS Fargate, EKS, and Lambda for both application and model hosting.
  • Implement Infrastructure as Code (IaC) using AWS CDK (preferred) or Terraform to define and manage the entire AI platform stack.
  • Develop and manage CI/CD pipelines using GitHub Actions, incorporating blue/green and canary strategies for model deployments.
  • Architect and operate data services including RDS (Aurora), S3, and Vector Databases (e.g., Pinecone, pgvector) to support RAG and other AI patterns.
  • Partner with the AI Innovation Group to operationalize models using AWS SageMaker, Bedrock, and Azure OpenAI Service.
  • Implement comprehensive observability for infrastructure and model performance, monitoring for latency, drift, and resource utilization.
  • Ensure platform resilience and security, defining disaster recovery strategies and enforcing strict IAM policies for sensitive data.

Requirements:

  • 5+ years of experience in DevOps, Platform Engineering, or Cloud Infrastructure
  • Deep expertise in AWS core services (EC2, VPC, IAM, S3) and networking (Transit Gateways, Route 53)
  • Strong proficiency in Infrastructure as Code, specifically AWS CDK or Terraform
  • Experience with container orchestration (Kubernetes/EKS or ECS) and Docker
  • Proficiency in Python for scripting and automation; experience with API frameworks (FastAPI) is a plus
  • Familiarity with MLOps practices and tools (SageMaker, MLflow, Kubeflow) and deploying LLMs in production
  • Experience designing CI/CD pipelines (GitHub Actions) for both code and data/model workflows
  • Understanding of database architecture, including relational (PostgreSQL) and vector stores
  • Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience

Benefits:

  • Competitive base salary, bonus, and equity compensation
  • 23 days of PTO
  • Fully remote
  • 3% RRSP contribution
  • Competitive health benefits