Senior Data Engineer

Posted 116 days ago

Job Description

Lean Tech is seeking a Data Engineer to transform raw application data using DBT, Materialize, and Snowflake, collaborating with cross-functional teams to optimize real-time data workflows.

Responsibilities:

  • Design, build, and maintain advanced data models and transformations using DBT across Materialize and Snowflake environments.
  • Develop and optimize expert-level SQL queries, views, and stored procedures, focusing on compute cost, memory usage, and advanced indexing strategies.
  • Translate business and BI requirements into scalable semantic models and curated tables to support real-time dashboards and reporting.
  • Monitor, tune, and optimize Materialize cluster usage, managing compute resource sizing and memory performance.
  • Troubleshoot and resolve upstream data issues by collaborating with Data Platform engineers on components such as CDC connectors, pipelines, or message flows.
  • Participate in schema design, data quality assessments, and the implementation of data governance best practices.
  • Collaborate with BI engineers to define data models that effectively support analytical and reporting requirements.
  • Analyze and support streaming-enabled architectures, including data flows from CDC, Kafka, and Materialize into Snowflake.
  • Support infrastructure tasks by understanding Infrastructure as Code (IaC) deployments, reviewing containerized flows, and exploring system logs.
  • Engage in continuous improvement efforts focused on pipeline reliability, performance tuning, cost optimization, and technical documentation.

Requirements:

  • Expert-level SQL proficiency, including query optimization, indexing strategies, understanding of database engine behavior, and the ability to write complex transformations.
  • Hands-on experience with DBT for data transformation and modeling.
  • Strong understanding of relational database concepts, including schemas, views, indexes, and query plans.
  • Experience working with modern data warehouses such as Snowflake.
  • Solid understanding of SQL-based stored procedures or functions, preferably in PostgreSQL; experience with other engines such as Oracle or SQL Server is also valuable.
  • Experience with streaming-enabled databases like Materialize, including an understanding of compute resource usage and cluster sizing.
  • Ability to debug and troubleshoot upstream pipeline issues related to CDC, connectors, or ingestion workflows.
  • Familiarity with streaming and real-time systems concepts, such as Kafka and JSON message consumption patterns.
  • Experience working in modern cloud environments (AWS, GCP, Azure, or Oracle).
  • General software engineering skills, including the ability to understand data flows, investigate logs, and reason about deployment components.
  • Familiarity with container technologies such as Docker and orchestration tools like ECS or Kubernetes from a conceptual standpoint.
  • Conceptual understanding of Infrastructure as Code (IaC) tools, such as Terraform or CloudFormation.
  • Ability to handle schema evolution challenges, including adjusting models when upstream schemas change (e.g., new fields, nullability changes, removed fields).

Benefits:

  • Professional development opportunities with international customers
  • Collaborative work environment
  • Career paths and mentorship programs that support advancement to new levels