Senior Data Engineer – LATAM-Based Only

Posted 61 days ago

Job Description

We are seeking a Senior Data Engineer to build data pipelines and platforms for analytics. This is a remote role open to experienced candidates based in LATAM.

Responsibilities:

  • Design, develop, and maintain robust data pipelines using Python and Apache Airflow, including pipeline testing and operational monitoring.
  • Build and optimize complex SQL transformations, applying advanced techniques such as CTEs, window functions, and query performance tuning.
  • Model and manage analytical datasets in Snowflake, ensuring scalability, clarity of data models, and cost-efficient usage.
  • Develop and maintain data processing workflows using Apache Spark for large-scale data transformations.
  • Integrate and consume streaming data using Apache Kafka, ensuring reliable and scalable data ingestion.
  • Design and implement testing strategies for data pipelines, including data validation, regression checks, and pipeline reliability controls.
  • Work with AWS services, primarily S3, to manage data lakes, including partitioning strategies, lifecycle policies, and storage optimization.
  • Collaborate with cross-functional teams to understand data requirements, ensure data availability, and support downstream analytics and applications.
  • Contribute to best practices around data architecture, performance optimization, and operational excellence.

Requirements:

  • 5 to 7 years of professional experience in Data Engineering or closely related roles.
  • At least 4 years of hands-on experience building and maintaining production-grade data pipelines using Python and SQL.
  • Demonstrated experience working with cloud-based data platforms and distributed data processing in complex, high-volume environments.
  • Strong professional experience with Python for data engineering use cases.
  • Advanced SQL expertise, including complex joins, CTEs, window functions, and performance tuning.
  • Hands-on experience with Snowflake, focused on data modeling and analytical workloads.
  • Proven experience developing and testing data pipelines in Apache Airflow using Python.
  • Experience working with Apache Kafka in data ingestion or streaming architectures.
  • Solid understanding and practical experience defining and implementing data testing strategies.
  • Experience with Apache Spark for distributed data processing.
  • AWS experience, with strong knowledge of S3 for data lakes, including partitioning and lifecycle management.

Benefits:

  • Participation in challenging, high-impact projects with international clients.
  • Remote-first work model with a LATAM-focused team.
  • Competitive compensation aligned with seniority and expertise.
  • Opportunity to influence data architecture decisions and best practices within delivery teams.