Mid-Level Data Engineer
Posted 42 days ago
Job Description
We are seeking a professional Data Engineer to develop and maintain robust data solutions using SQL and Python, collaborating with data science and analytics teams to ensure data quality and integrity across platforms.
Responsibilities:
- Develop and maintain ETL pipelines using SQL and/or Python.
- Use tools like Dagster or Airflow for pipeline orchestration.
- Collaborate with cross-functional teams to understand and deliver data requirements.
- Ensure a consistent flow of high-quality data using stream, batch, and CDC processes.
- Use data transformation tools like dbt to prepare datasets that enable business users to self-serve.
- Ensure data quality and consistency in all data stores.
- Monitor and troubleshoot data pipelines for performance and reliability.
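The day-to-day work above can be sketched in miniature. The following is a minimal, hypothetical example of a SQL + Python ETL step; all table and column names are invented for illustration, and an in-memory sqlite3 database stands in for a real warehouse such as Snowflake or BigQuery:

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    """Extract raw orders, transform (deduplicate + aggregate), load a mart table."""
    cur = conn.cursor()
    # Extract: a raw landing table, as it might arrive from a batch or CDC feed.
    cur.execute("CREATE TABLE IF NOT EXISTS raw_orders (order_id INTEGER, amount REAL)")
    cur.executemany(
        "INSERT INTO raw_orders VALUES (?, ?)",
        [(1, 10.0), (1, 10.0), (2, 25.5)],  # note the duplicate row
    )
    # Transform + Load: deduplicate, aggregate, and upsert into a reporting table.
    cur.execute(
        "CREATE TABLE IF NOT EXISTS order_totals (order_id INTEGER PRIMARY KEY, total REAL)"
    )
    cur.execute("""
        INSERT OR REPLACE INTO order_totals
        SELECT order_id, SUM(amount)
        FROM (SELECT DISTINCT order_id, amount FROM raw_orders)
        GROUP BY order_id
    """)
    conn.commit()
    return cur.execute("SELECT COUNT(*) FROM order_totals").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(run_etl(conn))  # 2 distinct orders loaded
```

In practice a step like this would live inside a Dagster asset or an Airflow task, with the transformation logic itself typically expressed as a dbt model rather than hand-written SQL.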
Requirements:
- 3+ years of experience as a data engineer.
- Proficiency in SQL is a must.
- Experience with modern cloud data warehouse and data lake solutions such as Snowflake, BigQuery, Redshift, or Azure Synapse.
- Experience building ETL/ELT, batch, and streaming data processing pipelines.
- Excellent ability to investigate and troubleshoot data issues, providing fixes and proposing both short- and long-term solutions.
- Knowledge of AWS services such as S3, DMS, Glue, and Athena.
- Familiarity with dbt or other data transformation tools.
- Familiarity with GenAI and how to leverage LLMs to solve engineering challenges.
- Experience with orchestration tools like Dagster, Airflow, AWS Step Functions, etc.
- Familiarity with CI/CD pipelines and automation.
Benefits:
- Health insurance
- Retirement plans
- Paid time off
- Flexible work arrangements
- Professional development