Data Engineer
Job Description
The Data Engineer develops and maintains data pipelines and transformation models for analytics at Industrial Electric Manufacturing, collaborating with stakeholders and ensuring data quality that supports business decisions.
Responsibilities:
- Build and maintain ELT pipelines using Fivetran and custom integrations that ingest data from source systems including Procore, Salesforce, ERP platforms, and internal databases into Snowflake
- Develop, test, and document dbt models that transform raw data into clean, reliable datasets for analytics and reporting across Finance, Production, Supply Chain, and Engineering (a minimal staging-model sketch follows this list)
- Build dimensional models and staging layers following team conventions, ensuring data is structured for optimal Tableau dashboard performance and ad-hoc analysis
- Write and maintain dbt tests, monitor data freshness, and investigate data quality issues when they arise, owning resolution through to root cause (a sample singular test appears after this list)
- Work with APIs and data connectors to integrate new data sources, troubleshoot ingestion issues, and ensure reliable data flow into the warehouse
- Maintain clear documentation for data models, pipeline configurations, and business logic so the team can understand and extend your work
- Partner with business stakeholders to understand data needs, clarify requirements, and deliver datasets that answer real operational questions
- Monitor query performance and pipeline efficiency, identifying opportunities to optimize warehouse costs and model run times
- Participate in code reviews, follow Git workflows and CI/CD practices, and contribute to improving the team’s development processes
- Stay current with modern data stack tools and practices, bringing ideas for improvement back to the team
- Use AI coding assistants and agent-based tools to accelerate pipeline development, code generation, testing, and documentation. Manage AI agents as part of your daily workflow to increase throughput and quality
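
To give a concrete sense of the dbt work described above, here is a minimal staging-model sketch; the source, table, and column names (a hypothetical Salesforce opportunity feed) are assumptions for illustration, not this team's actual models:

```sql
-- models/staging/stg_salesforce__opportunities.sql
-- Hypothetical staging model: source and column names are illustrative only.
with source as (

    select * from {{ source('salesforce', 'opportunity') }}

),

renamed as (

    select
        id          as opportunity_id,
        accountid   as account_id,
        stagename   as stage_name,
        amount,
        closedate   as close_date,
        isdeleted   as is_deleted
    from source

)

select * from renamed
where not is_deleted
```

Keeping staging models to light renaming and filtering, with business logic deferred to downstream models, follows common dbt conventions.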
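And a minimal sketch of a singular dbt test, again with a hypothetical model name (stg_erp__orders); dbt treats a singular test as passing when the query returns no rows:

```sql
-- tests/assert_no_negative_order_amounts.sql
-- Hypothetical singular test: any rows returned are flagged as failures.
select
    order_id,
    order_amount
from {{ ref('stg_erp__orders') }}
where order_amount < 0
```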
Requirements:
- Bachelor’s degree in Computer Science, Information Systems, Data Science, or related field, or equivalent experience
- 3–5 years of experience in data engineering, analytics engineering, or BI development with hands-on experience building production data pipelines
- Strong SQL skills including CTEs, window functions, complex joins, and query optimization (see the example after this list)
- Experience with Snowflake or similar cloud data warehouses
- Working experience with dbt (data build tool) for building and testing data transformation workflows
- Proficiency in Python for data processing, scripting, and API integrations
- Experience with data integration platforms such as Fivetran or similar ELT tools
- Familiarity with Tableau or similar BI tools and understanding of how data structure impacts dashboard performance
- Comfortable with Git version control and modern development workflows including code review and CI/CD
- Strong problem-solving skills with ability to debug data issues systematically
- Clear written and verbal communication skills with ability to document work and explain technical concepts to non-technical stakeholders
- Self-motivated with ability to work independently in a remote environment while collaborating across a distributed team
- Experience using AI coding assistants (e.g., Claude, GitHub Copilot) and comfort directing AI agents to perform data engineering tasks such as writing SQL, generating dbt models, and debugging pipeline issues
- Preferred: Experience with manufacturing, construction, or project management data systems such as Procore, ERP platforms, or supply chain tools
- Preferred: Familiarity with dimensional modeling concepts (star schemas, fact/dimension tables)
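
As a rough gauge of the SQL level expected, the query below combines a CTE with window functions over a hypothetical star-schema pair (fct_orders joined to dim_customers); all table and column names are illustrative assumptions:

```sql
-- Hypothetical fact/dimension tables, for illustration only.
with monthly_revenue as (

    select
        d.customer_id,
        date_trunc('month', f.order_date) as order_month,
        sum(f.order_amount)               as revenue
    from fct_orders f
    join dim_customers d
        on d.customer_id = f.customer_id
    group by 1, 2

)

select
    customer_id,
    order_month,
    revenue,
    -- Window function: compare each month to the customer's prior month
    lag(revenue) over (
        partition by customer_id
        order by order_month
    ) as prior_month_revenue,
    revenue - lag(revenue) over (
        partition by customer_id
        order by order_month
    ) as month_over_month_change
from monthly_revenue
order by customer_id, order_month
```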
Benefits:
- We offer a comprehensive, competitive benefits package designed to support our employees' well-being, growth, and long-term success.