Data Engineer – Medicine & Analytics
Job Description
The Data Engineer is responsible for building reliable data solutions for analytics in Medicine, collaborating with diverse teams to improve patient outcomes through data-driven insights.
Responsibilities:
- Design, build, and maintain end‑to‑end data pipelines and integrations to support Medicine & Analytics use cases (batch and event‑driven patterns where relevant).
- Develop, operate, and optimize integrations using SnapLogic and AWS services such as S3, Lambda, and Glue to ensure robust ingestion, transformation, and orchestration.
- Implement and maintain analytics‑ready data models in Snowflake, ensuring performance, scalability, and cost‑efficient design.
- Build transformation logic and analytics layers using dbt, including modular modeling, testing, documentation, and deployment best practices.
- Contribute to and enforce data governance standards by leveraging tools such as Collibra, ensuring metadata quality, lineage, ownership, and consistent definitions.
- Partner with Data Quality stakeholders to implement and monitor quality controls using Ataccama, including rules, profiling, exception handling, and remediation workflows.
- Support data lifecycle processes and operationalization of data products using Innovator (as applicable in the ecosystem) to align delivery with platform and product standards.
- Proactively identify opportunities to simplify architecture, automate repetitive work, and reduce operational effort (observability, alerting, self‑healing patterns).
- Ensure all solutions follow security, privacy, and compliance expectations (e.g., regulated environment practices, audit readiness, access controls, data handling).
- Collaborate closely with Product Owners, Data Scientists, Analysts, Architects, and business stakeholders to translate needs into reliable, reusable data assets.
- Act as a role model for engineering excellence: version control, CI/CD, code reviews, documentation, and operational runbooks.
Requirements:
- Degree in Computer Science, Engineering, Data/Information Systems, or a related field, with several years of relevant experience in data engineering, analytics engineering, or similar roles.
- Hands‑on experience building integrations and pipelines using tools such as SnapLogic (or comparable iPaaS) and cloud services — specifically AWS S3, Lambda, and Glue.
- Strong experience with Snowflake including data modeling, performance tuning, and secure data access patterns.
- Proven experience with dbt (models, tests, macros, documentation, environments, CI/CD integration).
- Familiarity with data governance and metadata management, ideally with Collibra; understanding of lineage, stewardship, and data catalog practices.
- Experience implementing data quality controls and monitoring, ideally with Ataccama (or equivalent tooling and approaches).
- Solid knowledge of software engineering fundamentals: Python/SQL, Git, coding standards, automated testing, and production support practices.
- Demonstrated ability to work independently, manage priorities, and proactively drive work forward in a dynamic environment.
- Strong stakeholder management, analytical thinking, and structured problem‑solving skills.
- Excellent communication skills in English (Spanish is a strong plus), enabling clear interaction with technical and non‑technical stakeholders.
Benefits:
- Permanent Contract
- Learning & Development