Data Consultant, Databricks
Posted 64 days ago
Job Description
We are looking for a Data Engineer to lead the development of scalable data pipelines in the Databricks ecosystem, architecting ETL/ELT processes with a configuration-as-code approach to support data governance and performance.
Responsibilities:
- Lead the development of scalable data pipelines within the Databricks ecosystem
- Architect robust ETL/ELT processes using a "configuration-as-code" approach
- Migrate data ingestion and transformation workloads to Databricks using Lakeflow Declarative Pipelines
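The "configuration-as-code" approach above generally means pipeline stages are declared as data rather than hard-wired into scripts. A minimal, framework-agnostic sketch in plain Python (the stage names and transforms are hypothetical illustrations, not part of this role's actual stack):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class StageConfig:
    """Declarative description of one pipeline stage."""
    name: str
    transform: Callable[[list[dict]], list[dict]]

def run_pipeline(records: list[dict], stages: list[StageConfig]) -> list[dict]:
    """Apply each configured stage in order, like a tiny ETL runner."""
    for stage in stages:
        records = stage.transform(records)
    return records

# Example configuration: drop rows with null ids, then normalise names.
stages = [
    StageConfig("drop_null_ids",
                lambda rs: [r for r in rs if r.get("id") is not None]),
    StageConfig("normalise_names",
                lambda rs: [{**r, "name": r["name"].strip().lower()} for r in rs]),
]

raw = [{"id": 1, "name": "  Alice "}, {"id": None, "name": "Bob"}]
clean = run_pipeline(raw, stages)
```

In a real Databricks deployment the same idea scales up: the stage list lives in version-controlled configuration, and the runner is a PySpark or Lakeflow pipeline rather than a list comprehension.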
Requirements:
- Strong hands-on experience with Databricks in production environments
- Deep expertise in PySpark and advanced SQL
- Experience with:
  - Delta Lake
  - ingestion pipelines (batch + streaming)
  - data transformation frameworks/patterns
- Proven experience implementing:
  - CI/CD in Databricks
  - Databricks Asset Bundles (DABs)
  - declarative pipelines (Lakeflow)
- Strong AWS infrastructure familiarity (S3, IAM, compute patterns)
- Terraform experience specifically with Databricks + AWS resources
- PowerShell scripting experience (considered an asset)
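Several of the requirements above (Delta Lake, batch + streaming ingestion) centre on idempotent upserts, which Delta Lake expresses with `MERGE INTO`. A plain-Python sketch of the merge semantics, keyed on a hypothetical `id` column, to illustrate the pattern rather than the Delta Lake API itself:

```python
def merge_upsert(target: dict[int, dict], updates: list[dict]) -> dict[int, dict]:
    """Mimic MERGE semantics on an in-memory table keyed by id:
    matched rows are updated in place, unmatched rows are inserted."""
    merged = dict(target)  # copy so the caller's table is not mutated
    for row in updates:
        # Merge new column values over any existing row with the same id.
        merged[row["id"]] = {**merged.get(row["id"], {}), **row}
    return merged

target = {1: {"id": 1, "name": "alice", "city": "oslo"}}
updates = [{"id": 1, "name": "alicia"}, {"id": 2, "name": "bob"}]
result = merge_upsert(target, updates)
```

Because re-applying the same update batch leaves the table unchanged, the operation is idempotent, which is what makes the pattern safe to use from both batch and streaming ingestion paths.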
Benefits:
- Retirement Plan (401k, IRA)
- Work From Home
- Health Care Plan