Data Engineer – Pipelines, Structured Markup
Posted 47 days ago
Job Description
Vulcury is hiring a Data Engineer to design ingestion pipelines and structured transformation workflows. You will collaborate across teams to convert raw data into structured, queryable data objects for internal use.
Responsibilities:
- Build and maintain ingestion pipelines (Python-based ETL/ELT)
- Design structured transformation workflows using dbt, SQLMesh, or equivalent
- Convert unstructured transcripts and documents into normalized database records
- Maintain PostgreSQL architecture (structured tables, JSONB, indexing strategy)
- Develop attribute extraction frameworks for technical, commercial, and risk signals
- Ensure data quality, consistency, and lineage from raw interaction to structured output
- Collaborate with AI/ML engineers to ensure clean model inputs
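To illustrate the kind of work involved, here is a minimal sketch of converting an unstructured transcript into normalized records with a simple attribute-extraction pass. All field names (`turns`, `speaker`, `text`, `risk_flags`) and the keyword list are illustrative assumptions, not Vulcury's actual schema or extraction framework.

```python
import json
import re

# Illustrative risk-signal vocabulary; a production framework would be far richer.
RISK_KEYWORDS = {"delay", "breach", "penalty"}

def extract_records(raw_transcript: str) -> list[dict]:
    """Parse a raw transcript JSON payload and emit normalized row dicts.

    Assumed input shape (hypothetical): {"turns": [{"speaker": ..., "text": ...}]}
    """
    payload = json.loads(raw_transcript)
    records = []
    for turn in payload.get("turns", []):
        text = turn.get("text", "").strip()
        records.append({
            "speaker": turn.get("speaker", "unknown"),
            "text": text,
            # Keyword-based risk flagging keeps the sketch self-contained;
            # real attribute extraction would cover technical and commercial
            # signals as well.
            "risk_flags": sorted(
                RISK_KEYWORDS & set(re.findall(r"\w+", text.lower()))
            ),
        })
    return records
```

Each emitted dict maps directly onto a normalized database row, which is the "raw interaction to structured output" path described above.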
Requirements:
- Strong Python (data pipelines, orchestration)
- Advanced SQL (PostgreSQL preferred)
- Experience with ETL/ELT frameworks (dbt, Airflow, SQLMesh, etc.)
- Experience handling semi-structured data (JSON, transcripts, document parsing)
- Strong schema design and normalization skills
- Familiarity with cloud storage systems (S3 or equivalent)
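A candidate's day-to-day might combine several of these requirements at once, for example loading semi-structured payloads into a table that mixes structured columns with a JSON column. The sketch below uses SQLite (Python stdlib) as a stand-in for PostgreSQL, so the JSONB column becomes serialized-JSON TEXT; table and column names are hypothetical.

```python
import json
import sqlite3

def load_interactions(conn: sqlite3.Connection, rows: list[dict]) -> None:
    """Idempotently load interaction rows: structured columns plus a JSON payload."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS interactions (
               id TEXT PRIMARY KEY,   -- stable key makes re-runs idempotent
               source TEXT NOT NULL,  -- structured, directly queryable column
               attributes TEXT        -- semi-structured payload (JSONB in Postgres)
           )"""
    )
    conn.executemany(
        # Upsert so repeated pipeline runs converge on the latest state.
        """INSERT INTO interactions (id, source, attributes)
           VALUES (:id, :source, :attributes)
           ON CONFLICT(id) DO UPDATE SET
               source = excluded.source,
               attributes = excluded.attributes""",
        [
            {"id": r["id"], "source": r["source"],
             "attributes": json.dumps(r.get("attributes", {}))}
            for r in rows
        ],
    )
    conn.commit()
```

In PostgreSQL the `attributes` column would be JSONB with a GIN index, and the upsert would use `ON CONFLICT ... DO UPDATE` in essentially the same form.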