Postdoctoral Fellow, Data Engineering, Pipelines, PySpark

Posted 84 days ago

Job Description

Data Engineer responsible for developing scalable data architectures and pipelines for CIELO, with a focus on data requirement validation, automation, and technical documentation processes.

Responsibilities:

  • Plan and align the project with the Androidization strategy
  • Gather and validate functional and technical requirements
  • Design the solution architecture and the data model
  • Automate ingestion and processing of operational data
  • Model and automate refined tables
  • Implement monitoring and proactive alerts
  • Publish and validate tables in the production environment
  • Document the entire technical and functional structure of the platform
  • Conduct training sessions and formalize the technical handover

Requirements:

  • Degree: PhD
  • Education: Computer Engineering, Computer Science, Statistics, or related fields
  • English: Intermediate
  • Strong knowledge of data engineering using Python
  • Basic knowledge of cloud computing platforms
  • Experience with code versioning using Git
  • Ability to design scalable architectures
  • Ability to automate observability mechanisms and alerts
  • Ability to produce technical and operational documentation
  • Ability to conduct training sessions and technical handovers

Benefits:

  • Stipend: BRL 9,000.00
  • Availability: 40 hours per week
  • Remote (Home Office)