Big Data Engineer
Posted 36 days ago
Job Description
Data Engineer role at Inetum focused on operating large-scale data systems and building real-time data pipelines, collaborating with engineers and product managers to deliver robust technical solutions.
Responsibilities:
- Design, develop, and operate large-scale data systems at petabyte scale.
- Focus on real-time data pipelines, streaming analytics, distributed big data, and machine learning infrastructure.
- Collaborate with engineers, product managers, BI developers, and architects to deliver scalable, robust technical solutions.
- Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, with a focus on scalability, low latency, and fault tolerance.
- Experience with Java and Python for writing data pipelines and data-processing layers.
- Experience with Airflow and GitHub.
- Experience writing MapReduce jobs.
- Proven expertise with big data technologies: Hadoop, Hive, Kafka, Presto, Spark, HBase.
- Highly proficient in SQL.
- Experience with Cloud Technologies (GCP, Azure).
- Experience with relational and in-memory data stores desirable (Oracle, Cassandra, Druid).
Requirements:
- Bachelor's or engineering degree in Computer Systems, Computer Science, Information Technology, Software Engineering, Applied Mathematics, Statistics, or a related field.
- At least 4 years of verifiable experience in data engineering roles, including big data development, large-scale data processing, and data pipeline development.
- Use of technologies such as Hadoop, Spark, Kafka, Hive, etc.
- Conversational English.
Benefits:
- Benefits above and beyond the legal requirements.
- Work flexibility (hybrid/remote).
- Access to cutting-edge tools and technologies.
- Opportunity for professional growth in high-impact projects.