Data Engineer

Job Description

The Data Engineer at Anika Systems is responsible for designing and building data pipelines, supporting federal clients with their data strategies, and ensuring high-quality data for analytics.

Responsibilities:

  • Design, develop, and maintain robust ETL/ELT pipelines to ingest, transform, and deliver data across enterprise platforms.
  • Build scalable data ingestion frameworks for structured and semi-structured data, including XBRL filings and financial datasets.
  • Implement data transformation logic to support analytics, reporting, and regulatory use cases.
  • Ensure data pipelines are reliable, performant, and scalable in cloud environments.
  • Leverage AI-assisted development tools to accelerate pipeline development, testing, and optimization.
  • Develop and manage data solutions leveraging AWS services (e.g., S3, Glue, Lambda, Redshift) and DAG-based orchestration with Apache Airflow (see the sketch after this list).
  • Implement and optimize Apache Iceberg table formats for large-scale, ACID-compliant data lakes.
  • Support lakehouse architectures that unify data lakes and data warehouses.
  • Optimize data storage and retrieval strategies for performance and cost efficiency.
  • Enable data platforms that support AI/ML workloads and downstream generative AI use cases.
  • Design and implement CI/CD workflows for data pipelines, infrastructure, and analytics code.
  • Automate build, test, and deployment processes for ETL pipelines and data platform components.
  • Implement DataOps best practices, including version control, automated testing, environment promotion, and rollback strategies.
  • Ensure reproducibility, reliability, and governance of data pipeline deployments across environments.
  • Integrate AI-driven testing and monitoring tools to improve pipeline quality and reduce operational risk.
  • Design and implement materialized views and other performance optimization techniques to improve query efficiency.
  • Develop pipelines to ingest, parse, and normalize XBRL data.
  • Apply context engineering principles to ensure data is enriched with meaningful metadata, lineage, and business context.
  • Collaborate with data architects, analysts, and business stakeholders to understand data needs and deliver solutions.
  • Work in Agile teams to iteratively deliver data capabilities and enhancements.
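
As an illustration of the Airflow and Iceberg items above, the following is a minimal sketch of a daily DAG that stages the day's raw XBRL filing keys from S3 and appends rows to an Iceberg table. It assumes Airflow 2.x, boto3, pyarrow, and pyiceberg with a configured catalog; every bucket, catalog, and table name is hypothetical, and the parsing step is a stub, so treat it as a shape rather than an implementation.

```python
# Hypothetical sketch: bucket, catalog, and table names are placeholders,
# not details from this posting. Assumes Airflow 2.x, boto3, pyarrow, pyiceberg.
from datetime import datetime, timedelta

import boto3
import pyarrow as pa
from airflow import DAG
from airflow.operators.python import PythonOperator
from pyiceberg.catalog import load_catalog

RAW_BUCKET = "example-raw-bucket"    # hypothetical S3 bucket
ICEBERG_TABLE = "lakehouse.filings"  # hypothetical Iceberg table


def ingest(ds: str, **_):
    """List the day's raw XBRL objects so the load task can process them."""
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=RAW_BUCKET, Prefix=f"xbrl/{ds}/")
    return [obj["Key"] for obj in resp.get("Contents", [])]


def load_to_iceberg(ti, ds: str, **_):
    """Append one row per filing to the Iceberg table (parsing is stubbed)."""
    keys = ti.xcom_pull(task_ids="ingest") or []
    rows = pa.table({
        "filing_key": pa.array(keys, type=pa.string()),
        "ingest_date": pa.array([ds] * len(keys), type=pa.string()),
    })
    catalog = load_catalog("default")  # assumes a configured pyiceberg catalog
    catalog.load_table(ICEBERG_TABLE).append(rows)


with DAG(
    dag_id="xbrl_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    load_task = PythonOperator(task_id="load_to_iceberg",
                               python_callable=load_to_iceberg)
    ingest_task >> load_task
```

In practice the stub would be replaced by real XBRL parsing and schema mapping; the parts that carry over are the idempotent daily schedule, the retry policy, and the explicit ingest-then-load dependency.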

Requirements:

  • Bachelor’s degree in Computer Science, Engineering, Data Science, or related field.
  • 5+ years of experience in data engineering, ETL development, or data platform engineering.
  • Strong hands-on experience with:
      • ETL/ELT tools and frameworks
      • AWS data services (S3, Glue, Lambda, Redshift, etc.)
      • Apache Iceberg and modern data lake architectures
  • Experience designing and implementing CI/CD pipelines for data platforms and ETL workflows.
  • Demonstrated proficiency using AI tools and AI-assisted development workflows (e.g., LLM copilots, automated code generation, pipeline optimization tools).
  • Experience processing XBRL or complex financial/regulatory datasets.
  • Proficiency in SQL and Python.
  • Experience implementing materialized views and query optimization techniques (see the sketch after this list).
  • Understanding of data modeling concepts and metadata management.
  • Familiarity with data governance, data quality practices, and data readiness for AI/ML use cases.
  • Ability to work in Agile, DevOps-oriented environments.
  • U.S. Citizenship required; ability to obtain and maintain a federal clearance.
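
As a concrete illustration of the materialized-view requirement, here is a hedged sketch that creates and refreshes a Redshift materialized view from Python via the Redshift Data API; the cluster, database, user, and table names are placeholders, not details from this posting.

```python
# Hypothetical sketch: the cluster, database, user, and table names are
# placeholders. Uses the Redshift Data API (boto3 "redshift-data" client),
# which executes statements asynchronously, hence the polling loop.
import time

import boto3

client = boto3.client("redshift-data")

# Precompute an aggregate that dashboards would otherwise recompute per query.
CREATE_MV = """
CREATE MATERIALIZED VIEW IF NOT EXISTS filings_by_cik AS
SELECT cik,
       COUNT(*)      AS filing_count,
       MAX(filed_at) AS last_filed
FROM filings
GROUP BY cik;
"""


def run(sql: str) -> None:
    """Submit a statement and poll until it completes."""
    resp = client.execute_statement(
        ClusterIdentifier="example-cluster",  # hypothetical
        Database="analytics",                 # hypothetical
        DbUser="etl_user",                    # hypothetical
        Sql=sql,
    )
    status = "SUBMITTED"
    while status not in ("FINISHED", "FAILED", "ABORTED"):
        time.sleep(1)
        status = client.describe_statement(Id=resp["Id"])["Status"]
    if status != "FINISHED":
        raise RuntimeError(f"statement {resp['Id']} ended as {status}")


run(CREATE_MV)
run("REFRESH MATERIALIZED VIEW filings_by_cik;")
```

Queries that read filings_by_cik instead of scanning the base table skip the repeated aggregation; the REFRESH statement keeps the view current after each pipeline run (Redshift can also auto-refresh eligible views).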

Benefits:

  • Health insurance
  • Paid time off
  • Professional development