  • Posted 6 days ago
  • Be among the first 40 applicants

Job Description

We are seeking a Senior Data Engineer with expertise in Graph Data technologies to join our data engineering team. This role is crucial for developing scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. The ideal candidate will have a strong background in data architecture, big data processing, and Graph technologies, enabling the organization to leverage connected data for deep insights and advanced analytics use cases. You will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities

  • Graph Data Engineering: Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows (see the first illustrative sketch after this list).
  • Graph Database Optimization: Own the implementation of graph-based data models, capturing complex relationships and hierarchies. Build and optimize graph databases such as Stardog, Neo4j, MarkLogic, or similar to support query performance, scalability, and reliability.
  • Query Implementation: Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements (see the second sketch after this list).
  • Data Integration & Analytics: Collaborate with data architects to integrate graph data with existing data lakes and warehouses. Work closely with data scientists and analysts to enable graph analytics, link analysis, and recommendation systems.
  • Metadata & Governance: Develop metadata-driven pipelines and lineage tracking for graph and relational data processing. Ensure data quality, governance, and security standards are met across all graph data initiatives.
  • Mentorship & Innovation: Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies. Stay up-to-date with the latest developments in graph technology, graph ML, and network analytics.
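
For illustration only, a minimal sketch of the kind of Databricks pipeline step described above: deriving a graph edge table from a relational Delta source with PySpark. The table paths and column names (transactions, customer_id, product_id) are hypothetical and not taken from this posting.

    # Hedged sketch: build a graph edge table on Databricks with PySpark + Delta Lake.
    # All paths, table names, and columns below are assumptions for illustration.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("graph-edge-build").getOrCreate()

    # Hypothetical relational source: one row per customer purchase.
    tx = spark.read.format("delta").load("/mnt/lake/silver/transactions")

    # Derive weighted PURCHASED edges (customer -> product).
    edges = (
        tx.select(F.col("customer_id").alias("src"),
                  F.col("product_id").alias("dst"))
          .groupBy("src", "dst")
          .agg(F.count("*").alias("weight"))
          .withColumn("rel_type", F.lit("PURCHASED"))
    )

    # Persist as a Delta table so downstream graph loads stay reproducible.
    edges.write.format("delta").mode("overwrite").save("/mnt/lake/gold/edges_purchased")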

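Likewise, a hedged sketch of graph query implementation, here running a Cypher link-analysis query against Neo4j through the official Python driver. The connection details, node labels, and property names (Customer, Product, PURCHASED) are assumptions for illustration; the same pattern would use SPARQL or Gremlin on other platforms.

    # Hedged sketch: co-purchase link analysis in Cypher via the neo4j Python driver.
    # URI, credentials, labels, and properties are illustrative assumptions.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    CO_PURCHASE_QUERY = """
    MATCH (c:Customer)-[:PURCHASED]->(:Product {id: $product_id}),
          (c)-[:PURCHASED]->(other:Product)
    WHERE other.id <> $product_id
    RETURN other.id AS product, count(DISTINCT c) AS shared_customers
    ORDER BY shared_customers DESC
    LIMIT 10
    """

    with driver.session() as session:
        for record in session.run(CO_PURCHASE_QUERY, product_id="P-1001"):
            print(record["product"], record["shared_customers"])

    driver.close()
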
Qualifications

  • A Master's or Bachelor's degree in Computer Science, IT, or a related field with relevant experience.
  • Hands-on experience in Databricks, including PySpark and Delta Lake.
  • Hands-on experience with graph database platforms such as Stardog, Neo4j, or MarkLogic.
  • Strong understanding of graph theory, graph modeling, and traversal algorithms.
  • Proficiency in workflow orchestration and performance tuning for big data processing workloads.
  • Strong understanding of AWS services.
  • Experience with software engineering best practices, including version control (Git), CI/CD (Jenkins), and automated unit testing.
  • AWS Certified Data Engineer, Databricks certification, or Scaled Agile (SAFe) certification is preferred.

Soft Skills

  • Problem-Solving: Excellent analytical and troubleshooting skills, with the ability to quickly learn, adapt, and apply new technologies.
  • Collaboration: Excellent collaboration and communication skills, with experience working with Scaled Agile Framework (SAFe) and DevOps practices.
  • Proactiveness: High degree of initiative and self-motivation, with the ability to manage multiple priorities successfully.
  • Communication: Strong verbal and written communication skills, including presentation and public speaking skills.

More Info

Open to candidates from: India

About Company

Horizon Therapeutics develops innovative medicines for rare and rheumatic diseases and is dedicated to improving patient lives.

Job ID: 123279777
