
Key Responsibilities
Develop and maintain the Azure Databricks-hosted data lake by building pipelines to a wide range of internal and external data sources, including writing APIs.
Convert unprocessed data into mastered, easy-to-use tables for data analysts through ETL.
Promote best practices and help set up standardized, effective processes that improve your own work and that of the team.
Work collaboratively with the data engineering team to envision and lay out the roadmap for the data lake architecture.
Drive data governance with high standards of quality.
Coordinate with the wider data and analytics team to ensure that the output meets the requirements of data science, reporting and data analysis workstreams.
Contribute to driving the strong data culture behind ION Analytics' digital transformation.
Actively build in-depth knowledge across a range of domains:
- Commercial: Contracts and client relationships
- Accounting: Financials of ION and external companies
- Markets: M&A, LCM, and ECM deals
- Content: Articles and journalism
- Product front end: Functionality and platform offering
- Product back end: Usage and other archived information
Required skills, experience, and qualifications
- Working knowledge of Machine Learning (ML) or Artificial Intelligence (AI) preferred.
- Proficiency in Python and SQL; PySpark is a plus.
- Experience with Azure.
- Experience with Databricks.
- Experience with version control in Azure DevOps or GitHub.
- Experience with agile software development methodologies (Scrum, Kanban, etc.).
- Strong analytical and problem-solving skills.
- Strong written and verbal communication skills.
- Proactive team player with attention to detail and a quality-conscious mindset.
- Ability to work to deadlines and manage expectations.
We're visionary innovators who are delivering mission-critical trading and workflow automation software to financial institutions, corporations, central banks, and governments. By combining our passion for automation with a strategic view on the industries we serve, we design solutions that improve decision-making, simplify complex processes, and empower people. Simply put, we help our customers do more, faster and better than before. We believe our investments in research and development are shaping the future of automation and enabling our customers to transform their business. And we embrace the power of community, working with each other and with our customers to succeed through a positive culture of continuous improvement.
Job ID: 111117511
Skills:
Azure Data Factory, Spark, PySpark, Azure Data Lake, Databricks, Azure, SQL, Azure DevOps, Synapse, Azure SQL DB