
Bring in industry best practices for creating and maintaining robust data pipelines for complex data projects, with or without an AI component:
o Programmatically retrieve (mostly unstructured) data from several static and real-time sources (incl. web scraping and API use)
o Parse this data into a structured format
o Harmonize the data into a common format and store it in a dedicated database
o Schedule the different jobs into a dedicated pipeline
o Render results through dynamic interfaces (web / mobile / dashboard) with the ability to log usage and granular user feedback
o Tune performance and optimally implement complex Python scripts and SQL
Industrialize ML / DL solutions, deploy and manage production services, and proactively handle data issues arising on live apps
Perform ETL on large and complex datasets for AI applications; work closely with data scientists on performance optimization of large-scale ML / DL model fine-tuning
Build data tools to facilitate fast data cleaning and statistical analysis
Build data architecture and ensure it is secure and compliant
Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
Work closely with APAC IT Transformation and coordinate with a fully decentralized team across different locations in APAC and the global HQ (Paris).
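The retrieve / structure / harmonize / store steps above could be sketched, very roughly, as follows. This is a minimal illustration only: the source names, field mappings, and `quotes` schema are invented for the example, the raw payloads stand in for real scraping / API calls, and job scheduling (e.g. cron or an orchestrator) is omitted.

```python
import json
import sqlite3

# Illustrative raw payloads; in practice these would come from web
# scraping or API calls against static and real-time sources.
RAW_SOURCES = {
    "api_feed": '[{"ticker": "ABC", "px": "10.5"}]',
    "scraped": '[{"symbol": "XYZ", "price": "7.25"}]',
}

# Per-source mapping of fields into one common (harmonized) schema.
FIELD_MAP = {
    "api_feed": {"ticker": "symbol", "px": "price"},
    "scraped": {"symbol": "symbol", "price": "price"},
}

def retrieve(source: str) -> list[dict]:
    """Stand-in for retrieval: parse the raw payload into records."""
    return json.loads(RAW_SOURCES[source])

def harmonize(source: str, records: list[dict]) -> list[dict]:
    """Rename source-specific fields into the common schema."""
    mapping = FIELD_MAP[source]
    return [{mapping[k]: v for k, v in rec.items()} for rec in records]

def store(conn: sqlite3.Connection, records: list[dict]) -> None:
    """Persist harmonized records in the dedicated database."""
    conn.executemany(
        "INSERT INTO quotes (symbol, price) VALUES (:symbol, :price)",
        [{"symbol": r["symbol"], "price": float(r["price"])} for r in records],
    )

def run_pipeline(conn: sqlite3.Connection) -> None:
    """One pipeline run over every configured source."""
    for source in RAW_SOURCES:
        store(conn, harmonize(source, retrieve(source)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotes (symbol TEXT, price REAL)")
run_pipeline(conn)
print(conn.execute("SELECT symbol, price FROM quotes ORDER BY symbol").fetchall())
# prints [('ABC', 10.5), ('XYZ', 7.25)]
```

In a production setting each step would be a separately scheduled, monitored job writing to a real database rather than in-memory SQLite.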
You should:
Be an expert in structured and unstructured data in traditional and Big Data environments: Oracle / SQL Server, MongoDB, Hive / Pig, BigQuery, and Spark
Have excellent knowledge of Python programming in both traditional and distributed models (PySpark)
Be an expert in shell scripting and writing schedulers
Have hands-on experience with the cloud, deploying complex data solutions in hybrid cloud / on-premise environments for both data extraction / storage and computation
Have experience working with industry-standard services such as message queues, Redis, Elasticsearch, Kafka, or Spark Streaming
Be well versed in DevOps best practices such as containerization and CI/CD pipelines (Jenkins and Maven)
Have hands-on experience deploying production apps using large volumes of data with state-of-the-art technologies such as Docker, Kubernetes, and Kafka
Have strong knowledge of data security best practices
Have 10+ years of experience in a data engineering role
Be a graduate of a Tier-1 university
Knowledge of finance and experience handling company annual reports would be greatly appreciated
And most importantly, you must be a passionate coder who really cares about building apps that can help us do things better, smarter, and faster
eClerx provides business process management, automation and analytics services to a number of Fortune 2000 enterprises, including some of the world's leading financial services, communications, retail, fashion, media & entertainment, manufacturing, travel & leisure, and technology companies.
Job ID: 143684335