Data Engineer II

1-6 Years
  • Posted 6 days ago
  • Over 100 applicants

Job Description

  • Drive the evolution of data and services platforms with a strong emphasis on data engineering and data science, ensuring impactful advancements in data quality, scalability, and efficiency.
  • Develop and fine-tune methods and algorithms to generate precise, high-quality data at scale, including the creation and maintenance of feature stores, analytical stores and curated datasets for enhanced data integrity and usability.
  • Solve complex data challenges involving multi-layered data sets and optimize the performance of existing data pipelines, libraries, and frameworks.
  • Provide support for deployed data applications and analytical models, identifying data issues and guiding resolutions.
  • Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification.
  • Integrate diverse data sources, including real-time, streaming, batch, and API-based data, to enrich platform insights and drive data-driven decision-making.
  • Experiment with new tools to streamline the development, testing, deployment, and running of our data pipelines.
  • Develop and enforce best practices for data engineering, including coding standards, code reviews, and documentation.
  • Ensure data security and privacy compliance, implementing measures to protect sensitive data.
  • Communicate, collaborate and work effectively in a global environment.
Qualifications

  • Bachelor's degree in Computer Science, Software Engineering, or a related field.
  • Extensive hands-on experience in Data Engineering, including implementing multiple end-to-end data warehouse projects in Big Data environments.
  • Proficiency in application development frameworks (Python, Java/Scala) and data processing/storage frameworks (Hadoop, Spark, Kafka).
  • Experience in developing data orchestration workflows using tools such as Apache NiFi, Apache Airflow, or similar platforms to automate and streamline data pipelines.
  • Experience with performance tuning of database schemas, databases, SQL, ETL jobs, and related scripts.
  • Experience working in Agile teams.
  • Experience developing data-driven applications and data processing workflows/pipelines, and/or implementing machine learning systems at scale using Java, Scala, or Python, covering all phases: data ingestion, feature engineering, modeling, tuning, evaluation, monitoring, and presenting analytics.
  • Experience in developing integrated cloud applications with services like Azure, Databricks, AWS or GCP.
  • Excellent analytical and problem-solving skills, with the ability to analyze complex data issues and develop practical solutions.
  • Strong communication and interpersonal skills, with the ability to collaborate effectively with, and facilitate activities across, geographically distributed cross-functional teams and stakeholders.

More Info

Job Type:
Industry:
Function:
Employment Type:
Open to candidates from: India

About Company

Dynamic Yield by Mastercard enables teams to build personalized, optimized, and synchronized digital customer experiences, enhancing revenue and customer loyalty.

Job ID: 118924327