
Bitcot

Sr. Data Engineer

This job is no longer accepting applications

Posted 2 months ago

Job Description

Job Title: Data Engineer - Databricks & Data Integrations

Location - Chennai / Indore / Remote

Experience - 3-6 years

Employment Type - Full-time

About the Role

We are looking for a skilled Data Engineer to build, manage, and optimize scalable data pipelines using Databricks. The role is central to data source identification, data mapping, data quality assessment, and system integration, ensuring reliable, high-quality data for analytics and application use.

The ideal candidate will have strong hands-on experience with Databricks, data modeling, and integrations, and will collaborate closely with backend, analytics, and product teams.

Key Responsibilities:

Data Source Identification & Quality Assessment

  • Identify, analyze, and document available data sources, formats, and access methods.
  • Assess data quality across completeness, accuracy, consistency, and reliability.
  • Proactively identify data gaps, risks, and opportunities for improvement.
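As an illustration of the quality-assessment work described above, a minimal sketch in plain Python that profiles a batch of records for completeness and type consistency (the record schema and field names are hypothetical):

```python
# Minimal data-quality assessment sketch: profiles a batch of records
# for completeness (non-null rate) and type consistency.
# The schema and sample records below are hypothetical.

def assess_quality(records, schema):
    """Return per-field completeness and type-consistency ratios."""
    report = {}
    total = len(records)
    for field, expected_type in schema.items():
        present = sum(1 for r in records if r.get(field) is not None)
        typed = sum(1 for r in records
                    if isinstance(r.get(field), expected_type))
        report[field] = {
            "completeness": present / total if total else 0.0,
            "type_consistency": typed / total if total else 0.0,
        }
    return report

records = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},
    {"id": "3", "email": "c@example.com"},  # id stored as a string
]
schema = {"id": int, "email": str}
report = assess_quality(records, schema)
```

In practice the same profiling would run over Databricks tables, but the idea is the same: measure each field against expectations and surface gaps before they reach downstream consumers.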

Data Mapping & Integration

  • Define and maintain comprehensive data mapping between source systems and Databricks tables.
  • Design and implement scalable ETL/ELT pipelines using Databricks and Apache Spark.
  • Manage end-to-end data integrations with internal systems and third-party platforms.
  • Ensure smooth, reliable, and maintainable data flows across systems.
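A source-to-target mapping of the kind described above can be sketched as a declarative table plus a small apply step. The column names and transforms here are hypothetical; in a real pipeline the same mapping would drive a Spark transformation:

```python
# Declarative source-to-target column mapping, applied row by row.
# Column names and transforms are hypothetical examples.

MAPPING = {
    # target column: (source column, transform)
    "customer_id": ("cust_id", int),
    "email": ("email_addr", str.lower),
    "signup_date": ("created", lambda s: s[:10]),  # keep the date part
}

def apply_mapping(source_row):
    """Project a source row onto the target schema."""
    return {target: transform(source_row[src])
            for target, (src, transform) in MAPPING.items()}

row = {"cust_id": "42", "email_addr": "User@Example.COM",
       "created": "2024-05-01T12:30:00Z"}
mapped = apply_mapping(row)
```

Keeping the mapping declarative, separate from the code that applies it, makes it easy to document, review, and maintain as source systems evolve.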

Databricks & Data Modeling

  • Develop and optimize Databricks workloads using Spark and Delta Lake.
  • Design efficient data models optimized for performance, analytics, and API consumption.
  • Manage schema evolution, versioning, and data lineage.
  • Monitor pipeline performance and optimize for cost and scalability.

Data Quality, Validation & Governance

  • Implement data validation, reconciliation, and error-handling mechanisms.
  • Ensure data accuracy, integrity, and consistency across pipelines.
  • Follow best practices for data security, governance, and access control.
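The reconciliation step mentioned above can be sketched as a key-set comparison between source and target after a load, flagging discrepancies instead of failing silently (key values here are hypothetical):

```python
# Minimal reconciliation sketch: compares source and target key sets
# after a load and reports discrepancies explicitly.

def reconcile(source_keys, target_keys):
    """Return missing and unexpected keys between source and target."""
    source, target = set(source_keys), set(target_keys)
    return {
        "missing_in_target": sorted(source - target),
        "unexpected_in_target": sorted(target - source),
        "matched": len(source & target),
    }

result = reconcile([1, 2, 3, 4], [2, 3, 4, 5])
```

In a production pipeline the same check would compare row counts or checksums between source extracts and Databricks tables, with alerts on any mismatch.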

Collaboration & Documentation

  • Work closely with backend, analytics, and product teams to align data solutions with business needs.
  • Create and maintain clear technical documentation for data mappings, pipelines, and integrations.
  • Participate in design and architecture discussions.

Required Skills & Qualifications:

Technical Skills

  • Strong hands-on experience with Databricks and Apache Spark.
  • Proficiency in Python and SQL.
  • Proven experience in data mapping, transformation, and data modeling.
  • Experience integrating data from APIs, databases, and cloud storage.
  • Solid understanding of ETL/ELT concepts and data warehousing principles.

Cloud & Platform Experience

  • Experience working with AWS, Azure, or GCP (any one of them).
  • Familiarity with Delta Lake, Parquet, or similar data formats.
  • Understanding of secure data access and role-based permissions.

Good to Have

  • Experience supporting data for analytics dashboards or data-driven applications.
  • Exposure to streaming or near-real-time data pipelines.
  • Experience in startup or fast-paced product development environments.

Soft Skills

  • Strong analytical and problem-solving skills.
  • Ability to work effectively with cross-functional teams.
  • Clear communication and documentation skills.
  • Strong ownership mindset with attention to detail.

Why Join Us

  • Opportunity to work on impactful data platforms and integrations.
  • Hands-on role with ownership over data pipelines and models.
  • Collaborative and growth-oriented work culture.

Job ID: 140385187