IntelliCredence Pvt. Ltd.

Databricks Engineer (Python or Java)


Job Description

Company Description

IntelliCredence is a global IT consulting and technology services provider dedicated to helping enterprises innovate and achieve scalability through cutting-edge technologies. With expertise in AI, cloud, digital solutions, data science, and analytics, we deliver value-driven solutions tailored to our clients' business objectives. We serve leading organizations across industries such as healthcare, retail, and manufacturing, offering diverse engagement models to foster growth and innovation. At IntelliCredence, we prioritize customer success, proactive collaboration, and cost-effective excellence in technology services. Our team consists of highly skilled professionals, supported by flexible engagement models and pricing options.

Role Description

We are seeking a Databricks Engineer (specializing in Python or Java) to join our dynamic team on a full-time basis. This on-site position is located in Bengaluru. The individual will be responsible for designing, implementing, and optimizing scalable data engineering solutions on the Databricks platform, with a focus on large-scale data processing, ETL workflows, and cloud-based data platforms built on Databricks, Spark, Delta Lake, and cloud services. Key tasks include developing data pipelines, creating and managing data models, debugging and optimizing code, and collaborating with cross-functional teams to enhance data processing workflows. The engineer will also ensure seamless integration with other technologies and maintain best practices in code development and deployment.

Job Title: Databricks Engineer

Experience: 5–9 Years

Location: Bengaluru

Employment Type: Full-time

Roles and Responsibilities

  • Design, develop, and maintain data pipelines using Databricks and Apache Spark
  • Write efficient code using Python / PySpark or Java / Spark
  • Build ETL/ELT workflows for structured and unstructured data
  • Work with Delta Lake for reliable and optimized data storage
  • Optimize Spark jobs for performance, scalability, and cost efficiency
  • Integrate Databricks with cloud platforms such as Azure, AWS, or GCP
  • Work with data lakes, data warehouses, and big data processing systems
  • Perform data cleansing, transformation, validation, and quality checks
  • Collaborate with data engineers, analysts, data scientists, and business teams
  • Monitor and troubleshoot production data pipelines
  • Follow best practices for coding, version control, testing, and deployment
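To give candidates a feel for the cleansing, validation, and quality-check work listed above, here is a minimal sketch in plain Python. The field names (`id`, `event_ts`, `amount`) and validation rules are illustrative assumptions, not requirements from this posting:

```python
# Sketch of row-level cleansing and validation logic of the kind that
# might run inside a Databricks pipeline before writing to Delta Lake.
# All field names and rules here are hypothetical examples.
from typing import Optional


def clean_record(record: dict) -> Optional[dict]:
    """Trim string fields, coerce types, and reject invalid rows."""
    cleaned = {k: v.strip() if isinstance(v, str) else v
               for k, v in record.items()}
    # Quality check: required fields must be present and non-empty.
    if not cleaned.get("id") or not cleaned.get("event_ts"):
        return None  # drop rows that fail validation
    # Type coercion with a safe fallback.
    try:
        cleaned["amount"] = float(cleaned.get("amount", 0))
    except (TypeError, ValueError):
        cleaned["amount"] = 0.0
    return cleaned


def clean_batch(records: list) -> list:
    """Apply cleansing to a batch, keeping only valid rows."""
    return [r for r in (clean_record(rec) for rec in records)
            if r is not None]
```

In a production Databricks job this logic would usually be expressed as Spark DataFrame transformations rather than per-row Python, but the validation rules carry over directly.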

Required Skills

  • Strong experience with Databricks
  • Good hands-on experience in Python / PySpark or Java / Spark
  • Strong understanding of Apache Spark architecture
  • Experience with SQL and data modeling
  • Knowledge of Delta Lake
  • Experience in building batch and streaming data pipelines
  • Familiarity with cloud services such as Azure Data Lake, AWS S3, or Google Cloud Storage
  • Experience with workflow orchestration tools like Databricks Jobs, Airflow, or Azure Data Factory
  • Good understanding of performance tuning in Spark
  • Experience with Git, CI/CD, and production deployment practices
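The performance-tuning experience listed above typically shows up as Spark session configuration. The snippet below is a configuration sketch only; every value is an assumption to be tuned per workload, and running it requires a Spark environment:

```python
# Illustrative Spark session setup for a batch ETL job. The settings
# shown are common tuning knobs, not recommendations from this posting.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("etl-job")
    # Shuffle parallelism; 200 is Spark's default and rarely optimal.
    .config("spark.sql.shuffle.partitions", "200")
    # Adaptive query execution re-plans joins and partitions at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    # Kryo serialization is usually faster than Java serialization.
    .config("spark.serializer",
            "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)
```

On Databricks itself the session is provided for you, so these options would instead be set at the cluster or job level.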

Preferred Skills

  • Experience with Azure Databricks / AWS Databricks
  • Knowledge of Kafka or streaming data processing
  • Experience with Unity Catalog
  • Understanding of data governance, access control, and security
  • Exposure to data warehouse platforms like Snowflake, Synapse, Redshift, or BigQuery
  • Knowledge of Agile development methodology

Qualifications

  • Strong programming knowledge, with proficiency in Python or Java
  • Experience in software development and building scalable solutions
  • Proficiency in microservices architecture and building distributed systems
  • Hands-on expertise in the Spring Framework for robust application development
  • Problem-solving skills and the ability to collaborate effectively in team environments
  • Experience working with big data tools, particularly Databricks, is a significant advantage
  • Bachelor's degree in Computer Science, IT, Engineering, or related field
  • Relevant Databricks, Spark, or cloud certifications will be an added advantage

Key Competencies

  • Strong problem-solving skills
  • Good communication skills
  • Ability to work independently and in a team
  • Strong analytical and debugging skills
  • Focus on performance, quality, and scalability

Job ID: 147546445