
Texplorers Inc

Senior Data Engineer

  • Posted 20 hours ago
  • Be among the first 10 applicants

Job Description

Company Description

Texplorers Inc. is a leader in digital transformation for global enterprises, with over 15 years of experience in cloud technology and AI. The company focuses on empowering businesses to scale efficiently and improve continuously through its innovation ecosystem. Texplorers Inc. is committed to innovation, sustainability, and inclusion, ensuring ongoing learning and growth for its clients.

We are seeking candidates who can join us immediately.

We are seeking an experienced Data Engineer to build and refine robust big data solutions. You will be responsible for designing high-performance ETL pipelines using Apache Spark across major cloud environments (Azure, AWS, or GCP), with a heavy focus on data engineering excellence and system optimization.
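To give a concrete sense of the day-to-day work, below is a minimal PySpark sketch of the read-transform-write pattern described above; the storage paths, column names, and Delta output are illustrative assumptions, not Texplorers systems.

```python
# Minimal, illustrative PySpark ETL sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw JSON events from cloud object storage
# (assumes the appropriate cloud storage connector is configured).
raw = spark.read.json("s3a://example-bucket/raw/orders/")

# Transform: de-duplicate, drop bad records, and derive a partition column.
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write a partitioned Delta table (assumes Delta Lake is available on the cluster).
(orders.write
       .format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .save("s3a://example-bucket/curated/orders/"))
```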

Key Responsibilities:

-Lead the end-to-end design and optimization of Databricks ETL pipelines to ensure high-quality data delivery.

-Develop scalable data ingestion and transformation workflows by leveraging the full power of the Apache Spark framework.

-Manage complex data lifecycles involving both structured and unstructured datasets from diverse sources.

-Fine-tune Spark jobs to balance peak performance with cost-efficiency and long-term scalability.

-Establish robust CI/CD patterns and version control standards for all Databricks-based workflows.

-Collaborate with stakeholders across engineering, data science, and analytics to translate business needs into technical solutions.

-Enforce rigorous security protocols and access management policies within the Databricks environment.

-Diagnose and resolve complex bottlenecks related to big data processing and system scalability.

Required Skills:

-Proven track record of developing high-performance solutions using Databricks and Apache Spark (proficient in PySpark, Scala, or Java).

-Expert-level command of SQL for complex data modeling, transformation, and analytical querying.

-Deep familiarity with cloud ecosystems (Azure, AWS, or GCP), with specific expertise in storage and warehousing solutions like ADLS, S3, or BigQuery.

-Significant experience building end-to-end ETL/ELT pipelines and managing scalable Data Warehousing environments.

-Strong command of CI/CD pipelines, Git version control, and DevOps methodologies tailored for data engineering.

-In-depth knowledge of Delta Lake and Lakehouse architecture, with a focus on advanced performance tuning and optimization (a brief illustrative example follows this list).

-Skilled in managing massive-scale datasets within distributed computing environments.
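As a brief illustration of the Delta Lake tuning noted above, the following sketch shows routine table maintenance, assuming a Delta-enabled runtime such as Databricks; the table sales.orders and the customer_id column are placeholders.

```python
# Illustrative Delta Lake maintenance (assumes a Delta-enabled runtime, e.g. Databricks).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_maintenance").getOrCreate()

# Compact small files and co-locate rows by a frequently filtered column.
spark.sql("OPTIMIZE sales.orders ZORDER BY (customer_id)")

# Clean up data files no longer referenced by the table (default 7-day retention).
spark.sql("VACUUM sales.orders")
```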

Preferred Skills:

-Skilled in leveraging Databricks MLflow to track, deploy, and scale machine learning models.

-Familiarity with modern data stack components such as Apache Kafka for streaming, Airflow for workflow automation, or Snowflake for cloud warehousing.

-Deep understanding of data privacy standards, access control policies, and corporate compliance requirements.

Education & Experience:

-Bachelor's or Master's degree in Computer Science, Data Engineering, or a closely related technical discipline.

-5-7 years of specialized experience in data engineering and big data architecture.

-Mandatory background in the Retail sector, with a deep understanding of industry-specific data challenges.


Job ID: 138593369
