
Tesla

Senior Data Engineer

6-8 Years
  • Posted 9 days ago

Job Description

About The Role
We are looking for a hands-on Senior Data Engineer to help build and evolve Tesla's next-generation enterprise analytics platform, which powers business intelligence, operational intelligence, manufacturing insights, supply chain visibility, service telemetry, energy operations, and more, all while operating under strict SOX compliance and change management controls.

You will design, develop, and operate large-scale data infrastructure in a fast-paced, high-impact environment where decisions affect vehicle production, global delivery, battery lifecycle, the Supercharger network, Full Self-Driving development, and energy grid optimization.

Key Responsibilities

  • Architect, build, and maintain state-of-the-art Enterprise Data Warehouse / Lakehouse solutions that serve both batch and near-real-time analytics use cases
  • Design and implement robust ETL / ELT pipelines using Python and Apache Airflow (or modern orchestration equivalents)
  • Develop and operate real-time data streaming and processing platforms using open-source technologies such as Apache Kafka, Apache Spark Streaming / Structured Streaming, Flink, or equivalent
  • Handle sensitive financial, production, and customer data systems while strictly adhering to SOX controls, segregation of duties, change management, and audit requirements
  • Partner closely with business sponsors, product managers, manufacturing engineers, service operations, finance, and IT/security teams to gather requirements, scope projects, and deliver high-quality solutions quickly
  • Communicate complex technical concepts and business impact effectively through written documentation, verbal discussions, architecture diagrams, and executive-level presentations (360-degree communication)
  • Define, enforce, and continuously improve engineering standards, coding best practices, testing methodologies, CI/CD patterns, monitoring & alerting, and quality assurance processes
  • Actively participate in design reviews, code walkthroughs, and pull request reviews across the team
  • Stay current with evolving open-source technologies and recommend adoption when they provide meaningful differentiation or operational efficiency
  • Provide global 24x7 data support on a rotating on-call basis and own ETL / streaming pipeline health monitoring, alerting, and incident resolution
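The orchestration responsibilities above follow a common pattern: pipeline tasks are declared with their dependencies and executed in topological order. A minimal pure-Python sketch of that pattern (task names are hypothetical; a production system would express this as an Airflow DAG of operators):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring how an Airflow DAG orders extract -> transform -> load.
PIPELINE = {
    "extract_telemetry": set(),
    "extract_finance": set(),
    "transform_join": {"extract_telemetry", "extract_finance"},
    "load_warehouse": {"transform_join"},
    "refresh_dashboards": {"load_warehouse"},
}

def run_pipeline(dag: dict[str, set[str]]) -> list[str]:
    """Execute tasks in dependency order and return the order used."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        pass  # a real runner would invoke the task's callable here
    return order

if __name__ == "__main__":
    print(run_pipeline(PIPELINE))
```

An orchestrator such as Airflow adds scheduling, retries, and alerting on top of exactly this dependency-ordered execution.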

Required Qualifications

  • 6+ years of professional experience as a Data Engineer, Backend Engineer, or ETL developer building large-scale data platforms
  • Strong proficiency in Python for data engineering (pandas, PySpark, SQLAlchemy, etc.)
  • Deep hands-on experience designing and operating Airflow DAGs in production at scale
  • Production experience with at least one distributed streaming system (Kafka, Kafka Streams, Spark Streaming, Flink, Pulsar, etc.)
  • Solid understanding of data modeling for analytical workloads
  • Experience building and operating systems under SOX compliance or similarly regulated environments (change control, audit trails, separation of duties, etc.)
  • Strong SQL skills and understanding of distributed query engines
  • Experience with containerization (Docker) and orchestration (Kubernetes / ECS) is highly desirable
  • Excellent communication skills: able to explain technical trade-offs to engineers and business value to non-technical stakeholders
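The streaming-systems requirement above typically implies fluency with concepts such as event-time windowed aggregation. A pure-Python illustration of a tumbling-window count (event fields are hypothetical; Kafka Streams, Flink, or Spark Structured Streaming would express the same aggregation with their window APIs):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Count events per key within fixed-size (tumbling) event-time windows.

    `events` is an iterable of (timestamp_ms, key) pairs; the result maps
    (window_start_ms, key) -> count.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

if __name__ == "__main__":
    # Hypothetical vehicle-telemetry events keyed by VIN
    events = [(1000, "vin_a"), (1500, "vin_b"), (2500, "vin_a"), (3100, "vin_a")]
    print(tumbling_window_counts(events, window_ms=1000))
```

A production stream processor layers watermarks, state backends, and exactly-once delivery on top of this core windowing logic.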

Job ID: 142596911