
Senior Perception Engineer

  • Posted 14 hours ago

Job Description

About Origin

Origin (formerly 10xConstruction) is building general-purpose autonomous robots for US construction to tackle rising costs, safety risks, and labor shortages. Our modular, multi-trade platform combines purpose-built hardware with real-time site intelligence to navigate complex environments and execute tasks with precision. Trained in high-fidelity simulation and already deployed on live sites, our robots deliver 5x faster execution, 250%+ margin expansion, and significant cost savings. Join India's most talent-dense robotics team, with members from IITs, Stanford, UCLA, and beyond.

About The Role

You will work on building and optimizing 3D perception systems that enable robots to understand and interact with complex real-world environments. The goal is to develop robust perception pipelines that work reliably across both simulation and real-world construction sites, ensuring accurate scene understanding, localization, and decision-making.

Key Responsibilities

3D Perception Development & Scene Understanding

  • Design, implement, and deploy real-time 3D perception pipelines leveraging LiDAR, IMU, stereo, and RGB camera data
  • Develop algorithms for spatial and temporal data interpretation to enable high-fidelity semantic scene understanding
  • Optimize ego-motion estimation and localization modules to ensure seamless integration with downstream planning and control tasks

Deep Learning

  • Train and integrate deep learning models required for semantic world understanding and surface finish classification for quality control
  • Collect and curate high-quality datasets (real and synthetic) and automate training pipelines and experiment tracking
  • Benchmark and optimize perception models for edge devices to ensure real-time performance in resource-constrained environments

Sensor Fusion & Localization

  • Architect and implement sensor fusion strategies that combine classical methods with deep learning to convert noisy, asynchronous data from heterogeneous visual, inertial, and spatial sensors into a unified environment representation

Calibration

  • Develop and automate robust extrinsic and intrinsic calibration procedures (Camera, LiDAR, IMU) using target-based and/or targetless methods
  • Design and implement algorithms for online calibration drift detection and self-healing to maintain system integrity during long-term deployments
  • Establish rigorous quantitative metrics to objectively evaluate and certify calibration quality

Collaboration

  • Partner with Navigation, Manipulation, and Cloud teams to ensure perception outputs are optimized for downstream path planning, grasping, and remote fleet monitoring
  • Define, track, and own system-level KPIs and perception metrics to identify regressions across software iterations

Requirements

  • 3+ years of experience
  • Strong fundamentals in computer vision and 3D perception
  • Proficiency in C++ (Python is a plus)
  • Familiarity with PyTorch and TensorRT
  • Basic understanding of localization and SLAM
  • Ability to work with real-world data and debug complex systems

Nice to Have

  • Experience with NVIDIA DeepStream, GStreamer, or Holoscan
  • Experience with ROS/ROS2
  • Hands-on experience with LiDAR, RGB-D cameras, or IMUs
  • Familiarity with OpenCV, PCL, or similar libraries
  • Experience working on robotics or vision-based projects

Job ID: 147219201

Bengaluru, India

Skills:

Ubuntu, TensorFlow, Linux, shell scripting, Cloud Services, PyTorch, Python, ROS, NVIDIA Isaac Sim, model deployment, sensor fusion techniques, computer vision algorithms, multi-modal AI systems