eternal.ag

Computer Vision Engineer (All levels)

  • Posted 15 hours ago
  • Be among the first 10 applicants

Job Description

Transform 800,000 hectares of greenhouses into fully-autonomous food production sites

At eternal.ag, we're building the future of sustainable food production. Our mission is to convert the world's existing greenhouses into fully automated facilities that can produce fresh food year-round - addressing the critical need to double food production by 2050 while facing severe labor shortages, water scarcity, and climate challenges.

About The Role

Help us bring our robots to life in greenhouses around the world. Whether you're a recent graduate or an experienced engineer, you'll build perception systems that enable robots to identify ripe produce, detect plant diseases, and navigate complex greenhouse environments autonomously.

As a Computer Vision Engineer at eternal.ag, you'll be part of a high-performance culture that values first-principles thinking and rapid iteration. You'll work at the intersection of classical computer vision and modern deep learning, creating perception systems that operate reliably in challenging agricultural environments with varying lighting, occlusions, and organic variability.

You'll collaborate with a distributed team across our Cologne HQ and Bengaluru office, pushing the boundaries of what's possible in agricultural computer vision while delivering practical solutions that work 24/7 in production environments.

What You'll Do

  • Maintain and improve our perception pipeline, including detecting ripe tomatoes among dense foliage, estimating cut points on stems with sub-centimeter accuracy, and adapting to new crop types as we expand.
  • Build and maintain data and model lifecycle infrastructure, from data collection and annotation to model monitoring, drift detection, and retraining across our growing robot fleet.
  • Create real-time perception pipelines that process 2D/3D sensor data for robotic decision-making with sub-centimeter accuracy.
  • Optimize vision algorithms for edge deployment on robotic platforms, balancing accuracy with computational efficiency.
  • Build tools for debugging, analysis, and performance evaluation of your algorithms. Root-cause failures when things break in production.
  • Collaborate cross-functionally with robotics engineers, AI/ML researchers, and crop scientists to deliver end-to-end perception solutions.
  • Own your projects from prototype to production. We expect production-quality software engineering, not just notebook experiments; we don't have separate research and engineering teams.

Qualifications

Core Requirements (All Levels)

  • Bachelor's or Master's degree in Computer Science, Electrical Engineering, Applied Mathematics, Artificial Intelligence, Machine Learning, Robotics, or a related field (or graduating by Summer 2026)
  • Strong programming skills in C++ and/or Python
  • Solid understanding of deep learning fundamentals (training, evaluation, common failure modes) and how to apply them to real-world problems
  • Understanding of 3D geometry, camera models, and coordinate transforms
  • Working knowledge of classical CV (OpenCV, image processing, filtering)
  • Experience with at least one deep learning framework (PyTorch preferred)
  • Familiarity with Linux environments and version control systems
  • A bias toward letting data and learning solve problems, while being pragmatic about when classical approaches are the right tool
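The camera-model and coordinate-transform background above boils down to a few lines of linear algebra. A minimal sketch (hypothetical intrinsics, numpy only): projecting a 3D point in the camera frame to pixel coordinates with a pinhole model.

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths (fx, fy) in pixels
# and principal point (cx, cy).
K = np.array([
    [600.0,   0.0, 320.0],
    [  0.0, 600.0, 240.0],
    [  0.0,   0.0,   1.0],
])

def project_point(K: np.ndarray, p_cam: np.ndarray) -> np.ndarray:
    """Project a 3D point in the camera frame (meters) to pixel coordinates."""
    uvw = K @ p_cam          # homogeneous image coordinates
    return uvw[:2] / uvw[2]  # perspective divide

# A point 2 m in front of the camera, 0.1 m right, 0.05 m up (camera +y points down).
pixel = project_point(K, np.array([0.1, -0.05, 2.0]))
print(pixel)  # -> [350. 225.]
```

Real systems would also account for lens distortion and an extrinsic robot-to-camera transform; this shows only the core projection.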

Experience Levels

New Graduate / Entry Level (0-2 years)

  • Recent graduate or final year student with strong academic performance
  • Hands-on computer vision and/or deep learning experience through internships, research projects, or competitions (Kaggle, university labs, personal projects)
  • Demonstrated programming skills through coursework or personal projects
  • Understands model training and evaluation basics, including common failure modes

Early Career (2-5 years)

  • Has built a training pipeline end-to-end: data collection/curation, training, evaluation, deployment, and iteration based on real-world feedback
  • Experience taking at least one vision or ML system from prototype to production
  • Proficiency with modern architectures (YOLO, Mask R-CNN, Vision Transformers)
  • Can make sound tradeoffs between classical and learning-based components in a perception pipeline
  • Practical experience with model optimization for edge deployment (quantization, distillation, TensorRT/ONNX export)
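The quantization item above can be sketched minimally as symmetric per-tensor int8 quantization (numpy only; a real edge pipeline would use the framework's quantization tooling, e.g. during TensorRT/ONNX export):

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: w is approximated by q * scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([-1.27, 0.5, 0.01, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Round-trip error is bounded by ~scale/2 per weight.
print(np.abs(w - w_hat).max())
```

The tradeoff this illustrates: 4x smaller weights and integer arithmetic on the edge device, at the cost of a bounded per-weight rounding error.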

Senior Level (5-8 years)

  • Proven track record of deploying production perception systems and making sound architectural tradeoffs between classical and learning-based approaches
  • Experience designing data strategies: what to collect, how to annotate efficiently, when to retrain vs. fine-tune
  • MLOps experience: automated retraining pipelines, model versioning, drift detection, or A/B evaluation across a deployed fleet
  • Knowledge of model optimization for embedded systems (quantization, pruning, distillation)
  • Ability to mentor junior engineers and make technical decisions that compound over time

Staff/Principal Level (8+ years)

  • Technical leadership experience with complex perception systems at scale
  • Has built and iterated on the full ML lifecycle: training infrastructure, data flywheels, MLOps, model monitoring, continuous improvement at fleet scale
  • Strategic thinking about perception architecture, including when learned methods outperform engineered solutions and when simpler methods remain preferable
  • Track record of building and scaling high-performance computer vision teams

Preferred Qualifications

  • Experience with foundation models, vision-language models, or transfer learning for domain adaptation
  • Knowledge of 3D sensors (stereo cameras, depth sensors)
  • Familiarity with ROS2 for perception system integration
  • Experience with MLOps and ML infrastructure: experiment tracking, model versioning, automated retraining, data management at scale
  • Publications at top-tier computer vision or ML conferences (CVPR, ICCV, NeurIPS, ICML)
  • Open source contributions to computer vision or ML projects

Why eternal.ag

Launch Your Career: For new graduates, this is a unique opportunity to join a proven team and learn from engineers who've already built and deployed commercial vision systems. You'll get hands-on experience with cutting-edge technology while making a real-world impact from day one.

Impact at Scale: Your perception algorithms will directly enable robots to transform 800,000 hectares of greenhouses worldwide into sustainable, autonomous food production facilities.

Technical Excellence: Work with state-of-the-art computer vision technology including modern deep learning architectures, 3D perception, and multi-modal sensor fusion.

Rapid Innovation: Our software-first approach means you'll see your models deployed to real robots in hours/days, not months. We've proven we can develop and deploy new perception capabilities over-the-air as crops evolve.

Unique Challenges: Tackle perception problems that combine the complexity of outdoor vision (varying lighting, weather) with the precision requirements of industrial automation.

Growth Opportunity: Join as we scale from proof-of-concept to global deployment. Be part of the core team shaping the future of agricultural perception. Clear career progression from graduate to senior engineer and beyond.

Mentorship & Learning: Work alongside experienced computer vision engineers who've solved complex real-world problems. We invest in your growth through hands-on projects, technical mentorship, and exposure to all aspects of vision system development.

Flexible Work Culture: Distributed team with offices in Cologne and Bengaluru, running a follow-the-sun support model for our 24/7 operations.

Our Tech Stack

  • Deep Learning: PyTorch, ONNX
  • Deployment: TensorRT, ONNX Runtime
  • Vision & 3D: OpenCV
  • Sensors: RGB cameras, stereo vision, depth sensors
  • Data & Training: Cloud-native training pipelines, experiment tracking, annotation tooling
  • Infrastructure: Edge deployment systems, OTA model updates
  • Integration: ROS2 for robotic system integration
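As one concrete instance of the stereo sensing listed above: for a rectified stereo pair, depth follows from disparity via Z = f * B / d. A minimal sketch with a hypothetical focal length and baseline (numpy only):

```python
import numpy as np

# Hypothetical rectified stereo rig: focal length in pixels, baseline in meters.
FOCAL_PX = 600.0
BASELINE_M = 0.12

def depth_from_disparity(disparity_px) -> np.ndarray:
    """Per-pixel depth (meters) from disparity (pixels): Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        z = FOCAL_PX * BASELINE_M / d
    # Zero or negative disparity means no valid match at that pixel.
    return np.where(d > 0, z, np.inf)

print(depth_from_disparity([72.0, 36.0, 0.0]))  # -> [ 1.  2. inf]
```

Note how depth resolution degrades with distance (disparity halves from 1 m to 2 m here), which is one reason sub-centimeter cut-point estimation constrains the working distance of the sensors.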

Apply Now

Ready to revolutionize how the world grows food through advanced computer vision? Whether you're starting your career or looking to make a bigger impact, join us in building perception systems that will enable sustainable food production for billions.

We're committed to building a diverse and inclusive team. We encourage applications from candidates of all backgrounds and experience levels who are excited about our mission and show potential to grow with us. Recent graduates - don't let experience requirements hold you back; we value passion, potential, and fresh perspectives.

eternal.ag is building fully automated food production sites that can sustainably produce fresh food year-round. Backed by world-class investors and partnering with leading agricultural companies, we're turning the vision of fully-autonomous greenhouses into reality.

Job ID: 145306567