Engineer, Principal/Manager - Machine Learning, AI

12-20 Years
  • Posted 8 days ago

Job Description

We are looking for a Principal AI/ML Engineer with expertise in model inference, optimization, debugging, and hardware acceleration. This role will focus on building efficient AI inference systems, debugging deep learning models, optimizing AI workloads for low latency, and accelerating deployment across diverse hardware platforms.

In addition to hands-on engineering, this role involves cutting-edge research in efficient deep learning, model compression, quantization, and AI hardware-aware optimization techniques. You will explore and implement state-of-the-art AI acceleration methods while collaborating with researchers, industry experts, and open-source communities to push the boundaries of AI performance.

This is an exciting opportunity for someone passionate about both applied AI development and AI research, with a strong focus on real-world deployment, model interpretability, and high-performance inference.
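To make the low-latency focus above concrete, here is a minimal latency-benchmark sketch of the kind this role involves: timing repeated inference calls and reporting p50/p99 latency. The "model" is a hypothetical stand-in (a pure-Python dense layer), not a real network, and the helper names are illustrative.

```python
import time

def fake_inference(x, weights):
    """Stand-in for a model forward pass: one dense layer (matrix-vector product)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def benchmark(fn, *args, warmup=10, iters=100):
    """Return (p50_ms, p99_ms) over `iters` timed runs, after `warmup` untimed runs."""
    for _ in range(warmup):
        fn(*args)  # warm caches / JITs before measuring
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    samples.sort()
    return samples[len(samples) // 2], samples[int(len(samples) * 0.99) - 1]

x = [1.0] * 64
weights = [[0.01] * 64 for _ in range(64)]
p50, p99 = benchmark(fake_inference, x, weights)
```

In practice the timed call would be a TensorRT, ONNX Runtime, or TVM session rather than a Python function, but the warmup/percentile methodology carries over.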

  • Education & Experience:
  • 20+ years of experience in AI/ML development, with at least 5 years in model inference, optimization, debugging, and Python-based AI deployment.
  • Master's or Ph.D. in Computer Science, Machine Learning, or AI.
  • Leadership & Collaboration:
  • Lead a team of AI engineers in Python-based AI inference development.
  • Collaborate with ML researchers, software engineers, and DevOps teams to deploy optimized AI solutions.
  • Define and enforce best practices for debugging and optimizing AI models.
  • Key Responsibilities:
  • Model Optimization & Quantization: Optimize deep learning models using quantization (INT8, INT4, mixed precision, etc.), pruning, and knowledge distillation.
  • Implement Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT) for deployment.
  • Familiarity with TensorRT, ONNX Runtime, OpenVINO, TVM.
  • AI Hardware Acceleration & Deployment: Optimize AI workloads for Qualcomm Hexagon DSP, GPUs (CUDA, Tensor Cores), TPUs, NPUs, FPGAs, Habana Gaudi, and the Apple Neural Engine.
  • Leverage Python APIs for hardware-specific acceleration, including cuDNN, XLA, MLIR.
  • Benchmark models on AI hardware architectures and debug performance issues.
  • AI Research & Innovation: Conduct state-of-the-art research on AI inference efficiency, model compression, low-bit precision, sparse computing, and algorithmic acceleration.
  • Explore new deep learning architectures (Sparse Transformers, Mixture of Experts, Flash Attention) for better inference performance.
  • Contribute to open-source AI projects and publish findings in top-tier ML conferences (NeurIPS, ICML, CVPR).
  • Collaborate with hardware vendors and AI research teams to optimize deep learning models for next-gen AI accelerators.
  • Details of Expertise:
  • Experience optimizing LLMs, LVMs, LMMs for inference.
  • Experience with deep learning frameworks: TensorFlow, PyTorch, JAX, ONNX.
  • Advanced skills in model quantization, pruning, and compression.
  • Proficiency in CUDA programming and Python GPU acceleration using CuPy, Numba, and TensorRT.
  • Hands-on experience with ML inference runtimes (TensorRT, TVM, ONNX Runtime, OpenVINO).
  • Experience working with runtime delegates (TFLite, ONNX, Qualcomm).
  • Strong expertise in Python programming, writing optimized and scalable AI code.
  • Experience with debugging AI models, including examining computation graphs using Netron Viewer, TensorBoard, and ONNX Runtime Debugger.
  • Strong debugging skills using profiling tools (PyTorch Profiler, TensorFlow Profiler, cProfile, Nsight Systems, perf, Py-Spy).
  • Expertise in cloud-based AI inference (AWS Inferentia, Azure ML, GCP AI Platform, Habana Gaudi).
  • Knowledge of hardware-aware optimizations (oneDNN, XLA, cuDNN, ROCm, MLIR, SparseML).
  • Contributions to open-source community.
  • Publications in international forums, conferences, and journals.
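As a concrete illustration of the quantization work listed above, here is a minimal sketch of symmetric per-tensor INT8 post-training quantization. It is pure Python for clarity; the function names are illustrative, and a production pipeline would use TensorRT, ONNX Runtime, or a similar toolkit.

```python
def quantize_int8(weights):
    """Map float weights to INT8 using a symmetric per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Round to the nearest integer step and clamp to the INT8 range.
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from INT8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
# Per-element rounding error is bounded by scale / 2.
```

Quantization-aware training (QAT) extends this idea by simulating the same round-trip during training so the model learns weights that are robust to the rounding error.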

About Company

QUALCOMM CDMA Technologies (QCT) is the largest provider of 3G chipset and software technology in the world, with chipsets shipped to more than 50 customers and powering the majority of all 3G devices commercially available. QCT partners with nearly 60 3G network operators around the globe and has the largest CDMA engineering team in the wireless industry.
QCT provides complete chipset solutions and integrated applications from the Launchpad suite of advanced technologies. Our integrated solutions offer device manufacturers reduced bill-of-materials costs, time-to-market, and development time. Mobile handsets powered by QCT chipsets can offer more features while maintaining a smaller, sleeker form-factor and benefiting from reduced power demands.
QCT values collaboration with its customers and partners and works closely with them to enable their success. We offer a wide range of tools to support the device development process, and develop new technologies based on the needs and demands of the wireless market. Devices for all market segments can now include features enabled by 3G wireless technology, in demand by a growing and increasingly sophisticated wireless community.

Job ID: 119222245
