Minimum qualifications:
- Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, a related field, or equivalent practical experience.
- 8 years of experience in ASIC/SoC architecture, logic design, or systems engineering.
- Experience defining and specifying boot, reset, and power management architectures for SoCs.
- Experience in hardware security principles (e.g., secure boot, hardware root of trust, secure debug).
- Experience in debug and trace architectures, specifically with ARM CoreSight infrastructure.
Preferred qualifications:
- Master's degree or PhD in Electrical Engineering, Computer Engineering, or a related field.
- Experience with AI/ML accelerator architectures or high-performance computing (HPC) SoCs.
- Experience taking a complex SoC from early concept definition through to post-silicon bring-up and debug.
- Knowledge of PCIe (Peripheral Component Interconnect Express) architecture and integration.
- Excellent communication and documentation skills, with the ability to lead and influence cross-functional engineering teams.
About The Job
In this role, you'll help shape the future of AI/ML hardware acceleration. You will have the opportunity to drive cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. As part of a team that pushes boundaries and develops the custom silicon behind products loved by millions worldwide, you'll leverage your design and verification expertise to verify complex digital designs, with a specific focus on TPU architecture and its integration within AI/ML-driven systems.
The AI and Infrastructure team is redefining what's possible. We empower Google customers with breakthrough capabilities and insights by delivering AI and Infrastructure at unparalleled scale, efficiency, reliability and velocity. Our customers include Googlers, Google Cloud customers, and billions of Google users worldwide.
We're the driving force behind Google's groundbreaking innovations, empowering the development of our cutting-edge AI models, delivering unparalleled computing power to global services, and providing the essential platforms that enable developers to build the future. From software to hardware, our teams are shaping the future of world-leading hyperscale computing, with key teams working on the development of our TPUs, Vertex AI for Google Cloud, Google Global Networking, Data Center operations, systems research, and much more.
Responsibilities
- Define and architect SoC-level debug and trace features, heavily utilizing ARM CoreSight and custom debug IPs to enable deep visibility into complex, multi-die AI systems.
- Partner with IP design, SoC integration, Design Verification (DV), and low-level software/firmware teams to translate architectural requirements into executable specifications.
- Drive the definition of the SoC power management architecture, including power domains, low-power states, and power sequencing.
- Conduct PPA (Power, Performance, Area) analysis to make data-driven architectural decisions.
- Architect robust and scalable boot, initialization, and reset sequences across the entire SoC.
- Define hardware security architectures, including secure boot, cryptographic isolation, and debug security/entitlement mechanisms.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.