Senior ML Hardware Architect
Our mission is to radically reduce the cost of Artificial Intelligence.
We are the world leaders in algorithm/hardware co-design for artificial intelligence. Our roadmap begins with products 100x more cost-effective than GPUs and will ultimately deliver products that are many orders of magnitude more cost-effective than anything available today. Our goal is to fit models the size of ChatGPT into chips the size of a thumbnail.
About the role:
We’re looking for an experienced ML Hardware Architect to help define the architecture of Rain’s next-generation AI accelerators! This person will report to our Lead Architect.
This is a remote role, so you can work from anywhere in the United States.
Responsibilities:
- Define the architecture of next-generation AI accelerators to enable novel online learning algorithms
- Work closely with the algorithms team to understand and evaluate local and global data movement in systolic and data-flow architectures
- Develop architecture simulators for system design, verification, prototyping, and hardware-software co-optimizations
- Work closely with the design team to define the micro-architecture of key IP blocks
- Work closely with the system software team to develop compilers and ISA extensions to enable new hardware functional units in an SoC
- Integrate proprietary accelerators and design macros with standard service cores
Requirements:
- MSEE with 5+ years of experience, or PhD, in Electrical Engineering and/or Computer Engineering, with a focus on SoC architecture design, AI accelerator design, system modeling, and PPA analysis
- Deep understanding of processor ISAs, including x86, ARM, and RISC-V
- Strong understanding of memory management schemes, on-chip and off-chip data movement, logic-memory optimizations, and efficient code placement techniques
- Experience with SystemC, Python, C/C++, and the ability to write production-ready, annotated code
- Experience designing state-of-the-art data-flow, spatial, and systolic architectures, and the ability to integrate synchronous and asynchronous design macros with both digital and mixed-signal circuit collateral
- Familiarity with compiler optimizations, synthesis, code generation, programming models, and computer architecture
- Familiarity with the integration of novel accelerators and design macros with service cores
- Familiarity with algorithmic techniques for on-device learning (online/continuous learning)
- Experience developing the architecture of at least one state-of-the-art AI accelerator
- Familiarity with deep learning models and willingness to learn novel algorithms and translate them into specifications for hardware accelerators
- A high degree of motivation and independence
- Strong communication skills, both written and verbal
Preferred qualifications:
- PhD in Electrical Engineering and/or Computer Engineering with a focus on near- and in-memory computing architectures for AI acceleration
- Understanding of state-of-the-art SRAM-based in-memory computing macros
Benefits:
- Medical Insurance with 100% coverage of employee premiums
- Dental and Vision Insurance
- 401k match
- Unlimited PTO + all federal holidays
- Company-wide time off: the week between Christmas and New Year's, plus the week of July 4th
- Work from anywhere in the United States
- And more!
The anticipated annual base salary for this position in the United States is $220,000 to $320,000. This range does not include other compensation components or benefits for which an individual may be eligible.