Our AI method has accelerated and optimized chip design, and its superhuman chip layouts are used in hardware around the world
In 2020, we published a preprint introducing our novel reinforcement learning method for designing chip layouts, which we later published in Nature and made open source.
Today we are publishing an addendum to Nature that describes more about our method and its implications for the field of chip design. We also release a pre-trained checkpoint, share the model weights, and announce its name: AlphaChip.
Computer chips have led to remarkable advances in artificial intelligence (AI), and AlphaChip is returning the favor by using AI to accelerate and optimize chip design. The method has been used to design superhuman chip layouts in the last three generations of Google's custom AI accelerator, the Tensor Processing Unit (TPU).
AlphaChip was one of the first reinforcement learning approaches to solve a real-world engineering problem. It generates superhuman or comparable chip layouts in hours, rather than requiring weeks or months of human effort, and its layouts are used in chips around the world, from data centers to cell phones.
This is how AlphaChip works
Designing a chip layout is not an easy task. Computer chips are made up of many interconnected blocks with layers of circuit components, all connected by incredibly thin wires. There are also many complex, interrelated design constraints that must all be satisfied simultaneously. Because of this complexity, chip designers have struggled to automate chip floorplanning for over sixty years.
Similar to AlphaGo and AlphaZero, which learned to master the games of Go, chess and shogi, we designed AlphaChip to treat chip floorplanning as a kind of game.
Starting from an empty grid, AlphaChip places one circuit component at a time until all components have been placed. It is then rewarded based on the quality of the final layout. A novel “edge-based” graph neural network enables AlphaChip to learn the relationships between interconnected chip components and to generalize across chips, allowing AlphaChip to improve with every layout it designs.
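The place-one-component-at-a-time game loop can be sketched in a few lines. This is a deliberately tiny toy, not AlphaChip's actual formulation: the three-component netlist, the 4x4 grid, the random "policy", and the reward (negative total wirelength) are all illustrative assumptions.

```python
import random

# Toy sketch of the placement-as-game loop: place components one at a
# time on a grid, then receive a single reward for the final layout.
# The netlist, grid size, and reward below are illustrative assumptions,
# not AlphaChip's real environment.

COMPONENTS = ["a", "b", "c"]
NETLIST = [("a", "b"), ("b", "c"), ("a", "c")]  # hypothetical wired pairs
GRID = 4  # 4x4 placement grid

def wirelength(placement):
    """Sum of Manhattan distances between connected components."""
    return sum(
        abs(placement[u][0] - placement[v][0]) + abs(placement[u][1] - placement[v][1])
        for u, v in NETLIST
    )

def play_episode(rng):
    """One 'game': place every component, then score the final layout."""
    free = [(x, y) for x in range(GRID) for y in range(GRID)]
    placement = {}
    for comp in COMPONENTS:  # one action per component, like one move per turn
        cell = rng.choice(free)  # a trained policy would choose here; we pick randomly
        free.remove(cell)
        placement[comp] = cell
    return -wirelength(placement)  # final reward: shorter wiring is better

rng = random.Random(0)
best = max(play_episode(rng) for _ in range(200))
print("best reward over 200 random episodes:", best)
```

A learned policy replaces the random choice, and the end-of-episode reward is the only training signal, which is what makes a reinforcement-learning treatment natural for this problem.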
Using AI to design Google's AI accelerator chips
AlphaChip has generated superhuman chip layouts used in every generation of Google's TPU since the method was published in 2020. These chips enable massive scaling of AI models based on Google's Transformer architecture.
TPUs are at the heart of our powerful generative AI systems, from large language models like Gemini to image and video generators like Imagen and Veo. These AI accelerators are also at the heart of Google's AI services and are available to external users via Google Cloud.
To design TPU layouts, AlphaChip first practices on a variety of chip blocks from previous generations, such as on-chip and inter-chip network blocks, memory controllers and data transport buffers. This process is called pre-training. We then run AlphaChip on current TPU blocks to generate high-quality layouts. Unlike previous approaches, AlphaChip gets better and faster as it solves more instances of the chip placement task, similar to how human experts do.
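The payoff of pre-training is a warm start: experience from earlier blocks lets the model converge on a new block faster than starting from scratch. A minimal sketch of that idea, using plain gradient descent on made-up quadratic losses as stand-ins for training on chip blocks (none of this is AlphaChip's actual training code):

```python
# Toy illustration of pre-train-then-fine-tune. Each "block" is modeled
# as a quadratic loss with its own optimum; the numbers are invented
# purely to show the warm-start effect.

def steps_to_converge(theta, target, lr=0.3, tol=1e-3):
    """Gradient descent on loss (theta - target)^2; count steps to converge."""
    steps = 0
    while abs(theta - target) > tol:
        theta -= lr * 2 * (theta - target)  # gradient step on the quadratic loss
        steps += 1
    return theta, steps

# Pre-training: fit sequentially on previous-generation "blocks"
# whose optima cluster near 5 (an illustrative assumption).
theta = 0.0
for target in [4.8, 5.1, 5.3]:
    theta, _ = steps_to_converge(theta, target)

# Fine-tuning on a new "block" (optimum 5.5): warm start vs. from scratch.
_, warm_steps = steps_to_converge(theta, 5.5)
_, cold_steps = steps_to_converge(0.0, 5.5)
print(f"warm start: {warm_steps} steps, from scratch: {cold_steps} steps")
```

Because the new block resembles the ones seen during pre-training, the warm-started run needs fewer steps, mirroring how AlphaChip improves as it solves more placement tasks.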
With each new generation of TPU, including our latest Trillium (6th Gen), AlphaChip has designed better chip layouts and provided a larger portion of the overall footprint, speeding up the design cycle and producing more powerful chips.
The broader impact of AlphaChip
AlphaChip's impact can be seen in its applications across Alphabet, the research community and the chip design industry. In addition to developing dedicated AI accelerators like TPUs, AlphaChip has also created layouts for other chips at Alphabet, such as Google Axion processors, our first Arm-based general-purpose data center CPUs.
External organizations are also adopting and building on AlphaChip. For example, MediaTek, one of the world's leading chip design companies, has extended AlphaChip to accelerate the development of its most advanced chips while improving power, performance and chip area.
AlphaChip has sparked an explosion of work on AI for chip design and has expanded to other critical phases of chip design, such as logic synthesis and macro selection.
Creating the chips of the future
We believe AlphaChip has the potential to optimize every phase of the chip design cycle, from computer architecture to manufacturing – and to transform chip design for custom hardware found in everyday devices such as smartphones, medical devices, agricultural sensors, and more.
Future versions of AlphaChip are currently in development and we look forward to working with the community to further revolutionize this space and create a future where chips are even faster, cheaper and more energy efficient.
Acknowledgments
We are so grateful to our amazing co-authors: Mustafa Yazgan, Joe Wenjie Jiang, Ebrahim Songhori, Shen Wang, Young-Joon Lee, Eric Johnson, Omkar Pathak, Azade Nazi, Jiwoo Pak, Andy Tong, Kavya Srinivasa, William Hang, Emre Tuncer, Quoc V. Le, James Laudon, Richard Ho, Roger Carpenter and Jeff Dean.
We particularly thank Joe Wenjie Jiang, Ebrahim Songhori, Young-Joon Lee, Roger Carpenter, and Sergio Guadarrama for their continued efforts to achieve this production impact, Quoc V. Le for his research advice and mentorship, and our senior author Jeff Dean for his support and in-depth technical discussions.
We would also like to thank Ed Chi, Zoubin Ghahramani, Koray Kavukcuoglu, Dave Patterson, and Chris Manning for all their advice and support.