Google AI Designs Computer Chips in Under 6 Hours

In a recent Google AI blog post, Google AI lead Jeff Dean, scientists at Google Research, and the Google chip implementation and infrastructure team described an AI technology that can design computer chips in under six hours.

The team explained the method in a published paper describing a learning-based approach to chip design that can learn from experience and improve over time, becoming better at generating architectures for unseen components. They claim the technology can complete a computer chip design in under six hours on average, significantly faster than the weeks it takes with human experts in the loop.

According to the company, the new technology advances the state of the art in that it means the placement of on-chip transistors can be largely automated. If made publicly available, the Google researchers’ technique could enable cash-strapped startups to develop their own chips for AI and other specialized purposes.

Additionally, such a development could shorten the chip design cycle, allowing hardware to adapt better to rapidly evolving research.

Explaining the method, the blog post states that, in essence, the approach aims to place a “netlist” graph of logic gates, memory, and more onto a chip canvas, such that the design optimizes power, performance, and area (PPA) while adhering to constraints on placement density and routing congestion. The graphs range in size from millions to billions of nodes grouped in thousands of clusters, and evaluating the target metrics typically takes from hours to over a day.
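To make that setup concrete, here is a minimal sketch (not Google’s code) of the kind of problem being described: a simplified grid canvas with a density constraint that any placement must respect. The Component and Canvas classes, the bin size, and the 0.6 density limit are illustrative assumptions.

```python
# Illustrative sketch only: a netlist component placed onto a discretized chip
# canvas, where each grid bin may only be filled up to a density limit.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    width: float
    height: float

class Canvas:
    def __init__(self, rows, cols, bin_area, max_density=0.6):
        self.rows, self.cols = rows, cols
        self.bin_area = bin_area          # usable area of one grid bin
        self.max_density = max_density    # assumed placement-density limit
        self.used = [[0.0] * cols for _ in range(rows)]  # area used per bin

    def can_place(self, comp, row, col):
        # A location is legal only if the bin stays under the density limit.
        return (self.used[row][col] + comp.width * comp.height
                <= self.max_density * self.bin_area)

    def place(self, comp, row, col):
        self.used[row][col] += comp.width * comp.height
```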

The researchers devised a framework that directs an agent trained through reinforcement learning to optimize chip placements. Given the netlist, the ID of the current node to be placed, and the metadata of the netlist and the semiconductor technology, a policy model outputs a probability distribution over available placement locations, while a value model estimates the expected reward of the current placement.
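The paper does not publish its network code, but a hedged sketch of such a policy/value interface might look like the following in PyTorch; the layer sizes, the shared trunk, and the legality mask are assumptions for illustration, not the authors’ architecture.

```python
# Hypothetical policy/value model: the policy head outputs a probability
# distribution over canvas grid locations for the current node, and the value
# head estimates the expected final reward of the placement so far.
import torch
import torch.nn as nn

class PlacementPolicyValue(nn.Module):
    def __init__(self, state_dim, grid_rows, grid_cols, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, grid_rows * grid_cols)
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, state, legal_mask):
        # state: encoded netlist / current-node / technology features
        # legal_mask: 1 for grid cells that satisfy the placement constraints
        h = self.trunk(state)
        logits = self.policy_head(h).masked_fill(legal_mask == 0, float("-inf"))
        probs = torch.softmax(logits, dim=-1)    # distribution over locations
        value = self.value_head(h).squeeze(-1)   # expected reward estimate
        return probs, value
```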

In testing, the team started with an empty chip; the agent places components sequentially until it completes the netlist, and it doesn't receive a reward until the end, when a negative weighted sum of proxy wirelength and congestion is tabulated. To guide the agent in selecting which components to place first, components are sorted by descending size; placing larger components first reduces the chance that there is no feasible placement for them later.
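Putting those pieces together, one episode might be organized roughly as below (reusing the Component and Canvas sketch from earlier). The half-perimeter wirelength proxy, the congestion stand-in, and the weights are illustrative assumptions rather than the paper's exact metrics.

```python
# Rough outline of one placement episode: components are placed largest-first,
# and a single terminal reward is the negative weighted sum of a wirelength
# proxy and a congestion estimate (both stand-ins, not the paper's metrics).
def proxy_wirelength(placements, nets):
    # Half-perimeter bounding box of each net's placed components.
    total = 0.0
    for net in nets:  # net: list of component names connected together
        rows = [placements[name][0] for name in net]
        cols = [placements[name][1] for name in net]
        total += (max(rows) - min(rows)) + (max(cols) - min(cols))
    return total

def congestion(canvas):
    # Crude stand-in: how far the most crowded bin exceeds the density target.
    peak = max(max(row) for row in canvas.used)
    return max(0.0, peak - canvas.max_density * canvas.bin_area)

def run_episode(components, nets, canvas, choose_location,
                w_wirelength=1.0, w_congestion=0.5):
    # Larger components go first, reducing the chance that no legal location
    # remains for them later in the episode.
    ordered = sorted(components, key=lambda c: c.width * c.height, reverse=True)
    placements = {}
    for comp in ordered:
        row, col = choose_location(comp, canvas)  # e.g. sampled from the policy
        canvas.place(comp, row, col)
        placements[comp.name] = (row, col)
    # No intermediate rewards: the agent is scored only once, at the end.
    reward = -(w_wirelength * proxy_wirelength(placements, nets)
               + w_congestion * congestion(canvas))
    return placements, reward
```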

According to the team, training the agent required creating a data set of 10,000 chip placements, where the input is the state associated with a given placement and the label is the reward for that placement. To create it, the researchers first picked five different chip netlists, to which an AI algorithm was applied to make 2,000 diverse placements for each netlist.
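In outline, assembling such a training set could look like the loop below; generate_placement, encode_state, and evaluate_reward are hypothetical callables standing in for whatever placement algorithm and reward evaluation the team actually used.

```python
# Hypothetical outline of the data set construction described above:
# 5 netlists x 2,000 diverse placements each = 10,000 (state, reward) examples.
def build_dataset(netlists, generate_placement, encode_state, evaluate_reward,
                  placements_per_netlist=2000):
    dataset = []
    for netlist in netlists:                      # five different chip netlists
        for _ in range(placements_per_netlist):   # 2,000 placements per netlist
            placement = generate_placement(netlist)
            state = encode_state(netlist, placement)   # input: placement state
            reward = evaluate_reward(placement)        # label: placement reward
            dataset.append((state, reward))
    return dataset  # 10,000 labeled examples in total
```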

After testing, the co-authors report that as they trained the framework on more chips, they were able to speed up the training process and generate high-quality results faster. In fact, they claim it achieved superior PPA on in-production Google tensor processing units (TPUs), Google's custom-designed AI accelerator chips, as compared with leading baselines.

According to the researchers, “Unlike existing methods that optimize the placement for each new chip from scratch, our work leverages knowledge gained from placing prior chips to become better over time.”

Additionally, “our method enables direct optimization of the target metrics, such as wirelength, density, and congestion, without having to define … approximations of these functions as is done in other approaches. Not only does our formulation make it easy to include new cost functions as they become available, but it also allows us to weigh their relative importance according to the needs of a given chip block (e.g., timing-critical or power-constrained),” the researchers concluded.
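As a rough illustration of that last point, a directly optimized, per-block weighted cost might be expressed like this; the metric functions and weights are placeholders, not the authors' definitions.

```python
# Sketch of a weighted objective: each target metric enters the cost directly,
# and its weight can be tuned to the needs of a given chip block.
def placement_cost(placement, metrics, weights):
    # metrics: dict of name -> callable returning that metric for a placement
    # weights: dict of name -> relative importance, e.g. a higher congestion
    # weight for a routing-limited block, or a higher wirelength weight for a
    # timing-critical one
    return sum(weights[name] * metric(placement)
               for name, metric in metrics.items())
```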
