03.24.18

Amid AI’s rise and Moore’s Law’s demise, companies must enhance chip expertise

Moore’s Law: It’s the foundation of the technology industry—and it’s responsible for the advent of countless gadgets, paradigm shifts, and technological empires that have shaped our lives during the past half century, from the PC and smartphone revolutions, to the internet boom, to the creation of corporate powerhouses like Google and Amazon.

Now it’s dead.

With the demise of Moore’s Law—which held that the number of transistors on a silicon chip doubles roughly every two years—a new force is taking over as the technology industry’s fundamental driver: artificial intelligence (AI). AI will place even greater demands on the semiconductor industry, requiring faster advances in chip technology than ever before.

For companies that hope to develop differentiated edge products using AI, it will be essential to participate in the development of microchips that can handle the rigors of tasks such as machine-learning inference. In many cases, this will require companies to cultivate their own semiconductor expertise and work with chip suppliers to develop custom parts specifically designed for their products.

No Moore

Since Intel co-founder Gordon Moore first articulated his eponymous maxim in 1965, the semiconductor industry has stuck to the spirit of the law, either doubling the capabilities of chips or halving their prices roughly every two years. However, the chip industry has run into technical obstacles, with the progress of Moore’s Law stymied by physical and economic limitations intrinsic to semiconductor manufacturing.
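
To make that compounding concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a clean doubling every two years starting from the roughly 2,300 transistors of Intel’s 1971-era 4004; the baseline and the idealized schedule are illustrative, not something the article prescribes.

    # Back-of-the-envelope sketch of Moore's Law as compound growth.
    # Assumption: a clean doubling every two years starting from Intel's
    # 4004 (1971, roughly 2,300 transistors) -- an illustrative baseline.
    BASELINE_YEAR = 1971
    BASELINE_TRANSISTORS = 2_300
    DOUBLING_PERIOD_YEARS = 2

    def projected_transistors(year: int) -> float:
        """Project a chip's transistor count under idealized scaling."""
        doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
        return BASELINE_TRANSISTORS * 2 ** doublings

    for year in (1971, 1991, 2011, 2017):
        print(f"{year}: ~{projected_transistors(year):,.0f} transistors")

Run forward to 2017, the idealized curve lands near 20 billion transistors, in the neighborhood of the largest GPUs discussed below. It is the flattening of this curve that the industry is now confronting.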

In 2015, Intel said it would slow the rate of advancement in its semiconductor manufacturing process technologies and warned that transistors were likely to continue shrinking for only five more years. This announcement prompted the MIT Technology Review in 2016 to run a headline declaring, “Moore’s Law Is Dead. Now What?”

This is a problem for the technology industry as a whole, since Moore’s Law is what has enabled the industry to continually offer products that are faster, cheaper, and more powerful. It’s an even bigger problem for the AI segment specifically, which will require ever more extensive processing power.

Today’s burgeoning AI market was made possible by Moore’s Law, which enabled powerful chips like the current generation of graphics processing units (GPUs). Such chips incorporate billions of transistors and thousands of cores, supporting the kind of extensive parallel processing required for AI tasks. For example, Nvidia’s Volta GPU contains 21 billion transistors and 5,120 cores.
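
Why does AI map so naturally onto thousands of cores? The dominant operation in neural-network inference is the matrix multiply, in which every output element can be computed independently. The minimal NumPy sketch below (with made-up layer sizes) illustrates the parallelism a GPU exploits:

    import numpy as np

    # Minimal sketch: one fully connected neural-network layer.
    # Each output neuron is an independent dot product, so all of them
    # can be computed in parallel -- the workload GPU cores are built for.
    def dense_layer(x: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
        return np.maximum(x @ weights + bias, 0.0)  # matrix multiply + ReLU

    rng = np.random.default_rng(0)
    x = rng.standard_normal((1, 1024))     # one input sample
    w = rng.standard_normal((1024, 4096))  # illustrative 1024 -> 4096 layer
    b = rng.standard_normal(4096)
    y = dense_layer(x, w, b)               # 4,096 independent dot products
    print(y.shape)                         # (1, 4096)

On a CPU those dot products run largely in sequence; on a GPU with thousands of cores, they can run at once, which is why deep learning and GPUs grew up together.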

To meet the increased demands of AI, future GPUs will need billions more transistors to power thousands of additional cores. However, with semiconductor technology progressing at a slower rate, the tech industry can no longer count on Moore’s Law to yield manufacturing improvements that would allow today’s GPUs to double their transistor and core counts—or slash their costs by 50 percent—within two years.

On the Edge

The end of Moore’s Law represents a particular challenge for AI-enabled edge devices, such as drones, surveillance cameras, and internet of things (IoT) devices. Such devices increasingly conduct AI inference in real time, requiring relatively small, low-cost, and power-efficient processor chips. Without the historical engine that generated constant reductions in size, cost, and energy consumption, developing processing chips equal to the task of implementing AI at the edge becomes a challenge.
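
One widely used technique for fitting inference into these tight power and cost budgets is low-precision arithmetic. The sketch below shows symmetric int8 weight quantization, purely as an illustration (the article does not prescribe a particular method): storing 8-bit integers plus a single scale factor in place of 32-bit floats cuts weight memory by 4x, and integer math units are smaller and thriftier than floating-point ones.

    import numpy as np

    # Minimal sketch of symmetric int8 weight quantization, a common way
    # edge inference chips save memory and power. Illustrative only.
    def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
        scale = np.abs(weights).max() / 127.0  # map the largest weight to +/-127
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    w = np.random.default_rng(0).standard_normal((256, 256)).astype(np.float32)
    q, scale = quantize_int8(w)
    error = np.abs(w - dequantize(q, scale)).mean()
    print(f"weights are 4x smaller; mean absolute error {error:.4f}")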

Happily, there are ways for makers of edge devices to implement AI chip solutions that meet their requirements. One path is to develop application-specific integrated circuit (ASIC) chips, which are custom devices that allow designers to pick and choose which features they would like to use.

Edge devices don’t need all the power that a large GPU provides. However, they do need a certain amount of processing capability, along with other features tied directly to their function.

Leading ASIC suppliers help customers build ASICs using intellectual property (IP) cores that perform specific functions needed in a design. In this way, companies can dedicate every transistor in their budget to the features they need, rather than wasting money, power, and development effort on features that are unnecessary.
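
To make the pick-and-choose idea concrete, here is a toy sketch of composing a design from IP cores under area and power budgets. Every block name and number below is hypothetical, invented purely for illustration:

    # Toy sketch of ASIC design as selection of IP cores under a budget.
    # All block names, areas, and power figures are hypothetical.
    IP_CATALOG = {
        "cpu_core":       {"area_mm2": 2.0, "power_mw": 250},
        "nn_accelerator": {"area_mm2": 4.0, "power_mw": 400},
        "image_dsp":      {"area_mm2": 1.5, "power_mw": 150},
        "video_codec":    {"area_mm2": 3.0, "power_mw": 300},
        "gps_baseband":   {"area_mm2": 1.0, "power_mw": 120},
    }

    def budget_check(selected: list[str], max_area: float, max_power: float) -> bool:
        """Sum area and power over only the selected IP cores."""
        area = sum(IP_CATALOG[b]["area_mm2"] for b in selected)
        power = sum(IP_CATALOG[b]["power_mw"] for b in selected)
        print(f"{area:.1f} mm^2, {power} mW for {selected}")
        return area <= max_area and power <= max_power

    # A drone vision chip might take only what it needs:
    print(budget_check(["cpu_core", "nn_accelerator", "image_dsp"],
                       max_area=10.0, max_power=900))

Unused blocks like the video codec simply never make it onto the die, which is the advantage over a general-purpose part.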

“[For edge AI], ASICs will win in the consumer space because they provide a more optimized user experience, including lower power consumption and higher processing, for many applications,” McKinsey & Co. stated in a report. “Enterprise edge will see healthy competition among field programmable gate arrays, GPUs, and ASIC technology. However, ASICs may have an advantage because of their superior performance per watt, which is critical on the edge.”

An Investment in the Future

While ASICs may be the optimal solution for edge AI devices, integrating them into the product-design process takes money and effort. Developing ASICs requires companies to hire and maintain a team of engineers, an undertaking that can cost tens of millions of dollars. That team is essential for carrying out semiconductor design and interfacing with ASIC suppliers.

However, such investments will be essential for developing AI-enabled chips in the post-Moore’s Law era.

Hong Bui is Senior Vice President, Product Development, for Veritone. He is a software veteran with more than two decades of experience leading and developing products for top consumer brands. Hong leads product development of the Veritone aiWARE platform.