Microsoft is jumping into the market for AI chipsets with a type of programmable semiconductor designed to bring flexibility to deep-learning processing in its data centers.
The company is employing field programmable gate arrays (FPGAs) from Intel Corp.’s Altera group in an AI initiative called Project Brainwave, as reported by Forbes. FPGAs are a type of chip whose circuitry can be programmed to meet the specific needs of a customer after manufacturing. This allows the FPGAs to be reconfigured to run various algorithms at high speeds.
Project Brainwave supports multiple deep learning frameworks, including Microsoft’s CNTK, Google’s TensorFlow and Facebook’s Caffe2. This approach to AI chipsets distinguishes Microsoft from Google, whose Tensor Processing Units are dedicated to working with Google’s own algorithms.
With new deep-learning developments arriving at a rapid pace, Microsoft’s approach could provide the flexibility needed to accommodate the proliferation of the technology.
“We wanted to build something bigger, more disruptive and more general than a point solution,” said Microsoft distinguished engineer Doug Burger in a Forbes interview conducted at the Hot Chips conference.
However, the tradeoff for the flexibility of FPGAs is higher cost compared with other types of chips, such as application-specific integrated circuits (ASICs). FPGAs can also lag dedicated chips in performance.
Burger told Forbes that Microsoft has made modifications to the chips that boost their speed.
The U.S. deep learning market is expanding rapidly, with revenue expected to rise at a compound annual growth rate of more than 57 percent from 2017 through 2021, according to the market research firm Technavio.
The technology is widely used in image recognition, transcription, drug design, direct marketing and recommendation systems.
Microsoft plans to use the Project Brainwave FPGAs for what it describes as “real-time AI,” which involves running trained deep-learning algorithms (inference) rather than training them. Training typically requires more computing horsepower.
Training tasks involve teaching deep-learning algorithms how to work by having them process massive amounts of data. For example, an image recognition algorithm might look through thousands of images of cats to learn how to recognize felines.
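To make the distinction concrete, the sketch below contrasts the two phases using TensorFlow, one of the frameworks the article says Project Brainwave supports. This is not Microsoft's or Brainwave's code; the tiny model and random placeholder data are illustrative assumptions only.

```python
# Minimal sketch: training vs. inference ("real-time AI") in TensorFlow/Keras.
# The model, image size, and data are toy placeholders, not anything from Project Brainwave.
import numpy as np
import tensorflow as tf

# Toy "cat vs. not-cat" classifier on 64x64 RGB images.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 3)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training: the compute-heavy phase, repeatedly adjusting weights over a large labeled dataset.
images = np.random.rand(1000, 64, 64, 3).astype("float32")   # stand-in for thousands of cat photos
labels = np.random.randint(0, 2, size=(1000, 1))
model.fit(images, labels, epochs=3, batch_size=32)

# Inference: a single forward pass on new data -- the lighter-weight workload
# that Microsoft targets with Brainwave's FPGAs.
new_image = np.random.rand(1, 64, 64, 3).astype("float32")
probability_cat = model.predict(new_image)
print(f"P(cat) = {probability_cat[0, 0]:.3f}")
```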
Training currently is conducted on high-performance, general-purpose processors. Nvidia leads in this area, with graphics processing units well suited to these data-processing workloads.
However, Intel’s AI chipsets also can be employed for deep-learning training. Intel offers a Deep Learning Training Tool designed to work with the company’s Xeon, Xeon Phi and Core i7 Extreme Edition processors.
Stephan Cunningham is vice president, product management at Veritone. Working in concert with core internal teams, including industry-specific general managers and engineering, as well as directly with clients and prospects, he leads the disciplines and business processes that govern the Veritone Platform.