04.18.18

NVIDIA and ARM Take Deep Learning to the Edge

With more than 86 billion ARM processor chips shipped to date, chances are there’s an ARM-based device within arm’s reach of you right now—most likely your smartphone or tablet. In fact, ARM-powered products are nearly ubiquitous, with the diminutive, low-power processors at the heart of countless mobile, consumer and internet-of-things (IoT) devices used by people every day. Now a deal between ARM and Nvidia promises to bring advanced deep learning capabilities to AI chips used in billions of devices that reside at the edge.

Under the terms of this partnership, the companies will integrate Nvidia’s Deep Learning Accelerator (NVDLA) architecture into ARM’s Project Trillium platform for machine learning. The companies intend to make it easier for licensees of ARM’s processor technology to produce AI chips that can perform sophisticated tasks using deep learning.

NVDLA is a free, open architecture created to promote a standard design for inference accelerators. The architecture is based on Xavier, an Nvidia system-on-a-chip (SoC) that integrates an advanced graphics processing unit (GPU) and central processing unit (CPU). Packed with more than 9 billion transistors, Xavier is “the world’s most powerful autonomous machine system on a chip,” according to Nvidia.

Trillium is ARM’s machine-learning platform consisting of processors, software and a software development kit for edge devices.

“Inferencing will become a core capability of every IoT device in the future,” said Deepu Talla, vice president and general manager of Autonomous Machines at Nvidia, in a press release. “Our partnership with Arm will help drive this wave of adoption by making it easy for hundreds of chip companies to incorporate deep learning technology.”

The goal of the companies is to make deep-learning inferencing a standard feature in every internet-connected product.

“Accelerating AI at the edge is critical in enabling ARM’s vision of connecting a trillion IoT devices,” said Rene Haas, executive vice president and president of the IP Group at ARM. “Today we are one step closer to that vision by incorporating NVDLA into the ARM Project Trillium platform, as our entire ecosystem will immediately benefit from the expertise and capabilities our two companies bring in AI and IoT.”

The agreement plays to each company’s strength, with Nvidia’s GPU technology leading in machine-learning training applications and ARM dominating in IoT products and other high-volume electronic devices.

“This is a win/win for IoT, mobile and embedded chip companies looking to design accelerated AI inferencing solutions,” said Karl Freund, lead analyst for deep learning at Moor Insights & Strategy. “Nvidia is the clear leader in ML training and ARM is the leader in IoT endpoints, so it makes a lot of sense for them to partner on IP.”

ARM has been pushing to spread machine-learning technology across its AI chips in recent months. In 2017 the company announced the Cortex-A75 and Cortex-A55, its first processors to use DynamIQ technology. DynamIQ provides a performance boost that allows the processors to execute machine-learning tasks 50 times faster than previous ARM chips.

Allen Kim is head of ecosystems at Veritone. He oversees developer partnerships, the cognitive engine ecosystem, and media ingestion for the Veritone platform. Learn more about our platform and join the Veritone developer ecosystem today.