How AI chips will explode 3x by 2025 with startups like Hailo, Syntiant and Groq

Artificial intelligence training and inference tasks are largely handled today in mega data centers, often run by major cloud providers. In recent years, the semiconductor industry has spawned AI chips that can work not only in data centers but also in edge devices as diverse as assisted-driving vehicles and portable MRI machines.

Many observers believe this AI chip market is still in its infancy, yet they expect it to explode over the next five years.

One leading analyst firm has just forecast that the market for artificial intelligence-related chips will reach nearly $129 billion in 2025, three times the nearly $43 billion market of 2018.

AI memory devices alone will account for $60 billion of that total in 2025, up from $20 billion in 2019, and processors will account for $68 billion in 2025, up from $22 billion in 2019, according to IHS Markit | Technology, now part of Informa Tech. The tabulation counts the semiconductors, both memory and processing devices, inside systems that run AI applications.
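
For readers who want to check the math, those figures imply compound annual growth of roughly 17% to 21% (a quick sketch in Python; the dollar amounts are IHS Markit’s, the compounding arithmetic is ours):

```python
# Compound annual growth rates (CAGR) implied by the IHS Markit forecast.
def cagr(start_billions, end_billions, years):
    """Annualized growth rate between two market sizes `years` apart."""
    return (end_billions / start_billions) ** (1 / years) - 1

print(f"Total AI chips, 2018-2025: {cagr(43, 129, 7):.1%} per year")  # ~17%
print(f"AI memory,      2019-2025: {cagr(20, 60, 6):.1%} per year")   # ~20%
print(f"AI processors,  2019-2025: {cagr(22, 68, 6):.1%} per year")   # ~21%
```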

“Semiconductors represent the foundation of the AI supply chain, providing the essential processing and memory capabilities required for every AI application on earth,” said Luca De Ambroggi, senior research director for AI at IHS Markit. “AI is already propelling massive demand growth for microchips.”

Major AI chip suppliers include Intel and Nvidia (and more recently Xilinx), which tend to round out their offerings with AI-related software, De Ambroggi said. All the major players expect impressive growth. Intel alone said last week it had $3.8 billion in AI revenue in 2019 in a global market that it sizes at $25 billion by 2024, a figure far below the IHS Markit forecast. “The market for AI-based silicon is growing and evolving quickly,” Intel CEO Bob Swan told analysts.

RELATED: Intel hits record 2019 revenue of $72 billion, with record 4Q

De Ambroggi said he looked at 50 different AI companies in reaching his forecast numbers, many of which he expects to consolidate down to 10 or so in coming years. Many are startups or non-traditional semiconductor suppliers such as Habana, purchased late last year by Intel. Others include Hailo, Graphcore, Cambricon Technology, Cerebras, Kalray, NovuMind, Thinci, Gyrfalcon Technology, Syntiant, GreenWaves, Horizon Robotics, Groq and Wave Computing.

Some of the startups are offering new architectures that challenge the traditional devices used in AI processing, such as microprocessors (MPUs), microcontrollers (MCUs), graphics processing units (GPUs), digital signal processors (DSPs) and field-programmable gate arrays (FPGAs). In fact, the use of AI in various devices means traditional classes of processors are evolving and are no longer easy to classify as distinct categories.

“Old definitions of what makes an MPU, DSP or MCU are beginning to blur in the AI era,” De Ambroggi said. What is arriving instead are integrated semiconductors such as application-specific integrated circuits (ASICs) and system-on-chip (SoC) solutions. “With processor makers offering turnkey solutions using ASICs and SoCs, it makes less difference to system designers whether their algorithm is executed on a GPU, CPU or DSP,” he said.

Much AI work requires enormous amounts of power and high-bandwidth volatile memory. To help address both demands, De Ambroggi said some of the latest designs put memory closer to the computational core, enabling processing parallelism by giving each processing core its own dedicated memory cells.

Another approach is to move early stages of data computation into the memory itself, a technique called processing in memory (PIM).

A third approach is to use new memory technologies that offer easy back-end silicon integration, volatile-class performance, non-volatile capability, fast input/output (I/O) interfaces, or a low energy cost per byte moved, measured in picojoules.
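
Why does energy per byte matter so much? Because in AI workloads, moving data can cost far more energy than computing on it. The sketch below makes the point with order-of-magnitude energy figures assumed from the computer-architecture literature, not taken from IHS Markit:

```python
# Illustrative (assumed) energy costs, in picojoules, for an AI accelerator.
PJ_PER_BYTE_DRAM = 100.0  # assumed: fetching one byte from off-chip DRAM
PJ_PER_BYTE_SRAM = 1.0    # assumed: fetching one byte from near-core SRAM
PJ_PER_MAC = 0.5          # assumed: one 8-bit multiply-accumulate operation

bytes_moved = 10e6  # suppose a layer streams 10 MB of weights/activations
macs = 100e6        # and performs 100 million multiply-accumulates

compute_uj = macs * PJ_PER_MAC / 1e6
dram_uj = bytes_moved * PJ_PER_BYTE_DRAM / 1e6
sram_uj = bytes_moved * PJ_PER_BYTE_SRAM / 1e6

print(f"compute:            {compute_uj:7.1f} uJ")  #   50.0 uJ
print(f"data via off-chip:  {dram_uj:7.1f} uJ")     # 1000.0 uJ, dwarfs compute
print(f"data via near-SRAM: {sram_uj:7.1f} uJ")     #   10.0 uJ
```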

In an interview from the company’s headquarters in Tel Aviv, Hailo CEO Orr Danon said the company’s latest Hailo-8 processor uses lessons learned from neural networks in the human brain. “We have a different approach to parallelism that won’t work in a Windows computer but for a neural network with a structured, defined data flow where the data flows as in a neural network,” Danon said. “It goes back to making a processor designed for the task.”
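
To make that contrast concrete, here is a toy Python sketch of the general idea, emphatically not Hailo’s actual design: because a neural network’s data flow is known ahead of time, each layer can be mapped to its own pipeline stage with local buffers, and data streams from stage to stage instead of round-tripping through a shared memory.

```python
# Toy statically scheduled dataflow pipeline (illustrative only).
from queue import Queue
from threading import Thread

def stage(fn, inbox, outbox):
    """One pipeline stage: a fixed function with its own local buffers."""
    while True:
        x = inbox.get()
        if x is None:       # sentinel: end of the input stream
            outbox.put(None)
            return
        outbox.put(fn(x))

# Three "layers" with connectivity fixed up front, each on its own worker.
q0, q1, q2, q3 = Queue(), Queue(), Queue(), Queue()
for fn, src, dst in [(lambda x: x * 2, q0, q1),      # stand-in for "conv"
                     (lambda x: max(x, 0), q1, q2),  # stand-in for "relu"
                     (lambda x: x // 2, q2, q3)]:    # stand-in for "pool"
    Thread(target=stage, args=(fn, src, dst)).start()

for frame in [3, -1, 8]:  # a stream of inputs, e.g. video frames
    q0.put(frame)
q0.put(None)

results = []
while (y := q3.get()) is not None:
    results.append(y)
print(results)  # [3, 0, 8]
```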

Hailo, founded in 2017 and backed by venture capital, created its processor with an integrated team of 80 engineers; software designers sometimes worked on the hardware design, and hardware designers worked on the software. The Hailo-8 is in trials with various unnamed companies that work in edge computing applications such as driving assistance, city management, security, robotics, retail, medical and industrial automation. The chip will go into mass production sometime in 2020, Danon added.

Hailo-8 delivers up to 26 tera-operations per second (TOPS). It has been benchmarked against an Nvidia Xavier processor, performing image classification tasks at one-twentieth the power consumption of the Xavier: the Xavier took 30 watts, compared with 1.5 watts on the Hailo-8, Danon said.
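
The arithmetic behind the claim is straightforward (the wattage figures are Danon’s; this simply checks the ratio):

```python
xavier_watts, hailo_watts = 30.0, 1.5  # power on the same classification task
print(f"Hailo-8 used {xavier_watts / hailo_watts:.0f}x less power")  # 20x
```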

Hailo-8 was also named a CES 2020 Innovation Awards Honoree in the Embedded Technologies product category, and Hailo engineers presented at CES earlier in January. “CES was fantastic and a great opportunity,” Danon said, noting that the show put Hailo in front of a crowd estimated at 170,000.

De Ambroggi said Hailo’s approach to AI chips falls into a class of new architectures that address the power and performance demands of memory. In the human brain, the areas that process and memorize data are homogeneously integrated in the cortex. By comparison, conventional processors such as GPUs still need huge amounts of external volatile memory (SDRAM) to buffer temporary data during a data-processing job.

“The amount of level 1-2-3 cache memory is not enough to run AI algorithms,” he explained. “Consequently, new processor architectures are moving more memory closer to the various computational units to increase processing efficiency and power while optimizing data movement required during deep learning operations.”
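
A back-of-envelope estimate shows why caches fall short; the layer dimensions below are hypothetical, chosen only to illustrate the scale, not drawn from any particular chip or network:

```python
# Working-set size of ONE hypothetical convolution layer vs. typical caches.
in_ch, out_ch, k = 256, 256, 3  # 256->256 channels, 3x3 kernels (assumed)
h = w = 56                      # 56x56 feature map (assumed)
bytes_per_value = 1             # 8-bit weights and activations

weights = in_ch * out_ch * k * k * bytes_per_value
activations = (in_ch + out_ch) * h * w * bytes_per_value
working_set_mb = (weights + activations) / 1e6

print(f"one layer needs ~{working_set_mb:.1f} MB")  # ~2.2 MB
# Typical caches: L1 ~32-64 KB, L2 ~0.25-1 MB, L3 a few MB shared.
# One layer can already overflow L2; dozens of layers overflow L3,
# so data spills to external SDRAM -- the traffic near-memory designs avoid.
```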

The demands of businesses now testing and using AI have pushed chipmakers down one of two paths, he said. One is to create a flexible platform that can adapt to a dynamic, rapidly evolving market. The other is to focus on specific applications, with coprocessors and ASICs optimized for that purpose.

“As of today there are enough business cases for taking both directions, all with their own advantages and challenges,” De Ambroggi said. “AI is still in its infancy. Moreover, AI grows and develops at an extremely high speed in different directions in all industries.”

RELATED: Deep learning processor sets performance benchmark