SiFive, a designer of chips based on the RISC-V computing platform, announced a new series of chip designs for high-performance AI workloads.
The SiFive Intelligence XM Series is designed to accelerate high-performance AI workloads. It is the first intellectual property from SiFive to include a highly scalable AI matrix engine, which shortens time to market for semiconductor companies building system-on-chip solutions for edge IoT, consumer devices, next-generation electric and autonomous vehicles, data centers, and beyond.
As part of SiFive’s plan to support customers and the broader RISC-V ecosystem, SiFive also announced its intention to open source a reference implementation of its SiFive Kernel Library (SKL).
The announcement was made at a SiFive press event, Tuesday, in Santa Clara, where executives discussed the leadership role the RISC-V architecture is playing at the core of AI solutions across a diverse range of market leaders, and provided an update on SiFive’s strategy, roadmap and business momentum.
The open solution
Patrick Little, CEO of SiFive, said in an interview with VentureBeat that customers in the semiconductor, systems and consumer markets have come to appreciate the software strategy behind SiFive and RISC-V.
He noted that products containing more than 10 billion SiFive cores have shipped to date, that SiFive has invested more than $500 million in R&D, and that it sells to the top semiconductor leaders and hyperscalers. The company has more than 400 design wins.
The RISC-V architecture defines an open-standard software interface, so many kinds of cores can connect to it. That means customers who use SiFive designs can choose their own accelerators for AI and other applications without having to worry about breaking software compatibility, Little said.
While big leaders in AI like Nvidia can use their own proprietary graphics processing unit (GPU) architectures, smaller companies build their own breed of accelerators, he said. But software programmers don’t want to learn a new language every time a new accelerator comes along, Little said. So hyperscalers and chip companies want to use RISC-V solutions like SiFive’s so they don’t have to keep rewriting their software, he said.
The RISC-V open-standard software interface allows the RISC-V standard to evolve gracefully over time and de-risks the solution by not tying it to a single proprietary vendor.
SiFive has been steadily moving up the food chain, starting with embedded cores and adding its first vector processor in 2021. Now it is adding AI solutions. Customers can use its designs as a data-flow processor on the front end of their chips, paired with back-end AI accelerators that keep changing.
“They don’t want to keep writing to the AI software. So we put a RISC-V vector processor in front of that. The AI processors keep changing fast. The models keep changing. Software writers want to write to something that will be around in 15 years,” he said. “We are one of the few companies that can fill that gap. And today we announced our own accelerator, or matrix multiplication engine, and we are doing the XM product line to complement what we did in vector processing. It’s a matrix multiplication engine.”
Customers who want an alternative to Nvidia can turn to another source, but they don’t want that rival to be another proprietary solution. Rather, they like RISC-V because it has many companies behind it, Little said.
“We believe our solution can scale to Nvidia level performance,” he said.
“Many companies are seeing the benefits of an open processor standard while they race to keep up with the rapid pace of change with AI. AI plays to SiFive’s strengths with performance per watt and our unique ability to help customers customize their solutions,” said Little. “We’re already supplying our RISC-V solutions to five of the Magnificent 7 companies, and as companies pivot to a ‘software first’ design strategy we are working on new AI solutions with a wide variety of companies from automotive to datacenter and the intelligent edge and IoT.”
SiFive’s new XM Series offers an extremely scalable and efficient AI compute engine. By integrating scalar, vector, and matrix engines, XM Series customers can make very efficient use of memory bandwidth. The XM Series also continues SiFive’s legacy of offering extremely high performance per watt for compute-intensive applications.
“RISC-V was originally developed to efficiently support specialized computing engines including mixed-precision operations,” said Krste Asanovic, SiFive chief architect, in a statement. “This, coupled with the inclusion of efficient vector instructions and the support of specialized AI extensions, are the reasons why many of the largest datacenter companies have already adopted RISC-V AI accelerators.”
As part of his presentation, Asanovic introduced more details on the new XM Series, which broadens SiFive’s Intelligence product family.
With four X-Cores per cluster, each XM Series cluster can deliver 16 TOPS (INT8) or 8 TFLOPS (BF16) per GHz. The design provides 1 TB/s of sustained memory bandwidth per cluster, and clusters can access memory via a high-bandwidth port or via a CHI port for coherent memory access. SiFive envisions systems built with no host CPU at all, or with a host based on RISC-V, x86 or Arm. The company is sampling its solutions now.
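To put those figures in context, here is a minimal back-of-the-envelope sketch in Python. The per-GHz throughput and the 1 TB/s per-cluster bandwidth come from the announcement; the clock speed and cluster count are illustrative assumptions, not SiFive specifications.

```python
# Rough throughput math for SiFive XM Series clusters, using the figures
# cited in the announcement. Clock speed and cluster count are assumed
# values for illustration only.

TOPS_INT8_PER_GHZ = 16     # INT8 TOPS per cluster, per GHz (announced)
TFLOPS_BF16_PER_GHZ = 8    # BF16 TFLOPS per cluster, per GHz (announced)
BANDWIDTH_TB_S = 1.0       # sustained memory bandwidth per cluster (announced)

clock_ghz = 1.6            # assumed clock, for illustration
clusters = 4               # assumed cluster count, for illustration

int8_tops = TOPS_INT8_PER_GHZ * clock_ghz * clusters
bf16_tflops = TFLOPS_BF16_PER_GHZ * clock_ghz * clusters
print(f"{clusters} clusters @ {clock_ghz} GHz: "
      f"{int8_tops:.1f} INT8 TOPS, {bf16_tflops:.1f} BF16 TFLOPS")

# Roofline-style check for one cluster: how many INT8 ops must be done
# per byte fetched before the 1 TB/s port, rather than the matrix engine,
# becomes the bottleneck.
peak_int8_ops_per_s = TOPS_INT8_PER_GHZ * clock_ghz * 1e12
bytes_per_s = BANDWIDTH_TB_S * 1e12
print(f"Compute-bound above ~{peak_int8_ops_per_s / bytes_per_s:.0f} "
      f"INT8 ops per byte fetched")
```

Under those assumed numbers, a workload needs on the order of a couple dozen INT8 operations per byte moved to keep the matrix engine busy, which is the kind of balance the integrated scalar, vector and matrix design is pitched at.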
SiFive, which has 500 employees, will be at the RISC-V Summit North America, taking place Oct. 22-23, 2024, in Santa Clara, California.
“We’ve become the gold standard of RISC-V,” Little said.