By Jane Lanhee Lee and Stephen Nellis
OAKLAND, California (Reuters) - Groq, a Silicon Valley chip startup founded by a former Alphabet (NASDAQ:GOOGL) Inc engineer, said on Thursday it has adapted technology similar to the underpinnings of the wildly popular ChatGPT to run on its chips.
Groq modified LLaMA, a large language model released last month by Facebook parent Meta Platforms Inc (NASDAQ:META) that can be used to power bots to generate human-like text.
The move is significant because Meta's researchers originally developed LLaMA using chips from Nvidia (NASDAQ:NVDA) Corp, which holds a market share of nearly 90% in AI computing, according to some estimates. Showing that a cutting-edge model can be moved to Groq's chips easily could help the startup prove that its products are a viable alternative to Nvidia's.
Groq has been trying to chip away at Nvidia's market share, along with startups such as SambaNova and Cerebras and big companies like Advanced Micro Devices (NASDAQ:AMD) Inc and Intel Corp (NASDAQ:INTC).
Efforts to find alternatives to Nvidia's chips have gained extra steam with the popularity of ChatGPT, which has focused attention on Nvidia's dominant role in AI. The public battle to dominate the AI technology space kicked off late last year with the launch of Microsoft (NASDAQ:MSFT) Corp-backed OpenAI's ChatGPT and prompted tech heavyweights from Alphabet to China's Baidu Inc (NASDAQ:BIDU) to trumpet their own offerings.
Meta made its code available to researchers for noncommercial use. Groq took Meta's model and stripped out the code that had been included to make it run on Nvidia chips, Groq CEO Jonathan Ross told Reuters.
Groq then ran the model through its Groq Compiler, which automatically adds the code needed for it to run on Groq's own computing system. A compiler translates code into the ones and zeros a chip can read.
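The two-step workflow described above can be sketched in miniature: drop the chip-specific bindings from a model's operations, then have a compiler-like step attach code for the new target. This is a purely illustrative toy, assuming a model represented as a list of operations; none of the names here are Groq's or Meta's actual APIs.

```python
# Hypothetical sketch of porting a model between chip vendors.
# An "op" is a dict; vendor-specific ops carry a "kernel" binding.

def strip_vendor_code(graph, vendor):
    """Drop the given vendor's kernel bindings, keeping the ops themselves."""
    out = []
    for op in graph:
        op = dict(op)  # copy so the original graph is untouched
        if op.get("vendor") == vendor:
            op.pop("vendor", None)
            op.pop("kernel", None)
        out.append(op)
    return out

def compile_for_target(graph, target):
    """The 'compiler' step: attach target-specific code to every op."""
    return [dict(op, vendor=target, kernel=f"{target}_{op['name']}")
            for op in graph]

# A toy "model": some ops bound to Nvidia kernels, one vendor-neutral.
model = [
    {"name": "matmul", "vendor": "nvidia", "kernel": "cuda_matmul"},
    {"name": "softmax"},
    {"name": "layernorm", "vendor": "nvidia", "kernel": "cuda_layernorm"},
]

ported = compile_for_target(strip_vendor_code(model, "nvidia"), "groq")
print([op["kernel"] for op in ported])
# → ['groq_matmul', 'groq_softmax', 'groq_layernorm']
```

The point of the second step is the one Ross makes below: once the compiler can add the target-specific code automatically, porting stops being a manual engineering task.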
Ross said the company's goal is to make it easy to move models from Nvidia's chips to its own. He said using the Groq system can also eliminate the engineering effort otherwise required each time LLaMA or other models are changed and need to be made to work on the chips again.
Meta Platforms declined to comment. Meta has been working on making it easier for developers to use non-Nvidia chips, and in October launched a set of free software tools for AI applications that enable switching back and forth between Nvidia and AMD chips.