SAN JOSE, Calif. - NVIDIA (NASDAQ:NVDA) has introduced a new suite of generative AI microservices designed to help enterprises deploy custom applications more efficiently. The NVIDIA NIM microservices, running on the NVIDIA CUDA platform, enable optimized inference on a wide range of AI models, aiming to reduce deployment times from weeks to minutes.
The cloud-native microservices catalog is built to support NVIDIA's CUDA installed base, which includes hundreds of millions of GPUs across various platforms such as clouds, data centers, workstations, and PCs. These services are intended to provide enterprises with the tools to become AI-driven organizations while maintaining ownership and control of their intellectual property.
NVIDIA's enterprise-grade AI microservices are part of the NVIDIA AI Enterprise 5.0 offering and are accessible from major cloud services like Amazon (NASDAQ:AMZN) SageMaker, Google (NASDAQ:GOOGL) Kubernetes Engine, and Microsoft (NASDAQ:MSFT) Azure AI. They integrate with popular AI frameworks and are supported on over 400 NVIDIA-Certified Systems from leading hardware providers.
ServiceNow (NYSE:NOW) is among the first to use these services to develop domain-specific AI applications. Other companies, including Adobe (NASDAQ:ADBE), Cadence, CrowdStrike (NASDAQ:CRWD), Getty Images, SAP, and Shutterstock (NYSE:SSTK), are also accessing the new microservices to transform their data into AI capabilities.
The NIM Inference Microservices are powered by NVIDIA inference software, including Triton Inference Server and TensorRT-LLM, and offer industry-standard APIs for various domains such as language and drug discovery. These pre-built containers are designed to enable rapid scaling and high performance for AI applications in production environments.
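To illustrate, the sketch below shows what calling such a containerized inference endpoint could look like from Python, assuming a language NIM is running locally and exposes an OpenAI-compatible chat-completions route; the port, path, and model identifier are placeholders rather than values confirmed by the announcement.

    import requests

    # Hypothetical local endpoint for a language NIM container; the port,
    # path, and model name are placeholders, not values from the press release.
    NIM_URL = "http://localhost:8000/v1/chat/completions"

    payload = {
        "model": "meta/llama-2-7b-chat",  # placeholder model identifier
        "messages": [
            {"role": "user", "content": "Summarize this week's support tickets."}
        ],
        "max_tokens": 256,
        "temperature": 0.2,
    }

    response = requests.post(NIM_URL, json=payload, timeout=60)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])

Because the APIs are industry-standard, client code written against similar chat-completion endpoints should need little more than a URL and model-name change to target such a deployment.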
NVIDIA also announced CUDA-X microservices for retrieval-augmented generation (RAG), data processing, guardrails, and high-performance computing (HPC). These services streamline data preparation, customization, and training to speed up AI development across industries.
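The retrieval-augmented generation pattern these microservices target can be sketched generically: score enterprise documents against a user query, keep the best matches, and pass them to a generative model as added context. The Python below is a plain illustration of that flow, not NVIDIA's NeMo Retriever or CUDA-X API; the score and generate functions and the in-memory document list are hypothetical stand-ins for retrieval and inference services.

    def generate(prompt: str) -> str:
        # Stand-in for a call to a generative model endpoint.
        return f"[model answer conditioned on a {len(prompt)}-character prompt]"

    def score(query: str, doc: str) -> float:
        # Lexical-overlap stand-in for the embedding similarity a retriever would compute.
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / ((len(q) * len(d)) ** 0.5 or 1.0)

    documents = [
        "Invoices are archived for seven years in the finance data lake.",
        "Support tickets are triaged within four business hours.",
        "GPU clusters are reserved through the internal scheduling portal.",
    ]

    def answer(question: str, top_k: int = 2) -> str:
        # Retrieve the most relevant documents, then augment the prompt with them.
        ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
        context = "\n".join(ranked[:top_k])
        return generate(f"Context:\n{context}\n\nQuestion: {question}")

    print(answer("How long are invoices archived?"))

In a production setting, the scoring and generation steps would be handled by retrieval and inference services rather than local functions, with data-processing and guardrail services wrapped around the same loop.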
Ecosystem partners, including Box, Cloudera, Cohesity, DataStax, Dropbox (NASDAQ:DBX), and NetApp (NASDAQ:NTAP), are collaborating with NVIDIA to integrate proprietary data into generative AI applications. Snowflake (NYSE:SNOW) is leveraging NeMo Retriever microservices to put enterprise data to work in AI application development.
Developers can experiment with NVIDIA microservices at no charge through ai.nvidia.com. For production deployment, enterprises can use NVIDIA AI Enterprise 5.0 on NVIDIA-Certified Systems and leading cloud platforms.
This announcement is based on a press release statement from NVIDIA.