NVIDIA Brings AI Assistants to Life With GeForce RTX AI PCs (2024)

Project G-Assist, NVIDIA ACE NIMs for Digital Humans, and Generative AI Tools Deliver Advanced AI Experiences on RTX Laptops; Plus, RTX-Accelerated APIs for Small Language Models Coming to Windows Copilot Runtime

COMPUTEX—NVIDIA today announced new NVIDIA RTX™ technology to power AI assistants and digital humans running on new GeForce RTX™ AI laptops.

NVIDIA unveiled Project G-Assist — an RTX-powered AI assistant technology demo that provides context-aware help for PC games and apps. The Project G-Assist tech demo debuted with ARK: Survival Ascended from Studio Wildcard. NVIDIA also introduced the first PC-based NVIDIA NIM™ inference microservices for the NVIDIA ACE digital human platform.

These technologies are enabled by the NVIDIA RTX AI Toolkit, a new suite of tools and software development kits that aid developers in optimizing and deploying large generative AI models on Windows PCs. They join NVIDIA’s full-stack RTX AI innovations accelerating over 500 PC applications and games and 200 laptop designs from manufacturers.

In addition, newly announced RTX AI PC laptops from ASUS and MSI feature up to GeForce RTX 4070 GPUs and power-efficient systems-on-a-chip with Windows 11 AI PC capabilities. These Windows 11 AI PCs will receive a free update to Copilot+ PC experiences when available.

“NVIDIA launched the era of AI PCs in 2018 with the release of RTX Tensor Core GPUs and NVIDIA DLSS,” said Jason Paul, vice president of consumer AI at NVIDIA. “Now, with Project G-Assist and NVIDIA ACE, we’re unlocking the next generation of AI-powered experiences for over 100 million RTX AI PC users.”

Project G-Assist, a GeForce AI Assistant
AI assistants are set to transform gaming and in-app experiences — from offering gaming strategies and analyzing multiplayer replays to assisting with complex creative workflows. Project G-Assist is a glimpse into this future.

PC games offer vast universes to explore and intricate mechanics to master, which are challenging and time-consuming feats even for the most dedicated gamers. Project G-Assist aims to put game knowledge at players’ fingertips using generative AI.

Project G-Assist takes voice or text inputs from the player, along with contextual information from the game screen, and runs the data through AI vision models. These models enhance the contextual awareness and app-specific understanding of a large language model (LLM) linked to a game knowledge database, and then generate a tailored response delivered as text or speech.
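
NVIDIA has not published the internals of this pipeline, but the flow described above can be sketched in code. Everything in the snippet below (function names such as capture_screen_context, run_vision_models and query_game_llm) is a hypothetical placeholder for whichever components G-Assist actually uses, not NVIDIA's implementation.

```python
# Hypothetical sketch of the G-Assist flow described above.
# None of these functions are real NVIDIA APIs; they stand in for the
# stages the press release names: screen capture, vision models, an
# LLM grounded in a game knowledge database, and text or speech output.

def capture_screen_context() -> dict:
    """Placeholder: grab a frame / HUD state from the running game."""
    return {"scene": "swamp biome", "player_level": 12}

def run_vision_models(frame_context: dict) -> str:
    """Placeholder: vision models turn raw screen data into a description."""
    return f"Player is in a {frame_context['scene']} at level {frame_context['player_level']}."

def query_game_llm(player_query: str, visual_summary: str) -> str:
    """Placeholder: a game-knowledge-aware LLM answers using both the
    player's question and the visual context."""
    return f"Based on your situation ({visual_summary}), here is a tip for: {player_query}"

def g_assist(player_query: str, speak: bool = False) -> str:
    context = capture_screen_context()
    summary = run_vision_models(context)
    answer = query_game_llm(player_query, summary)
    if speak:
        pass  # a text-to-speech step would go here
    return answer

print(g_assist("How do I tame a raptor?"))
```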

NVIDIA partnered with Studio Wildcard to demo the technology with ARK: Survival Ascended. Project G-Assist can help answer questions about creatures, items, lore, objectives, difficult bosses and more. Because Project G-Assist is context-aware, it personalizes its responses to the player’s game session.

In addition, Project G-Assist can configure the player’s gaming system for optimal performance and efficiency. It can provide insights into performance metrics, optimize graphics settings depending on the user’s hardware, apply a safe overclock and even intelligently reduce power consumption while maintaining a performance target.

First ACE PC NIM Debuts
NVIDIA ACE technology for powering digital humans is now coming to RTX AI PCs and workstations with NVIDIA NIM — inference microservices that enable developers to reduce deployment times from weeks to minutes. ACE NIM microservices deliver high-quality inference running locally on devices for natural language understanding, speech synthesis, facial animation and more.
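
For the language-understanding piece, NIM microservices are typically consumed over standard HTTP, and NVIDIA's LLM NIMs expose an OpenAI-compatible endpoint. The sketch below assumes a hypothetical LLM NIM already running locally at http://localhost:8000/v1 and a placeholder model name; the ACE components for speech synthesis and facial animation use their own interfaces and are not shown.

```python
# Minimal sketch: calling a locally running LLM NIM through its
# OpenAI-compatible endpoint. The base_url, port and model name are
# assumptions for illustration, not a documented ACE configuration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local NIM endpoint
    api_key="not-needed-locally",         # local services typically need no key
)

response = client.chat.completions.create(
    model="local-nim-llm",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are an in-game character."},
        {"role": "user", "content": "Greet the player who just entered the tavern."},
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```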

At COMPUTEX, the gaming debut of NVIDIA ACE NIM on the PC will be featured in the Covert Protocol tech demo, developed in collaboration with Inworld AI. It now showcases NVIDIA Audio2Face™ and NVIDIA Riva automatic speech recognition running locally on devices.

Windows Copilot Runtime to Add GPU Acceleration for Local PC SLMs
Microsoft and NVIDIA are collaborating to help developers bring new generative AI capabilities to their Windows native and web apps. This collaboration will provide application developers with easy application programming interface (API) access to GPU-accelerated small language models (SLMs) that enable retrieval-augmented generation (RAG) capabilities that run on-device as part of Windows Copilot Runtime.

SLMs provide tremendous possibilities for Windows developers, including content summarization, content generation and task automation. RAG capabilities augment SLMs by giving the AI models access to domain-specific information not well represented in ‌base models. RAG APIs enable developers to harness application-specific data sources and tune SLM behavior and capabilities to application needs.
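
The Windows Copilot Runtime RAG APIs have not shipped yet, so the snippet below only illustrates the retrieval-augmented generation pattern itself: fetch the most relevant application-specific snippets, prepend them to the prompt, and hand the result to a small language model. The keyword-overlap retriever and the call_slm stub are hypothetical placeholders, not the forthcoming Microsoft API.

```python
# Illustrative RAG pattern only; not the Windows Copilot Runtime API.
# A real implementation would use vector embeddings and the platform's
# on-device SLM; here retrieval is a toy keyword-overlap score and the
# SLM call is a stub.

APP_DOCS = [
    "To export a project, choose File > Export and pick a preset.",
    "Keyboard shortcut Ctrl+Shift+E opens the export dialog.",
    "Projects autosave every five minutes to the local cache folder.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank docs by how many query words they share."""
    words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_slm(prompt: str) -> str:
    """Placeholder for an on-device small language model call."""
    return f"[SLM would answer here, given a prompt of {len(prompt)} chars]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, APP_DOCS))
    prompt = f"Use only this app documentation:\n{context}\n\nQuestion: {query}\nAnswer:"
    return call_slm(prompt)

print(answer("How do I export my project?"))
```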

These AI capabilities will be accelerated by NVIDIA RTX GPUs, as well as AI accelerators from other hardware vendors, providing end users with fast, responsive AI experiences across the breadth of the Windows ecosystem.

The API will be released in developer preview later this year.

4x Faster, 3x Smaller Models With the RTX AI Toolkit
The AI ecosystem has built hundreds of thousands of open-source models for app developers to leverage, but most models are pretrained for general purposes and built to run in a data center.

To help developers build application-specific AI models that run on PCs, NVIDIA is introducing RTX AI Toolkit — a suite of tools and SDKs for model customization, optimization and deployment on RTX AI PCs. RTX AI Toolkit will be available later this month for broader developer access.

Developers can customize a pretrained model with open-source QLoRA tools. Then, they can use the NVIDIA TensorRT™ Model Optimizer to quantize models to consume up to 3x less RAM. NVIDIA TensorRT Cloud then optimizes the model for peak performance across the RTX GPU lineup. The result is up to 4x faster performance compared with the pretrained model.
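
The press release does not spell out the toolchain, but the QLoRA customization step it mentions is commonly done with open-source Hugging Face libraries; the TensorRT Model Optimizer and TensorRT Cloud stages that follow use NVIDIA's own tools and are not shown. The sketch below assumes a hypothetical base-model identifier and illustrative hyperparameters.

```python
# Sketch of the QLoRA customization step using open-source tooling
# (transformers + peft + bitsandbytes). The model name and
# hyperparameters are illustrative assumptions; quantization with
# TensorRT Model Optimizer and deployment via TensorRT Cloud happen
# afterwards and are not part of this snippet.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

BASE_MODEL = "your-org/your-base-model"  # placeholder identifier

# Load the pretrained model in 4-bit so fine-tuning fits in GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# Attach low-rank adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # depends on the model architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# ...train on application-specific data, then export the adapters for
# the optimization and deployment stages of the RTX AI Toolkit.
```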

The new NVIDIA AI Inference Manager SDK, now available in early access, simplifies the deployment of ACE to PCs. It preconfigures the PC with the necessary AI models, engines and dependencies while orchestrating AI inference seamlessly across PCs and the cloud.

Software partners such as Adobe, Blackmagic Design and Topaz are integrating components of the RTX AI Toolkit within their popular creative apps to accelerate AI performance on RTX PCs.

“Adobe and NVIDIA continue to collaborate to deliver breakthrough customer experiences across all creative workflows, from video to imaging, design, 3D and beyond,” said Deepa Subramaniam, vice president of product marketing, Creative Cloud at Adobe. “TensorRT 10.0 on RTX PCs delivers unprecedented performance and AI-powered capabilities for creators, designers and developers, unlocking new creative possibilities for content creation in industry-leading creative tools like Photoshop.”

Components of the RTX AI Toolkit, such as TensorRT-LLM, are integrated in popular developer frameworks and applications for generative AI, including Automatic1111, ComfyUI, Jan.AI, LangChain, LlamaIndex, Oobabooga and Sanctum.AI.

AI for Content Creation
NVIDIA is also integrating RTX AI acceleration into apps for creators, modders and video enthusiasts.

Last year, NVIDIA introduced RTX acceleration using TensorRT for one of the most popular Stable Diffusion user interfaces, Automatic1111. Starting this week, RTX will also accelerate the highly popular ComfyUI, delivering up to a 60% improvement in performance over the currently shipping version, and 7x faster performance compared with the MacBook Pro M3 Max.

NVIDIA RTX Remix is a modding platform for remastering classic DirectX 8 and DirectX 9 games with full ray tracing, NVIDIA DLSS 3.5 and physically accurate materials. RTX Remix includes a runtime renderer and the RTX Remix Toolkit app, which facilitates the modding of game assets and materials.

Last year, NVIDIA made RTX Remix Runtime open source, allowing modders to expand game compatibility and advance rendering capabilities.

Since RTX Remix Toolkit launched earlier this year, 20,000 modders have used it to mod classic games, resulting in over 100 RTX remasters in development on the RTX Remix Showcase Discord.

This month, NVIDIA will make the RTX Remix Toolkit open source, allowing modders to streamline how assets are replaced and scenes are relit, increase supported file formats for RTX Remix’s asset ingestor and bolster RTX Remix’s AI Texture Tools with new models.

In addition, NVIDIA is making the capabilities of RTX Remix Toolkit accessible via a REST API, allowing modders to livelink RTX Remix to digital content creation tools such as Blender, modding tools such as Hammer and generative AI apps such as ComfyUI. NVIDIA is also providing an SDK for RTX Remix Runtime to allow modders to deploy RTX Remix’s renderer into other applications and games beyond DirectX 8 and 9 classics.
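
NVIDIA has not published the RTX Remix REST endpoint paths here, so the snippet below only illustrates the livelink idea: a script or content-creation tool pushes an updated asset to a locally running service and asks it to refresh. The URL, port, route and JSON fields are all invented for illustration.

```python
# Hypothetical illustration of driving a local REST API from a script.
# The address, route and payload schema below are assumptions, not the
# documented RTX Remix API; they only show the livelink pattern of
# posting an asset replacement to a locally running runtime.
import requests

REMIX_API = "http://localhost:8011"  # placeholder address

def replace_asset(prim_path: str, usd_file: str) -> None:
    payload = {"prim_path": prim_path, "source_file": usd_file}
    resp = requests.post(f"{REMIX_API}/assets/replace", json=payload, timeout=10)
    resp.raise_for_status()

replace_asset("/World/Props/crate_01", "C:/mods/crate_01_remastered.usd")
```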

With more of the RTX Remix platform being made open source, modders across the globe can build even more stunning RTX remasters.

NVIDIA RTX Video, the popular AI-powered super-resolution feature supported in the Google Chrome, Microsoft Edge and Mozilla Firefox browsers, is now available as an SDK to all developers, helping them natively integrate AI for upscaling, sharpening, compression artifact reduction and high-dynamic range (HDR) conversion.

Coming soon to Blackmagic Design's DaVinci Resolve and Wondershare Filmora video editing software, RTX Video will enable video editors to upscale lower-quality video files to 4K resolution, as well as convert standard dynamic range source files into HDR. In addition, the free media player VLC will soon add RTX Video HDR to its existing super-resolution capability.

Learn more about RTX AI PCs and technology by joining NVIDIA at COMPUTEX.

FAQs

Does the Nvidia RTX use AI?

Powered by the new fourth-gen Tensor Cores and Optical Flow Accelerator on GeForce RTX 40 Series GPUs, DLSS 3 uses AI to create additional frames and improve image quality.

How does NVIDIA help AI?

NVIDIA offers the performance, efficiency and responsiveness critical to powering the next generation of AI inference in the cloud, in the data center, at the network edge and in embedded devices.

What is NVIDIA doing with generative AI?

NVIDIA AI is the world's most advanced platform for generative AI and is relied on by organizations at the forefront of innovation. Designed for the enterprise and continuously updated, the platform lets you confidently deploy generative AI applications into production, at scale, anywhere.

What operating system does NVIDIA use for AI?

NVIDIA Base OS provides the stable, fully qualified operating system for running AI, machine learning and analytics applications on the DGX platform.

Is Nvidia AI free?

NVIDIA offers free access to NVIDIA AI workflows to kick-start your AI journey.

Which GPU is best for AI?

NVIDIA A100: The undisputed champion for professional AI tasks. It boasts exceptional processing power, Tensor Cores for deep learning, and high memory bandwidth. However, its high price tag makes it ideal for large-scale research or commercial applications.

Will Nvidia dominate AI?

Nvidia continues to dominate the AI chip market, with some estimates putting its share at more than 95%. The good part is that Nvidia's foundry partner TSMC is set to expand its advanced chip packaging capacity at an annual rate of 60% through 2026.

Who competes with Nvidia in AI?

Shares of American tech giant Intel rose during premarket trading on Tuesday after the company unveiled new artificial intelligence chips for data centers, an offensive that comes just days after chipmaking rivals Nvidia and AMD announced new products of their own as the once-dominant firm competes for market share.

Is AMD or Nvidia better for AI?

Nvidia currently dominates the market for graphics processing units, or GPUs, used for running computationally intensive AI workloads. But AMD has proven to be an able fast-follower. AMD's Instinct MI300 series accelerators provide a viable alternative to Nvidia's current H100 GPU, analysts say.

What is the downside of generative AI?

The misuse of generative AI can lead to ethical concerns, such as deepfake creation or the amplification of harmful content. Generative AI can also perpetuate biases present in the data it is trained on, producing unfair outputs.

Who is Nvidia owned by?

Nvidia (NVDA) Ownership Overview

Approximately 51.74% of the company's stock is owned by Institutional Investors, 4.04% is owned by Insiders and 44.22% is owned by Public Companies and Individual Investors. The ownership structure of Nvidia (NVDA) stock is a mix of institutional, retail and individual investors.

How much does Nvidia AI cost?

NVIDIA AI Enterprise is available as a perpetual license at $3,595 per CPU socket. Enterprise Business Standard Support for NVIDIA AI Enterprise is $899 annually per license.

Does NASA use Nvidia?

NASA research scientist Christoph Keller and collaborators are using NVIDIA V100 Tensor Core GPUs and NVIDIA RAPIDS data science software libraries to accelerate machine learning algorithms using data from the NASA Center for Climate Simulation to model air pollution formation.

What is the most powerful AI GPU?

At Nvidia's 2024 GTC AI conference, the company released the much-anticipated Blackwell platform. The platform consists of a new graphics processing unit (GPU), dubbed the "world's most powerful chip," the GB200 NVL72 rack-scale system, and a set of enterprise AI tools.

Do I need Nvidia for AI?

Do machine learning and AI need a “professional” video card? No. NVIDIA GeForce RTX 3080, 3080 Ti, and 3090 are excellent GPUs for this type of workload. However, due to cooling and size limitations, the “pro” series RTX A5000 and high-memory A6000 are best for configurations with three or four GPUs.

Do GPUs use AI?

The role of GPUs in AI and machine learning

GPUs drive the rapid processing and analysis of complex data in AI and machine learning. Designed for parallel processing, their architecture efficiently manages the heavy computational loads these technologies demand.

Is NVDA into AI?

Nvidia has roughly 80% of the AI chip market, including the custom AI processors made by the cloud computing companies like Google, Microsoft and Amazon.com.

Does NVIDIA Broadcast use AI?

The NVIDIA Broadcast app includes: Noise Removal: use AI to remove background noise from your microphone feed – be it a loud mechanical keyboard or doorbell ringing. The AI network can even be used on incoming audio feeds to mute that one friend who won't turn on push-to-talk.

How to get RTX AI?

All you need to do is run an installer, but the installer is prone to fail, and you'll need to satisfy some minimum system requirements. You need an RTX 40-series or 30-series GPU with at least 8GB of VRAM, along with 16GB of system RAM, 100GB of disk space, and Windows 11.
