NVIDIA Brings AI Assistants to Life With GeForce RTX AI PCs (2024)

NVIDIA today announced new NVIDIA RTX technology to power AI assistants and digital humans running on new GeForce RTX AI laptops. NVIDIA unveiled Project G-Assist—an RTX-powered AI assistant technology demo that provides context-aware help for PC games and apps. The Project G-Assist tech demo debuted with ARK: Survival Ascended from Studio Wildcard. NVIDIA also introduced the first PC-based NVIDIA NIM inference microservices for the NVIDIA ACE digital human platform.

These technologies are enabled by the NVIDIA RTX AI Toolkit, a new suite of tools and software development kits that help developers optimize and deploy large generative AI models on Windows PCs. They join NVIDIA's full-stack RTX AI innovations, which accelerate more than 500 PC applications and games and power 200 laptop designs from manufacturers. In addition, newly announced RTX AI PC laptops from ASUS and MSI feature up to GeForce RTX 4070 GPUs and power-efficient systems-on-a-chip with Windows 11 AI PC capabilities. These Windows 11 AI PCs will receive a free update to Copilot+ PC experiences when available.

"NVIDIA launched the era of AI PCs in 2018 with the release of RTX Tensor Core GPUs and NVIDIA DLSS," said Jason Paul, vice president of consumer AI at NVIDIA. "Now, with Project G-Assist and NVIDIA ACE, we're unlocking the next generation of AI-powered experiences for over 100 million RTX AI PC users."

Project G-Assist, a GeForce AI Assistant
AI assistants are set to transform gaming and in-app experiences—from offering gaming strategies and analyzing multiplayer replays to assisting with complex creative workflows. Project G-Assist is a glimpse into this future.

PC games offer vast universes to explore and intricate mechanics to master, which are challenging and time-consuming feats even for the most dedicated gamers. Project G-Assist aims to put game knowledge at players' fingertips using generative AI.

Project G-Assist takes voice or text inputs from the player, along with contextual information from the game screen, and runs the data through AI vision models. These models enhance the contextual awareness and app-specific understanding of a large language model (LLM) linked to a game knowledge database, and then generate a tailored response delivered as text or speech.
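The flow described above can be sketched in a few lines. NVIDIA has not published an API for the Project G-Assist demo, so every class and function name below is a hypothetical stand-in for the components it mentions: a vision model that reads the screen, a game knowledge database, and a prompt handed to an LLM.

```python
# Illustrative sketch of the Project G-Assist flow described above.
# All names are hypothetical stand-ins; no public G-Assist API exists.

class VisionModel:
    """Stand-in for the AI vision models that read the game screen."""
    def describe(self, screenshot):
        return f"player near {screenshot['landmark']} fighting {screenshot['enemy']}"

class KnowledgeDB:
    """Stand-in for the game knowledge database linked to the LLM."""
    def __init__(self, entries):
        self.entries = entries

    def search(self, query, context):
        # Context-aware retrieval: match entries against query AND screen context.
        terms = set((query + " " + context).lower().split())
        return [e for e in self.entries if terms & set(e.lower().split())]

def answer_player_query(query, screenshot, vision, db):
    """Combine voice/text input with on-screen context into an LLM prompt."""
    context = vision.describe(screenshot)   # game screen -> contextual description
    sources = db.search(query, context)     # retrieval personalized to the session
    # A real system would pass this prompt to an LLM for a text or speech reply.
    return (f"Context: {context}\n"
            f"Sources: {'; '.join(sources)}\n"
            f"Question: {query}")
```

The key design point is that retrieval is conditioned on the screen context, not just the typed question, which is what makes the responses session-aware.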

NVIDIA partnered with Studio Wildcard to demo the technology with ARK: Survival Ascended. Project G-Assist can help answer questions about creatures, items, lore, objectives, difficult bosses and more. Because Project G-Assist is context-aware, it personalizes its responses to the player's game session.

In addition, Project G-Assist can configure the player's gaming system for optimal performance and efficiency. It can provide insights into performance metrics, optimize graphics settings depending on the user's hardware, apply a safe overclock and even intelligently reduce power consumption while maintaining a performance target.
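As a rough illustration of that closed-loop tuning idea, the sketch below steps down a quality preset until a frame-time budget is met. The preset names and per-preset frame times are invented numbers for illustration, not measurements from G-Assist.

```python
# Toy model of tuning graphics settings to hold a performance target.
# Preset names and frame-time costs (ms) are invented for illustration.

QUALITY_PRESETS = ["ultra", "high", "medium", "low"]
FRAME_TIME_MS = {"ultra": 22.0, "high": 16.0, "medium": 11.0, "low": 8.0}

def tune_for_target(target_fps):
    """Pick the highest preset whose frame time fits the fps budget."""
    budget_ms = 1000.0 / target_fps
    for preset in QUALITY_PRESETS:        # ordered best quality first
        if FRAME_TIME_MS[preset] <= budget_ms:
            return preset
    return QUALITY_PRESETS[-1]            # fall back to the lowest preset
```

For a 60 fps target the frame-time budget is about 16.7 ms, so this toy model settles on the "high" preset; a real assistant would measure frame times on the user's hardware instead of using a lookup table.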

First ACE PC NIM Debuts
NVIDIA ACE technology for powering digital humans is now coming to RTX AI PCs and workstations with NVIDIA NIM—inference microservices that enable developers to reduce deployment times from weeks to minutes. ACE NIM microservices deliver high-quality inference running locally on devices for natural language understanding, speech synthesis, facial animation and more.
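NIM microservices generally expose an OpenAI-compatible HTTP interface, so a locally hosted ACE language model can be driven like any chat endpoint. In the sketch below, the URL, port and model name are assumptions for illustration, not documented values.

```python
import json

# Assumed local endpoint; a real deployment documents its own host and port.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_dialogue_request(persona, player_line, model="local-ace-llm"):
    """Build an OpenAI-style chat payload for one digital-human dialogue turn."""
    return json.dumps({
        "model": model,  # hypothetical model name
        "messages": [
            {"role": "system", "content": persona},      # the character's persona
            {"role": "user", "content": player_line},    # the player's spoken line
        ],
        "max_tokens": 128,
    })
```

The payload could then be posted with any HTTP client, e.g. `requests.post(NIM_URL, data=payload, headers={"Content-Type": "application/json"})`, with the response fed onward to speech synthesis and facial animation.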

At COMPUTEX, NVIDIA ACE NIM made its gaming debut on the PC in the Covert Protocol tech demo, developed in collaboration with Inworld AI. The demo showcases NVIDIA Audio2Face and NVIDIA Riva automatic speech recognition running locally on devices.

Windows Copilot Runtime to Add GPU Acceleration for Local PC SLMs
Microsoft and NVIDIA are collaborating to help developers bring new generative AI capabilities to their Windows native and web apps. This collaboration will provide application developers with easy application programming interface (API) access to GPU-accelerated small language models (SLMs) that enable retrieval-augmented generation (RAG) capabilities that run on-device as part of Windows Copilot Runtime.

SLMs provide tremendous possibilities for Windows developers, including content summarization, content generation and task automation. RAG capabilities augment SLMs by giving the AI models access to domain-specific information not well represented in base models. RAG APIs enable developers to harness application-specific data sources and tune SLM behavior and capabilities to application needs.
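The RAG APIs themselves have not been published, so the sketch below only illustrates the general retrieval-augmented pattern: rank application documents against the query, then ground the model's prompt in the best matches. The scoring here is naive term overlap standing in for a real embedding model.

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive term overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    return sorted(documents, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_grounded_prompt(query, documents):
    """Augment an SLM prompt with retrieved domain-specific context."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The on-device SLM then answers from the injected context rather than from whatever its base training happened to cover, which is exactly the gap RAG is meant to close.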

These AI capabilities will be accelerated by NVIDIA RTX GPUs, as well as AI accelerators from other hardware vendors, providing end users with fast, responsive AI experiences across the breadth of the Windows ecosystem.

The API will be released in developer preview later this year.

4x Faster, 3x Smaller Models With the RTX AI Toolkit
The AI ecosystem has built hundreds of thousands of open-source models for app developers to leverage, but most models are pretrained for general purposes and built to run in a data center.

To help developers build application-specific AI models that run on PCs, NVIDIA is introducing RTX AI Toolkit — a suite of tools and SDKs for model customization, optimization and deployment on RTX AI PCs. RTX AI Toolkit will be available later this month for broader developer access.

Developers can customize a pretrained model with open-source QLoRA tools. Then, they can use NVIDIA TensorRT Model Optimizer to quantize models to consume up to 3x less RAM. NVIDIA TensorRT Cloud then optimizes the model for peak performance across the RTX GPU lineup. The result is up to 4x faster performance compared with the pretrained model.
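The memory claim is easy to sanity-check with back-of-the-envelope arithmetic: weight memory scales with bits per parameter, so going from FP16 to 4-bit weights cuts the theoretical footprint 4x. In practice some tensors stay in higher precision, which lines up with the "up to 3x" figure above. The 7-billion-parameter count below is just an example.

```python
def weight_memory_gb(num_params, bits_per_weight):
    """Approximate RAM needed just for the model weights, in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 7e9                               # example 7B-parameter model
fp16_gb = weight_memory_gb(params, 16)     # 14.0 GB at 16 bits per weight
int4_gb = weight_memory_gb(params, 4)      # 3.5 GB at 4 bits per weight
```

That 14 GB FP16 footprint exceeds the VRAM of most laptop GPUs, while the quantized 3.5 GB fits comfortably, which is why quantization matters for on-device deployment.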

The new NVIDIA AI Inference Manager SDK, now available in early access, simplifies the deployment of ACE to PCs. It preconfigures the PC with the necessary AI models, engines and dependencies while orchestrating AI inference seamlessly across PCs and the cloud.

Software partners such as Adobe, Blackmagic Design and Topaz are integrating components of the RTX AI Toolkit within their popular creative apps to accelerate AI performance on RTX PCs.

"Adobe and NVIDIA continue to collaborate to deliver breakthrough customer experiences across all creative workflows, from video to imaging, design, 3D and beyond," said Deepa Subramaniam, vice president of product marketing, Creative Cloud at Adobe. "TensorRT 10.0 on RTX PCs delivers unprecedented performance and AI-powered capabilities for creators, designers and developers, unlocking new creative possibilities for content creation in industry-leading creative tools like Photoshop."

Components of the RTX AI Toolkit, such as TensorRT-LLM, are integrated in popular developer frameworks and applications for generative AI, including Automatic1111, ComfyUI, Jan.AI, LangChain, LlamaIndex, Oobabooga and Sanctum.AI.

AI for Content Creation
NVIDIA is also integrating RTX AI acceleration into apps for creators, modders and video enthusiasts.

Last year, NVIDIA introduced RTX acceleration using TensorRT for one of the most popular Stable Diffusion user interfaces, Automatic1111. Starting this week, RTX will also accelerate the highly popular ComfyUI, delivering up to a 60% improvement in performance over the currently shipping version, and 7x faster performance compared with the MacBook Pro M3 Max.

NVIDIA RTX Remix is a modding platform for remastering classic DirectX 8 and DirectX 9 games with full ray tracing, NVIDIA DLSS 3.5 and physically accurate materials. RTX Remix includes a runtime renderer and the RTX Remix Toolkit app, which facilitates the modding of game assets and materials.

Last year, NVIDIA made RTX Remix Runtime open source, allowing modders to expand game compatibility and advance rendering capabilities.

Since RTX Remix Toolkit launched earlier this year, 20,000 modders have used it to mod classic games, resulting in over 100 RTX remasters in development on the RTX Remix Showcase Discord.

This month, NVIDIA will make the RTX Remix Toolkit open source, allowing modders to streamline how assets are replaced and scenes are relit, increase supported file formats for RTX Remix's asset ingestor and bolster RTX Remix's AI Texture Tools with new models.

In addition, NVIDIA is making the capabilities of RTX Remix Toolkit accessible via a REST API, allowing modders to livelink RTX Remix to digital content creation tools such as Blender, modding tools such as Hammer and generative AI apps such as ComfyUI. NVIDIA is also providing an SDK for RTX Remix Runtime to allow modders to deploy RTX Remix's renderer into other applications and games beyond DirectX 8 and 9 classics.
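The REST surface is not documented here, so the endpoint and payload fields below are hypothetical illustrations of how a content-creation tool might livelink an asset replacement to a running RTX Remix instance.

```python
import json

# Hypothetical local endpoint; the real API will document its own address.
REMIX_API = "http://localhost:8011/remix/api"

def asset_replacement_request(original_hash, replacement_usd):
    """Build a (hypothetical) asset-replacement payload for RTX Remix."""
    return json.dumps({
        "original_asset_hash": original_hash,  # hash identifying the captured asset
        "replacement_path": replacement_usd,   # e.g. a USD file exported from Blender
    })
```

A modding tool would POST such a payload to the running Remix instance after each export, so the remastered asset appears in the game scene without a manual reload.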

With more of the RTX Remix platform being made open source, modders across the globe can build even more stunning RTX remasters.

NVIDIA RTX Video, the popular AI-powered super-resolution feature supported in the Google Chrome, Microsoft Edge and Mozilla Firefox browsers, is now available as an SDK to all developers, helping them natively integrate AI for upscaling, sharpening, compression artifact reduction and high-dynamic range (HDR) conversion.

Coming soon to the video editing software Blackmagic Design's DaVinci Resolve and Wondershare Filmora, RTX Video will enable video editors to upscale lower-quality video files to 4K resolution, as well as convert standard-dynamic-range source files into HDR. In addition, the free media player VLC will soon add RTX Video HDR to its existing super-resolution capability.

FAQs

Does the Nvidia RTX use AI?

The AI processors in every GeForce RTX GPU deliver chart-busting levels of performance across the most demanding games, apps, and workflows.

Why is NVIDIA needed for AI?

The Nvidia ecosystem, from its software to its sourcing of materials, allowed it to position itself as the go-to source for companies that needed massive computing power to handle their AI needs.

Who uses NVIDIA AI chips?

Among the many organizations expected to adopt NVIDIA's Blackwell platform are Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla and xAI.

Is NVIDIA AI free?

Kick-start your AI journey with access to NVIDIA AI workflows—for free.

Does ChatGPT use Nvidia?

The launch of ChatGPT set off a new boom in AI technology and showed how powerful Nvidia's GPUs are. Nvidia shares have soared since the launch, as demand for its products has skyrocketed.

How much does Nvidia AI cost?

NVIDIA AI Enterprise is available as a perpetual license at $3,595 per CPU socket. Enterprise Business Standard Support for NVIDIA AI Enterprise is $899 annually per license.

What company is leading the AI revolution?

While Nvidia is the clear leader in AI hardware, Microsoft has established itself as the leader in AI software.

How does Nvidia make money from AI?

Nvidia's AI revenue comes primarily from selling chips deployed in data centers for AI training and inference, a segment that continues to be a major catalyst for the company.

Who is the leader of Nvidia AI?

Nvidia is led by co-founder and CEO Jensen Huang, who, according to Fortune, created a unique culture that allows the AI chip leader to move "very, very fast."

Who is Nvidia's biggest competitor in AI?

Huawei developed the Ascend series of chips as a rival to Nvidia's line of AI chips. The Chinese company's 910B chip is its main rival to Nvidia's A100, which launched roughly three years ago.

Does Tesla use Nvidia chips?

Tesla requires plenty of Nvidia's GPUs, which are specialized for AI training and workloads. Those chips are in limited supply due to soaring demand from Google, Amazon, Meta, Microsoft, OpenAI and others.

Is Amazon buying Nvidia chips?

AI chips are expected to grow to account for 30% of the total chip market by 2030, up from just 10% in 2023. Amazon's decision to continue ordering Nvidia chips also bucks the trend of Big Tech companies designing in-house chip technology.

What language does Nvidia AI use?

One of the most popular languages used for AI development by Nvidia is Python. Python is a high-level programming language that has become the go-to language for data scientists, researchers, and AI developers worldwide.

Does Google AI use Nvidia?

Further widening the availability of NVIDIA-accelerated generative AI computing, Google Cloud also announced the general availability of A3 Mega will be coming next month. The instances are an expansion to its A3 virtual machine family, powered by NVIDIA H100 Tensor Core GPUs.

What can Chat with RTX do?

Chat with RTX (ChatRTX) is a tech demo developed by Nvidia that enables users to run an AI chatbot locally on their PC using their own documents and files. The app is free to download but requires a Windows 11 system equipped with Nvidia's latest software and hardware.

Which Nvidia graphics card for AI?

The GeForce RTX 4080 SUPER generates AI video 1.5x faster — and images 1.7x faster — than the GeForce RTX 3080 Ti GPU. The Tensor Cores in SUPER GPUs deliver up to 836 trillion operations per second, bringing transformative AI capabilities to gaming, creating and everyday productivity.

Are GPUs used for AI?

GPU architecture offers unmatched computational speed and efficiency, making it the backbone of many AI advancements. The foundational support of GPU architecture allows AI to tackle complex algorithms and vast datasets, accelerating the pace of innovation and enabling more sophisticated, real-time applications.

Is NVDA involved in AI?

Nvidia designs the world's most powerful graphics processing units (GPUs) for data centers, which developers use to build, train, and deploy their AI models. Those chips sent Nvidia's data center revenue soaring 217% during fiscal 2024 (ended Jan. 28, 2024), and that momentum should continue in fiscal 2025.

How to get RTX AI?

All you need to do is run an installer, but the installer is prone to fail, and you'll need to satisfy some minimum system requirements. You need an RTX 40-series or 30-series GPU with at least 8GB of VRAM, along with 16GB of system RAM, 100GB of disk space, and Windows 11.

