PCWorld
PC gamers face an ongoing problem: more powerful games demand more powerful hardware, all in the service of more realistic experiences and graphics. But can gamers avoid paying outrageous sums of money to keep up? Texture compression might be an answer, shrinking the size of games as well as allowing them to fit into the limited video memory of older, cheaper cards. Both Nvidia and Intel are working on ideas, which could be available to new and existing hardware in the coming months. All told, it's an exciting potential solution to the ongoing scarcity of RAM and video RAM, which is driving up prices (including video-card prices) and holding back new graphics-card releases.

Both companies outlined their plans in recent weeks: Intel announced its Neural Texture Compression SDK this weekend, while Nvidia's related neural texture compression talk at GTC 2026 showed how textures could be effectively compressed using its hardware, too.

3D graphics is essentially a puppet show: Historically, each object is created from a framework of surfaces, and the game designers then tell your PC how to "cover" those surfaces with textures that are individually lit and colored. It's this texture data that can make up the bulk of a game's size, since each object can have several "maps" applied to it. A realistic-looking brick might be coded to tell the game which parts of the brick are shadowed, which are rough or glossy, and how those differences affect the color of the brick itself. And these maps matter: A game like Hogwarts Legacy might require 58GB of data; its "High Definition Texture Pack" can require an additional 18.3GB. Loading textures in and out of memory can also cause stuttering in games, so reducing the size of those textures can improve the way a game plays, too.
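To see why texture maps dominate install sizes, a back-of-envelope calculation helps. The figures below are illustrative assumptions, not numbers from Intel, Nvidia, or any particular game: a single uncompressed 4K map stored as RGBA8 already weighs 64 MiB, and one material often ships with several such maps.

```python
# Back-of-envelope texture memory footprint (illustrative numbers only).
BYTES_PER_TEXEL = 4          # RGBA, 8 bits per channel, uncompressed
RES = 4096                   # one 4K-by-4K texture map

maps = ["albedo", "normal", "roughness", "ambient occlusion"]
per_map_mib = RES * RES * BYTES_PER_TEXEL / 2**20
total_mib = per_map_mib * len(maps)
print(f"{per_map_mib:.0f} MiB per map, {total_mib:.0f} MiB for one material")
```

Multiply that by the hundreds of materials in a modern game and it is easy to see how texture data swells into tens of gigabytes, even before high-definition texture packs.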
Microsoft has already said that it plans to build support for neural texture compression into DirectX, via a new DirectX API. It's assuming that developers will want to use what it calls both "small models" and "scene models" to enable next-generation scene rendering, which could include neural lighting and neural texture compression. In both cases, AI would be used to calculate how a scene should be drawn and shaded, rather than the GPU actually performing all of those calculations itself.

Two ways of shrinking game data

Intel engineers showed its Texture Set Neural Compression (TSNC) working in two variants, compressing textures up to 9X or over 17X versus uncompressed data, depending upon the method used. According to Intel graphics engineer Marissa Dubois, the textures could be decompressed at various points: at installation, while the game is loading, or even later. Like other compression techniques, TSNC exploits similarities in the texture-map data to reduce its size.

Intel's TSNC in action. (YouTube / Intel)

Intel's presentation noted that there is a bit of what it calls "perceptual error": 6 to 7 percent for the 17X variant, or 5 percent for the 9X variant. Both can either use the XMX cores inside Intel Arc GPUs or "fall back" to a more generic implementation that can run on other GPUs and even CPUs. XMX inference on Panther Lake is about 3.4 times faster than the fallback method, Intel said.

For now, this is a demo, Intel engineers said. An alpha software development kit (SDK) is scheduled for later this year, followed by a beta and then an eventual release.

Nvidia has already unveiled DLSS 5, the controversial graphics enhancement that adds generative AI as a way to "improve" the quality of games, though whether those "improvements" actually improve anything is very much in doubt. Nvidia's Neural Texture Compression, by contrast, is explicitly deterministic, which means it always reconstructs the same texture that the developer designed.
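Intel's quoted ratios translate directly into VRAM savings. In the quick sketch below, only the 9X and 17X ratios and the rough error percentages come from Intel's presentation; the 256 MiB texture-set size is a made-up example:

```python
# Hypothetical uncompressed texture set; the ratios are Intel's quoted figures.
UNCOMPRESSED_MIB = 256.0

for ratio, error_pct in ((9, "~5"), (17, "~6-7")):
    compressed = UNCOMPRESSED_MIB / ratio
    print(f"{ratio}X: {compressed:.1f} MiB (quoted perceptual error {error_pct}%)")
```

The trade-off the presentation describes is visible here: the more aggressive 17X variant roughly halves the footprint again versus 9X, at the cost of slightly higher perceptual error.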
Nvidia does use a small neural network to reconstruct the data, running on its Tensor cores. The RTX Neural Texture Compression SDK is available for developers to use today.

Nvidia showed off two demonstrations: neural texture compression and neural materials. Nvidia demonstrated how NTC could be used to compress a scene that previously required 6.5GB of VRAM down to just 970MB. It's less clear what Nvidia is doing with neural materials, but the company appears to be telling the GPU what the properties of a material actually are, and then compressing those instructions. The graphics card would then essentially construct the material itself, speeding up the process by between 1.4X and 7.7X, according to Nvidia. In effect, Nvidia appears to be trying to encode how materials act in the real world.

AMD doesn't yet offer an SDK to lower memory consumption in games. However, in 2024 it published a paper on neural texture block compression, slicing texture sizes by 70 percent.

All told, neural compression isn't something that can benefit you yet. But it appears to be just a few months off… and let's face it, it probably can't get here soon enough.
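Nvidia's talk didn't publish NTC's internals, so the following is purely an illustrative sketch of the general idea behind neural texture compression: store a small grid of latent vectors instead of full-resolution texels, and run a tiny decoder network at sample time to reconstruct each texel. Every size, name, and weight here is an assumption; a real implementation trains the latents and decoder weights per texture so the reconstruction matches the original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical compact representation: an 8x8 grid of 4-dim latent vectors
# standing in for a 64x64 texture (far fewer stored values than raw texels).
latents = rng.standard_normal((8, 8, 4)).astype(np.float32)

# Tiny "decoder" MLP (random here; it would normally be trained per texture).
W1 = rng.standard_normal((6, 16)).astype(np.float32)  # input: latent(4) + uv(2)
W2 = rng.standard_normal((16, 3)).astype(np.float32)  # output: RGB

def decode_texel(u, v):
    """Reconstruct one texel from the nearest latent vector plus its UV coords."""
    lat = latents[int(v * 8) % 8, int(u * 8) % 8]
    x = np.concatenate([lat, [u, v]]).astype(np.float32)
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    return 1 / (1 + np.exp(-(h @ W2)))    # sigmoid -> RGB in [0, 1]

# Decompress the full 64x64 texture on demand, texel by texel.
texture = np.array([[decode_texel(u / 64, v / 64) for u in range(64)]
                    for v in range(64)])
print(texture.shape)  # (64, 64, 3)
```

The point of the sketch is the shape of the scheme, not the quality of the output: the GPU pays a small amount of inference work per texel in exchange for keeping only the latent grid and decoder weights in VRAM, which is consistent with the large memory savings Nvidia demonstrated.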