Why everyone hates NVIDIA DLSS 5 (but will love it eventually)

Upscaling, or reconstructing frames for video games in real time, is a controversial practice. Purists balk at the idea, but users with “weak” or mid-tier gaming systems appreciate the extra fluidity it brings. NVIDIA does it. So does AMD. And Intel, too. But all hell broke loose when NVIDIA announced the next iteration of its super-sampling tech, largely owing to the excessively AI-fied look of the visuals, human faces in particular.
It’s been a wild few weeks in the tech world, and if you’ve been following the DLSS 5 (Deep Learning Super Sampling) saga, you know it’s been a rollercoaster of “Wow,” “Wait, what?”, and “Get that thing away from my game.” Here is the breakdown of the DLSS 5 drama, from the leather-jacket-clad hype to the current “2D filter” reality.
The Story So Far: The “GPT Moment” That Wasn’t
It all started when Jensen Huang took the stage at NVIDIA’s GTC 2026 and dropped the bombshell: DLSS 5. NVIDIA wasn’t just upscaling pixels anymore; they were generatively reimagining them. Jensen called it the “GPT moment for graphics,” promising that AI would now handle the heavy lifting of visual realism: things like skin texture, fabric sheen, and complex lighting. Unfortunately, the hype didn’t even last 24 hours.
Within a few hours, the internet was flooded with side-by-side comparisons of Resident Evil Requiem and Starfield. The community’s response? “AI Slop.” Instead of making games look “better,” DLSS 5 was “Yassifying” characters by smoothing out gritty skin textures, adding unintended makeup, and making everyone look like an Instagram influencer from 2022.
Then came the “Betrayal.” As reported by Insider Gaming, major game developers were blindsided. Artists at Ubisoft and Capcom reportedly found out about the DLSS 5 demos at the same time we did. NVIDIA scrambled with damage control, promising a “Full Creative Control” SDK with intensity sliders. But the final blow came just days ago: An email interview between YouTuber Daniel Owen and NVIDIA’s Jacob Freeman revealed that DLSS 5 isn’t actually tapping into the deep 3D geometry of the game. It’s essentially a high-end 2D post-processing filter being laid over the screen. The “Neural Revolution” turned out to be a very expensive coat of paint.
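To make that distinction concrete, here’s a minimal sketch in Python. Everything in it is hypothetical (NVIDIA has not published DLSS 5 internals), but it illustrates why “2D filter” is such a damning label: a screen-space pass only ever sees the flat image, while a geometry-aware pass gets the engine’s actual scene data and can respect it.

```python
# Hypothetical illustration -- NVIDIA has not published DLSS 5 internals.

# A "2D post-processing filter" only ever sees the final flat image:
def screen_space_filter(rgb_frame):
    # All it can do is guess: is this dark patch intentional fog,
    # or an artifact to "clean up"? It has no way to know.
    return [[min(255, int(px * 1.1)) for px in row] for row in rgb_frame]

# A geometry-aware pass gets real scene data from the engine:
def geometry_aware_pass(rgb_frame, depth_buffer, material_ids):
    # With per-pixel depth and material tags, the pass can tell
    # "volumetric fog the artist placed" apart from "noisy shadow"
    # and leave the former untouched.
    out = []
    for y, row in enumerate(rgb_frame):
        out_row = []
        for x, px in enumerate(row):
            if material_ids[y][x] == "fog":  # artist-tagged: don't touch
                out_row.append(px)
            else:
                out_row.append(min(255, int(px * 1.1)))
        out.append(out_row)
    return out
```

The second version is only possible if the engine hands over depth and material buffers, which is exactly the collaboration that, per the Freeman interview, isn’t happening.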
Why “Better” Isn’t Always Better
On paper, DLSS 5 sounds like magic. And in some ways, it is. If you look at a landscape or a static environment, the AI-infused shadows and highlights look objectively “cleaner.” But here’s the problem: cleaner isn’t always the vibe.
Video games are art, and art is about intention.
If a developer spends three years perfecting a hazy, moody, claustrophobic hallway in a horror game, they don’t want an AI coming in and “fixing” it.
DLSS 5 has a habit of brightening up dark corners and scrubbing away atmospheric fog because it treats them as “errors” to be corrected. The fact that developers were surprised by the demo is the biggest red flag. It’s classic corporate hierarchy: the suits at the top say “Yes” to NVIDIA for the marketing buzz, while the actual creative teams are left in the dark. If NVIDIA had actually collaborated with the artists, it could have fed the AI 3D data models and blueprints instead.
Imagine if the AI knew exactly where a character’s scar was supposed to be, or how a specific fabric was meant to reflect light. In fact, as Veedrac on Reddit recently showcased, games that have DLSS 5 with tone-mapping actually look stunning. It proved that the tech can work, but only when a human is steering the ship. By launching it as a “black box” filter, NVIDIA basically bypassed the very people who make games worth playing.
And then there’s the elephant in the room: Data Sovereignty. As a creative designer, why would I be okay with handing over my raw character designs and lighting maps to an AI model? We’ve seen how this works. The AI uses that data to “learn,” and eventually it’s building things based on your hard work without you in the loop. It’s a valid fear that NVIDIA is building a master engine that might one day make the “Artist” part of “Game Artist” optional.
The Future Awaits
Is DLSS 5 dead on arrival? Probably not. If history tells us anything, this is just NVIDIA’s standard operating procedure: break things first, fix them later. Look back at 2018: Ray Tracing launched, tanked our frame rates, and looked “fine” at best. Today? It’s the gold standard. In 2022, they gave us Frame Generation, and we all laughed at the “fake frames.” Now? It’s practically the only way to hit a playable 4K.
Don’t get me wrong, I’d genuinely take raw, native rasterization over this AI mess any day. I want my games rendered for real, without the digital shortcuts. But that’s just not the world we live in. NVIDIA owns 95% of the market, as reported by Jon Peddie Research, which means whatever they introduce, be it good, bad, or ugly, eventually becomes the industry blueprint.
Right now, DLSS 5 is stuck in its “Uncanny Valley” phase. It’s awkward, over-aggressive, and currently getting slandered for being a glorified 2D filter. But eventually, NVIDIA will have to realize they can’t treat a game like a flat video file. That promised SDK needs to be more than just a slider; it needs to be a bridge that lets developers lock in their artistic soul. Once DLSS 5 learns to respect the “mood” as much as the “pixels,” it will change gaming forever.
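What might “more than just a slider” look like? A purely hypothetical sketch (none of these names are real NVIDIA APIs, and the SDK is unreleased): the difference between the promised global intensity knob and per-asset locks that let artists mark what the AI must not touch.

```python
from dataclasses import dataclass, field

# Everything here is hypothetical -- the actual SDK is unreleased.

@dataclass
class GlobalSlider:
    # What NVIDIA has promised so far: one knob for the whole frame.
    intensity: float = 1.0  # 0.0 = off, 1.0 = full AI "reimagining"

@dataclass
class ArtisticLock:
    # What developers arguably need: per-asset intent the AI must respect.
    asset_id: str
    preserve: set = field(default_factory=set)  # e.g. {"fog", "skin_texture"}

def effective_intensity(slider, lock, feature):
    # A locked feature overrides the global slider entirely.
    return 0.0 if feature in lock.preserve else slider.intensity
```

Under this model, an artist could ship that hazy horror hallway with its fog locked at zero AI intensity while still letting DLSS 5 clean up distant geometry, which is the “respect the mood” behavior the current black-box filter can’t offer.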
And we know how this ends: the industry follows NVIDIA like clockwork. We can bitch all we want today, but in two years, we’ll probably be debating whether AMD’s “FSR 5” is as good at “re-painting” characters as Team Green. The tech is inevitable. We just have to make sure the art doesn’t get lost in the upscale.