Can you just fucking render the graphics as they were made?
Best I can do is constant hallucinations and a burning planet.
Smear some more blur on my game, will ya?
No thanks, I’ll stick with real rendering.
Every game about to be looking like that cursed AI generated Minecraft thing.
Can’t wait for system TDPs that make industrial refrigeration seem cheap and efficient.
They’re just desperately searching for reasons to make new gpu gens attractive.
Btw, doesn’t that lead to way overengineered GPUs? I mean, even more than now.
That’s how new GPU generations have been pushed for as long as GPUs have existed.
And no, there is no overengineering of GPUs. You don’t want stagnation, or the kind of underwhelming-to-nonexistent generational jumps you get with other tech products like smartphones, do you?
Yes, I do. A new generation every 5 years would be plenty, for both.
And so AAA gaming dies because most consumers are too slow to tell the difference between authentic talent and AI bullshit.
I feel like I’ve read predictions like this one a million times since the 1990s…
This may be nothing more than some kind of morbid curiosity, but I really want to know what these “neural rendering capabilities” turn out to be. Maybe it’s cool stuff.
It’s not a secret; Nvidia publishes white papers about what their technologies are and how they work:
https://research.nvidia.com/labs/rtr/neural_appearance_models/
It seems like everyone in this thread thinks it’s like that AI generated Minecraft demo. Though I can’t blame them too much since the article is complete shit as well.
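If you’re curious what’s actually in that paper: the rough idea is to replace a heavy, layered material shader with a small learned per-texel code (a “neural texture”) plus a tiny MLP that decodes it into reflectance at shading time. Here’s a toy sketch of that shape in PyTorch, just to show what “neural appearance model” means in practice (all sizes, inputs and names are made up for illustration, not Nvidia’s actual architecture):

```python
# Toy sketch: replace a big layered material shader with
# (latent code from a neural texture) + (tiny MLP decoder).
# Everything here is illustrative, not the paper's real design.
import torch
import torch.nn as nn

FEATURE_DIM = 8   # learned per-texel latent code ("neural texture")
DIR_DIM = 6       # view direction + light direction (3 + 3)

# Tiny decoder: (material code, view dir, light dir) -> RGB reflectance
decoder = nn.Sequential(
    nn.Linear(FEATURE_DIM + DIR_DIM, 32),
    nn.ReLU(),
    nn.Linear(32, 32),
    nn.ReLU(),
    nn.Linear(32, 3),
    nn.Sigmoid(),  # keep reflectance in [0, 1]
)

# Pretend we shaded 4096 points this frame: each gets a latent code
# sampled from the neural texture plus its view/light directions.
latent = torch.randn(4096, FEATURE_DIM)
dirs = torch.randn(4096, DIR_DIM)

rgb = decoder(torch.cat([latent, dirs], dim=-1))  # (4096, 3) shaded colors
print(rgb.shape)
```

The point is that the MLP is tiny enough to evaluate per shading point, which is why it needs dedicated hardware support rather than running as a giant generative model over the whole frame.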
Thank you so much! Looks very interesting indeed. Definitely gonna watch the video when I’m on WiFi again.
I’ve seen some early demonstrations of this, like this one from three years ago that makes GTA V look photoreal:
https://www.youtube.com/watch?v=P1IcaBn3ej0
The potential of this tech is enormous. Imagine an alternative to RTX Remix that turns any game into a near-photoreal experience. Genres like simulations and racing games would be ideal at first: they tend to attempt photorealism instead of creative art styles anyway, they mostly feature inanimate objects (dodging most of the uncanny valley that way), and they could be transformed with models trained on relatively limited data.
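For the curious, the rough shape of that GTA V demo is an image-to-image network: it takes the game’s rendered frame plus some G-buffer channels (depth, normals, etc.) and re-shades it to look like real photos. A toy PyTorch sketch of that pipeline, just to show the plumbing (the channel layout, network size and names are made up; the real networks are far bigger and the training setup is the actual hard part):

```python
# Toy sketch of "enhance a rendered frame": rendered RGB + G-buffers in,
# photoreal-ified RGB out. Placeholder network, illustrative only.
import torch
import torch.nn as nn

IN_CHANNELS = 3 + 1 + 3  # rendered RGB + depth + normals (assumed layout)

enhancer = nn.Sequential(
    nn.Conv2d(IN_CHANNELS, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),  # enhanced RGB out
)

frame = torch.rand(1, 3, 720, 1280)    # what the game actually rendered
depth = torch.rand(1, 1, 720, 1280)    # G-buffer: depth
normals = torch.rand(1, 3, 720, 1280)  # G-buffer: normals

enhanced = enhancer(torch.cat([frame, depth, normals], dim=1))
print(enhanced.shape)  # (1, 3, 720, 1280): same frame, re-shaded
```

Because the network conditions on the game’s own G-buffers, it keeps the geometry and layout the engine produced and only changes shading and materials, which is why it doesn’t look like that hallucinated Minecraft thing.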
What a bag of kneejerk hot takes this thread is.
This tech allows games to render like it’s 2003, and get a one-step filter to look like a goddamn Pixar film.