How is the performance getting better (more FPS) when you use the GPU even more?
Just buy a 4090, easy
I already did but Cyberpunk is still boring. What am I doing wrong?
Not the game for you?
You’re probably right. I like RPGs and I love science fiction, but for some reason I just can’t get into Cyberpunk. You’d think I’d enjoy it but nope. The story just doesn’t captivate me and there isn’t much to do outside of the campaign, so…
The new AI-based denoiser probably uses the tensor cores, which are severely underutilized in gaming, thus netting extra performance seemingly out of thin air. One of the main criticisms of the new Nvidia cards was how much silicon area they spend on those RT and AI cores, which could’ve been used to improve traditional rendering performance had Nvidia used the area for standard GPU cores instead. Now it seems Nvidia finally has more ways to utilize that silicon for gaming, though it’s not clear whether it’s fully utilized yet (there were reports that the tensor cores sit at only 1% utilization even with DLSS frame generation turned on).
If they had used that space for raster, AMD would be toast right now.
They say it’s because you don’t have to do the same denoising passes with the new algorithm.
From my understanding, this is an optimized method of ray tracing. We can expect better visuals and better performance with each iteration, especially since it’s still a relatively new technology for games.
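A toy sketch of why denoising buys performance (plain Python, nothing like Nvidia's actual algorithm — the flat "scene", the box filter, and the sample counts are all made up for illustration): tracing 1 ray per pixel is cheap but noisy, and a cheap filtering pass over that noisy image gets you much closer to an expensive many-rays-per-pixel render.

```python
# Toy model: each pixel's true radiance is 0.5, and each traced ray returns
# a noisy uniform [0, 1) sample of it. More rays per pixel = less noise,
# but denoising a 1-ray image is far cheaper than tracing 64 rays.
import random

random.seed(0)
SIZE = 64          # pixels in our 1-D "image"
TRUE_VALUE = 0.5   # ground-truth radiance for every pixel

def render(samples_per_pixel):
    """Monte Carlo estimate per pixel; error shrinks like 1/sqrt(N)."""
    return [
        sum(random.random() for _ in range(samples_per_pixel)) / samples_per_pixel
        for _ in range(SIZE)
    ]

def denoise(pixels, radius=4):
    """Crude box filter standing in for the (far smarter) AI denoiser."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out

def mse(pixels):
    """Mean squared error against the known ground truth."""
    return sum((p - TRUE_VALUE) ** 2 for p in pixels) / len(pixels)

noisy_1spp = render(1)          # cheap: one ray per pixel
clean_64spp = render(64)        # expensive: 64 rays per pixel
denoised = denoise(noisy_1spp)  # cheap render + cheap filter

print(f"1 spp raw:       {mse(noisy_1spp):.4f}")
print(f"1 spp denoised:  {mse(denoised):.4f}")
print(f"64 spp raw:      {mse(clean_64spp):.4f}")
```

The denoised 1-sample image lands close to the 64-sample one at a fraction of the ray budget — and if the filtering runs on otherwise-idle tensor cores, the shader cores barely pay for it at all.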