Well, I don't want to go too deep into tech details since this is a gaming forum, but there are texture settings we can actually play with, even if I believe we shouldn't have to downgrade texture quality on such high-end GPUs.
I'm not a Lumberyard engine dev, but these things are fairly universal across real-time engines (UE5's Nanite and Lumen being the exception).
Basically, what is happening:
- the ground quality setting is also connected to texturing via normal maps (I have no clue whether they used displacement maps, but those could also play a role); this setting goes hand in hand with the texture quality setting
- and of course we have the texture quality setting itself
- we also have the post-processing setting, which can affect particles; I would experiment with that one as well
These three settings fill up the GPU's VRAM. On the high-end Nvidia cards mentioned above, which use GDDR6X memory that can run as hot as a nuclear reactor, this can cause issues: once a temperature cap is hit, the GPU will throttle, meaning it drops into a safe mode and crawls along at 30-50% load.
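To illustrate the throttling behavior I mean, here is a minimal sketch. The temperature cap and the clamp-down curve are made-up illustrative numbers, not actual Nvidia firmware values:

```python
# Minimal sketch of temperature-capped throttling (illustrative numbers only).

TEMP_CAP_C = 95  # hypothetical memory-junction throttle point

def target_load(mem_temp_c: float) -> float:
    """Return the load fraction the card allows at a given VRAM temperature."""
    if mem_temp_c < TEMP_CAP_C:
        return 1.0                       # below the cap: full speed
    over = mem_temp_c - TEMP_CAP_C
    # past the cap: clamp down hard, never below a 30% floor
    return max(0.3, 1.0 - 0.1 * over)

print(target_load(80))    # full load
print(target_load(110))   # the "sleeping at 30% load" state
```

The point is that the card does not fail loudly; it just quietly caps itself, which looks exactly like the stutter people are reporting.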
As far as I can see, the engine uses opacity maps as a kind of dynamic subdivision: if something is outside the character's configured view radius, it is not rendered at full texture resolution (you can see this when a tree branch very close to the camera blocks the character's actual view radius). The same can be seen in the distance (it looks like the low-quality pixel-streaming render method some real-time engines use, mostly web-based ones).
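The distance-based resolution drop described above can be sketched like this. The radius, thresholds, and resolutions are all assumptions for illustration, not the engine's real values:

```python
# Rough sketch of distance-based texture resolution selection
# ("dynamic subdivision" as described above). All numbers are made up.

VIEW_RADIUS = 50.0  # hypothetical character view radius in meters

def texture_resolution(distance_m: float) -> int:
    """Pick a texture resolution (pixels per side) from camera distance."""
    if distance_m <= VIEW_RADIUS:
        return 2048      # full resolution inside the view radius
    if distance_m <= VIEW_RADIUS * 2:
        return 1024      # half resolution just outside it
    return 256           # far away: the low-res "pixel streaming" look

print(texture_resolution(10))    # close branch: full res
print(texture_resolution(300))   # distant tree: low res
```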
Now, here is how this connects to our issue:
In some map areas the so-called instancing is not well optimized. Let's say a bush in a viewed area has 3 texture variations (an optimal maximum); then when you see 200 bushes, they are loaded into your GPU as only 3 different texture sets (albedo, ambient occlusion with some baked-in shadows, normal map, maybe bump or displacement, opacity, and roughness). As you can see, an object needs around 3-6 texture maps to show what we see in game. If all of these maps are high resolution, that could be 3-5 2K textures per object. If an area is not well optimized in terms of asset instancing, it may load far more than the optimal number of texture sets for the objects in view, so our already sensitive GDDR6X memory receives way too much data, has to read and write like there's no tomorrow, and overheats.
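To put rough numbers on the bush example, here is a back-of-the-envelope calculation. It assumes uncompressed RGBA8 (4 bytes per pixel) and 5 maps per texture set; real engines use compressed formats, so treat these as worst-case figures:

```python
# Back-of-the-envelope VRAM math for the 200-bush example above.
# Assumes uncompressed RGBA8 textures; real engines compress, so this
# is a worst case, but the ratio is what matters.

RES = 2048             # 2K texture, pixels per side
BYTES_PER_PIXEL = 4    # RGBA8, uncompressed
MAPS_PER_SET = 5       # albedo, AO, normal, opacity, roughness

def set_size_mb(res: int = RES) -> float:
    """Size of one full texture set in megabytes."""
    return res * res * BYTES_PER_PIXEL * MAPS_PER_SET / (1024 ** 2)

bushes = 200
variants = 3  # the "optimal max" texture variations

instanced = variants * set_size_mb()  # well optimized: 3 sets total
broken = bushes * set_size_mb()       # broken instancing: one set per bush

print(f"instanced: {instanced:.0f} MB")  # 3 sets shared by all bushes
print(f"broken:    {broken:.0f} MB")     # 200 separate sets
```

Even if the real numbers are much smaller thanks to compression, the gap between "3 shared sets" and "one set per object" is the kind of difference that cooks VRAM.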
And an important factor:
Even if you are monitoring your GPU core temps, this texture optimization issue affects the VRAM, which you can only monitor with specialized software or physical measuring tools.
Conclusion:
- some maps are not well optimized in terms of instancing
- some maps are not well optimized in terms of dynamic subdivision, since the extremely organic terrain has way too many concave and convex forms that smuggle themselves into the view radius, so the dynamic subdivision has to render far more objects at high resolution than would be optimal, or acceptable for our cards' VRAM. I mean here: dense woods with lots of vegetation and other objects that keep entering the view radius and must be rendered at the configured max resolution
Solution:
Some maps should be rebuilt, or the range of the dynamic subdivision should be lowered (maybe by introducing a fake depth-of-field effect instead of the current strange opacity solution?).
I didn't want to play the smart-ass, but this way you can probably all see where the wind blows from: the solution isn't just a little code change here and there, but maybe a full map rebuild.
Now I have just one question on my mind: how the heck did this issue not show up during closed beta / alpha testing?