But that’s completely fine; GPUs are designed to run at 100%. The vast majority of games have uncapped frame rates, and as long as they’re not limited by the CPU they will keep throwing new frames at the GPU. If that’s a 2D menu or something, the GPU will just redraw the menu as fast as it can, typically a few thousand FPS, and the GPU usage will stay at “100%”.
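To illustrate, here's a toy sketch of an uncapped loop (plain Python, no real rendering; `draw_menu` is a made-up stand-in for a trivial 2D menu draw). With no frame cap and no vsync, the loop never idles, so whatever is executing it reports as fully busy even though each “frame” is near-zero work:

```python
import time

def draw_menu():
    # Stand-in for rendering a trivial 2D menu: almost no work per frame.
    return [("button", "Play"), ("button", "Quit")]

def run_uncapped(seconds=0.5):
    """Render as fast as possible with no frame cap, like an uncapped game menu."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        draw_menu()
        frames += 1
    return frames / seconds  # frames per second

fps = run_uncapped()
print(f"{fps:.0f} FPS")  # very high, because the loop never waits for anything
```

Same idea as the GPU case: the loop is “100% busy” by construction, regardless of how trivial each frame is.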
And again, by “usage” this means the scheduler has a full queue of instructions to execute; it doesn’t mean that 100% of the transistors on the die are actually drawing power. A basic menu is unlikely to touch the RT cores or Tensor cores, and probably exercises just one of the general-purpose units, either INT, FP16, or FP32. It’s not doing anything complex enough to require specialization from the entire GPU, so the power draw while at 100% “usage” will be lower.
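You can see the usage-vs-power gap yourself with `nvidia-smi` on an NVIDIA card. A minimal sketch that parses its CSV query output; the sample line here is made up for illustration, and the query fields assume a reasonably recent driver:

```python
def parse_gpu_stats(csv_line):
    """Parse one line of:
    nvidia-smi --query-gpu=utilization.gpu,power.draw,power.limit \
               --format=csv,noheader,nounits
    Returns (usage %, power draw W, power limit W)."""
    util, draw, limit = (float(x) for x in csv_line.split(","))
    return util, draw, limit

# Hypothetical sample: a menu can show 100% usage at well under the power limit.
util, draw, limit = parse_gpu_stats("100, 140.25, 350.00")
print(f"usage {util:.0f}%  power {draw:.0f} W of {limit:.0f} W cap")
```

Run the query in a loop while sitting in a menu vs. while in-game and you'd expect the same 100% usage with very different wattage.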
100% usage in that sense is typical; it’s normal for GPUs, it’s what they’re designed to do: provide you as much performance as possible by working as hard as they can. You WANT 100% usage from your hardware, otherwise you’re wasting money buying fast hardware and not using it to its full potential.
It’s not the number of draw calls that makes New World different; in that respect it’s doing the same thing any other game would do. It’s the type of work it’s asking the GPU to do, which uses much more of the GPU’s capabilities. Memory leaks don’t typically lead to higher CPU usage; they tend to fill up memory until it runs out and the application crashes, but again, that’s not really relevant here anyway. It’s not the amount of draw calls that’s the problem, it’s the type being made and what part of the GPU they execute on. There’s more power draw because more parts of the GPU are in use simultaneously, all pulling power, because the game is efficiently loading the GPU.
It’s actually exactly the same with CPUs, for what it’s worth. CPUs are full of specialized areas on the die which cater to specific instruction sets and don’t get used unless you send them an instruction that can be executed by that unit. Most notably, AVX instructions have been a problem in recent years. You can benchmark a CPU, overclock it, and have it be 100% stable, yet when you benchmark it with an app that uses a mix of AVX instructions, which recent versions of Prime95 can, you’ll overheat and trip your CPU. It’s such a pervasive problem that motherboard manufacturers added an AVX Offset to the BIOS; this allows the motherboard to detect when AVX instructions are being used and literally downclock the CPU by a set amount to make sure it stays cool and safe. Same kind of thing.
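The AVX Offset itself is just multiplier arithmetic: the BIOS subtracts the offset from the core multiplier while AVX code is running. A minimal sketch with made-up example numbers (100 MHz BCLK, 50x all-core OC, offset of 3):

```python
def effective_clock_mhz(bclk_mhz, multiplier, avx_offset, avx_active):
    """Clock the CPU runs at once the AVX offset kicks in.
    The offset is subtracted from the core multiplier only while
    AVX instructions are executing."""
    if avx_active:
        multiplier -= avx_offset
    return bclk_mhz * multiplier

# Hypothetical 100 MHz BCLK, 50x all-core OC, AVX offset of 3:
print(effective_clock_mhz(100, 50, 3, avx_active=False))  # 5000 MHz
print(effective_clock_mhz(100, 50, 3, avx_active=True))   # 4700 MHz
```

So a "5 GHz stable" overclock quietly becomes 4.7 GHz under an AVX-heavy load, trading clock speed for heat and stability.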
If your card is drawing more than 100% of its power limit when it’s capped at 100%, you’ve either got a faulty card or you’re using faulty overclocking software. A game doesn’t tell your GPU how much power to draw; your GPU decides how much power it needs to handle the tasks.
JayzTwoCents also showed that other applications could make faulty cards pull more than the set limit when tweaked with MSI Afterburner.
I really don’t see any improvements. Textures take an eternity to fully load, mobs glitch out where they just disappear and then suddenly reappear (when they’re doing charge attacks or leaps), etc.
I haven’t played since yesterday’s patch, but the previous patch broke something with my 3080 Ti OC Strix.
I had lower FPS. Some areas even dropped to 60 FPS where I used to have 100+.
I will check later today whether this new patch really made a difference or not.
I agree that the lack of DLSS is poor. It should be a priority.
Just wanted to report in after changing thermal paste to Arctic Silver 5. Got temps down to 74/38 °C from 84/52 °C under full load/idle, respectively. Gains! Fuck knows why the original paste was so bad. The original looked uniform and properly applied too, but it’s like night and day. Should I change the memory cooling pads as well?
Not what I’m seeing. I see shadow artifacts in the engine, and looking at a starmetal ore drops my 3080 to 30% usage and 35 FPS on screen. Nah homie, it’s terribly optimized for high end. CP2077 runs better. I’ve also never seen GPU usage hit 100% in this like it does in every other game.
I wish this game had any kind of optimization, but it does not. Going from lowest to highest settings on my old rig only made a 10-15 FPS difference, so basically nothing. 100% GPU load? Eating RAM like it’s a buffet? This game is horribly optimized for old and new rigs alike.
Yup, same. My GPU sits at 40 °C on water with its own loop, and my CPU at 50 °C with a hefty OC. Everything in game cranked up.
People out here playing modern games with modern GPUs but with dated cooling solutions. Disgusting, honestly, especially when you can very cheaply liquid cool your CPU, GPU, or both.
Wanna reduce the load on your GPU even more? Turn the Video Quality setting to Low, then increase the other settings individually back to what you had them at. This will remove TAA. You have to do this every time you launch the game, though.
Even if you already have everything on Low, it will still work.
What a… suggestion… sorry, that is not an option. You’re saying a high-end card can’t handle stuff an MSI GTX 1050 Ti could at almost 50 FPS on max settings in 4K mode 1980xxxx.
That’s why I don’t agree with NVIDIA saying mine is the standard 2 GB one… scamming me out of 2 GB of VRAM.
Then there are the optimization suggestions in GeForce Experience… these guys have zero clue about game settings, tbh.
Do you play other games? Because yeah, it looks and runs like crap compared to RDR2, Tomb Raider, Metro Exodus, even CP2077 lol. I have similar specs and similar temps, and boy, it does not live up to other AAAs in that world. 144 on a 3080 is high FPS? I can’t even enable G-Sync because it has no true fullscreen or windowed modes.