GPU power usage low?

Been testing a new system build and watching temps, power, etc. while running various benchmarks. I noticed that D5 is not drawing the full power I’d expect from a fully utilized GPU, and I’m wondering if there’s performance being left on the table.

Screenshot below of the power draw during the benchmark: the values on the left are from a 1080p video render export, which for me finishes in 58s, drawing ~230W. On the right you can see the MAX value I got from running the FurMark benchmark a few minutes prior, which is as expected.

image

System specs:
Intel Core Ultra 7 265K
64GB DDR5 RAM
RTX 5080 16GB
Nvidia Driver: 32.0.125.8142 (581.42)
Windows Build: 10.0.26200 Build 26200

Hi @tim1

Try checking whether D5 Render is set to Maximum Performance.

Yep. Tested it with that and didn’t see a change.

Hi @tim1

Try rendering a higher-quality image or video, then check whether the GPU power is at or near maximum.

Yes, I have used the benchmark tool, and performance seems about in line with others on similar hardware. I’m just unsure why GPU usage shows 100% while power is much lower. It doesn’t seem to be stressing the GPU very hard compared to other heavy 3D applications.

Hi @tim1

Alright, have you tried testing with a heavy file (perhaps 5GB or above) and rendering it in 2K or 4K, just to see whether the GPU power usage increases? You can download some D5 files from our Scene Express.

Let me know the results.

Interesting, but that wouldn’t be representative of my current models, which never reach that file size. At those sizes I would likely be hitting a VRAM limitation, which would be another issue altogether.

I do have some large models I tested on, and the power usage remains roughly the same, as I mentioned. I understand what you’re saying about trying more intensive resolutions like 4K or higher, since a low resolution could cause a CPU bottleneck that slows down the GPU, but that shouldn’t be the case with my hardware, and I haven’t seen the CPU statistics give any indication of that while testing.

I’m curious what power usage your system shows while running the benchmark, and whether it reaches “max” power or not.
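For reference, here’s roughly how I’m sampling it on my end. This is just a quick sketch using NVML’s Python bindings (pynvml, from the nvidia-ml-py package); the loop itself is my own convenience script, nothing official:

```python
import time
import pynvml  # NVML Python bindings: pip install nvidia-ml-py

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Sample once per second while the render or benchmark runs
for _ in range(60):
    watts = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0  # NVML reports milliwatts
    util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu  # % of time the GPU was busy
    print(f"util {util:3d}%  power {watts:6.1f} W")
    time.sleep(1)

pynvml.nvmlShutdown()
```

(`nvidia-smi --query-gpu=power.draw,utilization.gpu --format=csv -l 1` prints the same counters if you’d rather not script it.)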

Do you mind sharing the benchmark results?

image

Hi @tim1

I’ve consulted with the team regarding your question about D5 Render’s GPU usage. D5 Render does not always use your GPU’s full power output, especially in simple scenes, and this is completely normal.

The GPU utilization percentage you see isn’t a direct measure of performance (or of power draw); it’s a measure of how busy the GPU is relative to its current clock speed and power state.

It is the GPU driver that decides how power is used. The driver, together with your operating system (OS) and the graphics API (such as DirectX or Vulkan), continually manages the GPU’s power state:

  • If your scene is simple, the driver knows it doesn’t need to engage the maximum clock speeds or wattage to maintain a smooth framerate (or to complete the render quickly). It scales the power down to save energy and reduce heat.

  • If you have Vertical Sync (V-Sync) enabled or have the application’s frame rate explicitly capped (e.g., to 60 FPS), the GPU will not push to 100% because it knows it has to wait for the monitor’s next refresh cycle.

As I suggested, if you truly want to see your GPU’s power draw approach its maximum to confirm your system is running optimally, you can force the application to work the GPU harder by:

  • Increasing the resolution
  • Maxing out effects

If the power increases, then your device should be fine; otherwise, that would be a different situation, and we may need to collect your log files. The same goes if even an empty scene file maxes out your GPU’s power consumption.
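If you’d like to watch that power management directly, here is a minimal sketch (assuming the same pynvml/NVML bindings mentioned above) that logs the performance state and SM clock alongside the wattage while a render runs. On a light scene you should see the driver hold a reduced clock or P-state even while utilization reads high:

```python
import time
import pynvml  # NVML Python bindings (assumed installed)

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

for _ in range(30):
    watts = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0               # mW -> W
    sm_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_SM)  # current SM clock
    pstate = pynvml.nvmlDeviceGetPerformanceState(gpu)                 # P0 = highest performance state
    print(f"P{pstate}  SM {sm_mhz} MHz  {watts:.0f} W")
    time.sleep(1)

pynvml.nvmlShutdown()
```

Note that a lower P-state alone isn’t a red flag; it just shows the driver deliberately down-clocking when the workload doesn’t demand more.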

Interesting. I’m caught in a bit of a catch-22, then: as resolution increases, so does the workload, which then increases render times, the very thing I’m trying to reduce with more powerful hardware in the first place. So simply running render outputs at a higher resolution is counterproductive for time savings, particularly with video output, and even more so if any custom PT settings are added versus the defaults.

What I hear you saying is that the D5 process is considered a “light” load most of the time compared to, say, a modern video game engine. For comparison, I’ve been meaning to test Twinmotion, which is built on Unreal Engine (we’ll call that a video game engine in this regard), and my assumption is that it will be heavier on the hardware. Just a guess, though.

What I’m sort of getting at is a better way to measure GPU performance for typical D5 workloads. I had debated an RTX 5090 vs the 5080. I’m happy with the uptick in performance of the 5080 over my old 3080 12GB, but I’m left wondering how much more the 5090 would have been worth. My initial concern was its max TDP, but seeing how D5 isn’t pushing anywhere near those TDP levels, I’m left thinking my worries for that use case were unfounded.

I wonder if the main bottleneck, then, is memory bandwidth? I’m trying to pin down exactly which hardware specs D5 benefits most from, which is difficult to discern from combing through the user benchmark results. I.e., is it weighted toward CUDA cores, memory bandwidth, RT cores, clock speed? Even the CPU?

13900K + RTX 5080 at home, Core Ultra 9 285K + RTX 5090 at work. No measurable rendering difference; only the extra VRAM helps a little. 65% of my renders are faster on my home machine. Keep in mind that some scatter assets on specific geometry slow D5’s render speed down and cause an intermittent GPU load. Either the NVIDIA drivers, Windows 11, or D5 is unable to fully utilize the 5090’s cores.

This is useful info, thanks. Your work 5090 build is what I was originally considering, and it sounds like it might not have been worth it. I was sure the higher memory bandwidth and extra RT cores in the 5090 would be a massive uplift.

None of my work uses scatter or motion/path assets, just a lot of artificial interior lights (retail design). With 16GB of VRAM on my new 5080 I haven’t been limited yet, but I was borderline on my old 12GB 3080, which definitely had some other issues.

Using a 5090 here, and it works very well. However, I had the same issues with scatter: with a simple grass scatter on, GPU utilization was very low and renders were very slow.

Once scatter was off, utilization went up (though never to 100%) and renders were fast. There’s a previous thread on this.