Adobe Photoshop Family

17 Messages • 408 Points • Fri, Nov 27, 2020 6:22 PM

Lightroom Classic: debunking GPU efficiency with hardware tests and a plugin 🤦‍♂️ (Lua + SDK)

I made a plugin that runs a benchmark to test the efficiency of different hardware configurations. For now, the plugin just plays with the sliders of one picture in a consistent loop.

The problem is that when hardware acceleration (GPU) is set to Full (on), the loop takes 7.07 seconds, and when hardware acceleration is off it takes 3.61 seconds. 🤦‍♂️ I'll add exporting to the plugin soon, but the initial results are not encouraging. (A stripped-down sketch of the timing loop is at the end of this post.)

So either the LrC devs do not know how to use the GPU, have not updated the code to use the GPU properly, are under-resourced (no people, no time, etc.), or just don't give a f*** about LrC?

For real, I am really looking for answers here.

Test with machine #1:

  • CPU: Intel i7-3770 (from 2013), GPU: NVIDIA 2070
  • GPU Full (on): 7.07 seconds
  • GPU off: 3.61 seconds

Edit: Test with machine #2:

  • CPU: AMD A10-9600P (4 cores/4 threads, 2.4 GHz), GPU: Radeon R5
  • GPU Full (on): 11.64 seconds
  • GPU off: 5.74 seconds

Both with LrC version 10.
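
For reference, the timing side of the test is nothing exotic. Here is a stripped-down sketch (illustrative only, not the exact plugin code; runBenchmarkLoop is a placeholder for the slider loop described in the replies below, and only documented SDK modules are used):

  -- Illustrative timing wrapper (not the actual plugin code).
  -- Assumes the Develop module is active with a photo selected.
  local LrTasks = import 'LrTasks'
  local LrDate = import 'LrDate'
  local LrDialogs = import 'LrDialogs'

  local function runBenchmarkLoop()
    -- Placeholder: the real test resets and then increments develop sliders
    -- on one photo in a loop (see the parameter list in the replies below).
  end

  LrTasks.startAsyncTask( function()
    local started = LrDate.currentTime()   -- seconds; differences give elapsed time
    runBenchmarkLoop()
    local elapsed = LrDate.currentTime() - started
    LrDialogs.message( string.format( "Benchmark finished in %.2f seconds", elapsed ) )
  end )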

Responses

16 Messages • 304 Points • 2 months ago

I'll ask the obvious clarifying question: are the stats you posted for the Intel or the AMD CPU? You list two systems but only show one set of numbers. Also, what feature did you test? It's hard for anyone to assess whether there are really any issues here, or exactly which feature is not showing improvement.

I have not always had great performance gains with GPU acceleration. At times I have even had lockups when using it. I think there are still a lot of issues that can be traced to specific CPU/graphics board configurations. Quite possibly, the motherboard and its chipset are involved as well. I am not an expert here by any means, but I think GPU acceleration is still in its infancy and doesn't really do that much yet.

17 Messages • 408 Points

@steven_friedman_7261016 

Thanks for the question. I have updated the results and written them more clearly in the original post.

Right now, the only thing I do is reset the values on one picture and then increment those values in a loop. My CPU is not maxed out during these tests, so there is room for more speed, if I may say so. My SSD, which holds my database/catalog, is far from 100% usage too.

For the test/plugin, right now I am simply "playing" with the values on one picture in a loop.

So the values are:

  "Temperature",
  "Tint",
  "Exposure",
  "Highlights",
  "Shadows",
  "Contrast",
  "Whites",
  "Blacks",
  "Texture",
  "Clarity",
  "Dehaze",
  "Vibrance",
  "Saturation",
  "Sharpness",
   "SharpenRadius",
  "SharpenDetail",
  "SharpenEdgeMasking",
  "LuminanceSmoothing",
  "LuminanceNoiseReductionDetail",
  "LuminanceNoiseReductionContrast",

And I am changing the values with these SDK functions (which access Lightroom's API); see the sketch after the list:

  • LrDevelopController.increment(param)
  • LrDevelopController.setValue(param, value)
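
Put together, the core of the loop looks roughly like this. This is a simplified sketch, not the exact code of my plugin: only increment/setValue are quoted above; the task wrapper and reset call follow the published SDK documentation, and the pass count is arbitrary.

  -- Simplified sketch of the slider loop (illustrative, not the exact plugin code).
  -- Assumes the Develop module is active with a photo selected.
  local LrTasks = import 'LrTasks'
  local LrDevelopController = import 'LrDevelopController'

  -- Same develop parameters as listed above.
  local params = {
    "Temperature", "Tint", "Exposure", "Highlights", "Shadows",
    "Contrast", "Whites", "Blacks", "Texture", "Clarity", "Dehaze",
    "Vibrance", "Saturation", "Sharpness", "SharpenRadius",
    "SharpenDetail", "SharpenEdgeMasking", "LuminanceSmoothing",
    "LuminanceNoiseReductionDetail", "LuminanceNoiseReductionContrast",
  }

  LrTasks.startAsyncTask( function()
    -- Reset to a known state, then nudge every slider on the current photo.
    LrDevelopController.resetAllDevelopAdjustments()
    for pass = 1, 3 do                        -- the pass count here is arbitrary
      for _, p in ipairs( params ) do
        LrDevelopController.increment( p )    -- or LrDevelopController.setValue( p, someValue )
      end
    end
  end )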

While doing another run, I saw that when the GPU is on (Full), CPU usage is higher compared to when the GPU is off. Now that is interesting.

799 Messages • 9.5K Points • 2 months ago

With the Quadro P2200 in my system, all develop sliders in LrC V10 are now instantaneous on a 4K monitor. They were very slightly slower with a Quadro K1200. V9.4 had already achieved this for most of them, but noise reduction still lagged by 0.5-2 seconds depending on the image. At least one slider that doesn't use the GPU is also instantaneous; I don't remember which one. While never slower, GPU fully enabled is as much as 10-20 times faster for some sliders. The GPU is not making up for a slow CPU, as the system is a 6-core Xeon, 3.6/4.0 GHz.

The system currently has two 4K monitors though only one at a time is used for Lightroom. Both are calibrated. I plan to add a 5K monitor to see if Nvidia has fixed their Windows 5K support. Moving 75% more pixels should be somewhat slower but the larger color space would make it worthwhile. 

 

Your results are not consistent with what most others are experiencing. Nor are they consistent with benchmarks such as those conducted by Puget Systems on dozens of video cards. Their benchmark can be downloaded for you to run on your system. 

162 Messages • 2K Points • 2 months ago

An inquiry (either not stated or I overlooked it): what display resolution are you using?

799 Messages • 9.5K Points

Dual 4K, though I only use one monitor at a time.

17 Messages • 408 Points

@david_golding_1mvs2lwngpf9v

In my case, I am using LrC on a 1440p monitor (2560x1440, 10-bit), and my other screen is not used for these tests since LrC does not scale on a second monitor.

799 Messages • 9.5K Points

That display should be very fast, especially with your 2070 video card. Currently Lightroom only works with 8 bits per channel for the display; we're hoping for an update to 10 bits someday, if they can accomplish that without a significant performance hit.

 

Back in the Lightroom V8 era, I was using a 1920x1080 monitor and performance was fine. I upgraded to a 5K monitor and it slowed to a snail's pace pumping all those pixels. A faster video card helped a little, but significant relief didn't come until V9.

  

Switching to a 4K monitor was the biggest performance boost until V9.4, which fixed all but a couple of sliders. Those were fixed with V10. I no longer have any develop performance issues, though my largest raw files are only about 50-52 MB (Fuji X-T3). Larger raw files might be slower.

 

My next step is to reconnect my 5K monitor to see how it performs. 

  

Champion • 5.5K Messages • 97.3K Points • 2 months ago

"In my case, I am using  LRc on a 1440p (2560x1440 x10bits)"

Adobe has stated that on displays smaller than 4K, the GPU will often provide little improvement and may slow things down. Its current FAQ says:

"Using a compatible GPU can provide significant speed improvements on high-resolution displays, such as 4K and 5K monitors."

It would be interesting to see your benchmark results on a 4K or 5K display.


799 Messages • 9.5K Points

When adding a faster video card to an FHD display, I saw very little improvement, but that was also in the days when Lightroom made very little, if any, use of a video card.

 

I've not run any benchmarks. I drag the sliders and if they are instantaneous then that meets my requirements. Today I experience no delays with the develop sliders on a 4K monitor with Lightroom at full screen. 

 

Since V10, they are all instantaneous for me with one exception that Adobe has accepted as a bug: Adjustment Brush with Auto Mask on slows down many following operations dramatically. The following edits randomly undo parts of the Auto Mask operation. Basically Auto Mask can only be used as the last Develop step.   

 

Because my video card only has 5 GB and a 5K monitor benefits from at least an 8 GB card, I expect a hit. With a 4 GB card and a 4K monitor, almost 100% of the VRAM was used by several (most) of the sliders. After upgrading to a 5 GB card, it still used almost all of the VRAM, and the total utilization of the card increased from 36% to 38%. This leads me to believe that even better performance could be achieved with an 8 GB card, which, if true, means that a 5K monitor could take advantage of about 16 GB of VRAM.

 

If the 5K monitor works, I'll post a note regarding the performance impact on my system. There have been issues with 5K monitors with Nvidia video cards on Windows so we'll see if Nvidia got the bugs worked out. 

Champion • 5.5K Messages • 97.3K Points • 2 months ago

"Nor are they consistent with benchmarks such as those conducted by Puget Systems on dozens of video cards."

The Pugetbench benchmark does very limited performance testing of Develop:

"Develop Module Auto WB & Tone (How long the image takes to update after applying the Auto Tone and White Balance)"

So it's not surprising that Alexandre is getting very different results. I don't think Puget's benchmark is sufficient to provide useful predictions of how well LR might run in real life on a particular graphics card.

17 Messages • 408 Points

@John_R_Ellis 

PugetBench for LrC is not a bad test, but it uses external software that mimics your mouse, so you can't use your mouse for the 45 minutes it is running. My goal is to make a benchmark that uses only the SDK/APIs of LrC.
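
(For anyone who wants to try the same approach: a benchmark like this is just a regular Lightroom plugin. A minimal Info.lua sketch would look roughly like the following; the identifier, plugin name, and file name are placeholders, and only the keys themselves come from the SDK documentation.)

  -- Info.lua: minimal plugin manifest (illustrative; identifier and file names are placeholders).
  return {
    LrSdkVersion = 10.0,
    LrToolkitIdentifier = "com.example.lrcbenchmark",   -- unique plugin ID
    LrPluginName = "LrC slider benchmark (sketch)",
    LrLibraryMenuItems = {
      {
        title = "Run slider benchmark",
        file = "Benchmark.lua",                         -- script containing the timing loop
      },
    },
  }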

Champion • 5.5K Messages • 97.3K Points

But PugetBench only tests "Develop Auto WB & Tone", which is hardly representative of Develop operations most users do. I think your approach using LrDevelopController to test many sliders could give much more representative results.

162 Messages • 2K Points • 2 months ago

Instead of assuming, although it would probably be a good assumption considering the skill set of the members involved, I will ask.

So, for the NVIDIA card: have you applied the maximum-performance (power) settings?

https://www.winhelp.info/boost-lightroom-performance-on-systems-with-nvidia-graphics-chip.html

I assume AMD has something similar.

And for Windows, is the power plan set to High Performance? (Tip 8 at the same link.)

17 Messages • 408 Points

@david_golding_1mvs2lwngpf9v 

I was happy to learn that there is a performance mode for the NVIDIA card, but when I looked it up, it seems I had already set it that way a long time ago.

Nevertheless, I did test the maximum-performance power setting against the non-maximum setting, and the results are in!

  TEST#  GPU   Dev loop time (s)  Machine  LrC version  Test version (LrToolkitIdentifier)  Notes
  9      FULL  17.69              1        10.0         LRAlexTestv20201128
  10     OFF   9.00               1        10.0         LRAlexTestv20201128
  11     FULL  18.80              1        10.0         LRAlexTestv20201128                 NVIDIA power not at maximum performance
  12     OFF   9.09               1        10.0         LRAlexTestv20201128                 NVIDIA power not at maximum performance

(I don't know how this table will end up formatted in the post, fingers crossed.)

So I can confirm that with power at maximum, you get roughly a 5% performance increase on "the slider test" (17.69 s vs 18.80 s with the GPU on).

162 Messages • 2K Points • 2 months ago

An additional inquiry, involving different expectations.

Lower Display Resolution

For the Windows rig, where you can actually select a lower resolution (not just scaling): if you reduce it to less than 4K (yes, accepting that you're running an expensive monitor at a lower resolution), how does LrC perform in your test?

And when you do, go bonkers on the adjustment brushes to try to slow LrC down.