We have put together a technote containing several suggestions for optimizing Lightroom's performance that we hope will help.
Let us know which of these suggestions are helpful to you. Thanks!
We do acknowledge that there has been a performance hit for truly "some" users, though that doesn't comfort the users who are seeing it, or the professionals who potentially lose time (which = money) due to the decrease in performance.
Now that the issue has been acknowledged, we still don't have a resolution, nor do we know exactly what is causing it on the multitude of systems that are seeing it.
Thanks to those who are delving in deeper to try to discover the cause; hopefully we can all look for a solution, since it is happening on some systems, but not all.
I wanted to call attention to a potential solution that was given by another user on another forum after a chat with Adobe support. I've paraphrased some of the steps and cleaned them up a bit so they are digestible, but I would be curious to hear others' results after trying these steps.
A fair number of users are running Windows, and these steps are Windows-centric. If you're a Mac guru, perhaps you can divine how to perform similar steps on the Mac platform.
Here is the chat log and troubleshooting steps.
Please try them out, and let us know if you see any improvement with any of these steps.
William is the tech, and Jojo the end user with performance trouble.
William: I understand that you're experiencing performance issue. Am I correct?
Jojo: Oh yes
Rename Preference folder
Please close all windows
Click the Start button.
Type %appdata% in the search box and press Enter.
Double click on Adobe folder.
Rename the "Lightroom" folder to "OldLightroom".
Clear Temp files
Click on Start button.
Type %temp% and press Enter.
It will open Temp folder.
Empty the files and folders inside it.
Jojo: ok done. it still has files that could not be deleted because of Win Explorer and Google Chrome (the browser I use)
William: Launch Lightroom and check if you're getting the same issue.
Jojo: that seems to have sped it up significantly
William: Please double check if that works fine now.
Jojo: Much faster, but all of my presets are gone
Jojo: Export presets, etc
Rename old preference folder back so LR sees it.
Click on Start button.
Type %appdata% in the search box and press Enter.
Double click on Adobe folder.
Rename the "Lightroom" preference folder to "2-OldLightroom".
Rename the "OldLightroom" folder to "Lightroom".
After that, open the Preferences folder inside it.
Rename the file "Lightroom 4 Preferences.agprefs" to "OldLightroom 4 Preferences.agprefs".
Launch Lightroom and test the performance
Results of the second chat:
Jojo: Oh yes, that's working well!
Jojo: Much better!
William: You can start working with the product smoothly now.
Jojo: Hey, are they fixing this in a release? I know a lot of photographers who are VERY upset about this performance issue. I was ready to change back to LR3
William: A corrupt preference file may cause Lightroom to work slowly.
William: We renamed the preference file and it is working fine.
End Chat log
Another user had mentioned that they "renamed the Lightroom 3 Preferences.agprefs to Old_Lightroom 3 Preferences.agprefs and now it works a lot quicker."
Those are at least a couple things to try that shouldn't take but a few minutes.
Report back with any change in LR 4.1 behavior
OK. My turn now:
I sympathize with the Adobe LR team. They do have one major issue: they are branded as Adobe Photoshop Lightroom, and now THIS particular "Photoshop" creates the biggest problem for them: the highest expectations. PS IS so COOL and everyone expects the same delivery from LR.
However, I've never before heard such nonsense:
"The Spot Removal Tool and Local Corrections Brush are not designed for hundreds to thousands of corrections. If your image contains many (hundreds) of localized adjustments, consider using a pixel-based editing application such as Photoshop for that level of correction."
First of all, we are NOT talking about corrections in the hundreds. We are talking on the less-than-a-hundred scale.
Sooo... you suggest that as an event photographer I should avoid delivering hundreds of photos full of people with white teeth, great looking eyes and skin with no blemishes and with lighter wrinkles, great hair and much more?
The quality of my photos depends to some extent on the above criteria. And I am in a highly saturated market segment.
And you, Adobe, suggest that I should edit them with PS? We are talking in terms of the hundreds of photos and in many projects I am paid for a number of photos delivered.
Why not look for an LR alternative instead? I expect a better attitude towards development. Better that you experience the pressure of competition than me. Capture One or similar might do the job.
Take your time. But DO know: I am from the next generation of users. I don't care how and what event photographers of PS-ONLY generation used to deliver.
You are amazing (for me) in terms of features. Now please DO make them work as they should, and don't tell me that SOME of the features are "NOT DESIGNED for..." They are there, so you are accountable for providing them to us. STOP making excuses! Make things work as you did with PS!
Diko, let me save you some time, effort and a lot of grief. Switch now to a product that meets your workflow needs; LR will never achieve its potential until they unceremoniously dump that silly and amateurish DB system called SQLite. They have tried to paint a ton of lipstick on that pig and that dog still don't hunt. (No apologies for the mixed metaphors.) I just installed a SATA 6 SSD, moved all my new photos to a standalone 10K HDD, and bought LR 5.3. While it does perform better than either LR 4 or LR 5.0, it still gets bogged down in the Develop module.
I made a few changes that helped a bit and I thought I'd pass them on to you.
If you use your SSD for both systems stuff and your catalogs and cache, then you should format a separate partition of 10GB or so for the cat and the cache. The reason for this is that LR is so crappy at file management that the $MFT file is getting hammered and if you move those files to their own partition you'll also get a separate $MFT. Plus you should format that partition with an Allocation unit (cluster) size of 64KB instead of 4K. This will help to prevent fragmenting, which is, despite a lot of press to the contrary, a problem for SSDs. Mainly because fragmenting increases the number of records in the $MFT file; by a lot. You might want to take a look at your MFT size on your SSD to see what I mean. Mine is bumping up against 400MB in size. Just do a defrag analysis. My catalog had only 100 photos in it and it had 250 fragments. That means that I had 250 records in the $MFT to describe my catalog!!
You should also back up your photos and reformat your photos drive with the larger 64KB cluster size. This by the way is the recommendation for all 2+ TB drives with very large media files.
The effect of these changes is to reduce the response time to the $MFT file. I made several runs and saw at least a 50% reduction in response time.
I wonder why the "that silly and amateurish DB system called SQLite" should have such a big influence on the performance of the *develop* module.
I have a 3-year-old 4-CPU 7GHZ Windows 7 machine and it works reasonably well. Unlike Capture One, LR develops the whole image when opening it the first time; Capture One develops only the visible part and so is able to display a 100% view much quicker.
I have the worst performance problems with the selection of images (see my other post), a rather basic feature. But I do not think it is SQLite's fault; it is rather a wrong design of the selection markers.
Most problems with slow developing of large previews could be solved by finishing the half-baked Quick Previews feature: it should be possible to use them even when the external drive is online.
Slow selection can be solved by not storing the selection as a Boolean (1/0) but as an integer. Only the current master_selection_number makes a stored selection value valid; all other values mean the image is not selected. Instead of having to check and write thousands of Booleans, it is enough to increment one number to remove a complete selection.
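The integer-selection idea is easy to demonstrate with SQLite itself. This is a minimal sketch; the table and column names are made up for illustration, not Lightroom's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, "
             "selection INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO images (id) VALUES (?)",
                 [(i,) for i in range(1, 10001)])

def selected_ids(conn, master):
    """Images whose selection stamp equals the current generation number."""
    return [row[0] for row in conn.execute(
        "SELECT id FROM images WHERE selection = ? ORDER BY id", (master,))]

master_selection_number = 1   # the only stamp that counts as "selected"

# Selecting images: stamp them with the current generation number.
conn.execute("UPDATE images SET selection = ? WHERE id IN (1, 2, 3)",
             (master_selection_number,))
before = selected_ids(conn, master_selection_number)

# "Deselect all" is O(1): bump the generation; no rows are rewritten.
master_selection_number += 1
after = selected_ids(conn, master_selection_number)
```

After the bump, `before` holds the three selected ids and `after` is empty, even though no image row was touched by the deselect.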
Don't use a 64K cluster size; it will chew up crap loads of disk space with very little benefit. This comment is focused on where your preview cache is. There are tens of thousands to ~100,000 files in there, and they are most often less than 8K; a few are larger, but not enough to justify a 64K or 32K cluster size. It is a numbers game: how many files are on the volume, and what is the distribution curve of their file sizes? If you have 10 million files at 6K and 3 at > 64K, a cluster size of 64K will use up much more space than the default cluster size of 4K. In my example, 10 million 6K files will each use 8K of disk space (sort of), whereas at 64K clusters they would each use 64K and waste 87.5% of your hard-earned money for that SSD or that large TB HDD.
Remember: cluster size is the minimum allocated file size on disk. So a 1-byte file will take up one cluster of disk space: 4K (default) or 64K (William's suggestion). Many less-than-cluster-size files will consume your disk space, and mostly with empty space.
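The numbers game above is easy to check with a few lines. A sketch, using the poster's own 10-million-file illustration:

```python
def allocated(file_size, cluster):
    """Disk space a file actually occupies: size rounded up to whole clusters."""
    clusters = max(1, -(-file_size // cluster))   # ceiling division, min 1 cluster
    return clusters * cluster

SIX_KB = 6 * 1024
slack_4k  = allocated(SIX_KB, 4 * 1024)  - SIX_KB   # bytes wasted per file at 4K
slack_64k = allocated(SIX_KB, 64 * 1024) - SIX_KB   # bytes wasted per file at 64K

# Total slack over 10 million 6K files, in GiB, at each cluster size.
files = 10_000_000
waste_4k_gib  = files * slack_4k  / 2**30
waste_64k_gib = files * slack_64k / 2**30
```

Each 6K file wastes 2K in 4K clusters but 58K in 64K clusters, so the same 10 million files carry roughly 19 GiB of slack at 4K versus over 550 GiB at 64K.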
As for the $MFT and the other ODS (on-disk-structure) files NTFS uses: a majority of them are cached in memory anyway, as they are used often and page-faulted in.
There is one benefit to using a larger cluster size, though: the size of the (hidden) free-space bitmap file is reduced.
I mount an SSD at the location \users\username\pictures\lightroom. Thus all the preview cache IO and the catalog file IO go to their own SSD.
Fragmentation is moot on SSDs, as there is no head-seek latency. It can cause an increase in the number of IOs, since read-ahead can't be done in one shot, but again, since there are no seeks, it's pretty much moot.
If you haven't taken ETL traces while LR is running, you don't know what is going on. If you don't know what ETL is, then you are too removed from the details to make anything but guesses.
Your rhetorical suggestion that the type of files used in managing digital images would somehow be measured in the multiples of millions and be merely 6K each is preposterous. Besides, data space is dirt cheap. NTFS was introduced in 1993, when the price of a gigabyte of storage was $1,330, so making use of every available byte was crucial. Today the price of a gigabyte of storage is 4 cents, so not too much to worry about there.
My admittedly smallish photo archive storage device (a 1TB WD internal drive) has 181K files on it and they average 3MB each. The LR 5 catalog alone is a file that’s 253MB in size. My new working photo-HDD (a 250GB WD 10K) has 580 files averaging 15MB each. During a develop session these files and the catalog are constantly being rewritten. Many of these rewrites make the files slightly larger; 2K or 3K at a time. That easily slops over into another cluster and causes fragmentation. When I loaded my first 100 RAW NEF files and developed them, my files ended up 27% fragmented. After that I reformatted my working HDD to 64KB clusters. By my calculation I now have on average 9K per file in room to grow (waste?) and at the cost of storage that’s about $0.0000004.
Despite your assertion that the $MFT, $BITMAP, $MFTLOG, et al. are cached, they aren't. This is especially evident because tiny files (less than 750 bytes) are kept inside the MFT itself and not stored in the user area, so the MFT gets rewritten. Did you know that LR 5 uses a file of just 64 bytes as a file-locking mechanism? Imagine what queuing up against the MFT does to home-grown multithreading!!
Fragmentation is a problem for SSDs and it has nothing to do with seek time. It has to do with increasing the number of I/Os to the drive by having to issue multiple random reads and writes to all the involved working datasets. Go here for an explanation of the four corners of SSD performance. And as is always the case not all SSDs are created equal.
Are you daft? I was not talking about where your RAW image files are; because they are large and often in a hierarchy of some sort, I would gather most people have them on a multi-TB HDD. And if you know 2, 3 or 4 people who store them on an SSD, don't reply with "I know many that put them on SSD"... I know there are some.
As SSDs are being sold in new PCs, increasingly C:\ will be one. By default, as I said before, LR uses folders in the user's account location: the Pictures folder. Camera Raw uses the user's AppData\local\Adobe\CameraRaw path for its caching unless moved, and that is on the OS drive. Since the Users folder is on the OS drive, it too will be 4K 99.99% of the time, as people won't know how to install the OS on a differently formatted volume.
The Develop module is going to generate the image on demand per the XMP description of modifications, by running through Camera Raw to generate the lrprev file. I am sure there are subtleties in LR's actual order of things. But back to the point.
My "Lightroom 5 Catalog Previews.lrdata" folder has over 53,000 files in it. Most are no greater than 4-16K; yes, some can grow into the MBs, but there are not as many of those as the small ones, and if you move to an SSD, SSDs are NOT as cheap as HDDs on the $/GB scale, so you NEEDLESSLY chew up disk space.
As I said, AND I'LL SAY IT LOUDER: IT'S A NUMBERS GAME. You find the distribution curve of the file sizes and the numbers of them. Then you pick the cluster size that lets a great many of them fit in a cluster.
If you want to see what files are being accessed and how often, type Ctrl-Shift-Esc, go to the "Performance" tab, then at the bottom click "Open Resource Monitor" (if you know how to find Resource Monitor on your own, just run it). You can't sort by the time the IO happened, i.e. linear time, but sorting by read size, write size and total IO is there.
If you really want to get into it, download the Sysinternals suite and, after unzipping the package, run ProcMon and select only the file-monitor view. This will show IO activity in linear time of reads/writes. You'll be amazed just how much is going on.
I am aware of the MFT, and that files may be embedded in it to save an additional access. I picked 6K as a number so I had something to do math against, period, and of the many files I poked about in my lrprev cache, many were plus or minus that size. Btw, you must be fun at parties with how literal you make everyone be.
Unless you have the source, I doubt you can talk about what is cached and what is not within NTFS. But someone adept at debugging can find out much. If you are NOT a kernel dev or an expert file-system filter-driver writer, you don't fully know what the OS is doing at the FS level. I'll be happy to review your proof, though, if you can provide it.
If you are really worried about fragmentation, then set up daily defrag operations at night instead of weekly: Control Panel -> Administrative Tools -> Defragment and Optimize Drives.
Things to note in the IO path:
Encryption (BitLocker, for example) is one; anti-virus is another. All IO has to go through these filter drivers, and there are others. If your anti-virus is set up to scan on READS AND WRITES, as most are by default, that will add latency. EFS is another type of encryption vs. BitLocker, and there is R/W latency there too, but many don't use EFS. BitLocker, though, is increasing in use, and if you have a laptop without it... well, don't keep any personal data on it.
If you want to be daring, you can exclude the lrprev, XMP, NEF, CR2, etc. file extensions from anti-virus scanning. You can also exclude the directories themselves if your AV allows it.
Volume-level snapshots can cause IO latency with copy-on-writes (file updates); after snapshots are performed, IO can be re-routed on the volume. C:\ has System Recovery turned on (snapshots), and they are performed often, before updates or application installs.
I know LR has perf issues; there are crap loads of them, and I think they DON'T know how to perf-trace their application. They appear to POLL, A LOT, when they could use event notifications. There are places they don't monitor; for example, the location where CAMERA PROFILES ARE. ColorChecker Passport updates that directory...
Oh and a word to the devs.
ADOBE, IT IS STUPID THAT I HAVE TO RELOAD LIGHTROOM. THERE SHOULD NEVER BE A REASON TO RELOAD THE EXE if something is added. LR should see the file arrival and update itself.
Gee Ron, that's a whole lot of words, your fingers must be tired. Just kidding.
Look, whatever works for you is just fine by me. And if you want to save a couple of bucks by squeezing the last drop from your drives, then knock yourself out. This forum is not exactly Scientific American, and I don't think many people really care about what is and isn't going on inside their computers, let alone require rigorous proof of concept. If what I suggest works, go for it; if not, ignore it. Easy peasy.
What I care about is whether I can get done what I want to get done in a timely manner. LR is functionally as good as anything I could possibly use for my photo needs. And with a modest investment of my time and money I've gotten to the point where it is tolerable to work with.
And yes, I have written hardware-level code, many years ago, probably before you were born. And as the saying goes, the more things change, the more they stay the same...
As much as it pains me to say this, you have a good idea Rob; again, just kidding. Here’s what I’ve done to make LR 5 performance tolerable, if not almost decent.
As I mentioned earlier, I installed a SanDisk Extreme II 250GB SSD. I prepared for the installation by clearing off all user data from my C: drive. I then installed the SSD and formatted it with two partitions: one for the OS and one of 10 GB for LR files. I then loaded the OS from scratch, meaning I kept nothing from the past OS build, and reinstalled all my apps. I created a catalog folder and a cache folder on the smaller partition of the SSD. I reformatted my WD 10K HDD with 64K cluster sizes and dedicated that one to only my images. I fired up LR 5.3 and started to work on images. After a few days and around 350 RAW files, I started to notice a bit of lag in the Develop module, so I fired up Resource Monitor and, in a quiet system, watched the file activity. Long story short, the files in the catalog and cache SSD partition were constantly being rewritten for no apparent reason. Plus, they were getting very long response times in the 20-25 ms range to boot. I'm pretty sure the files are rewritten to prevent data loss while I'm working on them. No matter the reason why, the simple fact is that SSDs are not at all good for frequent rewrites. I'll not try to explain why; go here if you want to know: http://www.thessdreview.com/daily-new...
So I decided to go beyond SSD and move these files into RAM. I purchased, for $10, a copy of RAMDisk: http://www.radeonmemory.com/software_... Now I know what you're thinking: that won't be reliable enough. But this is not your father's RAM disk, and ECC memory isn't either. Watch the tutorials to see how it works: http://www.radeonmemory.com/support_t...
So after a bit of tweaking, this is what I ended up with: a 2GB RAM disk containing all the volatile files associated with the working set of an LR session, namely the catalog, the previews, the journals, the cache and a few others that were collocated with the catalog. A full SSD dedicated to the OS and non-write app data such as camera profiles, themes and other fixed files (no second partition). A 250GB WD 10K HDD dedicated to current images. A 1TB WD 7200 internal drive dedicated to archived images and all of my other files, such as Word and Excel files. And a 2TB WD USB 3 external HDD that backs up all the other drives every day during the night.
Now my LR work sessions are tolerable if not downright snappy and they don’t slow down as I get further into a session. Magic cure? Don’t know. Would it work for everyone? Don’t know. Worth a try? I’d say yes, mostly because it’s so cheap and easy.
One last thing, and I'm sure this will sound like blasphemy: SSDs are not worth a whit for an application built the way LR is built. Since I'm sure many of you don't care what my opinion of these seemingly magic devices is, I won't expand on that remark. If anyone is interested, send me a private note.
Interesting that your first concern is my grammar, especially since I copied the text from Word and lost my line breaks in the process. Oh well, typing is not my strong suit. LOL
Since you've grasped the essence of my solution, it doesn't matter how the text appears.
I think we've all proven that the LR team is not inclined to make changes to their design, and I can't say I blame them. After all, this product costs less than $150 to buy.
The good news is that you don't have to have them cache anything; RAMDisk does that for them. So if you have a nice i7 with 32 gigs of RAM, carve off 16 gigs for the catalog/cache folder, move it from your SSD, and see what happens.
What is the latest thinking on 1:1 Previews and size and quality?
I am running a NEC 30" display 2560x1600
So I have LR 5.3's Standard Preview Size set at 2880 pixels
and Preview Quality set to High, plus never discard 1:1 previews.
I have both the catalog file and preview files on a separate 256GB SSD and all my images on an 8TB RAID 0 array.
Does this sound right?
William, I have about 26K images, all different sizes, from 10MP RAW to 36MP RAW files.
I am actually rebuilding all my previews as I type this. I deleted my previews file and wanted to start from scratch; before I deleted it, it was around 130GB.
I am running a 2010 Mac Pro 6 Core with 48GB of RAM.
I just wanted to make sure my catalog preview settings are right.
My point was that Lightroom doesn’t need disk arrays, and weird formatting strategies, and piles of cores, and gobs of RAM, and dedicated SSDs in order to run well. There’s nothing wrong with SQLite. Nevertheless, I myself have found that Lightroom has gotten slower and flakier as its version numbers have increased, even as processor speeds have doubled and quadrupled, and solid state drives have replaced hard disks. The only variable worth considering is Lightroom itself. There’s lots of duplication of wasted effort in users troubleshooting one wacky, improbable notion after the other, when what needs to happen is for the Lightroom team to just write better code. Acknowledging that, the arguments here sound preposterous.
I appreciate that we seem to be shooting in the dark, because so many different configurations of applications and hardware create too many variables. Plus there are varying levels of expertise among the respondents. I'm more of an end-user person, with a mid-range PC loaded with extra RAM and drive space, running Win7.
My issues with LR only show up now when I use the adjustment tools that involve masking. If a photo needs just basic post-processing and a little masking, it works seamlessly. If it starts to bog down because of too many adjustments, I take it to Photoshop.
As long as I can find a workaround, I won't complain. I'm happy so many people are involved and giving Adobe the feedback they need to know where to start looking for the problem(s). I understand the frustration professionals experience when their essential tools don't work the way they are supposed to and the deadline approaches regardless.
Sooo, Mark (may I call you Mark?), you've been taking lots of family snaps with your iPhone 3 and running them through your slightly outdated Mac running LR to remove that nasty red eye? I bet they look really sparkly on your Facebook wall.
Geez, I wish I'd known it was that easy...
LR will never be perfect for everyone as they build it for Mac OS and Windows.
What I would really like is more options to alter its behavior by choice. I do run a high-end i7 (6 cores (+HT) = 12) with 32GB RAM, on SSDs for the OS and catalog + cache files.
There are delays going between the main flow modules. You can stop that, or add a checkbox for large systems: on 16GB, 32GB+ systems, just load up all the DLLs; there is no need to load them on the fly each time. Windows will place the unused pages on the standby list anyway and fault them back in when accessed. YOU DON'T NEED TO THINK for the OS. It has been done for you.
One thing I would like to see is the ability to collapse my change bar. If I play with a slider 15 times to get the exposure or contrast etc. to my liking, I would like to be able to collapse all the deltas to a final value. You should be compiling that anyway, so you only run through the renderer once for a given delta. If I have 10,000 +1's on exposure, then 5,000 -1's, it should process just once as +5,000 on exposure.
Oh, for you nit-pickers: I made up the numbers as an example of the concept, not that one would really do it, but you could generate something more reasonable that in the end could be collapsed.
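Collapsing a pile of slider deltas into one net adjustment is simple bookkeeping. A sketch; Lightroom's real edit history is of course more involved than a list of (slider, delta) pairs:

```python
from collections import defaultdict

def collapse(history):
    """Fold a list of (slider, delta) edits into one net adjustment per slider."""
    net = defaultdict(int)
    for slider, delta in history:
        net[slider] += delta
    # Drop sliders whose edits cancel out entirely.
    return {slider: d for slider, d in net.items() if d != 0}

# 10,000 +1's followed by 5,000 -1's on exposure renders once, as +5,000.
history = [("exposure", +1)] * 10_000 + [("exposure", -1)] * 5_000
```

Here `collapse(history)` reduces fifteen thousand edits to a single entry the renderer would process once.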