I have a folder containing 3,000 PDFs that I annotate with Skim. Because Dropbox does not reliably sync extended attributes, I have set up Hazel to monitor this folder and write out the separate Skim file whenever the Skim file is newer than the last-modified date of its corresponding PDF. The .skim file contains the same data as the unsynced extended attributes, but since it is an ordinary file, Dropbox syncs it whenever I change the Skim notes in a PDF. This all works fine.

However, because the folder contains 3,000 PDFs and about 4,000 files altogether (PDFs plus .skim files), every time I save a PDF, Hazel spends over a minute running the rules against all 3,000 files. So I tried to make this more efficient by splitting the folder into about 200 subfolders, based on the first two letters of each PDF's file name (the folder hierarchy is fully automated by a third-party tool). Setting up "Run rules on subfolders" was no problem, and that works too. But Hazel still takes a minute after every save, because it now runs the test on all 200 subfolders. It is not the amount of time that bothers me so much as the CPU load: every time this happens, Hazel uses so much CPU that my fan starts spinning loudly.

I can make the rule evaluation much faster by adding the condition "Date Last Modified is after Date Last Matched" to the "Run rules on subfolders" test. This condition is meaningful as long as the changes are local: when I change a file locally, the Date Last Modified of the enclosing folder changes, the test comes back positive, and Hazel descends into that subfolder. The whole pass over all 200 folders then takes only about 7 seconds. However, when I change a file on a remote computer, Dropbox does not update the Date Last Modified of the enclosing folder, so the test cannot serve its purpose.

Is there an efficient way to monitor 4,000 files in a folder that does not eat up so much CPU power?
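For what it's worth, here is a minimal sketch of the kind of embedded "Run shell script" action I mean, assuming Hazel passes the matched PDF's path as `$1` and that the `skimnotes` helper lives inside Skim.app at the path shown (both are assumptions to verify on your own setup, and the direction of the date comparison should be flipped to match however your rule is set up):

```shell
#!/bin/sh
# Sketch of an embedded shell action for a Hazel rule (not the exact
# setup from the post). Assumptions: Hazel passes the matched PDF as
# $1; `skimnotes` ships inside Skim.app at the path below.
SKIMNOTES="/Applications/Skim.app/Contents/SharedSupport/skimnotes"

# True when $1 has a newer modification time than $2, or $2 does not
# exist. `test -nt` compares mtimes directly, so no date parsing is
# needed; it is supported by all common /bin/sh implementations.
needs_export() {
    [ "$1" -nt "$2" ]
}

pdf="$1"
notes="${pdf%.pdf}.skim"

if [ -n "$pdf" ] && [ -x "$SKIMNOTES" ] && needs_export "$pdf" "$notes"; then
    # `skimnotes get` extracts the notes from the PDF's extended
    # attributes into a standalone .skim file that Dropbox can sync.
    "$SKIMNOTES" get "$pdf" "$notes"
fi
```

The nice property of `-nt` is that it also fires when the .skim file is missing entirely, so newly added PDFs get a notes file on the first pass.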