I recently changed Gallery Hierarchy from using WordPress cron jobs to using disconnecting AJAX requests for scanning for images, to try and get a slightly more stable and reliable scanning system.
One of the problems that didn't go away with this change was dealing with the maximum execution time of a PHP script. This is set by (among other things) the `max_execution_time` setting in the PHP configuration file (php.ini).
After coming up with a solution that checked whether the scan job had been running for longer than this execution time and then restarted it, and finding that it wasn't working, I did a couple of tests.
My first test was this simple PHP script:
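The original script isn't reproduced here, but a minimal sketch of such a first test might look like the following (the `test1.log` name comes from the post; the loop structure and timings are assumptions):

```php
<?php
// Sketch of a wall-clock test: log elapsed time in a loop and watch
// when (or if) PHP kills the script. The original ran for well over
// 400s; the short sleep and iteration cap here just keep the sketch brief.
$log   = fopen(__DIR__ . '/test1.log', 'a');
$start = time();

for ($i = 0; $i < 3; $i++) {
    fwrite($log, sprintf("Running for %ds\n", time() - $start));
    fflush($log);
    sleep(1); // sleeping consumes wall-clock time but almost no CPU time
}
fclose($log);
```

Note that `sleep()` barely uses any CPU time, which turns out to matter below.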
After running the script in the console while monitoring the `test1.log` file (and having it run for more than 400s, which was to be expected, as `max_execution_time` defaults to 0 (unlimited) in the console), I tried running it in the browser and found that there too it ran well past the execution limit without being killed.
After seeing this, I wondered if the maximum execution time was the maximum CPU time rather than just wall-clock time, so I adjusted my test script accordingly.
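The adjusted script isn't shown in the post either; the key change would be replacing the sleep with a busy loop so that the script actually accumulates CPU time, along these lines (file name and loop details are assumptions):

```php
<?php
// Sketch of a CPU-time test: instead of sleeping, burn CPU in a busy
// loop, logging elapsed wall-clock time as we go. In the original test
// this ran until PHP killed it (near, but not exactly, 300 seconds);
// the time bound here just keeps the sketch brief.
$log   = fopen(__DIR__ . '/test2.log', 'a');
$start = time();

while (time() - $start < 2) {
    for ($i = 0, $x = 0; $i < 100000; $i++) {
        $x += sqrt($i); // busy work so CPU time accrues with wall time
    }
    fwrite($log, sprintf("Running for %ds\n", time() - $start));
    fflush($log);
}
fclose($log);
```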
After running it in the browser again, the output was more like what I expected and confirmed my suspicions: even though it didn't get killed bang on 300 seconds, it did get killed.
So, after the tests showed that judging whether something had been killed simply based on the elapsed time and the `max_execution_time` was a bad idea, I decided to change my approach. Instead of checking how long the scan job has been running, I check how long it has been since its status was last saved, something that should happen every 10 seconds. If there hasn't been a status update in a while (30 seconds), I assume that the scan job has been killed and try to restart it. It should be noted that if it dies due to an error, it will not be restarted. I have also set the execution limit to 0 (using `set_time_limit`) in the hope that it is running on a server where that will work.
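A sketch of this heartbeat idea (illustrative only; the file name, intervals and function names here are not Gallery Hierarchy's actual code):

```php
<?php
// Illustrative heartbeat check. The scan job saves its status (here,
// just a timestamp) every HEARTBEAT seconds; a watchdog treats a
// status older than TIMEOUT seconds as a sign the job was killed
// and should be restarted.
@set_time_limit(0); // ask PHP to lift the limit; not honoured everywhere

const HEARTBEAT = 10; // how often the scan job saves its status
const TIMEOUT   = 30; // how stale the status may be before restarting

$statusFile = __DIR__ . '/scan-status.json';

// Called from inside the scan job's main loop.
function save_status(string $file, array $status): void {
    $status['updated'] = time();
    file_put_contents($file, json_encode($status));
}

// Called by the watchdog (e.g. the next AJAX request).
function job_looks_dead(string $file): bool {
    if (!file_exists($file)) {
        return false; // never started, so nothing to restart
    }
    $status = json_decode(file_get_contents($file), true);
    return (time() - $status['updated']) > TIMEOUT;
}

save_status($statusFile, ['scanned' => 0]);
var_dump(job_looks_dead($statusFile)); // bool(false): just updated
```

A real watchdog would also need to distinguish "killed" from "finished" and "died with an error", since only the first case should trigger a restart.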
One function that may be of use for this sort of monitoring is `getrusage`, which can be used to monitor CPU time among a few other useful stats.
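For example, the user and system CPU time of the current process can be read out of the array `getrusage` returns:

```php
<?php
// getrusage() reports resource usage for the current process.
// ru_utime.* is user-mode CPU time; ru_stime.* is system (kernel) CPU time.
$usage = getrusage();

$userCpu = $usage['ru_utime.tv_sec'] + $usage['ru_utime.tv_usec'] / 1e6;
$sysCpu  = $usage['ru_stime.tv_sec'] + $usage['ru_stime.tv_usec'] / 1e6;

printf("User CPU: %.3fs, system CPU: %.3fs\n", $userCpu, $sysCpu);
```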
Some other ways that I considered were:
- using `getrusage` in the PHP script to monitor its running time, so that when it got close to its limit, it could die gracefully or signal that it was going to die. This would however add processing overhead.
- storing the PID of the PHP script and then watching for that process to disappear. On a super busy machine, this could miss the PHP script dying - if the server happened to have lots of PID reuse, a new process could pick up the PID of the PHP script and then run indefinitely… unlikely(ish), but it could happen.
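For completeness, the PID-watching option could be sketched like this on Linux (checking `/proc`; `posix_kill` with signal 0 would be an alternative where the POSIX extension is available):

```php
<?php
// Sketch of the PID-watching idea: record the scan job's PID, then
// poll whether that process still exists. On Linux, /proc/<pid>
// disappearing means the process is gone - subject to the PID-reuse
// caveat described above.
function process_alive(int $pid): bool {
    return file_exists("/proc/$pid");
}

$pid = getmypid();             // in practice, the stored scan-job PID
var_dump(process_alive($pid)); // checking our own process, so true
```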