Cron is a utility on Unix-like operating systems that runs jobs on a time-based schedule. BriteCore's DayCron script is executed on every live site's Leader node around midnight every day.
Note: The frequency is based on the type of job. Advanced settings control the frequency for some of the utilities.
As the script executes, it logs to log/daycron.log and records the status of individual steps in the cron_jobs database table.
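A minimal sketch of that logging pattern in Python, assuming the log/ directory exists and a simple cron_jobs schema (the record_step() helper and the schema are hypothetical illustrations, not BriteCore's actual code):

```python
# Hypothetical sketch of DayCron's logging pattern: write progress to
# log/daycron.log and record each step's status in cron_jobs.
import logging
import sqlite3  # stand-in for the real database driver
from datetime import datetime

logging.basicConfig(filename="log/daycron.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("daycron")

def record_step(conn, step_name, status):
    # The cron_jobs schema here is illustrative only.
    conn.execute(
        "INSERT INTO cron_jobs (step, status, recorded_at) VALUES (?, ?, ?)",
        (step_name, status, datetime.utcnow().isoformat()),
    )
    conn.commit()
    log.info("step=%s status=%s", step_name, status)
```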
DayCron/Nightly processing dependencies
All sites (live, demo, and test) have a nightly processing file that executes a number of other processes, depending on whether it was called to run daycron or minutecron jobs. Part of this script ensures that any site that isn't configured to be truly live will skip the processing (this is why most demo and all test sites do nothing overnight). When running DayCron jobs, BriteCore's DayCron script kicks off the following subprocesses and functions, which send jobs to the cluster:
- Process premium records
- Generate reports
- Process commissions
- Upload to vendors
Together, these processes send jobs out to cluster instances for asynchronous processing, and each result is returned to the originating instance upon completion.
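The overall shape is a fan-out/fan-in: queue each job, then collect results back on the instance that sent them. The sketch below uses a local thread pool as a stand-in for the actual cluster transport, which isn't described here:

```python
# Fan-out/fan-in sketch. ThreadPoolExecutor stands in for the real
# cluster dispatch mechanism, which is not specified in this doc.
from concurrent.futures import ThreadPoolExecutor, as_completed

def dispatch_to_cluster(task_fn, job_args):
    """Queue each job for asynchronous processing, then gather the
    results back on the instance that sent them."""
    results = []
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(task_fn, arg) for arg in job_args]
        for future in as_completed(futures):
            results.append(future.result())
    return results
```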
Details on Nightly processing dependencies
DayCron script
Before any jobs are queued at all, the DayCron script sets two "master" Redis locks:
- <client-name>:locks:df_cache
- <client-name>:locks:df_prep_cache
These locks represent work that hasn't yet completed: building data frames (DF) and preparing the DF cache, respectively. Neither lock is meant to be released until all of the locks nested under it have been released.
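As a rough illustration with redis-py, assuming a placeholder client name and lock value (only the key names above come from this doc):

```python
# Sketch of setting the two "master" locks before any jobs are queued.
# Key names follow the doc; the value and client name are placeholders.
import redis

r = redis.Redis()
CLIENT = "client-name"  # stands in for the real <client-name>

def acquire_master_locks():
    for key in (f"{CLIENT}:locks:df_cache", f"{CLIENT}:locks:df_prep_cache"):
        # nx=True: only set the key if it does not already exist.
        r.set(key, "locked", nx=True)
```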
Premium processing
This script kicks off the majority of the DF cache-related tasks each morning.
- Chunks the work to process premium records into an arbitrary number of jobs. Each job sets a new Redis lock with the format :locks:premium_records_X, where X is the number of that particular premium records job. These locks are released as each premium records job completes (the sketch after this list illustrates this lock lifecycle).
- Ensures the :locks:df_prep_cache master lock is set, and sets the first sub-key beneath it: :locks:df_prep_cache:item_trans. As with the master locks, this lock represents that item transactions haven't yet been built this morning.
- Sends the "parent" job for building the DF cache to its cluster. Each DF cache build job that executes will group together a number of caches that need building, setting a new Redis lock under the df_cache master lock for each cache and releasing each lock as the individual DF is built. Once all DF caches have been built, the master df_cache lock will be released.
- Sends the item transaction build job to its cluster. This job executes only once all DF caches have finished building. Once item transactions are built, the df_prep_cache:item_trans lock is released, and if no other locks have been set under df_prep_cache, the df_prep_cache master lock itself is released. This may be a cause of problems!
- Chunks and builds the prepared DF cache once all DF caches have been built and after item transactions have been built. In the event the df_prep_cache master lock is released too early, this process re-locks the master lock. Afterward, individual Redis locks are set for each prepared DF that needs to be built and are released as each build completes.
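The sketch below walks the nested-lock lifecycle described in this list: per-chunk premium_records locks, the item_trans sub-lock with its conditional master release, and the re-lock guard. Key names follow the doc; the chunking and the process() and build() helpers are hypothetical:

```python
# Nested-lock lifecycle sketch for the premium processing steps above.
import redis

r = redis.Redis()
CLIENT = "client-name"  # stands in for the real <client-name>
MASTER = f"{CLIENT}:locks:df_prep_cache"

def process(records):
    """Stand-in for the real premium-record processing."""

def build(name):
    """Stand-in for the real prepared-DF build."""

def run_premium_chunk(chunk_id, records):
    # One lock per chunk, released as soon as the chunk completes.
    lock = f"{CLIENT}:locks:premium_records_{chunk_id}"
    r.set(lock, "locked", nx=True)
    try:
        process(records)
    finally:
        r.delete(lock)

def finish_item_trans():
    # Release the item_trans sub-lock; if nothing else is nested under
    # the master, release the master too. This is the spot where the
    # master can be released earlier than intended.
    r.delete(f"{MASTER}:item_trans")
    if next(r.scan_iter(match=f"{MASTER}:*"), None) is None:
        r.delete(MASTER)

def build_prepared_df_cache(names):
    # Guard against the early release: re-lock the master if needed,
    # then build each prepared DF under its own sub-lock.
    r.set(MASTER, "locked", nx=True)
    for name in names:
        sub = f"{MASTER}:{name}"
        r.set(sub, "locked", nx=True)
        build(name)
        r.delete(sub)
    r.delete(MASTER)  # all prepared DFs built; release the master
```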
Report runner
Depending on the time of month/year DayCron is executing, different reports will run for each client. The reports utility queues these jobs to run on cluster nodes. The job won't execute until all DF cache and prepared DF cache locks are released (this is meant to wait for all DF cache to be built, but we currently don't always wait for that, as building item transactions can also release the prepared DF cache lock).
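A minimal sketch of that gate, assuming simple polling (the interval and helper name are assumptions; only the lock keys come from this doc):

```python
# Gate sketch: block until no df_cache or df_prep_cache locks remain.
import time
import redis

r = redis.Redis()
CLIENT = "client-name"  # stands in for the real <client-name>

def wait_for_df_locks(poll_seconds=30):
    patterns = (f"{CLIENT}:locks:df_cache*",
                f"{CLIENT}:locks:df_prep_cache*")
    # Any surviving key under either pattern means work is still running.
    while any(next(r.scan_iter(match=p), None) is not None for p in patterns):
        time.sleep(poll_seconds)
```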
Process commissions
This is another script that will execute differently based on the time of the month; unlike the jobs above, however, it executes without any dependencies.
Upload to vendors
NxTech processing is sent to the supercluster for clients who integrate with that vendor. DF cache must finish building for this job to execute.
Frequency
The utilities run on a daily, monthly, and annual frequency.