To run such a costly task, you first need to decide what should happen when it doesn’t fit inside your time window: should the run be canceled entirely and rolled back to the previous state, suspended and resumed in the next window, or allowed to keep going until it finishes?
Each of these cases calls for a different approach. A good rule is divide and conquer: split the total records into ranges of ids and send them in groups to a Thread::Queue (for threads within one process) or to a Redis server (for multiple processes), where each worker consumes one batch at a time, as in the sketch below.
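A minimal sketch of the queue approach, assuming a model called `Record` with a hypothetical `process!` method (all names here are placeholders):

```ruby
require "thread"

queue = Thread::Queue.new

# Producer: enqueue id batches (built however you prefer; see below).
Record.in_batches(of: 1_000) { |batch| queue << batch.ids }
queue.close # tells workers that no more batches are coming

workers = 4.times.map do
  Thread.new do
    while (ids = queue.pop) # pop returns nil once the queue is closed and empty
      Record.where(id: ids).each(&:process!)
    end
  end
end
workers.each(&:join)
```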
One way to collect the groups of ids is Model.find_in_batches, selecting only the ids. Another is to build id ranges for each job: query Model.maximum(:id) and Model.minimum(:id) to learn the bounds of your set, then divide that span by the number of batches you want to create (or by the size you want each batch to have).
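A sketch of both approaches, assuming a model called `Model`; the range math assumes the ids are roughly contiguous, and gaps only make some batches smaller:

```ruby
# Range approach: derive the bounds, then slice the span into equal pieces.
min_id = Model.minimum(:id)
max_id = Model.maximum(:id)

batch_count = 10 # hypothetical; you could fix a batch size instead
span = ((max_id - min_id + 1).to_f / batch_count).ceil

ranges = (0...batch_count).map do |i|
  lower = min_id + i * span
  lower..[lower + span - 1, max_id].min
end
# => each worker gets Model.where(id: ranges[i])

# Or let ActiveRecord walk the table, reading only the ids:
Model.select(:id).find_in_batches(batch_size: 1_000) do |group|
  ids = group.map(&:id)
  # enqueue ids...
end
```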
In addition, to control the end of the window, a dedicated monitoring thread can send a stop signal to each worker thread still executing.
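A minimal sketch of such a watchdog, where the deadline and the worker’s loop body are placeholders:

```ruby
stop_requested = false
deadline = Time.now + 6 * 3600 # hypothetical six-hour window

# Watchdog: wakes shortly before the deadline and asks workers to wind down.
watchdog = Thread.new do
  sleep [deadline - Time.now - 60, 0].max
  stop_requested = true
end

worker = Thread.new do
  until stop_requested
    # process one batch, then re-check the flag
    sleep 0.1 # stand-in for real work
  end
  # checkpoint progress here so the next window can resume
end

[watchdog, worker].each(&:join)
```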
A good scheduling library is rufus-scheduler.
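For example, a sketch with rufus-scheduler, where `start_batch_run` and `request_stop` are hypothetical hooks of your own:

```ruby
require "rufus-scheduler"

scheduler = Rufus::Scheduler.new

# Open the window at 22:00 every day, and schedule the stop six hours later.
scheduler.cron "0 22 * * *" do
  start_batch_run                     # hypothetical kickoff
  scheduler.in("6h") { request_stop } # hypothetical; e.g. flips the flag above
end

scheduler.join # keeps the scheduler's process alive
```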
Remember that if you only work with threads, you won’t be able to scale beyond a single process, so Redis becomes an interesting tool for distributing jobs, and even suspension warnings, between the processes involved.
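A rough sketch of that pattern with the redis gem, where the key names are made up for the example:

```ruby
require "redis"
require "json"

redis = Redis.new

# Producer process: push id batches onto a shared list.
# redis.rpush("jobs:batches", ids.to_json)

# Worker process: block-pop batches, checking a stop key between jobs.
loop do
  break if redis.get("jobs:stop") == "1" # suspension warning from the monitor
  _key, payload = redis.brpop("jobs:batches", timeout: 5)
  next unless payload
  ids = JSON.parse(payload)
  # process ids...
end

# Monitor (or scheduler job): flip the flag for every process at once.
# redis.set("jobs:stop", "1")
```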
Since it seems that this process of yours will take a long time, memory consumption will be a concern: ActiveRecord will consume a lot! I’ve had to solve similar situations, and pushing the updates down to the database itself is a much better deal.
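For instance, a sketch that pushes the work into SQL with update_all, where the `processed` column is a made-up example:

```ruby
# Each batch runs as a single UPDATE statement; no AR objects are instantiated.
Model.where(processed: false).in_batches(of: 10_000) do |relation|
  relation.update_all(processed: true, updated_at: Time.current)
end
```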
– user3603