From what you described, there will be hundreds of simultaneous connections. Each connection increments a value, and when it reaches 5 it goes back to 1.
With a txt file this can be a problem, because you will have to guard against inconsistencies yourself.
One approach is to skip editing or accessing the file if it is already locked by another user:
$f = fopen('fit.txt', 'c+');            // read/write; create the file if it does not exist
if (flock($f, LOCK_EX | LOCK_NB)) {     // try a non-blocking exclusive lock
    $n = (int) fread($f, 4);            // current value (0 if the file is empty)
    $n = ($n >= 5) ? 1 : $n + 1;        // wrap back to 1 after 5
    rewind($f);                         // go back to the start of the file
    ftruncate($f, 0);                   // discard the old contents
    fwrite($f, $n);                     // write the new value
    flock($f, LOCK_UN);                 // release the lock
}
fclose($f);
With a database this operation is safer, but it obviously comes at a much higher processing cost.
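For comparison, here is a minimal sketch of the database version, assuming MySQL/InnoDB and a hypothetical table counter(id, n) with a single pre-inserted row (the table name and the credentials are placeholders, not anything from your setup):

$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->beginTransaction();
// FOR UPDATE locks the row: concurrent requests queue up on the database side
$n = (int) $pdo->query('SELECT n FROM counter WHERE id = 1 FOR UPDATE')->fetchColumn();
$n = ($n >= 5) ? 1 : $n + 1;
$pdo->prepare('UPDATE counter SET n = ? WHERE id = 1')->execute([$n]);
$pdo->commit();

The point is that the row lock and the queuing are handled by the database itself, instead of by conditions in your code.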
Before thinking about performance, think about consistency. Once the routine is safe and you are sure it will not fail, you move on to the "next stage", which is optimization.
In the example above with flock(), the process is "super fast", but a failure can still happen: something unexpected where one request takes too long to release the lock to the next user.
Now imagine a scenario where 200 users access it at the exact same time.
The first one is the "lucky one": it reads and writes the number and releases the lock to the second, third, fourth... But will the last one in the queue still manage to read and write the value correctly, or will it get an error for waiting too long?
Consider that the system really does get hundreds of simultaneous accesses: say in a single second it receives 150 connections, 2 seconds later another 200, and 2 seconds after that another 100. In a window of about 5 seconds you already have 450 requests queued up to read and write to this txt. The system may abort execution somewhere around request 200 because of the long wait.
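Since LOCK_EX | LOCK_NB fails immediately instead of waiting, each request has to decide how long it is willing to keep retrying. A hypothetical retry loop to make that wait explicit (the 2-second deadline and the 10 ms pause are arbitrary choices for illustration):

$f = fopen('fit.txt', 'c+');
$deadline = microtime(true) + 2.0;      // assumed limit: give up after ~2 seconds
$locked = false;
while (microtime(true) < $deadline) {
    if (flock($f, LOCK_EX | LOCK_NB)) {
        $locked = true;
        break;
    }
    usleep(10000);                      // wait 10 ms before trying again
}
if ($locked) {
    // ... read, update and write the value as in the example above ...
    flock($f, LOCK_UN);
} else {
    // the last users in a 450-request queue would likely end up here
    error_log('timed out waiting for the lock on fit.txt');
}
fclose($f);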
It may be a case for rethinking the business logic.
If you don’t have that large a number of simultaneous connections, then yes, the simple flock(), as in the example, can solve it and may even be a more viable option than a database in terms of performance.
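At that scale you can even drop the LOCK_NB flag, so each request simply blocks until the lock is free instead of failing right away. A sketch of that variant:

$f = fopen('fit.txt', 'c+');
if (flock($f, LOCK_EX)) {               // no LOCK_NB: wait here until the lock is free
    $n = (int) fread($f, 4);
    $n = ($n >= 5) ? 1 : $n + 1;
    rewind($f);
    ftruncate($f, 0);
    fwrite($f, $n);
    flock($f, LOCK_UN);
}
fclose($f);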