I have a shell/bash script that works well for making backups. The problem is with large files: the script has to compress the files into tar.gz format, and it does, but at around 6 GB the script moves on to the next lines while the compression is still running, and the backup ends up broken. I suspect the server imposes some time limit, like PHP's set_time_limit.
In the PHP file that calls this shell script I use set_time_limit(0);
and that works very well. Does shell/bash have something similar?
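For context, as far as I know a shell script has no built-in equivalent of PHP's set_time_limit; limits come from outside the script, e.g. the `ulimit` built-in or the coreutils `timeout` command, and those kill the process rather than extend it. A quick sketch:

```shell
# ulimit -t caps CPU seconds for the shell and its children;
# with no argument it prints the current limit (often "unlimited").
ulimit -t
# GNU coreutils `timeout` bounds a command's wall-clock run time;
# here `sleep 1` finishes well inside the 5-second limit.
timeout 5 sleep 1 && echo "finished within the limit"
```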
The script:
MYSQLDUMP="$(which mysqldump)"
# Dump the database and compress it on the fly.
"$MYSQLDUMP" -u "$DBUSER" -h "$DBHOST" -p"$DBPASS" "$DBNAME" | gzip > "$TIMESTAMP.sql.gz"
# Archive the remote home directory over SSH.
ssh "$USER_SSH@$HOST_SSH" "tar -zcf - $HOME" > "$TIMESTAMP.backup.tar.gz"
# Bundle both pieces into the final backup.
tar -zcf "$TIMESTAMP.tar.gz" "$TIMESTAMP.backup.tar.gz" "$TIMESTAMP.sql.gz"
SUCCESS=$?
rm "$TIMESTAMP.sql.gz"
rm "$TIMESTAMP.backup.tar.gz"
I left the variables out because I don't think they are necessary.
Before the final tar finishes, the script already removes the two files in the last lines... if the archive is smaller than about 6 or 7 GB this does not happen.
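As a side note, the script above runs the two rm lines regardless of SUCCESS. A minimal sketch (with placeholder file names, since the real variables were omitted) of removing the intermediates only after tar exits successfully:

```shell
#!/bin/sh
# Sketch: delete the intermediate files only if the final tar succeeded.
# "backup.sql.gz" and "home.tar.gz" are placeholders for the real files.
echo "dump" | gzip > backup.sql.gz
echo "home" | gzip > home.tar.gz
tar -zcf final.tar.gz backup.sql.gz home.tar.gz
SUCCESS=$?
if [ "$SUCCESS" -eq 0 ]; then
    rm backup.sql.gz home.tar.gz
    echo "backup finished, intermediates removed"
else
    echo "tar failed with status $SUCCESS; keeping intermediates" >&2
fi
```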
I did not understand why you would want to limit the time. Wouldn't that interrupt the process? Isn't your question really about synchronous execution?
– Daniel Omine
GNU tar has a parameter called
--checkpoint
with which you can create a callback. That way, the next command would only be invoked when the current process completes. – Daniel Omine
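For reference, a minimal local illustration of the flag the comment mentions (requires GNU tar; file names here are made up for the demo). A checkpoint fires every N records, a record being 10 KiB by default, and the action runs at each one:

```shell
# Minimal demo of GNU tar's --checkpoint / --checkpoint-action.
mkdir -p demo
head -c 1M /dev/zero > demo/payload.bin   # ~1 MiB of data to archive
# Print a message (to stderr) every 10 records while archiving.
tar -zcf demo.tar.gz \
    --checkpoint=10 \
    --checkpoint-action=echo="checkpoint %u reached" \
    demo
tar -ztf demo.tar.gz   # list the archive to confirm it was written
```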
@Danielomine do you have any examples of using this --checkpoint? If you can, post it as an answer.
– Alan PS