About updating a photo, I would personally say (this is just a guess on my part) that updated photos should never keep the same name if they are served on the web. Alternatively, on the PHP side (which is your case), you could adjust the URLs based on the update time: the new photo overwrites the old one after uploading, and the modification time is something the operating system itself tracks, which PHP exposes through filemtime(<file location>), as in the example I suggested in:
This way you could reuse the same name and at the same time bypass the browser cache of the photos on the client side.
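A minimal sketch of that idea, assuming the photos live in a hypothetical fotos/ folder next to the script and are referenced by file name (the helper name fotoUrl is an assumption):

// Hypothetical helper: builds an image URL that changes whenever the file is overwritten
function fotoUrl($nome)
{
    $caminho = __DIR__ . '/fotos/' . $nome; // Assumed folder
    $versao = is_file($caminho) ? filemtime($caminho) : 0;
    return '/fotos/' . rawurlencode($nome) . '?v=' . $versao;
}

// The "v" query string forces the browser to fetch a fresh copy after each upload
echo '<img src="' . fotoUrl('perfil.jpg') . '">';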
About deleting a photo from the file system, you could of course run a SELECT first to get the path of the photo, then run the DELETE, and if the DELETE succeeds, take the previously obtained path and call unlink. If the unlink fails, run a rollback on the database (which I actually think is dispensable, as the next suggestion shows). It would look something like this (it is just hypothetical):
$id = <PHOTO ID HERE>;
$foto = null;
$pasta = '/foo/bar/baz/fotos/'; // Folder where the photos are located
$mysqli->autocommit(false); // Turns off autocommit

// First fetch the file name of the photo to be removed
if ($stmt = $mysqli->prepare('SELECT `id`, `foto` FROM `fotos` WHERE id=?')) {
    $stmt->bind_param('i', $id);
    $stmt->execute();
    $result = $stmt->get_result();
    while ($row = $result->fetch_assoc()) {
        $foto = $row['foto'];
    }
    $stmt->close();
}

// Then delete the row and, only if that works, remove the file from disk
if ($foto && $stmt = $mysqli->prepare('DELETE FROM `fotos` WHERE id=?')) {
    $stmt->bind_param('i', $id);
    $deletado = $stmt->execute();
    $stmt->close();
    if (!$deletado) {
        echo 'Failed to delete from the database';
    } elseif (!unlink($pasta . $foto)) {
        // If deleting from the folder fails, undo the DELETE in the database
        $mysqli->rollback();
        echo 'Failed to delete from the folder';
    } else {
        // Both the row and the file were removed, so persist the DELETE
        $mysqli->commit();
    }
}
Separating the "processes"
Despite having considered the previous possibility of "invalidating" (deleting) files immediately, as in the example above, I believe a less "hard" way would be to let this happen as a separate process. A file may still be in use by the back-end cache of the pages where the image/resource is shown, so invalidating it immediately could cause side effects. The suggestion would simply be to create a separate "process" (not related to threads) that checks which files are no longer in use, so you could delete several files in a "scheduled" way, paying certain costs only once:
- The script execution itself, which would remove several files at a time instead of one by one
- Disk reads and writes done in a single pass (although this is quite relative and I really can't say that deleting several at once will be better than one by one)
The main advantage in this case is the guarantee that you would only remove what is really no longer in the database. Of course, if your system reuses photo names this could surely cause a "race condition" problem, or at least an "inconsistency" problem, so for this to work well the photos should never have repeated names, even if an old one has already been removed.
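A minimal sketch of one way to guarantee unique names, assuming you rename the uploaded file yourself and keep only the extension (the helper name and the upload handling shown in the comment are assumptions):

// Hypothetical helper: generates a file name that is practically impossible to repeat
function nomeUnicoDeFoto($nomeOriginal)
{
    $extensao = strtolower(pathinfo($nomeOriginal, PATHINFO_EXTENSION));
    // 16 random bytes become 32 hexadecimal characters, e.g. "9f1c...ab2e.jpg"
    return bin2hex(random_bytes(16)) . ($extensao ? '.' . $extensao : '');
}

// Assumed usage right after the upload:
// move_uploaded_file($_FILES['foto']['tmp_name'], $pasta . nomeUnicoDeFoto($_FILES['foto']['name']));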
Suppose several DELETEs were executed during the "workday" (by whoever uses your system). Then, at a more idle time, something like a CRON job (or similar) would perform the cleaning process: it picks up a range of files, checks whether they still exist in the database with NOT IN(), and runs unlink() on whatever is not found. If there are too many photos, maybe set a limit, save where the last check stopped, and run the process again starting from the file right after that "range".
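A minimal sketch of that cleanup script, assuming the same fotos/ folder and fotos table from the earlier example; the batch size, the scandir()-based listing, and doing the NOT IN() check as an IN() query plus array_diff() are all assumptions:

$pasta = '/foo/bar/baz/fotos/'; // Same folder used when uploading
$limite = 500;                  // Assumed batch size per run

// List a "range" of files currently on disk
$arquivos = array_slice(array_diff(scandir($pasta), ['.', '..']), 0, $limite);

if ($arquivos) {
    // Which of these names are still referenced in the database?
    $marcadores = implode(',', array_fill(0, count($arquivos), '?'));
    $stmt = $mysqli->prepare("SELECT `foto` FROM `fotos` WHERE `foto` IN ($marcadores)");
    $stmt->bind_param(str_repeat('s', count($arquivos)), ...$arquivos);
    $stmt->execute();
    $emUso = array_column($stmt->get_result()->fetch_all(MYSQLI_ASSOC), 'foto');
    $stmt->close();

    // Anything on disk that the query did not return is no longer referenced
    foreach (array_diff($arquivos, $emUso) as $orfao) {
        unlink($pasta . $orfao);
    }
}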
I can't say for sure it is the best way out, but for page-caching situations that still need the old resources it would be a way to minimize the problem; it depends a lot on what you intend to do.
If the removal of the old photo fails, I can do a rollback in the database, but I will be left with a spare file of the photo that was sent. I can try to delete it, but that operation may also fail, and unnecessary files will start piling up. What can I do in this situation? – Vinícius Lara
What you can do is create a routine (cron) that runs periodically (once every n) looking for files whose delete/insert failed. You can also do this management via the database.
– Danizavtz
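A minimal sketch of that database-managed approach, assuming a hypothetical fotos_para_excluir table into which the file name is inserted whenever an unlink() fails, plus a script that cron runs to retry the pending deletions:

$pasta = '/foo/bar/baz/fotos/';

// Script executed by cron: retry every pending deletion recorded in the queue table
$result = $mysqli->query('SELECT `id`, `foto` FROM `fotos_para_excluir`');

while ($row = $result->fetch_assoc()) {
    $caminho = $pasta . $row['foto'];

    // If the file is already gone or unlink() now succeeds, drop it from the queue
    if (!is_file($caminho) || unlink($caminho)) {
        $stmt = $mysqli->prepare('DELETE FROM `fotos_para_excluir` WHERE id=?');
        $stmt->bind_param('i', $row['id']);
        $stmt->execute();
        $stmt->close();
    }
}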