How to work with files and a database in a way that keeps them consistent

In a system that handles file uploads and needs to store the paths of those files in the database, what is the best approach to keep both consistent?

Example: a registration system where the user must upload a photo and can later edit it (re-upload the photo, for example). How should the system handle the fact that it needs to delete the old photo and replace it with the new one?

To update the photo, I think the steps are:

  • The system uploads the new photo;
  • The system updates the photo path in the database;
  • The system deletes the old photo.

And if something goes wrong in any of these steps, how do I ensure that the user is never left without a photo, and that unused photos don't pile up?

The MySQL database has transactions that help deal with this, but the file system does not...
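The three steps above can be sketched in PHP (hypothetical code, assuming mysqli and a `usuarios` table with a `foto` column; the function and table names are assumptions, not from any answer). The order matters: upload first, commit the link swap, and only then delete the old file:

```php
<?php
// Hypothetical sketch of the safe ordering. If anything fails before
// the commit, the user keeps the old photo; at worst an orphaned new
// file remains, which a scheduled cleanup can remove later.
function trocarFoto(mysqli $mysqli, int $idUsuario, string $tmpUpload, string $novaFoto): bool
{
    // Step 1: save the new photo first; the database has not changed
    // yet, so a failure here is harmless.
    if (!move_uploaded_file($tmpUpload, $novaFoto)) {
        return false;
    }

    // Step 2: swap the link inside a transaction.
    $mysqli->begin_transaction();

    $stmt = $mysqli->prepare('SELECT `foto` FROM `usuarios` WHERE id = ? FOR UPDATE');
    $stmt->bind_param('i', $idUsuario);
    $stmt->execute();
    $fotoAntiga = $stmt->get_result()->fetch_assoc()['foto'] ?? null;

    $stmt = $mysqli->prepare('UPDATE `usuarios` SET `foto` = ? WHERE id = ?');
    $stmt->bind_param('si', $novaFoto, $idUsuario);

    if (!$stmt->execute() || !$mysqli->commit()) {
        $mysqli->rollback();
        @unlink($novaFoto); // best effort: discard the file we just saved
        return false;
    }

    // Step 3: only after the commit is it safe to drop the old file.
    // If this unlink fails we are left with an orphan, never with a
    // user whose photo link points at a missing file.
    if ($fotoAntiga !== null) {
        @unlink($fotoAntiga);
    }

    return true;
}
```

Note that this ordering never deletes anything the database still points to; the only possible leftover is an unreferenced file.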

3 answers


About updating a photo: personally I would say (this is a guess on my part) that updated photos should never keep the same name on the web. Alternatively, on the PHP side (which is your case), adjust the URLs based on the update time: after the upload the new photo overwrites the old one, and the modification time is something you get from the operating system itself; PHP already exposes it through filemtime(<file location>), as in the example I suggested in:

This way you can reuse the same file name and at the same time bypass the browser's cache of the photo on the client side.
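A minimal sketch of that idea, with a hypothetical helper that appends filemtime() as a version query string, so overwriting the photo changes the URL the browser sees:

```php
<?php
// Hypothetical helper: append the file's modification time as a
// query string, so the browser re-fetches the image after it is
// overwritten but keeps caching it otherwise.
function fotoUrl(string $webPath, string $localPath): string
{
    $version = file_exists($localPath) ? filemtime($localPath) : 0;

    return $webPath . '?v=' . $version;
}
```

In a template you would then write something like `<img src="<?= fotoUrl('/fotos/maria.jpg', '/var/www/fotos/maria.jpg') ?>">`.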


About deleting a photo from the file system: you could run a SELECT first to grab the photo's path, then run the DELETE; if the DELETE succeeds, take the previously obtained path and call unlink; if the unlink fails, run a rollback on the database (which I actually think is dispensable, per the next suggestion). It would look something like this (purely hypothetical):

$id = <PHOTO ID HERE>;
$foto = null;
$pasta = '/foo/bar/baz/fotos/'; // Folder where the photos live

$mysqli->autocommit(false); // Turn OFF autocommit

if ($stmt = $mysqli->prepare('SELECT `id`, `foto` FROM `fotos` WHERE id=?')){
    $stmt->bind_param('i', $id);
    $stmt->execute();

    $result = $stmt->get_result();

    while ($row = $result->fetch_assoc()) {
        $foto = $row['foto'];
    }

    $stmt->close();
}

if ($foto && $stmt = $mysqli->prepare('DELETE FROM `fotos` WHERE id=?')){
    $stmt->bind_param('i', $id);

    $deletado = $stmt->execute();

    $stmt->close();

    if (!$deletado) {
        echo 'Failed to delete from the database';
    } elseif (!unlink($pasta . $foto)) {
        // If deleting from the folder fails, roll back the database
        $mysqli->rollback();

        echo 'Failed to delete from the folder';
    } else {
        // Both the row and the file are gone, so confirm the change
        $mysqli->commit();
    }
}

Separating the "processes"

Despite the previous possibility of "invalidating" (deleting) files immediately, as in the example above, I believe a less "harsh" way would be to let this happen as a separate process. A file may still be in use by back-end cache systems on the pages where the image is shown, so invalidating it immediately could cause side effects. The suggestion, then, is simply to create a separate "process" (not related to threads) that checks for files no longer in use, so you can delete several files in a "scheduled" way, paying certain costs only once:

  • The script execution itself: instead of eliminating files one by one, it would eliminate several at once
  • Disk reads and writes happen in a single pass (although this is very relative, and I really can't say that deleting several at once will be better than one by one)

The main advantage in this case is the guarantee that you only remove what is really no longer in the database. Of course, if your system reuses photo names, this could certainly cause a "race condition" or simply an "inconsistency" problem; for this to work well, photo names should never repeat, even after an old one has been removed.

Suppose several DELETEs were executed during the "workday" (of whoever uses your system); then, at a more idle time, something like a CRON job would perform the cleanup: pick up a range of files, check which ones no longer exist in the database with NOT IN(), and run unlink() on whatever comes back. If there are too many photos, maybe set a limit and save where the last check stopped, resuming the next run from the file right after that "range".
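A minimal sketch of that cron cleanup, assuming a `fotos` table whose `foto` column holds the file name (the range/limit bookkeeping is left out for brevity; here the script simply diffs the folder against everything still referenced in the database, which is equivalent to the NOT IN() check):

```php
<?php
// Hypothetical cron cleanup: anything on disk that is no longer
// referenced in the `fotos` table is treated as an orphan.

// Pure helper: which file names exist in the folder but not in the DB?
function orfaos(array $arquivosNaPasta, array $fotosNoBanco): array
{
    return array_values(array_diff($arquivosNaPasta, $fotosNoBanco));
}

// The cron entry point (names are hypothetical).
function limparFotosOrfas(mysqli $mysqli, string $pasta): int
{
    $arquivos = array_values(array_diff(scandir($pasta), ['.', '..']));

    $result = $mysqli->query('SELECT `foto` FROM `fotos`');
    $emUso  = array_column($result->fetch_all(MYSQLI_ASSOC), 'foto');

    $removidos = 0;
    foreach (orfaos($arquivos, $emUso) as $orfao) {
        if (unlink($pasta . $orfao)) {
            $removidos++;
        }
    }

    return $removidos; // how many orphans were actually deleted
}
```

Because the folder is only compared against the current state of the database, this only works safely if photo names are never reused, as noted above.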

I can’t say for sure it’s the best way out, but for page-caching situations that still need old resources it would be a way to minimize the problem; it depends a lot on what you intend to do.


The ideal is for the system to generate a UID (unique id) and save the image in the directory under that UID, so that cached pages do not keep showing the old image when it actually needs to be updated. With a UID the browser will always receive a unique image, and you can cache it freely. As for the generated HTML, set headers controlling where the image can be requested from and when it expires (after X time, or not cached at all).
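A minimal sketch of such UID-based naming (the helper name is hypothetical), using random_bytes() so names never repeat and every update produces a brand-new URL:

```php
<?php
// Hypothetical helper: give every uploaded photo a unique name so a
// cached URL never points at stale content after an update.
function uniqueFotoName(string $originalName): string
{
    // 16 random bytes -> 32 hex characters; collisions are negligible
    $ext = pathinfo($originalName, PATHINFO_EXTENSION);

    return bin2hex(random_bytes(16)) . ($ext ? '.' . strtolower($ext) : '');
}
```

Each re-upload then gets a fresh name, which also makes the scheduled-cleanup approach from the first answer safe, since old names are never reused.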

In fact, it all depends on the number of requests per second this system will receive and how it will be consumed; if you explain, I can help you with a script. Give more details of how you intend to do this.


If you use a database, it is possible to do all of this in a database transaction block: when any of the previous steps fails, just roll back; otherwise, commit.

If you use a non-relational database, you should do everything using atomic operations.

  • If the removal of the old photo fails, I can roll back the database, but I will be left with a spare file from the photo that was uploaded. I can try to delete it, but that operation may also fail, and unnecessary files start piling up. What can I do in this situation?

  • What you can do is create a routine (cron) that runs (1x every n) looking for files that failed to be deleted/inserted. You can do this management via the database as well.
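One way to do that management via the database, sketched here with a hypothetical `fotos_para_deletar` table that records paths whose unlink() failed, so the cron can retry them later:

```php
<?php
// Hypothetical "pending deletion" management via the database: when
// unlink() fails, record the path in a table and let the cron retry.

// Called right after a failed unlink().
function agendarDelecao(mysqli $mysqli, string $caminho): void
{
    $stmt = $mysqli->prepare(
        'INSERT INTO `fotos_para_deletar` (`caminho`) VALUES (?)'
    );
    $stmt->bind_param('s', $caminho);
    $stmt->execute();
}

// Run by the cron (1x every n): retry every pending path and drop
// the row once the file is confirmed gone.
function processarPendentes(mysqli $mysqli): void
{
    $result = $mysqli->query('SELECT `id`, `caminho` FROM `fotos_para_deletar`');

    foreach ($result->fetch_all(MYSQLI_ASSOC) as $row) {
        if (!file_exists($row['caminho']) || unlink($row['caminho'])) {
            $stmt = $mysqli->prepare('DELETE FROM `fotos_para_deletar` WHERE id = ?');
            $stmt->bind_param('i', $row['id']);
            $stmt->execute();
        }
    }
}
```

With this, a failed deletion is never silently lost: the worst case is a file that lingers until the next cron run.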
