Difference between ways to perform a backup (Disk space, buffer, etc.)


Suppose I have a machine with 10 GB of free disk space and a backup script for a PostgreSQL database that, when run, backs up the database locally and, as soon as it finishes, copies the dump to a remote server.

The problem is that when the backup reaches 10 GB or more, the machine runs out of disk space, and everything else running on it soon starts to fail.

Question: if I use pg_dump and, instead of backing up locally, point it at another machine with the -h option, will the buffer be enough to keep this machine from running out of disk space, even when the total backup is larger than 10 GB?

  • This question belongs on Database Administrators.

  • @dcastro http://meta.pt.stackoverflow.com/questions/1/aqui-no-o-stackoverflow-com

  • @star "Areas related to the day-to-day work of programmers, such as **system administration**." If a backup question is not part of the day-to-day life of a programmer, I withdraw all my arguments. kthxbay

  • However, there is a close reason titled "this question is not about programming". In my opinion, this is not the right place for such questions; there are better, more appropriate places. "kthxbay"? There is no need to get upset.

  • The question is relevant and within the scope of this Stack Overflow, since there is no equivalent site in Portuguese.

  • @Jean I even agree with the reason Murifox presented, but "since there is no equivalent site in Portuguese" is not a valid argument. There is no Portuguese version of Physics Stack Exchange either; are you going to start asking physics questions here?

  • I believe the scope of this site is IT, engineering, and software development. If it were about backing up a database for a physics system, then yes!

  • I don’t want to go into too much detail, so if you need to continue this discussion, please open a topic on Meta. Although the site does not yet have every definition settled, nor all the proper tools, it is about software development. In principle, no more, no less. What falls within that is still being determined; some cases are obvious, others are not: http://meta.pt.stackoverflow.com/questions/264/quais-assuntos-devem-fazer-parte-do-nosso-foco-on-topic


3 answers

1

I recommend you take a look at the documentation to get some ideas on how to approach backups, but basically there are three options:

  • dump (via the pg_dump utility)
  • file system (copying the cluster files with rsync/scp/etc.)
  • PITR (via a custom script, pg_rman, or even pgbarman)
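As a rough sketch of the three approaches (host names and paths below are placeholders, and the data-directory path varies by installation):

```shell
# 1. dump: a logical backup of one database, taken over the network
pg_dump -h dbhost -U postgres mydb > /backups/mydb.sql

# 2. file system: copy the cluster's data directory; the server must be
#    stopped (or a filesystem snapshot used), or the copy will be inconsistent
rsync -a /var/lib/postgresql/9.3/main/ backupuser@remote:/backups/pgdata/

# 3. PITR: a base backup plus continuously archived WAL segments
pg_basebackup -h dbhost -D /backups/base -X stream
```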

I also recommend Fabio Telles’s article about backups: http://savepoint.blog.br/dump-nao-e-backup/

  1. http://www.postgresql.org/docs/9.3/static/app-pgdump.html
  2. https://code.google.com/p/pg-rman/
  3. http://pgbarman.org

0

You can run the backup from a remote machine that has postgresql-client installed:

pg_dump -h postgres_server dbname > pg_dump.bkp
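If you must run pg_dump on the database machine itself, a sketch of an alternative (hosts and paths here are hypothetical) is to stream the dump straight to the other server over SSH, so the local disk is never touched:

```shell
# The dump streams through a pipe to the remote host; only a small
# in-memory pipe buffer is used locally, no temporary file.
pg_dump dbname | ssh user@backup_host 'cat > /backups/dbname.sql'

# Compressing before the data crosses the network saves bandwidth:
pg_dump dbname | gzip -9 | ssh user@backup_host 'cat > /backups/dbname.sql.gz'
```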

0

You can mount an NFS share from the remote machine and perform the backup normally, as if writing to a local directory.
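A minimal sketch of this setup, assuming the remote server already exports a /backups directory over NFS (names and paths are placeholders):

```shell
# On the database machine: mount the remote export
mount -t nfs backup_host:/backups /mnt/backups

# The dump is then written directly to the remote disk
pg_dump dbname > /mnt/backups/dbname.sql
```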

The time of your backup will depend on the speed of the link between your local server and the remote server.

If the link is slower than your local file system (for example, if you can write twice as fast to the local disk as over the network), remember to enable maximum compression with the -Z 9 parameter. Skip compression only if the link is fast enough that it is not worth spending processor time compressing.
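For a plain-text dump, -Z compresses the whole output as if it had been run through gzip, so the two commands below (with hypothetical names) are roughly equivalent:

```shell
# Maximum compression built into pg_dump; the output file is gzip-compressed
pg_dump -h postgres_server -Z 9 dbname > pg_dump.bkp.gz

# The same effect with an explicit pipe through gzip
pg_dump -h postgres_server dbname | gzip -9 > pg_dump.bkp.gz
```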
