MySQL database update

I’m working with one database on localhost and another in production. From time to time I need to update my local database to match the one on the server.

I wrote a .sh script that copies all the database table files (from /var/lib/mysql) so I can do what I need. However, the process is very time consuming because of the size of the table files.
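It is roughly this (the host, user, and paths here are placeholders, not my real values):

    #!/bin/bash
    # Stop MySQL so the raw files are consistent, copy the whole data
    # directory down from the production server, then start MySQL again.
    sudo service mysql stop
    scp -r user@production:/var/lib/mysql/ /var/lib/
    sudo service mysql start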

I would like more professional suggestions for making this copy; ideally, something that would let me synchronize only the changes.

P.S.: Five other developers work the same way, against the same server database.

1 answer

Actually, the process should run in the opposite direction, and only for the structure of the tables. The ideal would be to use some system of Migrations.

However, to get 'hot' data for testing and the like into a development and/or test environment, I believe the ideal is periodic dumps (which become less frequent as the software evolves). The flow would be (a script sketch follows the list):

  1. A local database change generates a Migration
  2. After a backup, the Migration is run on the production server, applying the appropriate adjustments
  3. After the Migration, mysqldump
  4. Run the dump on the local server
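
Steps 3 and 4 can go in a small script. A minimal sketch, assuming credentials come from a ~/.my.cnf on each machine, and with the host and database names invented for the example:

    #!/bin/bash
    # Runs after the Migration: dump production, then rebuild the local copy.
    STAMP=$(date +%Y%m%d)
    # 3. mysqldump on the production server, saved to a local file
    ssh user@production "mysqldump databasename" > dump$STAMP.sql
    # 4. Run the dump on the local server
    mysql databasenamelocal < dump$STAMP.sql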

A mysqldump, even used the way you are working now, is much more lightweight:

In production: $ mysqldump -uuser -p databasename > dumpYYYYMMDD.sql

Local: $ mysql -uuser -p databasenamelocal < dumpYYYYMMDD.sql
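
Since size is the bottleneck, it also helps to compress the dump in transit (assuming gzip is available on both machines):

In production: $ mysqldump -uuser -p databasename | gzip > dumpYYYYMMDD.sql.gz

Local: $ gunzip < dumpYYYYMMDD.sql.gz | mysql -uuser -p databasenamelocal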

There are other options (many others, I would say), including specific programs (MySQL Workbench, mysqldiff), but this is usually something simple and fast. If you have a really huge database in production, I believe it is best to synchronize only the structure and create a random data generation script for the appropriate tests (a sketch follows).
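
For that structure-only approach, mysqldump's --no-data flag dumps the CREATE TABLE statements without any rows, so it stays small even for a huge database:

In production: $ mysqldump -uuser -p --no-data databasename > structureYYYYMMDD.sql

Local: $ mysql -uuser -p databasenamelocal < structureYYYYMMDD.sql

The random data can then come from plain SQL; the users table below is a hypothetical example:

    -- Hypothetical generator: insert one fake row per call for testing
    INSERT INTO users (name, email)
    VALUES (CONCAT('user', FLOOR(RAND() * 100000)),
            CONCAT(UUID(), '@example.com'));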
