Channel: MySQL Forums - Replication

running binlogs before setting up replication (no replies)

Hi All

While setting up a simple slave server, a colleague of mine suggested that manually replaying the binlog files on the slave before setting up replication will alleviate some of the load on the (live) master. This is the suggested procedure:

1. Take a dump from the master with the --single-transaction and --master-data flags.
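For reference, the dump command I have in mind would be something like this (assuming the same test database and root user as in the commands further down, and dump.sql as an arbitrary file name):

mysqldump -uroot -p --single-transaction --master-data=2 --databases test > dump.sql

With --master-data=2 the coordinates needed in step 3 should show up as a commented-out CHANGE MASTER TO line near the top of the dump, which can be pulled out with something like:

grep -m 1 "CHANGE MASTER TO" dump.sql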

2. Run the dump on the slave.
3. Get from the dump the binlog file and position of the master at the moment the dump was taken.
4. Copy the binlogs from the master to the slave, starting from the file you got from the dump.
5. Replay the binlogs manually on the slave like this:

mysqlbinlog --start-position=1406 mysql-bin.000001 | mysql -uroot -p test

mysqlbinlog mysql-bin.000002 | mysql -uroot -p test

....

repeat with all the binlog files

6. And finally, set up replication on the slave using the same binlog file and position from the dump:

change master to master_host ='192.168.0.198', master_user='slave_user', master_password='*****', master_log_file='mysql-bin.000001', master_log_pos=1406;
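and then, if I understand it correctly, simply start the slave and check that it connects and begins applying events:

start slave;

show slave status\G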

According to my colleague, the slave will be able to tell that some of the statements in the binlogs have already been executed and will skip them, so the slave will catch up quicker.

I, on the other hand, am concerned that the slave won't be able to tell that some of the statements were already executed and will execute them again, resulting in a faulty slave.

Does anybody know what the actual behavior is?
