MySQL 8.0.16 Replication Enhancements (no replies)
MySQL 8.0 Rotating binary log master key online (no replies)
Replication lagging (3 replies)
I have a headache with GTID replication under MySQL 5.7, in that every now and again we start to get long lag times (detected by examining a pt-heartbeat-controlled table) which then drop suddenly, only to build up again.
(See https://imagebin.ca/v/4fbUIkRbzRhM for a graph of lag in seconds over time)
Running SHOW SLAVE STATUS shows that both the Slave IO and Slave SQL threads are running, the Slave SQL status is more often than not "System lock", and the Retrieved and Executed GTID sets are both increasing; however, despite its jumps, the lag is just not coming down.
I'm at a loss to explain the issue in detail as I simply don't know how to proceed at this point. Using iotop, I can see that the main MySQL process is heavily IO-bound (95%+), but outside of that, CPU load is low.
We do run a number of apparently unsafe operations on the master - a fair number of LOAD DATA INFILEs, a lot of TRUNCATEs and quite a few ALTER TABLEs, all under script control - however, up until recently this has not been an issue.
The computer in question was recently (~7 days ago) rebuilt with Ubuntu 18.04.2 LTS, and there is nothing else that this machine (the slave) is used for.
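For what it's worth, this is what I plan to check next - a minimal diagnostic sketch, assuming the bottleneck is replica-side fsync pressure (a common cause of an IO-bound slave sitting in "System lock"); the relaxed values are illustrative, not a recommendation:
-- On the slave: check the durability settings that most often make a replica IO-bound
SHOW GLOBAL VARIABLES LIKE 'sync_binlog';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- Relaxing them on the replica only (it can always be rebuilt from the master) often helps:
SET GLOBAL sync_binlog = 0;
SET GLOBAL innodb_flush_log_at_trx_commit = 2;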
Please feel free to ask for more information - I'm simply unsure as to what to provide at this stage and I'd really appreciate some help.
Many thanks.
MIXED format not replicating Inserts events (no replies)
Hello!
First of all, I want to thank this forum's creators and contributors in advance.
I have a big problem. We have a master replicating into a replica.
The master's binlog_format is MIXED, and since we had some problems with triggers, we deleted those triggers from the replica.
Now we've found that some tables are not being replicated correctly on the replica.
Checking the binary log, we saw that the INSERT events fired by the triggers were never logged into it.
Is this the regular behaviour, or is it a bug?
We are thinking of two possibilities:
- change that format to ROW or STATEMENT
- create those triggers on the replica (but someday it will break the replica again)
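If it helps, here is a sketch of our current understanding (unconfirmed): with statement-based events, which MIXED uses for deterministic statements, only the triggering statement is written to the binlog and the replica is expected to re-fire its own triggers, so deleting them on the replica silently loses those writes. ROW format logs the trigger's row changes themselves:
-- On the master, as a test (set binlog_format = ROW in my.cnf as well to persist):
SET GLOBAL binlog_format = 'ROW';
-- New sessions will then binlog full row images, including rows written by triggers.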
Please help!!
Thanks!
Unable to add database to slave (1 reply)
Hello,
I have a MariaDB master-slave setup. I'm trying to add a new database to both, but the slave is not willing to replicate it.
What I did was:
slave> stop slave;
slave> flush tables;
slave> create database newdatabase;
Edited the config and added:
replication_do_db = 'newdatabase'
Then on master:
master> create database newdatabase;
Then I restarted the slave.
When I do a 'show slave status\G' I get:
Replicate_Do_DB: OldDb1, OldDb2, OldDb3
but not the new database.
Any suggestions?
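For reference, a my.cnf sketch of how this filter is usually spelled - the option name is replicate-do-db (note the spelling), one line per database under [mysqld] on the slave, without quotes; the database names are the ones from the post:
[mysqld]
replicate-do-db = OldDb1
replicate-do-db = OldDb2
replicate-do-db = OldDb3
replicate-do-db = newdatabase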
Restore slave using backup files (no replies)
I have a question about the slave in a master slave setup.
Is it possible to reinstall the slave using a dump from the same slave? Something like:
mysql slave> stop slave;
mysql slave> flush tables;
mysql slave> show slave status\G ## STORE THIS INFO FOR LATER USE
# mysqldump <dump all relevants databases>
Then delete everything and reinstall from scratch, restore the dump, and then use the info from the SHOW SLAVE STATUS output taken before the dump to restart master-slave sync?
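Spelling out the sequence I have in mind, with placeholder names; my understanding is that the values to reuse are Relay_Master_Log_File and Exec_Master_Log_Pos, since they mark how far the SQL thread had actually applied:
mysql slave> stop slave;
mysql slave> show slave status\G ## save Relay_Master_Log_File and Exec_Master_Log_Pos
# mysqldump --databases db1 db2 > slave_backup.sql
# ...wipe the host, reinstall MySQL, reload slave_backup.sql...
mysql slave> CHANGE MASTER TO MASTER_HOST='master_host', MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_LOG_FILE='<saved file>', MASTER_LOG_POS=<saved position>;
mysql slave> start slave;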
Error 'Operation CREATE USER failed for 'XXX'@'localhost'' on query (no replies)
Users were added (on both the master and the slave server) after replication had been started. Then there was an error:
Slave_IO_Running: Yes
Slave_SQL_Running: No
Last_Error: Error 'Operation CREATE USER failed for 'xxxxx'@'localhost'' on query. Default database: ''. Query: 'CREATE USER 'xxxxx'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx''
Solution to the problem:
STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO MASTER_HOST='xxx.xxx.xxx.xxx', MASTER_USER='xxxxxx', MASTER_PASSWORD='xxxxx',
MASTER_LOG_FILE = 'mysql-bin.0000xx', MASTER_LOG_POS = xxx;
START SLAVE;
Replication worked.
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
But when comparing the databases with mysqldiff, some tables were missing.
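A note of caution on the missing tables, hedged since I can't verify it here: repositioning with RESET SLAVE / CHANGE MASTER skips over whatever binlog events lay between the error and the new coordinates, which would explain the mysqldiff differences. An alternative that avoids skipping anything, sketched with the placeholder account name from the error:
STOP SLAVE;
DROP USER 'xxxxx'@'localhost'; -- remove the copy that was created manually on the slave
START SLAVE; -- the replicated CREATE USER can now apply cleanly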
MySQL 8.0: Enhanced support for large transactions in Group Replication (no replies)
make MySQL server master and slave at the same time (1 reply)
Hi, all
I have three MySQL instances:
MySQL_A - with DB_1
MySQL_B - replica of MySQL A
MySQL_C - with DB_2, DB_3
GTID is enabled on all MySQL servers
I want MySQL_A to be a replica of MySQL_C's DB_2 only, using GTID replication. I can easily set up DB_2 replication based on binlog position, but when I try to create the DB_2 replica using GTID, I get an error. Steps:
- do a mysqldump of DB_2 from MySQL_C
- import the dump into MySQL_A
- change master on MySQL_A:
mysql> CHANGE MASTER TO
-> MASTER_HOST='MySQL_C',
-> MASTER_USER='replication',
-> MASTER_PASSWORD='password',
-> MASTER_AUTO_POSITION=1;
mysql> CHANGE REPLICATION FILTER REPLICATE_DO_DB = ( DB_2 );
mysql> START SLAVE;
Then I receive this error:
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Cannot replicate because the master purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new slave from backup. Consider increasing the master's binary log expiration period. To find the missing transactions, see the master's error log or the manual for GTID_SUBTRACT.'
What additional steps do I have to take?
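From what I have read since, MASTER_AUTO_POSITION=1 makes MySQL_A request every transaction missing from its own gtid_executed set - including MySQL_C's entire purged history - hence error 1236. A sketch of the missing step, assuming the dump is retaken with GTID info; note the caveat that RESET MASTER also wipes MySQL_A's own GTID history, which matters here because MySQL_B replicates from MySQL_A, so adapt with care:
# On MySQL_C: record the source's GTID set inside the dump
mysqldump --databases DB_2 --set-gtid-purged=ON > db2.sql
# db2.sql now contains a line like: SET @@GLOBAL.GTID_PURGED='<uuid>:1-NNN';
# On MySQL_A (gtid_executed must be empty for that SET to succeed):
mysql> RESET MASTER;
mysql> SOURCE db2.sql;
mysql> START SLAVE;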
Replication table vs database (2 replies)
Hey Forum.
I'm testing a little setup at the moment:
Ves01 - contains tt.12345_p
Ves02 - contains tt.98765_p
HQ1 - contains whole tt database - Replicate from ves01 & ves02 into one DB
HQ2 - contains whole tt database - Replicate from HQ1
My replication works fine towards HQ1. When trying to replicate from HQ1 to HQ2, I get an OK connection and it reads the master's binlog.
But it's not inserting any statements into HQ2.
Starting the channel gives me:
Slave I/O thread for channel 'tt': connected to master 'mysql_replic@192.168.10.61:3306',replication started in log 'mysql-bin.000005' at position 4
It reads the whole binlog, but does not insert any rows.
In the slave status I'm getting:
Slave_IO_State: Queueing master event to the relay log
Until the binlog file has been read, no data is available. I'm getting no errors.
Today I dumped the whole DB from HQ1, loaded it into HQ2, and started replication again - but again the same result.
What am I doing wrong here?
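One guess from the symptoms, offered as a sketch rather than a diagnosis: by default a replica does not write the changes it applies from its masters into its own binary log, so HQ1 would only binlog its local writes and HQ2 would receive an almost empty stream. The my.cnf lines below would enable that on HQ1:
[mysqld]
log-bin = mysql-bin
log-slave-updates = ON # write events applied from ves01/ves02 into HQ1's own binlog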
error code 2061 (1 reply)
Hi Folks,
I have installed MySQL 8.0 on Ubuntu 14.
I have set up master-master replication.
Whenever I start the slave, I get the following error, and it takes a long time for the slave to connect to the master. Please note I have not enabled SSL.
2019-06-06T20:09:11.652384Z 34 [ERROR] [MY-010584] [Repl] Slave I/O for channel '': error connecting to master 'repl1_slave@127.0.0.1:3306' - retry-time: 60 retries: 66, Error_code: MY-002061
Please note my setup is on the same server, using mysqld_multi.
Can you please advise how to resolve "error 2061"?
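In case it is relevant, a hedged pointer I found: client error 2061 is the authentication-plugin error, and with MySQL 8.0's default caching_sha2_password it typically means the plugin refuses to send credentials over an unencrypted connection unless RSA key exchange is allowed. Two sketches, assuming that is the cause here (adjust the user@host to match the actual grant):
-- Option 1: let the replication channel fetch the master's RSA public key
STOP SLAVE;
CHANGE MASTER TO GET_MASTER_PUBLIC_KEY = 1;
START SLAVE;
-- Option 2: switch the replication account to the older plugin
ALTER USER 'repl1_slave'@'127.0.0.1' IDENTIFIED WITH mysql_native_password BY 'your_password';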
Kind regards
Chandresh.
Multi source replication (1 reply)
Hello, I need to set up replication from multiple sources - in this case several masters to one slave - on MySQL 5.7.
I have a different database on each master, and I need to replicate these databases to one single slave.
I was reading the MySQL documentation, but I'm confused by the amount of information.
I also need to back up the current databases, which are on another server at the moment, transfer them to these masters, and then replicate them to the slave.
Could someone show me how I can accomplish this, or point me to an article that does?
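A minimal sketch of 5.7 multi-source replication with placeholder host and account names; it requires TABLE-based replication repositories, and each master gets its own named channel:
-- In the slave's my.cnf: master-info-repository=TABLE and relay-log-info-repository=TABLE
CHANGE MASTER TO MASTER_HOST='master1', MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master1';
CHANGE MASTER TO MASTER_HOST='master2', MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master2';
START SLAVE FOR CHANNEL 'master1';
START SLAVE FOR CHANNEL 'master2';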
Newbie resetting replication query (1 reply)
mariadb 5.5.60-MariaDB
I have a LIVE DB in a database "bugtracker". That MariaDB instance also has these databases:
information_schema, mysql, performance_schema, test
I have a DR DB in a database "bugtracker". That MariaDB instance also has these databases:
information_schema, mysql, performance_schema
Replication is set up for bugtracker.
I am sorting out a failover procedure.
Failover:
- Stop replication on DR: mysql> STOP SLAVE; mysql> RESET MASTER; mysql> CHANGE MASTER TO MASTER_HOST='';
- Stop the MySQL database on DR.
- Edit /etc/my.cnf on DR to change server-id to 1 (from 2). Start the MySQL database on DR.
- Then use DR as the "live" site.
Failback:
- mysqldump bugtracker on DR; import bugtracker on LIVE.
- Stop the MySQL database on LIVE. Stop the MySQL database on DR.
- Edit /etc/my.cnf on DR to change server-id to 2 (back from 1).
- Start the MySQL database on DR. Start the MySQL database on LIVE.
- Re-enable replication from LIVE to DR. Initial setups/configurations are already done obviously, so these steps are all that is needed to restart replication.
ON LIVE: use bugtracker; FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;
Record the information displayed, e.g.:
| File | Position | Binlog_Do_DB |Binlog_Ignore_DB |
+----------------------+----------+-------------------+------------------------+
| mysql-bin.000001 | 107 | bugtracker | |
+----------------------+----------+-------------------+------------------------+
Reset the grant: GRANT REPLICATION SLAVE ON *.* TO replication_user@DR_IP IDENTIFIED BY 'PASSWORD';
ON DR: use bugtracker; CHANGE MASTER TO MASTER_HOST='LIVE_IP', MASTER_USER='replication_user', MASTER_PASSWORD='PASSWORD', MASTER_LOG_FILE='<see above>', MASTER_LOG_POS=<see above>; START SLAVE;
Check slave status: SHOW SLAVE STATUS\G and check for the lines Slave_IO_Running: Yes and Slave_SQL_Running: Yes.
That seems logically sufficient to me.
But a colleague is suggesting that, having dumped DR and imported to LIVE, I now have to recreate the entire slave DB setup from scratch - which seems overkill to me. I.e. delete the DR DBs and recreate them, then dump the entire LIVE to import into DR?
??
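For what it's worth, the one step I would add before re-enabling replication - a hedged sketch: since LIVE has just been restored from DR's dump, the two data sets should be identical at that instant, so pointing DR at LIVE's current coordinates (taken under the read lock, before any new writes) ought to be sufficient without rebuilding DR from scratch. Also note the lock must be released afterwards:
ON LIVE: FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS; -- note File/Position
ON DR: STOP SLAVE; CHANGE MASTER TO MASTER_HOST='LIVE_IP', MASTER_LOG_FILE='<File>', MASTER_LOG_POS=<Position>; START SLAVE;
ON LIVE: UNLOCK TABLES; -- without this, LIVE stays read-only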
cheers
didds
Issue with Replication using MySQL 5.7.19 on Windows 2012 (1 reply)
Hello Experts,
I need your help with the following issue I'm facing while setting up MySQL replication in my environment.
MySQL Version: 5.7.19
OS: Windows 2012 R2
1) Enabled binary logging on the source host using the below parameters in my.ini file.
[mysqld]
log-bin=mysql-bin
server-id=1
bind-address = 0.0.0.0
2) Restarted MYSQL services after updating my.ini file.
3) Created a DB user with all (replication-related) privileges. Performed a mysqldump of the source database and copied the dump file to the slave server.
4) Restarted the mysql service on the slave server and completed the slave replication steps:
STOP SLAVE
CHANGE MASTER TO MASTER_HOST='xxxxxx', MASTER_USER='xxxxxxxxxx', MASTER_PASSWORD='xxxxxxxxxx', MASTER_LOG_FILE='mysql-bin.000012',MASTER_LOG_POS=154
START SLAVE
SHOW SLAVE STATUS
--------------
*************************** 1. row ***************************
Slave_IO_State: Connecting to master
Master_Host: xxxxxxxxxx
Master_User: xxxxxxxxxx
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000012
Read_Master_Log_Pos: 154
Relay_Log_File: mysql-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000012
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 154
Relay_Log_Space: 154
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_UUID:
Master_Info_File: C:\mysql\provision3308\data\master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Waiting for the next event in relay log
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
Slave status taken a little later: *************************** 1. row ***************************
Slave_IO_State: Queueing master event to the relay log
Master_Host: xxxxxxx
Master_User: xxxxxxx
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000012
Read_Master_Log_Pos: 154
Relay_Log_File: mysql-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000012
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 154
Relay_Log_Space: 408
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_UUID: xxxxxxxxxxxxxxxxxxxxxxxxxxx
Master_Info_File: C:\mysql\provision3308\data\master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Reading event from the relay log
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
5) Attached my.ini for both master and slave servers for review.
6) Issue: when testing replication, we do not see the changes reflected on the slave server. However, looking at the output of SHOW SLAVE STATUS, it is pointing to the correct master log file name and position.
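A hedged reading of the output above: both threads eventually report running, yet Read_Master_Log_Pos stays at 154 (roughly the start of an empty binlog), so the slave may simply never be receiving events. Some checks worth running, with placeholder names:
-- On the master: confirm binary logging is active and the position moves after a test write
SHOW MASTER STATUS;
SHOW BINARY LOGS;
-- On the master: look for the slave's connection (a "Binlog Dump" thread)
SHOW PROCESSLIST;
-- From the slave host: confirm the replication account can actually reach the master
mysql -h master_host -P 3306 -u repl_user -p -e "SELECT 1"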
Password Change for Account Used for Group Replication (no replies)
I have created an account rpl_user for group replication. If I change the password for this account, is it required to execute the following command?
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='newpassword' FOR CHANNEL 'group_replication_recovery';
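For context, a sketch of the sequence as I understand it (hedged: the group_replication_recovery channel stores its own copy of the credentials, so changing the account password alone is not picked up; both steps would be run so every member ends up with the new credentials):
-- Change the account password (on a writable member; the change replicates to the group)
ALTER USER 'rpl_user'@'%' IDENTIFIED BY 'newpassword';
-- Then update the stored recovery credentials on each member:
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='newpassword' FOR CHANNEL 'group_replication_recovery';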
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. (no replies)
With MySQL 8.0.15, I tried to set up group replication according to the manual (not all the same steps), but an error occurred. Maybe I missed some details.
I was trying to set up the first node. After I execute START GROUP_REPLICATION, MySQL returns "ERROR 3092 (HY000): The server is not configured properly to be an active member of the group."
The log:
2019-06-19T14:29:11.729885Z 8 [Note] [MY-011716] [Repl] Plugin group_replication reported: 'Current debug options are: 'GCS_DEBUG_NONE'.'
2019-06-19T14:29:11.763057Z 8 [Note] [MY-011673] [Repl] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2019-06-19T14:29:11.766812Z 8 [Note] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Debug messages will be sent to: asynchronous::/var/lib/mysql/GCS_DEBUG_TRACE'
2019-06-19T14:29:11.777927Z 8 [Note] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Added automatically IP ranges 127.0.0.1/8,192.168.1.3/24,::1/128,fe80::d80f:313a:7237:a3d5/64 to the whitelist'
2019-06-19T14:29:11.779035Z 8 [Note] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Translated 'ic-1' to 192.168.1.3'
2019-06-19T14:29:11.779067Z 8 [Note] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Translated 'ic-1' to 192.168.1.3'
2019-06-19T14:29:11.779080Z 8 [Note] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Translated 'ic-1' to 192.168.1.3'
2019-06-19T14:29:11.779655Z 8 [Note] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] SSL was not enabled'
2019-06-19T14:29:11.779698Z 8 [Note] [MY-011694] [Repl] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: 'e6b8600c-9294-11e9-8e74-000c2992d1b0'; group_replication_local_address: 'ic-1:33061'; group_replication_group_seeds: 'ic-1:33061,ic-2:33061,ic-3:33061'; group_replication_bootstrap_group: 'true'; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: 'AUTOMATIC'; group_replication_communication_debug_options: 'GCS_DEBUG_NONE'; group_replication_member_expel_timeout: '0'; group_replication_communication_max_message_size: 10485760; group_replication_message_cache_size: '1073741824u''
2019-06-19T14:29:11.779901Z 8 [Note] [MY-011643] [Repl] Plugin group_replication reported: 'Member configuration: member_id: 1; member_uuid: "f6db1cc0-9290-11e9-86fa-000c2992d1b0"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
2019-06-19T14:29:11.785054Z 15 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2019-06-19T14:29:11.813254Z 8 [Note] [MY-011670] [Repl] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2019-06-19T14:29:11.815695Z 18 [Note] [MY-010581] [Repl] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './ic-1-relay-bin-group_replication_applier.000008' position: 4
2019-06-19T14:29:11.897681Z 0 [Note] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] XCom protocol version: 7'
2019-06-19T14:29:11.897753Z 0 [Note] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] XCom initialized and ready to accept incoming connections on port 33061'
2019-06-19T14:29:11.898095Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error connecting to the local group communication engine instance.'
2019-06-19T14:29:12.927997Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061'
2019-06-19T14:30:11.817403Z 8 [ERROR] [MY-011640] [Repl] Plugin group_replication reported: 'Timeout on wait for view after joining group'
2019-06-19T14:30:11.841145Z 8 [Note] [MY-011649] [Repl] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
2019-06-19T14:30:11.841425Z 8 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
2019-06-19T14:30:11.849144Z 18 [Note] [MY-010596] [Repl] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
2019-06-19T14:30:11.849275Z 18 [Note] [MY-010587] [Repl] Slave SQL thread for channel 'group_replication_applier' exiting, replication stopped in log 'FIRST' at position 0
2019-06-19T14:30:11.885913Z 15 [Note] [MY-011444] [Repl] Plugin group_replication reported: 'The group replication applier thread was killed.'
It looks like the log does not provide enough useful information.
Could you please tell me what causes the error?
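If it helps narrow things down, the decisive line looks to me like the '[GCS] Error connecting to the local group communication engine instance' entry: the server apparently could not reach its own XCom port 33061 on ic-1. Some hedged checks along those lines, using the names from the log:
# Does ic-1 resolve to an address this host actually listens on?
getent hosts ic-1
# Is anything already bound to, or blocking, the XCom port (firewall/SELinux)?
ss -ltn | grep 33061
sudo iptables -L -n | grep 33061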
Kind regards.
Ben
Huge increase in binary logging after a purge (no replies)
I've been manually purging the binary logs of a master DB for over a year now (there is one slave DB on a different machine). After my most recent purge, over the next few hours it generated a huge amount of logs - about 8 files of 1 GB each. Normally it would take days before one 1 GB file was created. I did another purge that day and restarted mysql, and it has returned to normal. What could have caused this? Is there anything I should be concerned about right now?
I can't remember the exact number in the purge command, and it is now gone from history since I restarted mysql, but I think it was:
PURGE BINARY LOGS TO 'mysql-log.000101';
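Not an explanation for the burst, but a sketch of letting the server expire logs itself so manual purges are unnecessary; the variable depends on the version (expire_logs_days up to 5.7, binlog_expire_logs_seconds in 8.0) and 7 days is just an illustrative value:
-- MySQL 5.7 and earlier (also set it in my.cnf to persist):
SET GLOBAL expire_logs_days = 7;
-- MySQL 8.0:
SET GLOBAL binlog_expire_logs_seconds = 604800; -- 7 days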
Thanks for any insight into this.
The purpose of master binary logs in delayed replication (1 reply)
Hi
I need to set up a master-slave replication scheme with a 24-hour replication delay.
Now, due to space constraints, I need to know if 24 hours of binary logs need to be kept on the master, or if only the relay logs need to be on the slave side.
As far as I understand, the master's binary logs are transferred to the slave immediately and stored there as relay logs, waiting for the delay to expire.
If this is the case, why do we need the master logs to stay online until the 24 hours expire?
I did the following test :
- set a 24-hour delay
- created some load on the master and flushed logs a few times
- purged a log that had already been transferred to the slave
- reset the delay
- got an error that the master log is missing.
I don't understand why the master log is needed by the slave if it was already transferred to the slave and the data is sitting in the relay log waiting to be applied.
Am I missing something?
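One possible explanation, hedged since I have not traced it in the source: "resetting the delay" goes through CHANGE MASTER TO, and unless RELAY_LOG_FILE/RELAY_LOG_POS are given, CHANGE MASTER TO deletes the existing relay log files; the IO thread then re-fetches from the master starting at the last executed position, which, being 24 hours behind, may already be purged there. In other words:
STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 0; -- implicitly discards the buffered relay logs
START SLAVE; -- the IO thread re-requests a master log that may be gone
If that is right, the master does need to retain binlogs covering the delay window, or the delay must be changed without repositioning.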
thanks
Orna
flush logs locks whole database (1 reply)
Hello, we have run into a problem with MySQL FLUSH LOGS.
When expire_logs_days is set to 10 and there are no writes to the database for the next 5 days,
then when a binlog file suddenly fills up, it triggers a FLUSH LOGS command,
and MySQL will expire 5 days' worth of binlogs in one go.
If there are too many binlogs, the whole database is locked for a long time.
I hope people know about this problem, and that MySQL can solve it!
I tested MySQL 5.6 and 5.7.
I found this source code in log.cc:
bool LOGGER::flush_logs(THD *thd)
{
int rc= 0;
/*
Now we lock logger, as nobody should be able to use logging routines while
log tables are closed
*/
logger.lock_exclusive();
/* reopen log files */
file_log_handler->flush();
/* end of log flush */
logger.unlock();
return rc;
}
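A hedged workaround sketch while this behaviour stands, with an illustrative schedule: purging a little at a time during quiet hours keeps each pass (and the exclusive logger lock) short, instead of letting one rotation expire many days of binlogs at once:
-- Run periodically (e.g. once a day), keeping just under the 10-day expiry:
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 9 DAY;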