Bobcares

MySQL replication skip duplicate entry: Management

Jul 3, 2022

MySQL replication errors caused by duplicate entries can be resolved in a few simple steps, and the user can also skip duplicate-entry errors if they want.

Bobcares answers all questions no matter the size, as part of our MySQL support

Let us take a look at how to resolve and skip MySQL duplicate entries in detail.

Resolve MySQL replication: Duplicate entry

The following steps will assist in fixing MySQL replication problems brought on by duplicate entries in the slave database.

  1. Firstly, the user has to connect to the slave DB system over SSH: ssh user@server
  2. Secondly, connect to MySQL, for example mysql -u root -p. The command will prompt for the password.
  3. Finally, the user can check the slave status by typing in show slave status\G. The output detailing the slave status will open up as shown below.

Slave_IO_State: Waiting for master to send event
Master_Host:
Master_User: slave_sec
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000038
Read_Master_Log_Pos: 865774015
Relay_Log_File: mysqld-relay-bin.000021
Relay_Log_Pos: 25092925
Relay_Master_Log_File: mysql-bin.000038
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '6213' for key 'PRIMARY'' on query. Default database: 'testdb'. Query: 'insert into orders (created_at, user_id, receiver_id) values ('2016-01-12 08:48:09', null, null)'
Skip_Counter: 0
Exec_Master_Log_Pos: 817513358
Relay_Log_Space: 73353756
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1062
Last_SQL_Error: Error 'Duplicate entry '6213' for key 'PRIMARY'' on query. Default database: 'testdb'. Query: 'insert into orders (created_at, user_id, receiver_id) values ('2016-01-12 08:48:09', null, null)'
Replicate_Ignore_Server_Ids:
Master_Server_Id: 100412
Master_UUID: f20637be-3530-11e5-a860-005056a05b60
Master_Info_File: /var/db/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp: 160124 01:07:05
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
1 row in set (0.00 sec)

Note: if the slave status result is an empty set (Empty set (0.00 sec)), then the user is probably connected to the master, not the slave.
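The checks in the steps above can also be run non-interactively; a minimal sketch (the hostname and user are placeholders, and the commands assume a reachable slave server):

```shell
# Connect to the slave DB system (placeholder hostname).
ssh user@slave-server

# Print only the fields that matter for diagnosing a stopped slave;
# the mysql client prompts for the password.
mysql -u root -p -e "SHOW SLAVE STATUS\G" \
  | grep -E 'Slave_IO_Running|Slave_SQL_Running|Last_Errno|Last_Error|Seconds_Behind_Master'
```

Seeing Slave_SQL_Running: No together with Last_Errno: 1062 confirms the duplicate-entry problem this article addresses.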

MySQL replication: Duplicate entry: Solutions

  1. The slave may lose sync with the master and cause issues, so it is better to fix the root cause than to bypass it. The recommended practice is to set the slave as read-only so that applications cannot unintentionally write to the slave DB; an application pointing to the slave DB instead of the master may well be the reason behind the problem. The user can then find the duplicate entries and remove them from the slave database.
  2. Execute stop slave and then start slave on the slave DB after erasing the duplicate entry.
  3. Replication will most likely resume and return to normal. The user can repeat these instructions if replication becomes stuck once more with the same error for a different record. If there are numerous such entries, it is best to re-sync the slave from the master and restart replication. The user can recheck the slave status to make sure, as shown below.

Slave_IO_State: Waiting for master to send event
Master_Host:
Master_User: slave_sec
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000038
Read_Master_Log_Pos: 867336894
Relay_Log_File: mysqld-relay-bin.000021
Relay_Log_Pos: 57402302
Relay_Master_Log_File: mysql-bin.000038
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 849822735
Relay_Log_Space: 74916972
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 46579
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 100412
Master_UUID: f20637be-3530-11e5-a860-005056a05b60
Master_Info_File: /var/db/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Reading event from the relay log
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
1 row in set (0.00 sec)
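The fix described in solution 2 can be sketched as follows, using the table and key value from the error output above. Note that the error message does not name the primary key column, so the column name id below is an assumption; verify the actual schema and row before deleting anything:

```shell
# On the slave: stop replication, remove the conflicting row, then resume.
mysql -u root -p <<'SQL'
STOP SLAVE;
DELETE FROM testdb.orders WHERE id = 6213;  -- 6213 is the duplicate key from Last_Error
START SLAVE;
SHOW SLAVE STATUS\G
SQL
```

If the slave resumes cleanly, Last_Errno returns to 0 and Seconds_Behind_Master starts counting down as the slave catches up.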

Skip duplicate replication errors

Normally, MySQL replication will stop anytime a query fails on the slave. In some circumstances, users may want to skip such failing queries; for instance, they might prefer to ignore any and all duplicate-entry errors they encounter:

"1062 | Error 'Duplicate entry 'abc' for key 10' on query. Default database: 'db'. Query: 'INSERT INTO ..."

If the user is certain that skipping those errors will not make their slave inconsistent and they want to skip them entirely, they can add the following to their my.cnf:

slave-skip-errors = 1062

For example, as shown above, 1062 is the error code the user wants to skip. The corresponding duplicate entry message has this format:

Error: 1062 SQLSTATE: 23000 (ER_DUP_ENTRY) Message: Duplicate entry '%s' for key %d
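An alternative to skipping an error code permanently in my.cnf is to skip only the single event that is currently blocking replication, using MySQL's sql_slave_skip_counter variable. This is a standard MySQL feature, though it is not mentioned in the steps above and does not apply to GTID-based replication:

```shell
# Skip exactly one event from the relay log, then resume replication.
mysql -u root -p <<'SQL'
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
SQL
```

This is more surgical than slave-skip-errors, since only one statement is skipped rather than every future occurrence of the error.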

Similarly, the user can also skip other errors. Again, this is only advised for those who understand how each skipped query affects their data.

slave-skip-errors=[err_code1,err_code2,...|all]
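For example, a my.cnf fragment that skips both duplicate-key errors (1062) and row-not-found errors (1032) might look like this. The 1032 code is an illustrative addition, not something the steps above recommend:

```ini
[mysqld]
# Continue replication past these error codes instead of stopping.
slave-skip-errors = 1062,1032
```

The server must be restarted for a my.cnf change to take effect.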

[Need assistance with similar queries? We are here to help]

Conclusion

It is easy to manage MySQL replication and skip duplicate entries, but the user should be well versed in the effect of each skipped query before performing a skip.

PREVENT YOUR SERVER FROM CRASHING!

Never again lose customers to poor server speed! Let us help you.

Our server experts will monitor & maintain your server 24/7 so that it remains lightning fast and secure.


