We often use mysqldump to import data into or export data from Amazon RDS for MySQL. However, we may come across a few errors along the way.
Here, at Bobcares, we assist our customers with several AWS queries as part of our AWS Support Services.
Today, let us see how we can troubleshoot and resolve this error.
AWS RDS Mysqldump
Most often, we receive the following errors while using mysqldump:
- Couldn’t execute FLUSH TABLES WITH READ LOCK errors
- Max_allowed_packet errors
- SUPER privilege(s) and DEFINER errors
- Lost or aborted connection errors
Moving ahead, let us see the methods our Support Techs employ to fix them.
Couldn’t execute FLUSH TABLES WITH READ LOCK error
When we use the --master-data option with mysqldump to export data, we receive:
“mysqldump: Couldn't execute 'FLUSH TABLES WITH READ LOCK': Access denied for user 'user'@'%' (using password: YES) (1045)”
To resolve this, we remove the --master-data option. However, without it, the dump does not record an exact binary log position.
As a workaround, we either take the mysqldump while the application is stopped, or we take the mysqldump from an Amazon RDS read replica.
Our Support Techs recommend the steps below to create a mysqldump from an Amazon RDS MySQL read replica.
1. First, we set a value for the binary log retention:
CALL mysql.rds_set_configuration('binlog retention hours', 24);
2. Then we stop the replication:
CALL mysql.rds_stop_replication;
3. After that, we take the mysqldump without --master-data=2 from the read replica.
4. To capture the Master_Log_File and Exec_Master_Log_Pos, we run:
SHOW SLAVE STATUS;
5. If we use the replica for the application, we start the replication again using:
CALL mysql.rds_start_replication;
In case we don't use the replica for the application, we can delete it instead.
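The steps above can be strung together from a shell. This is a minimal sketch; the replica endpoint, user, and database names are placeholders, not real values:

```shell
#!/bin/bash
# Placeholders -- substitute your own replica endpoint and credentials.
HOST="replica.xxxx.us-east-1.rds.amazonaws.com"
USER="admin"
DB="db_name"

# 1. Keep binary logs long enough to replay changes after the dump.
mysql -h "$HOST" -u "$USER" -p \
  -e "CALL mysql.rds_set_configuration('binlog retention hours', 24);"

# 2. Pause replication so the replica holds a consistent snapshot.
mysql -h "$HOST" -u "$USER" -p -e "CALL mysql.rds_stop_replication;"

# 3. Dump without --master-data=2.
mysqldump -h "$HOST" -u "$USER" -p "$DB" > "$DB.sql"

# 4. Record the master log file and position for later use.
mysql -h "$HOST" -u "$USER" -p -e "SHOW SLAVE STATUS\G" \
  | grep -E 'Master_Log_File|Exec_Master_Log_Pos'

# 5. Resume replication.
mysql -h "$HOST" -u "$USER" -p -e "CALL mysql.rds_start_replication;"
```

The `mysql.rds_*` stored procedures are how RDS exposes replication control, since direct `STOP SLAVE`/`START SLAVE` statements require privileges the master user does not have.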
Max_allowed_packet errors
Most often, we find this error while we use mysqldump to export data:
“Error 2020: Got packet bigger than ‘max_allowed_packet’ bytes when dumping table `tb_name` at row: XX”
If the mysqldump command requests a packet that is larger than the value of the max_allowed_packet parameter set for the RDS DB instance, it causes this error.
To resolve this, we increase the global value of max_allowed_packet, or we set max_allowed_packet for the mysqldump session only.
For example, we can modify the command like the following:
$ mysqldump --max_allowed_packet=1G ......
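Before raising the client-side value, it helps to check the limit the server enforces. A minimal sketch, assuming a hypothetical RDS endpoint and user:

```shell
# Check the server-side limit first (the value is in bytes);
# the endpoint and user below are placeholders.
mysql -h mydb.xxxx.rds.amazonaws.com -u admin -p \
  -e "SHOW VARIABLES LIKE 'max_allowed_packet';"

# Raise the client-side limit for this dump session only.
mysqldump -h mydb.xxxx.rds.amazonaws.com -u admin -p \
  --max_allowed_packet=1G db_name > db_name.sql
```

The session-level flag avoids changing the parameter group; raising the global value instead requires a custom parameter group attached to the instance.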
SUPER privilege(s) and DEFINER errors
On the other hand, this error occurs when we use mysqldump to import data into an RDS DB instance that is running MySQL or MariaDB:
“ERROR 1227 (42000) at line XX: Access denied; you need (at least one of) the SUPER privilege(s) for this operation”
This error indicates that either the target RDS DB instance has binary logging enabled, or the imported mysqldump file tries to create an object with a DEFINER attribute for a user that doesn't exist on the instance.
In either case, the statement requires SUPER privileges, which Amazon RDS does not grant to the master user.
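One common fix is to strip the DEFINER clauses from the dump before importing, so objects are created under the importing user. A minimal sketch, assuming a hypothetical dump file `db_name.sql`:

```shell
# Replace every DEFINER=`user`@`host` clause with CURRENT_USER so the
# import no longer needs SUPER or a pre-existing definer account.
sed -E 's/DEFINER=`[^`]+`@`[^`]+`/DEFINER=CURRENT_USER/g' \
  db_name.sql > db_name_nodefiner.sql
```

For the binary-log case, setting the log_bin_trust_function_creators parameter to 1 in a custom parameter group also avoids the SUPER requirement when creating stored functions and triggers.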
Lost or aborted connection errors
When we use mysqldump to import data, we might receive:
“mysqldump: error 2013: lost connection to mysql server during query when dumping table”
“mysqldump: Aborted connection XXXXXX to db: ‘db_name’ user: ‘master_user’ host: ‘XXXXXXX’ (Got timeout writing communication packets)”
If a connection stays idle for too long, the server may close it, and the client then reports the connection as lost or aborted.
To resolve this, we verify that application timeouts are shorter than the MySQL timeouts.
In addition, we make sure that the applications close idle connections.
If a connection times out, we can increase the timeouts for MySQL by increasing the wait_timeout and interactive_timeout parameters.
To do so, we can use a custom parameter group.
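With the AWS CLI, both timeouts can be raised on the custom parameter group attached to the instance. A sketch, assuming a hypothetical group name `my-custom-params`:

```shell
# Raise both timeouts (values in seconds) on a hypothetical custom
# parameter group; both parameters are dynamic, so the change
# applies without a reboot.
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-custom-params \
  --parameters "ParameterName=wait_timeout,ParameterValue=28800,ApplyMethod=immediate" \
               "ParameterName=interactive_timeout,ParameterValue=28800,ApplyMethod=immediate"
```

Note that parameters in the default parameter group cannot be modified, so the instance must use a custom group first.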
In short, we saw how our Support Techs fix common AWS RDS mysqldump errors.