Generally, we see the EC2 error "audit: backlog limit exceeded" in an Amazon EC2 Linux instance's console screenshot and system logs.
Here, at Bobcares, we assist our customers with several AWS queries as part of our AWS Support Services.
Today, let us discuss why we receive these messages, and how we can prevent them from reoccurring.
EC2 Error Audit: Backlog Limit Exceeded
In a Linux system, the audit backlog buffer holds audit events until they are logged. When a new audit event is triggered, the system logs the event and adds it to the audit backlog buffer queue.
The backlog_limit parameter sets the maximum number of audit backlog buffers.
By default, the parameter is set to 320:
# auditctl -s
enabled 1
failure 1
pid 2264
rate_limit 0
backlog_limit 320
lost 0
backlog 0
Any event that arrives once the backlog is at this limit can result in the following errors:
audit: audit_backlog=321 > audit_backlog_limit=320
audit: audit_lost=44393 audit_rate_limit=0 audit_backlog_limit=320
audit: backlog limit exceeded
-or-
audit_printk_skb: 153 callbacks suppressed
audit_printk_skb: 114 callbacks suppressed
An audit buffer queue at or over capacity might also cause the instance to hang or become unresponsive.
To avoid these errors, we increase the backlog_limit parameter value. In most cases, giving the queue more buffer space prevents the error messages.
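As a quick, temporary measure, the running limit can also be raised with auditctl; this runtime change is lost on the next auditd restart or reboot, so the permanent fix described below is still needed:
$ sudo auditctl -b 8192
$ sudo auditctl -s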
Here is a calculation of the memory for the auditd backlog. We can use this to determine how large we can make the backlog queue without causing memory stress.
One audit buffer = 8970 Bytes
Default number of audit buffers (backlog_limit parameter) = 320
320 * 8970 = 2870400 Bytes, or roughly 2.7 MiB
The MAX_AUDIT_MESSAGE_LENGTH parameter defines the size of the audit buffer.
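As a rough sketch, the same arithmetic applies to any candidate value. For example, the 8192-buffer limit we set later in this post (assuming the same 8970-byte buffer size) works out to roughly 70 MiB:
$ echo $((8192 * 8970)) Bytes
73482240 Bytes
That is about 70 MiB, so it is worth checking the available memory on smaller instance types before choosing a value.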
How do we fix these errors?
If the instance is inaccessible and shows backlog limit exceeded messages in the system log, we stop and start the instance first.
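For example, with the AWS CLI (the instance ID below is a placeholder):
$ aws ec2 stop-instances --instance-ids i-0123456789abcdef0
$ aws ec2 start-instances --instance-ids i-0123456789abcdef0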
Then, we perform the following steps to change the audit buffer value.
Here we change the backlog_limit parameter value to 8192 buffers.
1. First and foremost, we access the instance via SSH.
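For example (the key path and hostname below are placeholders):
$ ssh -i ~/.ssh/my-key.pem ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com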
2. Then we verify the current audit buffer size.
For Amazon Linux 1 and other OS that don’t have systemd:
$ sudo cat /etc/audit/audit.rules
# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.
# First rule - delete all
-D
# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 320
# Disable system call auditing.
# Remove the following line if you need the auditing.
-a never,task
# Feel free to add below this line. See auditctl man page
For Amazon Linux 2 and other OS that use systemd:
$ sudo cat /etc/audit/audit.rules
# This file is automatically generated from /etc/audit/rules.d
-D
-b 320
-f 1
3. After that, we open the audit.rules file with an editor.
For Amazon Linux 1 and other OS that don't use systemd:
$ sudo vi /etc/audit/audit.rules
For Amazon Linux 2 and other OS that use systemd:
$ sudo vi /etc/audit/rules.d/audit.rules
4. Now we proceed to edit the -b parameter to a larger value.
$ sudo cat /etc/audit/audit.rules
# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.
# First rule - delete all
-D
# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 8192
# Disable system call auditing.
# Remove the following line if you need the auditing.
-a never,task
# Feel free to add below this line. See auditctl man page
Note that auditctl -s still reports the old backlog_limit value until auditd is restarted:
$ sudo auditctl -s
enabled 1
failure 1
pid 2264
rate_limit 0
backlog_limit 320
lost 0
backlog 0
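As an alternative to editing the file by hand, a sed one-liner can make the same change, assuming the file still contains the default -b 320 line (use the /etc/audit/rules.d/audit.rules path on systemd-based systems):
$ sudo sed -i 's/^-b 320$/-b 8192/' /etc/audit/audit.rules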
5. Once done, we restart the auditd service so the new backlog_limit value takes effect.
As we can see, the updated value then shows up in auditctl -s:
# sudo service auditd stop
Stopping auditd: [ OK ]
# sudo service auditd start
Starting auditd: [ OK ]
# auditctl -s
enabled 1
failure 1
pid 26823
rate_limit 0
backlog_limit 8192
lost 0
backlog 0
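On Amazon Linux 2 and other systemd-based systems, the auditd unit typically refuses a direct systemctl stop, which is why the service commands above are still the usual way to restart it. On those systems, the rules under /etc/audit/rules.d are compiled into /etc/audit/audit.rules by augenrules when the service starts; it can also be run manually to load the new value, and the result verified with auditctl:
$ sudo augenrules --load
$ sudo auditctl -s | grep backlog_limit
backlog_limit 8192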
Conclusion
In short, we saw how our Support Techs fix the EC2 "audit: backlog limit exceeded" error by increasing the backlog_limit parameter value and restarting auditd.