Wondering why the instance reachability check failed on AWS EC2? We can help you.
Here, at Bobcares, we assist our customers with several AWS queries as part of our AWS Support Services.
Today, let us see how our Support Techs troubleshoot and resolve this error.
How to resolve the "Instance reachability check failed" error on AWS EC2?
Basically, Amazon EC2 monitors the health of each EC2 instance with two status checks:
System status check
The system status check detects issues with the underlying host that your instance runs on.
If the underlying host is unresponsive or unreachable due to network, hardware, or software issues, then this status check fails.
Instance status check
An instance status check failure indicates an issue with the reachability of the instance itself.
This issue occurs due to operating system-level errors such as the following:
- Failure to boot the operating system
- Failure to mount the volumes correctly
- Exhausted CPU and memory
- Kernel panic
Today, let us see the steps followed by our Support Techs to resolve it:
1. Firstly, determine whether the instance status check or the system status check failed by viewing the instance status check metrics.
2. If the system status check failed, then see My instance failed the system status check.
If the instance status check failed, then check the instance’s system logs to determine the cause of the failure.
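If you use the AWS CLI, you can view the status checks and pull the system (console) log with the commands below. This is only a minimal sketch; the instance ID is a placeholder, and it assumes the CLI is already configured for the correct account and Region.

# View the system and instance status checks for an instance
aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0

# Retrieve the system (console) log to look for boot, mount, memory, or kernel errors
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text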
Depending on the data found in the system logs, use one of the following resolutions:
Failure to boot the operating system
If the system logs contain boot errors, then see My EC2 Linux instance failed the instance status check due to operating system issues. How do I troubleshoot this?
Failure to mount the volumes correctly
An instance status check might fail due to a mount point that’s unable to mount correctly, as shown in the following example:
[FAILED] Failed to mount /
See 'systemctl status mnt-nvme0n1p1.mount' for details.
[DEPEND] Dependency failed for Local File Systems.
For troubleshooting information, see My EC2 Linux instance failed the instance status check due to operating system issues. How do I troubleshoot this?
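One common way to recover from a bad mount entry, assuming you can attach the affected volume to a working (rescue) instance, is to correct or relax the offending /etc/fstab line. The device names, UUID, and mount options below are examples only, not values from the affected instance.

# On a rescue instance with the impaired volume attached (assumed here as /dev/xvdf)
sudo mkdir -p /rescue
sudo mount /dev/xvdf1 /rescue
sudo vi /rescue/etc/fstab        # comment out or fix the entry that fails to mount
# A more forgiving entry for a non-root data volume might look like this:
# UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /data  xfs  defaults,nofail  0  2
sudo umount /rescue              # then detach the volume and reattach it to the original instance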
Exhausted CPU and Memory
High CPU Utilization
If the CPUUtilization metric is at or near 100%, the instance might not have enough compute capacity for the kernel to run.
For T2 or T3 instances, check the CPU credit metrics in the CloudWatch metrics table to determine if the CPU credits are at or near zero.
If the CPU credits are at zero, then the CPUUtilization metric shows a saturation plateau at the baseline performance for the instance.
The baseline performance might be 20%, 40%, and so on, depending on the instance type.
CloudWatch metrics showing CPU utilization at or near 100%, or a saturation plateau at the baseline for T2 or T3 instances, indicate that the status check failed due to over-utilization of the instance’s resources.
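To confirm whether CPU credits are exhausted on a burstable (T2/T3) instance, you can query the CPUCreditBalance metric with the AWS CLI. The sketch below uses a placeholder instance ID, checks the past hour at five-minute intervals, and assumes GNU date for the time stamps.

# Check the CPU credit balance for a T2/T3 instance over the past hour
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 --statistics Average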
Block device errors, software bugs, or kernel panic might cause an unusual CPU usage spike.
If the CPUUtilization metric is at 100% and the system logs contain errors related to block devices, memory issues, or other unusual system errors, then reboot the instance, or stop and start it.
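Either action can be run from the AWS CLI. The instance ID below is a placeholder; note that stopping and starting an EBS-backed instance typically moves it to new underlying hardware.

# Reboot the impaired instance
aws ec2 reboot-instances --instance-ids i-0123456789abcdef0

# Or stop and start it (EBS-backed instances only)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0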
Out of memory
High memory pressure might result in the instance status check failing.
In the following example, the operating system is out of memory, and the kernel is killing the processes that consume the most memory:
[115879.769795] Out of memory: kill process 20273 (httpd) score 1285879 or a child
[115879.769795] Killed process 1917 (php-cgi) vsz:467184kB, anon-rss:101196kB, file-rss:204kB
EC2 memory and disk metrics are not sent to CloudWatch by default.
However, you can send additional metrics to CloudWatch for monitoring using the CloudWatch agent.
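As a rough sketch of how that might look on Amazon Linux 2 (the package name and control script path are the documented defaults, but the configuration file location is only an example), you can install the agent and load a configuration that collects memory metrics:

# Install the CloudWatch agent and start it with a configuration that collects memory metrics
sudo yum install -y amazon-cloudwatch-agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json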
To troubleshoot and resolve the out-of-memory issue, consider upgrading the instance to a larger instance type, or add swap storage to the instance to relieve the memory pressure.
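If adding swap is the route you take, a swap file is often enough to relieve the pressure. The size and path below are examples only.

# Create and enable a 2 GB swap file (size and path are examples)
sudo dd if=/dev/zero of=/swapfile bs=128M count=16
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make the swap file persist across reboots
echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab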
Disk full errors
If the system logs contain disk full errors, then the instance might have entered emergency mode because the root device is full.
$: service apache2 restart
Error: No space left on device
$: /etc/init.d/mysql restart
[....] Restarting mysql (via systemctl): mysql.serviceError: No space left on device
root@example:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/root 7.7G 7.7G 0 100% /
For detailed instructions on how to troubleshoot and resolve disk full errors, see the following:
My EC2 Linux instance failed the instance status check due to over-utilization of its resources. How do I troubleshoot this?
How do I increase the size of my EBS volume if I receive an error that there’s no space left on my file system?
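As a quick sketch, you can either free up space or grow the volume and extend the file system. The volume ID, device names, and sizes below are placeholders.

# Find what is consuming space on the root file system
sudo du -xh / --max-depth=1 | sort -h | tail
sudo find / -xdev -type f -size +100M -exec ls -lh {} \;

# Or grow the EBS volume, then extend the partition and file system
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 16
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1        # for ext4; use "sudo xfs_growfs -d /" for XFS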
Kernel panic
A kernel panic error occurs when the kernel detects an internal, fatal error during operation.
If the error occurs during the operating system boot, then the kernel might not be able to load properly.
This might cause an operating system boot failure. The following is an example of a kernel panic error message:
Linux version 2.6.16-xenU (builder@xenbat.amazonsa) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007
Kernel command line: root=/dev/sda1 ro 4
Registering block device major 8
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1)
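A common recovery path for this kind of root file system error is to inspect the root volume from a second (rescue) instance and fix /etc/fstab or the GRUB/kernel configuration there. The instance IDs, volume ID, and device names below are placeholders.

# Stop the impaired instance and move its root volume to a rescue instance
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0fedcba9876543210 --device /dev/sdf

# On the rescue instance, mount the volume and inspect its boot configuration
sudo mount /dev/xvdf1 /mnt
sudo cat /mnt/etc/fstab          # check for a wrong root device or missing entries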
[Need help with the procedure? We’d be glad to assist you]
Conclusion
In short, we saw how our Support Techs resolve the "Instance reachability check failed" error on AWS EC2.