An ECS task stuck in the PENDING state can be the result of an unresponsive Docker daemon, a large Docker image, and other causes.
Here, at Bobcares, we assist our customers with several AWS queries as part of our AWS Support Services.
Today, let us see how we can resolve this issue.
ECS task stuck in the PENDING state
An ECS task can be stuck in the PENDING state for several reasons, including:
- An unresponsive Docker daemon
- Large Docker image
- The Amazon ECS container agent lost connectivity with the Amazon ECS service in the middle of a task launch
- The Amazon ECS container agent takes a long time to stop an existing task
To avoid errors while running the AWS CLI commands below, we make sure we have the most recent AWS CLI version installed.
How to resolve this?
Moving ahead, let us see a few troubleshooting steps our Support Techs employ to find out why a task is stuck in the PENDING state.
The Docker daemon is unresponsive
- For CPU issues:
1. We use Amazon CloudWatch metrics to check if the container instance exceeded the maximum CPU.
2. If necessary, we increase the size of the container instance.
- For memory issues:
1. We run the free command to see the available memory for the system.
2. Then we increase the size of the container instance as needed.
- For I/O issues:
1. Initially, we run the iotop command to see which processes use the most IOPS.
2. Then, we distribute those tasks across distinct container instances using task placement constraints and strategies.
-or-
We use CloudWatch to create an alarm for the Amazon EBS BurstBalance metrics. Then, we use an AWS Lambda function or a custom logic to balance tasks.
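The resource checks above can be sketched in a single script. The volume ID, the SNS topic ARN, and the 20% BurstBalance threshold below are illustrative placeholders, not values from the original procedure:

```shell
#!/usr/bin/env bash
# Sketch of on-instance resource checks for a container instance
# suspected of resource pressure. Placeholders are illustrative only.

# Memory: on the "Mem:" row of "free -m", column 7 is available MiB.
avail_mib=$(free -m | awk '/^Mem:/ {print $7}')
echo "Available memory: ${avail_mib} MiB"

# I/O: one batch iteration of iotop, only processes doing I/O, if installed.
if command -v iotop >/dev/null 2>&1; then
  iotop -b -n 1 -o | head -n 15
fi

# EBS burst balance: alarm when BurstBalance drops below 20%.
# Requires AWS credentials; "|| true" keeps this sketch non-fatal.
if command -v aws >/dev/null 2>&1; then
  aws cloudwatch put-metric-alarm \
    --alarm-name ebs-burst-balance-low \
    --namespace AWS/EBS \
    --metric-name BurstBalance \
    --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 20 \
    --comparison-operator LessThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts || true
fi
```

The Lambda or custom rebalancing logic that reacts to the alarm is environment-specific, so it is not sketched here.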
The Docker image is large
Larger images increase the amount of time the task is in the PENDING state.
To speed up the transition time, we tune the ECS_IMAGE_PULL_BEHAVIOR parameter to take advantage of image caching.
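For example, on the container instance we can set the parameter in /etc/ecs/ecs.config. The prefer-cached value pulls the image remotely only when no cached copy exists; the other accepted values are default, always, and once:

```
# /etc/ecs/ecs.config
ECS_IMAGE_PULL_BEHAVIOR=prefer-cached
```

After editing the file, we restart the container agent (for example, sudo systemctl restart ecs on Amazon Linux 2) so the change takes effect.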
The Amazon ECS container agent lost connectivity with the Amazon ECS service in the middle of a launch
1. To verify the status and connectivity of the Amazon ECS container agent, we run the following commands.
For Amazon Linux 1:
$ sudo status ecs
$ sudo docker ps -f name=ecs-agent
For Amazon Linux 2:
$ sudo systemctl status ecs
$ sudo docker ps -f name=ecs-agent
2. Then we view metadata on running tasks in the ECS container instance via:
$ curl http://localhost:51678/v1/metadata
{
  "Cluster": "CLUSTER_ID",
  "ContainerInstanceArn": "arn:aws:ecs:REGION:ACCOUNT_ID:container-instance/TASK_ID",
  "Version": "Amazon ECS Agent - AGENT "
}
3. In addition, to view information on running tasks, we run:
$ curl http://localhost:51678/v1/tasks
{
  "Tasks": [
    {
      "Arn": "arn:aws:ecs:REGION:ACCOUNT_ID:task/TASK_ID",
      "DesiredStatus": "RUNNING",
      "KnownStatus": "RUNNING",
      ...
    }
  ]
}
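If the jq utility happens to be installed on the instance, the task list can be summarized in one line. A small sketch (the endpoint only responds on a container instance where the agent is running):

```shell
# Print each task's ARN and known status from the agent's task endpoint.
# "|| true" keeps the sketch non-fatal where the agent isn't running.
curl -s http://localhost:51678/v1/tasks \
  | jq -r '.Tasks[] | "\(.Arn) \(.KnownStatus)"' || true
```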
4. If the issue relates to a disconnected agent, we restart the container agent with the commands for our Amazon Linux version.
For Amazon Linux 1:
$ sudo stop ecs
$ sudo start ecs
ecs start/running, process xxxx
For Amazon Linux 2:
$ sudo systemctl stop ecs
$ sudo systemctl start ecs
5. To determine agent connectivity, we check the following logs for keywords such as “error,” “warn,” or “agent transition state”:
- Amazon ECS container agent log at /var/log/ecs/ecs-agent.log.yyyy-mm-dd-hh.
- Amazon ECS init log at /var/log/ecs/ecs-init.log.
- Finally, the Docker logs at /var/log/docker.
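The keyword search across all three logs can be done in one command. A sketch (add sudo if the log files aren't readable by the current user):

```shell
# Scan the agent, init, and Docker logs for the keywords above.
# Errors for missing files are discarded; tail keeps the latest hits.
grep -risE "error|warn|agent transition state" \
  /var/log/ecs/ecs-agent.log* \
  /var/log/ecs/ecs-init.log \
  /var/log/docker 2>/dev/null | tail -n 50
```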
The Amazon ECS container agent takes a long time to stop an existing task
If the Amazon ECS container agent has older tasks to stop, it won’t start new tasks until those tasks stop.
Generally, two parameters control the container stop and start timeouts at the container instance level.
1. In /etc/ecs/ecs.config, we can set the value of the ECS_CONTAINER_STOP_TIMEOUT parameter to the amount of time to pass before the containers are forcibly killed if they don’t exit normally on their own.
2. In /etc/ecs/ecs.config, we can set the value of the ECS_CONTAINER_START_TIMEOUT parameter to the amount of time to pass before the Amazon ECS container agent stops trying to start the container.
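Both parameters take duration values. A sketch with illustrative values (at the time of writing, the defaults are 30s for stop and 3m for start):

```
# /etc/ecs/ecs.config (illustrative values, not recommendations)
ECS_CONTAINER_STOP_TIMEOUT=2m
ECS_CONTAINER_START_TIMEOUT=4m
```

As with the image pull setting, we restart the container agent after editing the file so the new timeouts take effect.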
[Need help with the procedures? We’d be happy to assist]
Conclusion
In short, we saw how our Support Techs resolve an ECS task stuck in the PENDING state.