Most often, the SSH configuration file error in EC2 occurs when we change the Amazon EC2 instance’s sshd_config file.
Here, at Bobcares, we assist our customers with several AWS queries as part of our AWS Support Services.
Today, let us see how we can troubleshoot and resolve this.
SSH Configuration File error in EC2
If we change an instance’s sshd_config file, it might lead to a “Connection refused” error when connecting through SSH.
To confirm the issue, we connect to the instance through SSH with verbose messaging enabled:
$ ssh -i "myawskey.pem" ec2-user@ec2-11-22-33-44.eu-west-1.compute.amazonaws.com -vvv
Here, we need to use our own key file and user name.
The error looks like the following:
OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: Connecting to ec2-11-22-33-44.eu-west-1.compute.amazonaws.com port 22.
ssh: connect to host ec2-11-22-33-44.eu-west-1.compute.amazonaws.com port 22: Connection refused
How to resolve this?
Before we begin, note that on a Nitro-based instance the device names differ. For example, instead of /dev/xvda or /dev/sda1, the root volume appears as an NVMe device such as /dev/nvme0n1.
Moving ahead, let us see a few effective methods our Support Techs employ to fix this issue.
Method 1: Use the EC2 Serial Console
With EC2 Serial Console for Linux, we can troubleshoot boot issues, network configuration, and SSH configuration issues in Nitro-based instance types.
To access it, we use either the Amazon EC2 console or the AWS CLI.
However, we need to grant access to the console at the account level before we use the serial console.
Then we create AWS IAM policies granting access to the IAM users.
In addition, each instance we access through the serial console must have at least one password-based user.
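As a sketch of the CLI route, access can be enabled for the account and a temporary key pushed for a serial-console session. Here, the instance ID and key path are placeholders; we substitute our own:

```shell
# One-time, account-level opt-in to the EC2 serial console:
aws ec2 enable-serial-console-access

# Push a temporary SSH public key for a serial-console session on port 0
# (instance ID and key path below are placeholders):
aws ec2-instance-connect send-serial-console-ssh-public-key \
    --instance-id i-0123456789abcdef0 \
    --serial-port 0 \
    --ssh-public-key file://~/.ssh/id_rsa.pub
```

The pushed key is valid only for a short window, so we connect to the serial console promptly after running it.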
Method 2: Use a rescue instance
1. Initially, we launch a new EC2 instance in the VPC. We use the same AMI in the same Availability Zone as the impaired instance.
The new instance becomes the rescue instance.
2. After that, we stop the impaired instance.
3. Then we detach the Amazon EBS root volume (/dev/xvda or /dev/sda1) from the impaired instance.
4. We attach the EBS volume as a secondary device (/dev/sdf) to the rescue instance.
5. Eventually, we connect to the rescue instance using SSH.
6. Here, to view devices, we run:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 8G 0 disk
└─xvdf1 202:81 0 8G 0 part
7. Then we create a mount point directory (/mnt/rescue) for the new volume that we attached to the rescue instance:
$ sudo mkdir /mnt/rescue
8. To mount the volume in the above directory, we run:
$ sudo mount -t xfs -o nouuid /dev/xvdf1 /mnt/rescue/
To mount an ext3 or ext4 file system, we run instead:
$ sudo mount /dev/xvdf1 /mnt/rescue
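If we are unsure which file system the volume uses, we can check before choosing a mount command. A quick way (device name as in the steps above):

```shell
# Print the file system type of the attached partition so we know
# whether to use the xfs or the ext3/ext4 mount command.
sudo file -s /dev/xvdf1
```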
9. Finally, to verify that the volume is mounted in the directory, we run the lsblk command again:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 8G 0 disk
└─xvdf1 202:81 0 8G 0 part /mnt/rescue
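Steps 2–4 above can also be performed from the AWS CLI. A minimal sketch, in which all instance and volume IDs are placeholders to substitute with our own:

```shell
# Stop the impaired instance (placeholder ID):
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Detach its EBS root volume (placeholder ID):
aws ec2 detach-volume --volume-id vol-0123456789abcdef0

# Attach the volume to the rescue instance as a secondary device:
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0fedcba9876543210 \
    --device /dev/sdf
```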
Correct or copy the sshd_config file
We can investigate the sshd_config file on the impaired instance and roll back the changes:
$ sudo vi /mnt/rescue/etc/ssh/sshd_config
Or, we can copy the sshd_config file from the rescue instance to the impaired instance:
$ sudo cp /etc/ssh/sshd_config /mnt/rescue/etc/ssh/sshd_config
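Either way, it is worth validating the corrected file before reattaching the volume. A sketch, assuming sshd is installed on the rescue instance:

```shell
# Test-parse the corrected file; sshd prints the offending line and
# exits non-zero if the configuration is invalid.
sudo sshd -t -f /mnt/rescue/etc/ssh/sshd_config
```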
Reattach the volume to the original instance and test the connection
Our Support Techs recommend completing the following steps if we used Method 2.
1. To unmount the volume, we run:
$ sudo umount /mnt/rescue/
2. Then we detach the secondary volume from the rescue instance and then attach the volume to the original instance as /dev/xvda.
3. Eventually, we start the instance.
4. Finally, we verify if we can reach the instance by connecting to the instance using SSH.
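Steps 2 and 3 above can likewise be run from the AWS CLI. A sketch with placeholder IDs, where the second instance ID stands for the original (impaired) instance:

```shell
# Detach the repaired volume from the rescue instance (placeholder ID):
aws ec2 detach-volume --volume-id vol-0123456789abcdef0

# Attach it back to the original instance as the root device:
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0fedcba9876543210 \
    --device /dev/xvda

# Start the original instance:
aws ec2 start-instances --instance-ids i-0fedcba9876543210
```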
[Need help with the resolution? We are here for you]
Conclusion
In short, we saw how our Support Techs fix the SSH configuration file error in EC2 for our customers.