
How to fix ‘Too Many Open Files’ in Linux

Nov 18, 2020

Linux users often come across the ‘Too Many Open Files’ error on heavily loaded servers, where processes can no longer open additional files.

We at Bobcares have experienced similar errors, and our Server Administration Team has come up with effective solutions.

Today, let’s check how to find the maximum open file limit set by Linux and how to alter it for an entire host, an individual service, or the current session.

 

Where do we find the ‘Too Many Open Files’ error?

Linux sets a maximum open file limit by default; it is one of the ways the system restricts the resources a process can consume.

Usually, the ‘Too Many Open Files’ error is found on servers running an NGINX/httpd web server or a database server (MySQL/MariaDB/PostgreSQL).


For example, when an Nginx web server exceeds the open file limit, we come across an error:

socket() failed (29: Too many open files) while connecting to upstream

To find the maximum number of file descriptors a system can open, run the following command:

# cat /proc/sys/fs/file-max

The open file limit for the current user is 1024 by default. We can check it as follows:

# ulimit -n

[root@server /]# cat /proc/sys/fs/file-max
97816
[root@server /]# ulimit -n
1024

There are two limit types: hard and soft. Any user can change the soft limit value, but only a privileged (root) user can raise the hard limit value.

However, the soft limit value cannot exceed the hard limit value.

To display the soft limit value, run the command:

# ulimit -nS

To display the hard limit value:

# ulimit -nH
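For example, on a default install the output may look like this (the exact numbers vary by distribution and are only illustrative):

# ulimit -nS
1024
# ulimit -nH
4096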

 

‘Too Many Open Files’ error & Open File Limits in Linux

This error means that a process has opened too many files (file descriptors) and cannot open new ones. In Linux, maximum open file limits are set by default for each process and user, and the values are rather small.

We at Bobcares have monitored this closely and have come up with a few solutions:

 

Increase the Max Open File Limit in Linux

A larger number of files can be opened if we change the limits in our Linux OS. To make the new settings permanent and prevent them from resetting after a server or session restart, add these lines to /etc/security/limits.conf:

* hard nofile 97816
* soft nofile 97816
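The * wildcard applies these limits to every user. As an illustrative sketch, to raise the limits only for a single account (here a hypothetical nginx user), the entries would instead look like this:

nginx hard nofile 97816
nginx soft nofile 97816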

If you are using Ubuntu, add this line to /etc/pam.d/common-session as well:

session required pam_limits.so

This parameter applies the open file limits after user authentication.
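To confirm the module is enabled, we can check for it in the PAM session configuration (path as on Ubuntu/Debian):

# grep pam_limits /etc/pam.d/common-session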

After making the changes, log in again (or open a new terminal session) and check the max open files value:

# ulimit -n
97816

 

Increase the Open File Descriptor Limit per service

We can also change the open file descriptor limit for a specific service rather than for the entire operating system.

For example, if we take Apache, to change the limits, open the service settings using systemctl:

# systemctl edit httpd.service

Once the service settings are open, add the required limits. For example:

[Service]
LimitNOFILE=16000

(A single LimitNOFILE value sets both the soft and the hard limit to the same number.)

After making the changes, update the service configuration and restart it:

# systemctl daemon-reload
# systemctl restart httpd.service

To ensure the values have changed, get the service PID:

# systemctl status httpd.service

For example, if the service PID is 3724:

# cat /proc/3724/limits | grep "Max open files"
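With our example limit of 16000, the relevant line of the output should look roughly like this:

Max open files            16000                16000                files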

Thus, we can change the values for the maximum number of open files for a specific service.

 

Set Max Open Files Limit for Nginx & Apache

In addition to raising the open file limit at the OS or service level, we should also set it in the web server’s own configuration file.

For example, specify/change the following directive value in the Nginx configuration file /etc/nginx/nginx.conf:

worker_rlimit_nofile 16000;

For example, when configuring Nginx on a highly loaded 8-core server with worker_connections 8192, we need to set worker_rlimit_nofile to:

8192 * 2 * 8 (vCPU) = 131072
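As a rough sketch, the relevant parts of /etc/nginx/nginx.conf would then look like this (worker_processes 8 and worker_connections 8192 are the values from this example):

worker_processes 8;
worker_rlimit_nofile 131072;

events {
    worker_connections 8192;
}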

Then restart Nginx.

For Apache, create a directory:

# mkdir /lib/systemd/system/httpd.service.d/

Then create the limit_nofile.conf file:

# nano /lib/systemd/system/httpd.service.d/limit_nofile.conf

Add to it:

[Service]
LimitNOFILE=16000

Do not forget to reload systemd and restart httpd so the new drop-in takes effect.
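These are the same commands we used for the systemctl edit method above:

# systemctl daemon-reload
# systemctl restart httpd.service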

 

Alter the Open File Limit for Current Session

To change the limit only for the current session, run the command:

# ulimit -n 3000
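To confirm the new value in the current shell:

# ulimit -n
3000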

Once the terminal is closed and a new session is created, the limits will revert to the original values specified in /etc/security/limits.conf.

To change the system-wide value in /proc/sys/fs/file-max, set the fs.file-max parameter in /etc/sysctl.conf:

fs.file-max = 200000

Finally, apply:

# sysctl -p

[root@server /]# sysctl -p
net.ipv4.ip_forward = 1
fs.file-max = 200000
[root@server /]# cat /proc/sys/fs/file-max
200000

[Despite all the above solutions, if the issue still prevails, don’t panic. Our Experienced Tech Team is working round the clock to fix it for you.]

 

Conclusion

To conclude, today we figured out how our Support Engineers solve the ‘Too Many Open Files’ error and discussed options to change the default limits set by Linux.

We saw that the default open file descriptor limit in Linux is quite small, and discussed a few options for changing these limits on the server.

