"Some of the sites in our server are taking much time to load than normal, one of such sites is domainname.org and it is taking around 12 seconds to load. Can you help fix this."
Recently, we were contacted by a server owner who was having issues with website slowness.
As part of our Technical Support Services for web hosting companies, configuring and managing web servers for best performance is a major task we perform.
Analyzing the reasons for web server slowness and resolving them forms a part of that service. Here’s how we debug such issues.
Nginx performance optimization – Why you need it
Nginx is a powerful web server that handles static pages quickly, without consuming many server resources. But it does not handle dynamic requests, such as PHP, as well.
While checking this customer’s server, we saw a lot of Nginx processes running. Killing these processes and restarting the Nginx service can give immediate relief, but that is not enough.
Finding the root cause of the issue and fixing it, is the method of debugging that is followed by Bobcares engineers. This helps to ensure that the issue does not recur and is solved permanently.
In our experience dealing with Nginx web servers, we’ve seen various reasons why the web server slows down, including:
- Nginx as a stand-alone server being unable to handle dynamic web requests efficiently
- Crawling of the websites by bots and analytics services, leading to too many web requests
- Other related services such as MySQL or backups, causing high server load
- Malicious attempts by hackers, such as repeated POST attempts or DDoS attacks on sites
- Certain software features such as Autodiscover processes leading to redirect loops
- Improper configuration of the Nginx web server, leading to too many TIME_WAIT connections
To pinpoint the actual cause for the Nginx issue, we examine the web server access logs, which are usually available at ‘/var/log/nginx/access.log’ or in the installation directory.
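As a simple illustration of this log analysis (the log lines below are made-up samples, not from the customer’s server; on a live system you would point the pipeline at the real access log path), counting requests per client IP often exposes bot crawls or attack patterns quickly:

```shell
# Create a throwaway sample log for illustration only
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
203.0.113.5 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 612
203.0.113.5 - - [10/Oct/2023:13:55:37 +0000] "POST /wp-login.php HTTP/1.1" 200 1543
198.51.100.7 - - [10/Oct/2023:13:55:38 +0000] "GET /index.php HTTP/1.1" 200 981
EOF

# Count requests per client IP, busiest clients first
awk '{print $1}' "$LOG" | sort | uniq -c | sort -rn

rm -f "$LOG"
```

For the sample above, 203.0.113.5 tops the list with two requests; on a real server, an IP with thousands of hits stands out immediately as a candidate for rate limiting or blocking.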
Nginx performance optimization – How we do it!
Nginx performance optimization is done in a step-by-step manner. The first step is to analyze the website contents and traffic in the server, and then choose the setup and parameters that suit this purpose well.
1. Reverse proxy setup
By default, many servers have Apache as the web server. But due to the high memory and CPU usage incurred by Apache, it alone is often not enough to handle sites with heavy traffic.
Apache handles dynamic requests such as PHP well, while Nginx handles static content better than Apache, processing static pages faster and with fewer server resources.
In a server that handles websites with mixed contents, such as WordPress sites, one web server alone may not be sufficient to handle the peak traffic.
To harness the benefits of both the web servers, we usually configure a reverse proxy setup which involves both Apache and Nginx servers.
In this reverse proxy server, we configure Nginx as the front end and Apache as backend. Both servers listen to different ports in the server.
Nginx serves the static pages quickly, which boosts site loading speed by over 50%. The Apache server processes the dynamic content and hands the finished pages over to Nginx.
With this reverse proxy setup, we are able to balance the traffic between the two web servers, and get the best out of both, thus ensuring high server performance.
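A minimal sketch of such a setup follows; the domain, document root and port numbers are illustrative placeholders, not the customer’s actual values. Nginx answers on port 80, serves static assets itself, and proxies everything else to Apache listening on a different port:

```nginx
# Front-end Nginx server block (illustrative values)
server {
    listen 80;
    server_name example.com;

    # Serve static files directly from disk
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        root /var/www/example.com;
    }

    # Pass dynamic requests (PHP etc.) to Apache on port 8080
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The `proxy_set_header` lines forward the original host and client IP, so Apache’s logs and applications still see the real visitor rather than 127.0.0.1.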
2. Tweaking Nginx parameters
The performance of an Nginx web server is highly determined by its configuration settings. In this particular server, we noticed plenty of TIME_WAIT connections between Apache and Nginx.
```shell
root@server [logs]# netstat -tpan | grep -c TIME_WAIT
1110
```
While examining the Nginx parameters, we could see that the Nginx timeout value was very high. This was further aggravated by the high TCP timeout values in the server.
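On Linux, the TCP timeouts that govern how long connections linger in states such as TIME_WAIT are tuned through sysctl. The values below are illustrative starting points, not the exact figures used on this server:

```conf
# /etc/sysctl.conf -- illustrative TCP tuning values, apply with `sysctl -p`
net.ipv4.tcp_fin_timeout = 15       # shorten the wait in FIN-WAIT-2 before closing
net.ipv4.tcp_tw_reuse = 1           # allow reuse of TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_keepalive_time = 300   # start keepalive probes on idle connections sooner
```

Such values need to be chosen per server; overly aggressive timeouts can drop legitimate slow clients.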
The Nginx configuration file is usually located at ‘/etc/nginx/nginx.conf’. The parameters in it that mainly determine Nginx performance are:
- worker_processes – This sets the number of Nginx worker processes. It is usually set to 1 for low-traffic servers. For servers with high traffic and disk I/O, this value is increased based on the number of CPU cores (typically 1 per core).
- worker_connections – The maximum number of connections each worker process can handle simultaneously. This value is decided based on the open file limit in the server. In effect, the total number of allowable Nginx connections is ‘worker_processes * worker_connections’. This value can be tweaked iteratively, along with the kernel open file limits.
- Buffers – Buffer sizes play an important role in Nginx performance. If they are too small, Nginx has to write to temporary files more often, increasing disk I/O. There are many buffer parameters, such as ‘client_body_buffer_size’ (the buffer for data submitted by POST requests), ‘client_header_buffer_size’ (the buffer for client request headers) and ‘client_max_body_size’ (the maximum allowed size of the client request body), that we configure for best performance.
- TCP parameters – TCP tuning parameters are set using sysctl. They determine network timeouts and other settings, and play an important role in web server performance. But server owners often overlook these settings, causing the server to huff and puff.
- Timeout – These values determine how long the server waits for each activity, and have a huge influence on performance. Too short a timeout leads to frequent timeout errors, but too long a value can overload the server. Some of the timeout settings are:
a. keepalive_timeout – keeps an idle connection open for up to a particular time, after which the server disconnects it.
b. send_timeout – this timeout is set not for the entire transfer, but between two successive write operations to the client; if it elapses, the server closes the connection.
c. client_body_timeout and client_header_timeout – make the server wait for a particular time to receive the header and data, during operations such as POST attempts.
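Putting the parameters above together, a trimmed nginx.conf might look like the sketch below. All values are illustrative defaults for discussion, not the tuned figures from this customer’s server:

```nginx
# /etc/nginx/nginx.conf -- illustrative values, to be tuned per server
worker_processes  4;                 # e.g. one per CPU core on a busy server

events {
    worker_connections  1024;        # max simultaneous connections per worker
}

http {
    client_body_buffer_size   16k;   # buffer for POST request bodies
    client_header_buffer_size 1k;    # buffer for client request headers
    client_max_body_size      8m;    # reject larger request bodies

    keepalive_timeout     15;        # close idle keep-alive connections
    send_timeout          10;        # timeout between two writes to the client
    client_body_timeout   12;        # timeout between two reads of the body
    client_header_timeout 12;        # timeout for receiving the request header
}
```

With four workers and 1024 connections each, this configuration allows roughly 4096 simultaneous connections, provided the kernel open file limit permits it.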
3. Caching techniques
In certain high-traffic web servers that serve ‘not-too-many-dynamic’ requests, we configure additional features such as caching for static files like logos, CSS files, JavaScript, etc.
Nginx server is configured to provide a caching mechanism to temporarily store such frequently accessed static files. This helps to reduce bandwidth and improve website speed.
By configuring cache and setting the cache expiry time for static files that do not change constantly, Nginx performance is improved drastically.
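A typical way to do this is with the `expires` directive in a location block matching static assets; the extensions and lifetime below are illustrative:

```nginx
# Cache static assets in the browser for 30 days
location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg)$ {
    expires 30d;                     # sets the Expires / Cache-Control max-age headers
    add_header Cache-Control "public";
}
```

Repeat visitors then load these files from their local cache instead of requesting them from the server again.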
Another technique is enabling compression to reduce data size, using modules such as gzip. A file is compressed before it is sent to the browser, which saves bandwidth and increases site speed.
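In Nginx, gzip is enabled in the `http` block; the compression level and type list below are common starting points, not the exact settings from this server:

```nginx
# Enable gzip compression for text-based content types
gzip on;
gzip_comp_level 5;                   # balance CPU cost against compression ratio
gzip_min_length 256;                 # skip responses too small to benefit
gzip_types text/plain text/css application/json application/javascript text/xml;
```

Already-compressed formats such as JPEG or PNG are left out of `gzip_types`, since recompressing them wastes CPU for little gain.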
But caching and compression, if implemented without a proper review of the website pages and CPU resources, can cause adverse results and end up overloading the server.
4. Server audits and monitoring
Performance optimization should not stop with configuration tweaks. We cannot wait for a traffic spike to crash the server before finding out whether the tweaks are effective.
Conducting stress tests on the servers with the help of benchmarking tools is a major task we do along with the web server tweaks.
By simulating the peak traffic in the server and testing its capacity to handle the load, we confirm that the tweaks are effective. We do further iterations until this performance level is attained.
We also configure 24/7 service and server health monitoring systems for the servers we manage. This enables us to pinpoint any minor hiccups that occur, and take actions to avoid a server catastrophe.
Conclusion
Today we saw how our Support Engineers perform Nginx performance optimization to fix website slowness issues. In our customers’ servers, we monitor the services 24/7 and perform periodic server tweaks for ongoing performance.