Tried and tested solutions for your servers, from our outsourced support diaries.

How to troubleshoot high load in Linux web hosting servers

Even in this age of powerful servers and cloud computing, server load spikes are all too common. When troubleshooting a high server load, a sound approach is half the battle.

At Bobcares, our server experts fix high server load in our customers' (web hosts') servers in as little as 5 minutes. We do it by systematically tracing an abusive user (or program) starting from an affected service or over-used resource.

See how we keep server load stable!

Yes, it sounds like a handful, but years of practice have made it quite easy for us. We’ll explain how, but let’s answer a fundamental question first:

High load average – What is it really?

A server functions with a limited set of resources. For example, an average server these days will have 8 GB of RAM, 4 processors, SATA II hard disks rated at 75 IOPS, and a 1 Gigabit NIC.

Now, let’s assume one user decides to back up their account. If that process occupies 7.5 GB of RAM, other users or services in the system have to wait for that process to finish.

The longer the backup takes, the longer the wait queue grows. The “length” of this queue is what the server load represents. So, a server running at a load average of 20 will have a longer wait queue than a server at a load average of 10.
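The “length of the queue” idea can be sketched with a quick calculation. The load figure and core count below are illustrative; on a live server you would read them from /proc/loadavg and `nproc` instead:

```shell
# Compare a 1-minute load average against the CPU core count.
# Both values below are fabricated sample figures, not live readings.
loadavg="20.15"   # hypothetical 1-minute load average
cores=4           # hypothetical core count
awk -v l="$loadavg" -v c="$cores" \
  'BEGIN { printf "wait queue is ~%.0fx the CPU capacity\n", l / c }'
```

A load average at or below the core count generally means processes are being served without queuing; multiples above it mean processes are waiting.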

[ High server load can ruin your business! Don’t delay anymore. Our expert server specialists will keep your servers stable]

Why FAST troubleshooting is important

When a server is under high load, chances are that the number of processes in the “wait” queue is growing every second.

Commands take longer to execute, and soon the server could become unresponsive, forcing a reboot. So, it is important to kill the source of the load as quickly as possible.

In our server support team, we have a concept called “The Golden Minute”. It says that the best chance to recover from a load spike is in the first minute. Our engineers keep a close eye on the monitoring system 24/7, and immediately log on to the server if a load spike is detected. It is due to this quick reaction and expert mitigation that we’re able to achieve close to 100% server uptime for our customers.

Blessen Cherian
Member of Executive Group, Bobcares

How to troubleshoot a load spike really fast?

It is common for people to try out familiar commands when faced with a high load situation. But without a sound strategy, it is just wasted time.

Bobcares support techs use a principle called “go from what you know to what you don’t”.

When we get a high load notification, there’s one thing we know for sure. There’s at least one server resource (RAM, CPU, I/O, etc.) that’s being abused.

  1. So, the first step is to find out which resource is being abused.
  2. The next is to find out which service is using that resource. It could be the web server, database server, mail server, or some other service.
  3. Once we find out the service, we then find out which user in that service is actually abusing the server.
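Step 3 above (finding the abusive user) can be sketched as a per-user aggregation over process data. The three sample lines are fabricated stand-ins for real `ps -eo user,rss --no-headers` output (one user and RSS-in-KB pair per process):

```shell
# Sum resident memory per user to spot which account is hogging RAM.
# Sample data below is fabricated for illustration.
printf '%s\n' \
  'alice 512000' \
  'bob 1048576' \
  'alice 256000' \
| awk '{ mem[$1] += $2 } END { for (u in mem) printf "%s %d\n", u, mem[u] }' \
| sort -k2,2 -rn
```

The same pattern (group by user, sum a resource column, sort descending) works for CPU time or process counts as well.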

FAST Linux server load troubleshooting

To show how this concept works in reality, we’ll take an example of a high load situation we recently fixed in a CentOS Linux server. Here are the steps we followed:

  1. Find the over-loaded resource
  2. Find the service hogging that resource
  3. Find the virtual host over-using that service

1. Find the over-loaded resource

Our support techs use different tools for different types of servers. For physical servers or hardware virtualized servers, we’ve found atop to be a good fit. In an OS virtualized server, we use the top command, and if it’s a VPS node we use vztop.

The goal here is to find out which of the resources (CPU, memory, disk, or network) is being hogged. In this case, we used atop, as it was a dedicated server.

We ran the command “atop -Aac”. It showed the accumulated resource usage for each process, automatically sorted by the most used resource, along with the full command details. This gave the output below.

[Screenshot: atop output showing per-process disk usage statistics]

We could see that the most used resource was the disk, marked as ADSK. From the highlighted summary, we saw that /dev/sda was 100% busy.
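A similar busy-percentage check can be scripted against `iostat -dx` style output (iostat comes with the sysstat package). The device line below is fabricated, and the position of the %util column varies between sysstat versions, so treat this as a sketch and check your header row first:

```shell
# Flag devices whose %util exceeds 90% in iostat-style output.
# The header and device line below are fabricated sample data.
printf '%s\n' \
  'Device  r/s   w/s  rkB/s  wkB/s  %util' \
  'sda     5.0 120.0   80.0 4096.0   99.8' \
| awk 'NR > 1 && $NF + 0 > 90 { printf "%s is %s%% busy\n", $1, $NF }'
```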

It’s worth noting that the resource most vulnerable to over-use is usually the disk (especially if it’s SATA), followed by memory, then CPU, and then the network.

At this stage of troubleshooting, the following points are worth noting:

  1. We observe the server for at least 30 seconds before deciding which resource is being hogged. The one that stays on top the longest is the answer.
  2. While using top, we use the “i” toggle to see only the active processes, and the “c” toggle to see the full command line.
  3. The “%wa” value in the top output shows the I/O wait average, which tells us if it’s a non-CPU resource hog.
  4. Using pstree, we look for any suspicious processes or unusually high number of a particular service. We then compare the process listing with a similarly loaded server to do a quick check.
  5. We use netstat to look for any suspicious connections, or too many connections from one particular IP (or IP range).
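Point 5 in the list above can be sketched as a per-IP connection count. The sample lines below stand in for real `netstat -ntu` output, where the remote address is the fifth field:

```shell
# Count connections per remote IP to spot a flood from one source.
# The three connection lines are fabricated sample data.
printf '%s\n' \
  'tcp 0 0 10.0.0.5:80 203.0.113.7:52100 ESTABLISHED' \
  'tcp 0 0 10.0.0.5:80 203.0.113.7:52101 ESTABLISHED' \
  'tcp 0 0 10.0.0.5:80 198.51.100.2:40001 ESTABLISHED' \
| awk '{ split($5, a, ":"); c[a[1]]++ } END { for (ip in c) print c[ip], ip }' \
| sort -rn
```

On a live server you would pipe `netstat -ntu` (skipping its two header lines) into the same awk/sort pipeline.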

[ Don’t wait for your server to crash! Grab our Emergency server services to save your servers at affordable pricing. ]

Troubleshooting is as much an exercise in invalidating possible scenarios as it is about systematically zeroing in on one particular possibility.

When you know how various commands behave on a normal, stable server, you develop an instinct for what is NOT right.


TROUBLED BY HIGH SERVER LOAD?

Never again lose customers to an unstable server! Let us help you.

Contact Us once. Enjoy Peace Of Mind Forever!

GET INSTANT SOLUTION FOR HIGH LOAD

Bobcares provides Outsourced Web Hosting Support and Outsourced Server Management for online businesses. Our services include 24/7 server support, help desk support, live chat support and phone support.

8 Comments

  1. please help me, my server is having high cpu, i used pstree command, and i found this

    init─┬─atd
         ├─auditd───{auditd}
         ├─clamd───{clamd}
         ├─crond
         ├─dbus-daemon
         ├─dovecot─┬─anvil
         │         └─log
         ├─exim
         ├─fail2ban-server───8*[{fail2ban-serve}]
         ├─httpd───18*[httpd]
         ├─mdadm
         ├─6*[mingetty]
         ├─mysqld_safe───mysqld───23*[{mysqld}]
         ├─nginx───5*[nginx]
         ├─rsyslogd───3*[{rsyslogd}]
         ├─spamd───2*[spamd]
         ├─sshd─┬─sshd───bash───pstree
         │      └─sshd───bash
         ├─udevd───2*[udevd]
         ├─vesta-nginx───vesta-nginx
         ├─vesta-php───2*[vesta-php]
         └─vsftpd

    Reply
    • Hi Jedi,

      From the pstree results, I suspect MySQL processes to be the reason for high CPU in your server. Please contact our expert server administrators at https://bobcares.com/emergency-server-support/ if the issue persists. We’ll inspect your server in detail to pinpoint the exact cause and fix the high CPU issue in no time.

      Reply
      • please give me some tips to fix this please, i am short of cash now

        Reply
        • Hi Jedi,

          As I’ve mentioned in the article, you need to start with finding which resource is being over used.

          Use the utility atop and run it for a few minutes to see which resource is highlighted in red (see image in the article).

          Then use more specialized tools for that resource to locate which service is hogging that resource.

          Reply
  2. the atop command didnt work on my ssh, it says command not found

    Reply
    • Hi Jedi,

      You need to first install the ‘atop’ utility in your server, before you can use it.

      To install atop on RHEL/CentOS/Fedora Linux, use

      yum install atop

      To install atop on Debian/Ubuntu Linux, use

      apt-get install atop

      Reply
  3. also i noticed when i stop mysql, the cpu reduces drastically,

    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    symbolic-links=0

    skip-external-locking
    key_buffer_size = 256M
    max_allowed_packet = 32M
    table_open_cache = 256
    sort_buffer_size = 1M
    read_buffer_size = 1M
    read_rnd_buffer_size = 4M
    myisam_sort_buffer_size = 64M
    thread_cache_size = 8
    query_cache_size= 16M
    thread_concurrency = 8

    #innodb_use_native_aio = 0
    innodb_file_per_table

    max_connections=200
    max_user_connections=50
    wait_timeout=10
    interactive_timeout=50
    long_query_time=5

    #slow_query_log=1
    #slow_query_log_file=/var/log/mysql-slow-queries.log

    [mysqld_safe]
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid

    #
    # include all files from the config directory
    #
    !includedir /etc/my.cnf.d

    this is my current

    Reply
Hi Jedi,

      As mentioned in my previous comment, MySQL seems to be the issue. Please check the ‘process list’ of MySQL to identify which database and query is causing the high load.

      Reply

Submit a Comment

Your email address will not be published. Required fields are marked *

About Bobcares

Bobcares is a server management company that helps businesses deliver uninterrupted and secure online services.
Our engineers manage close to 52,500 servers that include virtualized servers, cloud infrastructure, physical server clusters, and more.
MORE ABOUT BOBCARES
Bobcares
WE RESCUE AND MANAGE YOUR SERVER
911 SUPPORT . MONITORING . MAINTENANCE
Our experts are online 24/7 to help you recover from a server issue, or to assist you with complex server admin jobs.
GET STARTED