No space left on device (28) is a common error in Linux servers.
As a Server Administration Service provider, we see this error very often in VPSs, Dedicated Servers, AWS cloud instances and more.
It could happen seemingly for no reason at all (like refreshing a website), or when updating data (like backup sync, database changes, etc.).
What is the error “No space left on device (28)”?
For Linux to be able to create a file, it needs two things:
- Enough space to write that file.
- A unique identification number called “inode” (much like a social security number).
Most server owners look at and free up the disk space to resolve this error.
But what many don’t know is the secret “inode limit”.
What is “inode limit”, you ask? OK, lean in.
Linux identifies each file with a unique “inode number”, much like a Social Security number. It assigns 1 inode number for 1 file.
But each server has only a limited set of inode numbers for each disk. When it uses up all the unique inode numbers in a disk, it can’t create a new file.
Quite confusingly, it shows the error “No space left on device (28)”, and people go hunting for space usage issues.
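A quick way to tell which limit has been hit is to compare the space report with the inode report. Here is a minimal check, assuming shell access and standard coreutils:

```bash
# Free disk space per filesystem
df -h

# Free inodes per filesystem -- a filesystem can show plenty of free
# space above, yet be at 100% inode usage (IUse%) here
df -i
```

If df -i shows 100% under IUse% while df -h still shows free space, the problem is inode exhaustion rather than a full disk.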
Now that you know what to look for, let’s look at how to fix this issue.
How to fix “No space left on device (28)”
In reality, there’s no single best way to resolve this error.
Our Dedicated Server Administrators maintain Linux servers of web hosts, app developers and other online service providers.
We’ve seen variations of this error in MySQL servers, PHP sites, Magento stores and even plain Linux servers.
Going over our support notes, we can say that an ideal solution that’s right for one server might cause unintended issues in another server.
So, before we go over the solutions, carefully study all options and choose the best for you.
Broadly, here are the ways in which we resolve “No space left on device (28)”:
- Deleting excess files.
- Increasing the space availability.
- Fixing service configuration.
Now let’s look at the details of each case.
Fixing Magento error session_start(): failed: No space left on device
Our Dedicated Server Administrators support several high traffic online shops that use Magento.
In a couple of these sites, we have seen the error “session_start(): failed: No space left on device” when the site traffic suddenly goes up during marketing campaigns, seasonal sales, etc.
That happens when Magento creates a lot of session and cache files to store visitor data and to speed up the site.
In all these cases, we’ve seen the inode count maxed out while there was still free disk space.
Here, deleting the session or cache files might look like a good idea, but those files will come back in a few minutes.
So, as a permanent solution, we do these:
- Setup cache expiry and auto-cleaning – These servers use PHP cache and Magento cache, which create a lot of files during high traffic hours. So, we configured cache expiry, and set up a script to clear all cache files older than 24 hours (a minimal sketch is shown after this list).
- Configure log rotation – Some logs grew to more than 50 GB, which posed a threat to the disk space. So, we configured the logrotate service to limit log size to 500 MB, and to delete all logs older than 1 week.
- Auto-clear session files – The session files were set to be auto-deleted with a custom script so that inode numbers are not used up.
- Audit disk usage periodically – On top of all this, our admins audit these servers every couple of weeks to make sure the systems are working properly, and if needed make corrections.
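As a rough illustration of the cache cleanup mentioned above (the Magento path and retention period here are placeholders, not the exact script we deploy), a cron-driven job could look like this:

```bash
#!/bin/bash
# Hypothetical example: delete Magento cache and session files that
# have not been modified in the last 24 hours (1440 minutes)
find /var/www/magento/var/cache   -type f -mmin +1440 -delete
find /var/www/magento/var/session -type f -mmin +1440 -delete

# Example crontab entry to run it every hour:
# 0 * * * * /usr/local/bin/clean-magento-cache.sh
```

The log rotation part can be handled with a standard logrotate rule, using directives such as size and maxage to cap how large and how old the logs are allowed to get.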
If you need help fixing this error in your Magento server, click here to talk to our Magento experts. We are online 24/7.
Resolving WordPress “Warning: session_start(): open(/tmp/mysite/tempfile) failed: No space left on device (28)”
WordPress is another popular web platform where we’ve seen this error.
When updating content or logging into the admin panel, a user sees the error:
Warning: session_start(): open(/tmp/mysite/tempfile) failed: No space left on device (28)
The most common reasons for this are:
- User’s space/inode quota exhaustion.
- Inode exhaustion in session directory.
- Space/Inode exhaustion in the server.
Fixing quota issues
If the site is hosted on a shared server or a VPS, there’ll be default account limits on storage space and inode counts.
In many cases, we’ve found large files (such as backup, videos, DB dumps, etc.) in the user’s home directory itself. But there are other locations that are not so obvious:
- Trash or Spam mail folders
- Catch-all mail accounts
- Web app log files (e.g. WordPress error log)
- Old log files (e.g. access_log.bak)
- Old uncompressed backups (e.g. from a previous site restore)
- Unused web applications
The first step in such situations is to look at these locations, and remove all files that are not necessary. That’ll resolve the error in the website, but that is not enough.
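To find those files quickly, we usually start with a size summary of the account. A minimal sketch, with /home/user standing in for the real account path:

```bash
# Size of each top-level item in the account, largest last
du -sh /home/user/* | sort -h

# Individual files larger than 500 MB anywhere under the account
find /home/user -type f -size +500M -exec ls -lh {} \;

# Mail directories (Trash, spam, catch-all) are hidden dotfile paths
# on many control panels and are easy to miss
du -sh /home/user/mail/ 2>/dev/null
```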
An important part of our support philosophy is to implement fixes such that an issue won’t come back.
So, our server admins then dig in and figure out why the space ran out in the first place.
Some common reasons and the permanent resolutions we implement are:
- Abandoned mailboxes – Unused mail accounts can easily become a spam dump, and eat up all the disk space. We set up disk usage alert mails to the site owner so that once any mail account exceeds the normal size, the account owner can investigate and delete unwanted mails or accounts.
- Unmaintained IMAP accounts – We’ve seen IMAP accounts with over 10 GB of mails in Trash. To avoid such issues, we set up fixed email quotas and alert mails so that users and site owners can take action before the quota runs out.
- Old applications and files – In some websites we’ve seen old applications that were once used or installed to test features. These apps not only waste disk space, but are also a security threat. In the sites we maintain for our customers, we prevent such issues through periodic website/server audits where we detect and remove unused files.
Fixing inode exhaustion in session directory
WordPress plugins store session files in the home directory by default.
But some servers might have this set to an external directory like /tmp or /var, depending on the application used to set up PHP and WordPress.
Since session files aren’t too large or numerous, it’ll usually be something else that’s taking up space in those directories.
For instance, in one server, the session directory was set to /var/apps/users/path/to/tmpsession. The space in the /var directory was exhausted by discarded backup dumps.
In that particular server, the solution was to delete old backups, but it can be something else in other servers.
So, we follow a top-down approach by looking at the inode count for each main folder, and then drill down to the directory that consumes the most inodes.
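The drill-down itself can be done by counting the entries (and hence inodes) under each subdirectory. A simple sketch, starting from /var as in the example above:

```bash
# Count files and directories (one inode each) under every
# subdirectory of /var, highest counts last
for dir in /var/*/; do
    printf '%8d %s\n' "$(find "$dir" -xdev 2>/dev/null | wc -l)" "$dir"
done | sort -n
```

Repeating this one level deeper inside the largest entry usually leads straight to the directory that is eating up the inodes.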
Fixing Space or Inode exhaustion in a server
This is a variation of the issue mentioned above.
Unoptimized backup processes or cache systems or log services can leave a lot of files lying around.
That’ll quickly consume the available space and inode quota.
So,
- We first sort the directories in a server based on their space and inode usage.
- Then we figure out which ones are not really needed, and delete the excess files.
- And last, we tweak the services that created those files to clear out the files after a fixed limit (one way to do this is sketched after this list).
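One way to do that last step on systemd-based servers is a tmpfiles.d rule that ages files out automatically; the path, ownership and retention below are placeholders:

```bash
# Hypothetical example: let systemd-tmpfiles-clean.timer remove files
# under /var/cache/myapp that are older than 7 days (needs root)
cat > /etc/tmpfiles.d/appcache.conf <<'EOF'
d /var/cache/myapp 0755 www-data www-data 7d
EOF
```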
If your WordPress site or web application is currently showing this error, we can help you fix it. Click here to talk to our experts. We are online 24/7.
How to fix rsync: mkstemp “/path/to/file” no space left on device (28)
Many backup tools use rsync to transfer backup files between servers.
We’ve seen many backup processes fail with the error rsync: mkstemp "/path/to/file" no space left on device (28).
Among the many reasons for this error, these are the most common:
- Double space issue – In its default setting, Rsync needs double the space to update a file. So, if rsync is trying to update a 20 GB file, it needs another 20 GB free space to hold the temporary file while the new version is being transferred from the source server. In some servers, this fails due to reaching space or quota limits.
- Quota exhaustion – In VPS servers there could be account level limits on number of inodes and space available. During transfer of large directories, this quota could get exhausted.
- Inode / Space limit on drive – Backup folders or database directories are sometimes mounted on a separate partition with limited space. So, when moving large archives, the space or inode limits can get exhausted.
To solve these issues, first we figure out exactly which limit is getting hit, and if there’s a way to implement an alternate solution.
For instance, if we find that rsync is trying to update a large file of many GBs in size (e.g. a database dump or backup archive), we use the --inplace option in the rsync command.
This will skip creating the temporary file, and will just update those parts of the file that were changed. In this way we avoid the need to increase the disk quota.
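For example, a failing transfer of a large archive could be retried along these lines (the host and paths are placeholders):

```bash
# --inplace updates the changed blocks of the existing destination file
# directly, so rsync does not need room for a full temporary copy.
# Note: the destination file is inconsistent while the transfer runs,
# so this suits backup copies rather than files in active use.
rsync -av --inplace user@source.example.com:/backups/db-dump.sql.gz /backups/
```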
Fixing space or inode overage
By far, the most common cause of inode or space exhaustion is usable space being taken up by files that the server owner doesn’t need.
It could be old uncompressed backups, cache files, spam files and more.
So, the first step in our troubleshooting is always to sort all folders based on their space and inode usage.
Once we know the top folders that contributed to the disk overage, we can then drill down to weed out those that are not needed.
The second, and more important step in this resolution is to find out which process caused the junk file build-up and then reconfigure the service to automatically clean out old files.
Are you facing an rsync error right now? We can help you fix it in a few minutes. Click here to talk to our experts. We are online 24/7.
Fixing MySQL Errcode: 28 – No space left on device
MySQL servers sometimes run into this error when executing complex queries. An example is:
ERROR 3 (HY000) at line 1: Error writing file '/tmp/MY4Ei1vB' (Errcode: 28 - No space left on device)
When executing complex queries that merge several tables together, MySQL builds temporary tables in the /tmp directory.
If the space or inodes are exhausted for any reason in these folders, MySQL will exit with this error.
The temporary directory can quickly fill up with cache files, session files or other temporary files that were never cleaned up.
To resolve and prevent this from happening, we set up /tmp cleaning scripts that prevent junk files from piling up, and that run every time the usage rises above 80%.
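To confirm which location MySQL is writing its temporary tables to, and whether it is out of space or inodes, a quick check (assuming shell access and MySQL credentials on the database server) could be:

```bash
# Where MySQL creates temporary tables (commonly /tmp)
mysql -N -e "SELECT @@tmpdir;"

# Space and inode usage on that location
df -h /tmp
df -i /tmp
```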
If you need help in resolving this error in your MySQL server, we can help you. Click here to talk to our experts. We are online 24/7.
How Bobcares helps prevent disk errors
All of what we have said here today is more “reactive” than “proactive”.
We said how we would RECOVER a server once a service fails with this error.
Here at Bobcares, our aim is to PREVENT such errors from happening in the first place.
How do we do it? By being proactive.
We maintain production servers of web hosts, app developers, SaaS providers and other online businesses.
An important part in this maintenance service is Periodic Server Audits.
During audits, expert server administrators manually check every part of the server and fix everything that can affect the uptime of services.
Some of the checks to prevent disk errors are:
- Inode and Space usage – We check the available disk space and inodes in a server, and compare them with previous audit results (a small example is shown after this list). If we see an abnormal increase in the usage, we investigate in detail and regulate the service that’s causing the file growth.
- Temp folder auto-clearance – We set up scripts that keep the temp folders clean, and during the audits make sure these scripts are working fine.
- Spam and catchall folders – We look for high volume mail directories, and delete accounts that are no longer maintained. This has many times helped us prevent quota overages.
- Unused accounts deletion – Old user accounts that were either canceled or migrated out sometimes remain on the server. We find and delete them.
- Old backups deletion – Sometimes people can leave uncompressed backups lying around. We detect such unused archives and delete them.
- Log file maintenance – We set up log rotation services that clear out old log files as they reach a certain size limit. During our periodic audits we make sure these services are working well.
- Old packages deletion – Linux can leave old application installation packages behind even after an application is set up. We clear out old unused installation archives.
- Old application deletion – We scan for application folders that are no longer accessed, and delete unused apps and databases.
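As a small example of the usage comparison in the first item above, each audit can append a dated snapshot to a log so that the next audit has something to compare against (the log path is a placeholder):

```bash
# Append a dated disk-space and inode snapshot to an audit log
{
  date
  df -h
  df -i
} >> /var/log/disk-audit.log
```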
Of course, there are many more ways in which the disk space could be used up.
So, we customize the service for each of our customers, and make sure no part is left unchecked.
Over time this has helped us achieve close to 100% uptime for our customer servers.
Conclusion
No space left on device (28) is a common error in Linux servers that is caused either by lack of disk space or by exhaustion of inodes. Today we’ve seen the various reasons for this error to occur while using Magento, WordPress, MySQL and Rsync.