Similar to a recent post, one of the servers I manage was showing the “Error connecting to database” message on the website it hosted. When I logged in, the server was slow to respond. After some initial troubleshooting I determined its hard drive was full. The following is my experience learning to troubleshoot a Debian server that had run out of disk space.
As always, I began by restarting Apache2 and MySQL. Luckily this immediately revealed the issue: upon restarting MySQL, I received a message stating it couldn’t start because the location where it was trying to write a log was out of space. Bingo! That was quicker than usual.
Disk space commands
So, this was the first time I needed to use disk usage commands. After a quick search I discovered the df command. This command reports free disk space per filesystem and is a great tool for seeing overall statistics at a glance. When I ran it, it revealed the server had 0 bytes free. Ouch! Luckily the server still rebooted when I tried that earlier! Refer to the generic screenshot to see what df displays (also, preferably don’t run it as root).
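If you want to try this yourself, the check is a one-liner (the filesystems and sizes shown on your machine will obviously differ):

```shell
# Show used and available space per filesystem, in human-readable units
df -h

# Check only the filesystem backing /var, where logs typically live
df -h /var
```

The -h flag is what turns raw block counts into readable sizes like 2.3G.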
Next I needed to find what was actually taking up all this space! That’s where I discovered the du (disk usage) command. It’s a versatile tool for finding what is consuming disk space on your Linux server, with several flags to make the output easier to read. I was fairly confident the offending file or folder was in the /var directory; this server was only hosting a few websites, so it was a good place to start. There are a couple of ways to do this, but I ran the command:
- sudo du -h --max-depth=1 /var
Sure enough, the /var/log directory was HUGE! I ran the command again, changing the directory at the end to /var/log. You can repeat these steps to work your way down into nested directories. After executing this command a few times I was led to a mysql.log file that was very large. Here is another example of what this command returns.
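The drill-down described above looks like this in practice. The /var/log/mysql path here is an assumption based on a common Debian layout; adjust it to wherever your logs actually live:

```shell
# Summarize each directory one level below /var
sudo du -h --max-depth=1 /var

# Repeat on the biggest offender
sudo du -h --max-depth=1 /var/log

# Or skip the repetition: list the largest items directly, biggest first
sudo du -sh /var/log/* | sort -rh | head -n 10
```

The sort -rh variant saves a few round trips when one file dwarfs everything else, as the mysql.log did here.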
What was flooding the file
I was initially scared that the website or webserver had been hacked. You can never be too confident with WordPress. I connected to the server via FTP and downloaded the file. It was much too large to edit on the fly. After scrolling to the bottom I began to see hundreds of lines flooded with alphanumeric characters. I scrolled to where these stopped and found a hint in that area: the Wordfence plugin had executed something shortly before the meltdown. I had also only recently installed this security plugin. I remembered that one of its features is to scan the webserver and WordPress install to check for breaches or vulnerabilities. It was a pretty safe assumption that it was scanning the MySQL database and something was causing it to “freak out” and produce this large log file.
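In hindsight, downloading a file that size over FTP wasn’t necessary; you can inspect the end of a huge log directly on the server. Again, the log path below is an assumption — substitute your own:

```shell
# Read the last few hundred lines without loading the whole file
tail -n 200 /var/log/mysql/mysql.log

# Or page through it starting from the end (G jumps to the bottom in less)
less +G /var/log/mysql/mysql.log
```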
Verifying the problem
The easiest way to verify the issue was to delete the mysql.log file; a new one is eventually created automatically. After deleting the file and restarting Apache2 and MySQL, I was able to log back into the WordPress dashboard. From there I manually ran a scan with Wordfence. The scan didn’t take long to complete, and by running our favorite du command I could see the file had ballooned again. Just to be safe I waited a few hours and verified the log file had barely changed, then ran another scan with the Wordfence plugin. Again, the file size skyrocketed. I disabled the plugin, submitted a support request with Wordfence, and wiped the nonexistent sweat from my brow.
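One caveat worth knowing for next time: deleting a log file a daemon still has open doesn’t free the space until that daemon restarts, which is part of why the MySQL restart was needed here. Truncating the file in place avoids that. A minimal sketch, again assuming the same hypothetical log path:

```shell
# Truncate instead of delete: the daemon keeps its open file handle,
# so no restart is needed and the space is reclaimed immediately
sudo truncate -s 0 /var/log/mysql/mysql.log

# Re-check the file every 5 seconds while a scan runs, to watch it regrow
watch -n 5 ls -lh /var/log/mysql/mysql.log
```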
I was finished! All I had to do was wait for some troubleshooting steps from Wordfence support. In the meantime I would keep a closer eye on this server and verify that plugins and WordPress were up to date. As usual, I finished this little conundrum with a sense of accomplishment. The Debian server was no longer out of disk space!