Have you seen a sudden increase in disk usage in your Zimbra environment? Patching or restarting Zimbra may very well be the culprit. Upgrading an instance of Zimbra is never boring. It can be quite irritating, but it’s rarely boring.
After applying an official Zimbra patch last night, we “lost” over three terabytes of available disk space. Now, most system administrators who have ever had to deal with Java applications know that they’re rarely “lean” on resources, whatever the resources may be, but losing three TB after a minor patch upgrade is rich, even by Java standards.
Logging and statistics are something Zimbra does a lot of, more than a lot. In fact, the directory /opt/zimbra/zmstat can easily grow beyond control for no good reason at all. Looking at it, it turned out some zmstat process had gone wild and dumped three TB of “stuff” into the files vm.csv, io-x.csv, and io.csv. These files do serve a purpose (depending on who you ask), but regardless of their level of usefulness, I don’t want three TB of data in them.
Unless you really, really need the data, I would recommend truncating these files in place.
As root, or as the zimbra user, do:
echo > vm.csv
echo > io-x.csv
echo > io.csv
NOTE: Make sure you understand the implications of truncating (resetting) these files. They are of little use to me, but your mileage may vary.
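Truncating in place keeps the file itself (and any open file handle a running zmstat process may hold on it) while discarding its contents. A minimal sketch of the effect, run in a throwaway scratch directory so nothing real is touched:

```shell
# Safe demonstration in a scratch directory (NOT the real /opt/zimbra/zmstat):
cd "$(mktemp -d)"

# Simulate a bloated stats file (1 MiB of zeros)
head -c 1048576 /dev/zero > vm.csv

# Truncate in place: the file stays, its contents are discarded.
# `: > file` is a pure-shell truncate; `echo > file` leaves one newline behind.
: > vm.csv
```

Either spelling works; the point is that you are emptying the file rather than deleting and recreating it.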
You may also want to prune old zmstat data as outlined in the Zimbra documentation.
To remove stats older than, say, 365 days, as the zimbra user, do:
/opt/zimbra/libexec/zmstat-cleanup --keep 365
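Before deleting anything, you can preview which archives fall outside the retention window. The sketch below assumes (I have not verified this against zmstat-cleanup’s internals) that daily stats end up in date-named subdirectories under /opt/zimbra/zmstat; it runs against a throwaway directory rather than the real one:

```shell
# Scratch demonstration; assumption: zmstat archives each day's CSVs into
# date-named subdirectories such as 2019-01-01/ under /opt/zimbra/zmstat.
ZMSTAT="$(mktemp -d)"
mkdir "$ZMSTAT/2019-01-01" "$ZMSTAT/2024-06-01"
touch -d '400 days ago' "$ZMSTAT/2019-01-01"
touch -d '10 days ago'  "$ZMSTAT/2024-06-01"

# List date-named directories older than 365 days: candidates for pruning.
find "$ZMSTAT" -maxdepth 1 -type d -name '20??-??-??' -mtime +365
```

Point the find at /opt/zimbra/zmstat (without any delete action) to see what a 365-day retention would actually remove.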
zmstat-cleanup may complain about zmstat_max_retention not being set, in which case you can set it like this, as the zimbra user:
zmlocalconfig -e zmstat_max_retention=365
and then simply run /opt/zimbra/libexec/zmstat-cleanup without any arguments.
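Once zmstat_max_retention is set, the cleanup can also be run on a schedule. A sketch of a weekly crontab entry for the zimbra user follows; note (an assumption worth verifying on your install) that Zimbra regenerates its crontab during upgrades, so a manually added entry may need to be re-added afterwards:

```shell
# Weekly cleanup, Sundays at 03:00, relying on zmstat_max_retention.
# m  h  dom mon dow  command
  0  3  *   *   0    /opt/zimbra/libexec/zmstat-cleanup
```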
1 thought on “Zimbra running out of disk space after update and zmstat”
Hi, I experienced this strange behaviour too: vm.csv and io-x.csv grow constantly and slow down the server due to the gzipping of those files.
I didn’t have the time to investigate, nor had I found this post.
In the following days I’ll try your workaround.