[collectd] IO usage
Brandon Arp
brandonarp at gmail.com
Mon Dec 24 21:28:09 CET 2018
I worked at a fairly large company (as far as server counts go) writing data to
rrd files. Long story short, rrd probably isn't the system you want to be
using for storing this data. As you've seen, it has very high IO
requirements. Moving the data from rrd to other systems (happy to
elaborate more directly, off the mailing list) saved us a ton of
resources, let us grow the monitoring cluster significantly, and
drastically increased the fidelity of the data. I would recommend looking
into some of the other time series databases out there; many of them will
handle 400+ clients very easily.
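
For what it's worth, here's a minimal sketch of what the collectd side of
such a migration can look like, assuming a TSDB that speaks the Graphite
line protocol (the host name below is a placeholder, swap in whatever
backend you pick). You drop the rrdtool plugin on the server and ship the
metrics over the network instead:

  LoadPlugin write_graphite
  <Plugin write_graphite>
    <Node "tsdb">
      Host "tsdb.example.com"   # placeholder; point this at your TSDB
      Port "2003"               # standard Graphite plaintext port
      Protocol "tcp"
      Prefix "collectd."
    </Node>
  </Plugin>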
Brandon
On Mon, Dec 24, 2018 at 11:27 AM George <izghitu at gmail.com> wrote:
> Hi,
>
> Thanks a lot for your suggestions. It decreased the load a lot. I also
> followed some suggestions from here:
> https://oss.oetiker.ch/rrdtool-trac/wiki/TuningRRD
> https://collectd.org/wiki/index.php/Inside_the_RRDtool_plugin
>
> Merry Christmas to you too!
>
> On Mon, Dec 24, 2018 at 18:14, Tasslehoff Burrfoot <
> tasslehoff at burrfoot.it> wrote:
>
>> My suggestion is to check very carefully which plugins are really
>> necessary on the clients, to reduce the amount of data. For example, the
>> plugin that checks storage devices will generate a lot of data if you use
>> LVM (you'll get the physical devices plus the device-mapper devices).
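>>
>> Something like this in collectd.conf is one way to do that filtering (a
>> sketch; the regex is an assumption about your device naming and may need
>> adjusting):
>>
>>   LoadPlugin disk
>>   <Plugin disk>
>>     # Drop the dm-* device-mapper entries, keep the physical disks
>>     Disk "/^dm-/"
>>     IgnoreSelected true
>>   </Plugin>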
>> Once you've done that cleanup, you can drastically reduce the I/O load
>> using the CacheTimeout and CacheFlush options of the rrdtool plugin on
>> the host which receives all the data; this makes your collectd server
>> flush the rrd files to storage less frequently and reduces the load on
>> the storage devices.
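>>
>> For example, a sketch (the numbers are just a starting point to tune
>> against your IO budget, not recommendations):
>>
>>   LoadPlugin rrdtool
>>   <Plugin rrdtool>
>>     DataDir "/var/lib/collectd/rrd"
>>     # Keep values in memory and write them out in batches instead of
>>     # touching every rrd file on every interval
>>     CacheTimeout 600
>>     CacheFlush 7200
>>   </Plugin>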
>> I hope this will help.
>>
>> Merry Christmas :)
>>
>> Tasslehoff Burrfoot