[collectd] collectd & rrdcached scalability issue

Stefan Wiederoder stefanwiederoder at googlemail.com
Tue Oct 2 10:29:01 CEST 2012


Hi Bostjan,

I'm currently storing my RRDs on a RAM disk, so I don't think it's an
IOPS problem.

These are my rrdcached options:

OPTIONS="-m 775 -s rrdcached -l unix:/tmp/rrdcached.sock \
         -l 127.0.0.1:12345 -p /var/run/rrdcached/rrdcached.pid \
         -b /opt/collectd_data -F -j /opt/collectd_journal -t 16"
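
For reference, the same options broken out per flag (meanings as per
rrdcached(1)):

  -m 775                                # unix socket file permissions
  -s rrdcached                          # group ownership of the unix socket
  -l unix:/tmp/rrdcached.sock           # listen on a unix socket
  -l 127.0.0.1:12345                    # also listen on localhost TCP
  -p /var/run/rrdcached/rrdcached.pid   # PID file
  -b /opt/collectd_data                 # base directory for the RRD files
  -F                                    # flush all pending updates on shutdown
  -j /opt/collectd_journal              # journal directory
  -t 16                                 # number of write threads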

bye,
Stefan
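
PS: regarding your suggestion below of flushing via the rrdcached socket,
a minimal sketch of what I could wire into the graphing frontend, assuming
the unix socket from the options above and a made-up RRD path, would be:

  # ask rrdcached to flush a single RRD just before graphing it
  # (the path is only an example)
  rrdtool flushcached --daemon unix:/tmp/rrdcached.sock \
      /opt/collectd_data/somehost/load/load.rrd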


2012/9/8 Bostjan Skufca <bostjan at a2o.si>

> Hi Stefan,
>
> I know this may not be the solution you are looking for, but:
> How about using the rrdcached daemon's socket and instructing it to flush
> the data of immediate interest (the RRDs you are about to view)?
> That works well for me on a ganglia installation which uses rrdcached to
> cache up to three hours of data...
>
> What are your rrdcached settings? How many IOPS can your storage subsystem
> handle? Would an SSD be a quick fix? Do you have a graph of IOPS for your
> collectd host?
>
> b.
>
>
> On 7 September 2012 09:16, Stefan Wiederoder <
> stefanwiederoder at googlemail.com> wrote:
>
>> Hello list,
>>
>> my current collectd (v4.10) installation uses rrdcached (rrdtool 1.4.4),
>> which has worked fine so far. But with the number of hosts growing beyond
>> 1000, it seems to have a negative impact on my central collectd server.
>>
>> I've got 37 GB of data (residing on a RAM disk), plus 12 GB of journal
>> data. Each host has a minimum of nine plugins activated; some special
>> boxes have at most two or three more.
>>
>> The problem is that the collectd data lags further and further "behind"
>> the longer collectd/rrdcached is running. The lag accumulates to up to
>> three hours a day until the daily restart kicks in; for example, when I
>> look at graphs at 15:00 I only see data up to 12:00.
>>
>> This was not a problem with fewer hosts (< 900), but it has been getting
>> worse as more and more hosts are added.
>>
>> I've only specified the number of write_threads so far (=16), but no
>> other parameters.
>>
>> do you have any suggestion for me?
>>
>> thanks,
>> Stefan
>>
>>
>>
>