[collectd] Traffic accounting and Collectd

Lindsay Holmwood lindsay at holmwood.id.au
Fri Apr 10 10:32:38 CEST 2015

Hi Ben,

On Fri, Apr 10, 2015 at 5:34 PM Stuart Cracraft <smcracraft at me.com> wrote:

> Hi Ben,
> We just use the exec plugin of collectd to throw all the stuff into
> collectd and from there into graphite and from there to
> Icinga/Nagios for monitoring/alarms.
> It is super, super simple. Most people do that...
> We are transitioning into influxdb/grafana instead but for the past
> year or two the above are sufficient.
> For the exec plugin, we just create one erb script, in Chef, which
> sends out to the necessary systems, collects the items via the
> templates/default/foobar.sh.erb, and which outputs them to
> the stdout for the collectd parent process and from there up
> to the collectd+graphite graphs/servers.
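
For reference, an exec-plugin script like that boils down to printing PUTVAL
lines on stdout. A minimal sketch (the "exec-traffic" plugin instance and the
sample counter value are made up for illustration; collectd exports
COLLECTD_HOSTNAME and COLLECTD_INTERVAL to exec scripts):

```shell
#!/bin/sh
# Minimal sketch of a collectd exec-plugin script.
# collectd sets COLLECTD_HOSTNAME and COLLECTD_INTERVAL in the environment.
HOST="${COLLECTD_HOSTNAME:-$(hostname)}"
INTERVAL="${COLLECTD_INTERVAL:-10}"

# Print one PUTVAL line: identifier, interval option, then N:<value>
# ("N" means "now"). $1 = type instance, $2 = current counter value.
putval_line() {
  printf 'PUTVAL "%s/exec-traffic/derive-%s" interval=%s N:%s\n' \
    "$HOST" "$1" "$INTERVAL" "$2"
}

# In a real script you would loop once per interval, reading e.g.
# /sys/class/net/eth0/statistics/rx_bytes; here a sample value is used.
putval_line rx_bytes 123456
```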

The other approach you can look at is exposing the data as JSON, and using
collectd's curl_json[0] plugin to consume that data.

I'm currently working on a project with a server that's doing data
distribution, which exposes traffic accounting metrics via an HTTP endpoint
as JSON. We have collectd consuming this JSON, which works quite nicely as
nodes come and go thanks to the wildcard matchers of the Key argument[1].

You don't need a server constantly running to serve this JSON, either: you
could write the JSON out to a file and serve it up with Apache, Nginx, etc.
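
To sketch what that config can look like (the URL, instance name, and JSON
layout here are invented for illustration):

```
# Hypothetical endpoint returning, e.g.:
#   {"interfaces": {"eth0": {"rx_bytes": 123456, "tx_bytes": 654321}}}
LoadPlugin curl_json
<Plugin curl_json>
  <URL "http://localhost:8080/stats.json">
    Instance "traffic"
    # The * wildcard matches every interface, so interfaces can
    # come and go without config changes.
    <Key "interfaces/*/rx_bytes">
      Type "derive"
    </Key>
    <Key "interfaces/*/tx_bytes">
      Type "derive"
    </Key>
  </URL>
</Plugin>
```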

Regardless of whether you use exec or curl_json, consider using counters to
keep track of the traffic, and the derive[2] type in collectd to automatically
calculate the rate of change. This keeps your collection logic much simpler.
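
As a concrete illustration of what the derive type computes for you, assuming
two counter samples taken ten seconds apart:

```shell
# What collectd's DERIVE data-source type computes internally:
#   rate = (newer_value - older_value) / elapsed_seconds
v1=1000        # counter at t=0 (bytes)
v2=61000       # counter at t=10s
rate=$(( (v2 - v1) / 10 ))
echo "${rate} bytes/s"   # prints "6000 bytes/s"
```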


[0] https://collectd.org/wiki/index.php/Plugin:cURL-JSON
[2] https://collectd.org/wiki/index.php/Data_source
