[collectd] collectd storing data in a db without data condensing (like rrd)?

Dave Cottlehuber dch at skunkwerks.at
Wed Dec 28 15:41:03 CET 2016


On Tue, 20 Dec 2016, at 19:02, Andreas Schuldei wrote:
> What are the recommended ways of storing data in a database, where the
> data
> is not condensed, like rrd does it?
> 
> I am recording mostly temperature data from ca. 60 sensors, over a time
> frame of several years/decades, and I need to be able to compare between
> years. So it won't be a high volume of data coming in per hour, but it
> will be some data accumulating over the years.
> 
> This system will run on a resource-constrained server, so something with
> a modest memory footprint would be appreciated. (I expect to upgrade the
> system as the hardware dies, but I would prefer not to do migrations
> between databases every time I switch hardware.)
> 
> what database do you recommend?
> What would be a frontend (for plotting the data) to go with that?

Interesting questions.

I have been using graphite, mainly because it's old and stable. There are
many newer, shinier alternatives, such as influxdb, but the rate of change
in those projects is high and I have neither advanced needs nor high
performance requirements.

I use it without the aggregation functions, backed by FreeBSD with zfs.
The compression is excellent, and as a result I store 7 years of flat
metrics. It depends on what you mean by memory constrained here, but you
could reasonably run graphite on a 4 GB low-end server, and possibly
lower with some experimentation and tuning, maybe as low as 2 GB RAM.

I use the graphite-api layer
http://graphite-api.readthedocs.io/en/latest/ with grafana to provide
graphs/plotting.
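
For completeness, metrics typically reach graphite from collectd via the
write_graphite plugin; a minimal sketch (the hostname, node name and
prefix are placeholders you would adjust):

```
<Plugin write_graphite>
  <Node "graphing">
    # carbon's plaintext listener, by default on port 2003
    Host "localhost"
    Port "2003"
    Protocol "tcp"
    # prepended to every metric path
    Prefix "collectd."
    # convert counters to rates before sending
    StoreRates true
  </Node>
</Plugin>
```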

There are a number of more efficient carbon storage engines for
graphite that can replace the native format, none of which I've needed
to use:

- https://github.com/lomik/go-carbon
- https://github.com/tureus/graphite-rust

and a few more which I can't seem to find at the moment.

I write metrics out from collectd via write_riemann to riemann.io
(clojure-based), which triggers alerts if needed and writes the rest
out to graphite via the carbon daemon. This step wouldn't be needed
in your situation, but it does allow some nice functionality. More
details available if needed.
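
In case it helps, the riemann step above is just another write plugin on
the collectd side; a minimal sketch (host, node name and tag are
placeholders):

```
<Plugin write_riemann>
  <Node "monitoring">
    Host "riemann.example.org"
    Port "5555"
    Protocol TCP
  </Node>
  # attached to every event, handy for filtering in riemann
  Tag "collectd"
</Plugin>
```

The alerting rules themselves live in riemann's clojure config, not on
the collectd side.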

Given collectd's write_http plugin you can send metrics to pretty much
anything you want, although querying and plotting the stored data is
then up to whatever backend you choose.
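
Whichever transport you pick, the carbon daemon mentioned above
ultimately accepts one metric per line in a simple plaintext protocol;
a minimal sketch in Python (the metric path is a made-up example):

```python
import time

def carbon_line(path, value, timestamp=None):
    # carbon plaintext protocol: "<metric.path> <value> <unix-timestamp>\n"
    ts = int(time.time()) if timestamp is None else int(timestamp)
    return f"{path} {value} {ts}\n"

# such lines would be sent over a plain TCP socket to carbon, port 2003
print(carbon_line("collectd.myhost.sensors.temp1", 21.5, 1482936063))
```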

A+
Dave

More information about the collectd mailing list