[collectd] processes plugin process by name - percent of all cpus feature
Marc Fournier
marc.fournier at camptocamp.com
Mon Jun 17 11:05:51 CEST 2013
Hello,
Excerpts from William Salt's message of 2013-06-17 02:00:35 +0200:
> I'm using the processes plugin to find specific processes via regex and
> monitoring their CPU usage. This is the total time spent in jiffies
> across all cores.
>
> In Graphite, I am dividing it by the number of CPU cores the machine has
> and then turning that into a percentage. This gives me the percentage of
> total CPU usage across all cores for that process.
> However, I cannot ascertain in Graphite the exact number of CPUs a node
> has; it varies greatly, and it's neither scalable to hard-code static
> values in graph query strings like this, nor efficient to work this out
> when rendering a graph.
Still not very nice or efficient, but you can use a variant of this[1] to
count the number of CPUs using Graphite functions. For instance, this would
return the number of CPUs as seen by collectd's cpu plugin:
sumSeries(offset(scale(collectd.your_hostname.cpu-*.cpu-idle, 0), 1))
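Untested, but combining that with your per-process series would look roughly
like this (the processes-yourproc.ps_cputime-user path is only a placeholder
for whatever your processes plugin actually writes, and converting the raw
cputime counter into a rate is left out here):

scale(divideSeries(collectd.your_hostname.processes-yourproc.ps_cputime-user,
      sumSeries(offset(scale(collectd.your_hostname.cpu-*.cpu-idle, 0), 1))),
  100)

divideSeries() and scale() are plain Graphite render functions, so the whole
calculation stays on the Graphite side.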
Another thing that comes to mind: since collectd 5.3 there's an
"aggregation" plugin, which you can use to sum up the metrics of every CPU
on each host. You could then use these aggregated metrics for your
percentage calculation. There's a similar feature in Graphite if you run
carbon-aggregator in front of carbon-cache.
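Just as a starting point, a minimal setup for that would look something like
the example in collectd.conf(5) (untested, adjust the GroupBy/Calculate*
options to your needs):

LoadPlugin aggregation
<Plugin aggregation>
  <Aggregation>
    # one aggregate per host and per type instance (user, system, idle, ...)
    Plugin "cpu"
    Type "cpu"
    GroupBy "Host"
    GroupBy "TypeInstance"
    # sum over all cores, plus the per-core average
    CalculateSum true
    CalculateAverage true
  </Aggregation>
</Plugin>

CalculateAverage in particular already gives you an "average across all
cores" view of the cpu plugin's own metrics.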
> I wondered if someone could patch the plugin to allow a flag to produce the
> average CPU use across all cores? Grabbing the number of CPUs
> from /proc/cpuinfo and dividing the metric by that? Bonus if it had a flag
> to output a percentage!
Vedran Bartonicek has two pull requests doing this sort of thing for the load
and df plugins. Have a look at PRs #343 and #344 on
github.com/collectd/collectd
Marc
[1] http://obfuscurity.com/2013/05/Graphite-Tip-Counting-Number-of-Metrics-Reported