[collectd] zeromq architecture inquiry

Allan Feid allanfeid at gmail.com
Mon Aug 29 18:13:06 CEST 2011

So I talked to some people in the #zeromq channel on freenode, and have been
reading through the zeromq guide. It seems that at scale, with tons of nodes,
you would need pretty fast disk I/O to handle all the writes to RRD. Using
ZeroMQ, you could dispatch data of certain kinds to different collectd worker
machines, for example CPU on one worker and memory on another, which would
allow you to distribute your disk I/O amongst smaller and cheaper nodes,
possibly VMs. That's a bit complex for right now, but an idea worth thinking
about in the future. It would involve a zeromq process that routes requests
to the various worker machines.
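The per-metric routing described above could lean on how ZeroMQ SUB sockets filter: a ZMQ_SUBSCRIBE option does a prefix match on the leading bytes of each message, so a worker subscribed to "cpu/" would only ever see CPU data. A minimal sketch of that idea in plain Python — the worker names and topic strings here are made up for illustration, not collectd's actual wire format:

```python
# Model of ZeroMQ-style prefix subscriptions splitting metric traffic
# across workers: each worker's subscription list acts like a set of
# ZMQ_SUBSCRIBE prefixes, and a message goes to every worker whose
# prefix matches the start of the message.

SUBSCRIPTIONS = {
    "cpu-worker": ["cpu/"],        # hypothetical worker handling CPU data
    "memory-worker": ["memory/"],  # hypothetical worker handling memory data
}

def route(message):
    """Return the workers whose subscription prefixes match the message."""
    return [worker
            for worker, prefixes in SUBSCRIPTIONS.items()
            for prefix in prefixes
            if message.startswith(prefix)]

print(route("cpu/host01/idle 97.3"))     # -> ['cpu-worker']
print(route("memory/host01/used 2048"))  # -> ['memory-worker']
```

With real sockets the broker process would not need this lookup at all: each worker's SUB socket does the prefix filtering itself.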

I think for my initial attempt at this, I will have one collectd instance
set up as a subscriber, with each node set to publish to the subscriber's
endpoint. This should allow nodes to come and go as they please. Hopefully
this goes well; the flexibility of zeromq makes it a great way to distribute
work.
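A minimal sketch of that topology using the pyzmq bindings — the endpoint and metric string are placeholders; in a real deployment the SUB side would live in the central collectd instance and each node would run the PUB side as a separate process:

```python
import time
import zmq  # pyzmq bindings, assumed installed

ctx = zmq.Context()

# Central subscriber binds to a well-known endpoint and accepts everything.
sub = ctx.socket(zmq.SUB)
sub.bind("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")

# A node-side publisher connects; because PUBs connect and the SUB binds,
# nodes can come and go without any reconfiguration on the central side.
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:5556")

time.sleep(0.5)  # let the connection settle (PUB/SUB "slow joiner" issue)
pub.send_string("cpu/host01 0.42")  # placeholder metric, not collectd's format

msg = sub.recv_string()
print(msg)  # -> cpu/host01 0.42
```

Note that PUB sockets silently drop messages sent before a subscriber is connected, hence the short sleep; long-running daemons would not need it.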


On Mon, Aug 29, 2011 at 11:58 AM, Florian Forster <octo at collectd.org> wrote:

> Hi Allan,
> On Sun, Aug 28, 2011 at 08:11:14PM -0400, Allan Feid wrote:
> > I was wondering if anyone has any experience doing this, or can
> > provide some guidance on what zeromq architecture would scale best.
> I was wondering the same thing. For a normal (intra-)DC setup, I think
> the best option would be a central subscriber that binds to a local
> address and many publishers which connect to that address. For an
> inter-DC setup, I'd probably go with aggregators on a DC level and
> repeat the same pattern, i.e. publish to a remote address and, on the
> global level, subscribe to a local address.
> Disclaimer: I hardly have any ZeroMQ experience either, so this is
> basically how I think stuff should work, but I might be totally wrong
> about it …
> Best regards,
> —octo
> --
> Florian octo Forster
> Hacker in training
> GnuPG: 0x0C705A15
> http://octo.it/
