I've not yet updated to 0.13; we're running 0.11. But on Python Graphite we set this to 1000 per minute, so I'd use a similar value scaled down to seconds here (roughly 16-17 per second)...
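For concreteness, this is roughly what I mean. The MAX_CREATES_PER_MINUTE line is what we run on the Python side today; the per-second option name below is my assumption about what the go-carbon `[whisper]` setting looks like in 0.13, so double-check it against the 0.13 config reference:

```
# carbon.conf (Python carbon-cache) -- what we run today
[cache]
MAX_CREATES_PER_MINUTE = 1000

# Scaled down to seconds: 1000 / 60 ≈ 16
# (option name below is an assumption -- verify against the go-carbon 0.13 docs)
[whisper]
max-creates-per-second = 16
```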
With the Python Graphite components, we had arrived at the following architecture to meet our performance and HA requirements...
As the number of metrics we collected grew, the bottleneck we kept hitting was that a carbon-cache process would peg 100% of a CPU core and we would end up dropping metrics. We could keep adding more cache processes behind the relay (sketched below), but that couldn't go on forever on a single node, and we would eventually have to start sharding our metric data across multiple cache nodes, which I wanted to avoid.
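For anyone following along, the single-node part of that setup is the standard multi-instance carbon layout: one carbon-relay doing consistent hashing across several local carbon-cache instances, each on its own ports. The ports and instance names here are illustrative, not our exact config:

```
# carbon.conf -- one relay feeding N carbon-cache instances on the same host
[relay]
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = consistent-hashing
REPLICATION_FACTOR = 1
# each destination is a local carbon-cache instance (host:pickle_port:instance)
DESTINATIONS = 127.0.0.1:2004:a, 127.0.0.1:2104:b, 127.0.0.1:2204:c

[cache:a]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_PORT = 7002

[cache:b]
LINE_RECEIVER_PORT = 2103
PICKLE_RECEIVER_PORT = 2104
CACHE_QUERY_PORT = 7102

[cache:c]
LINE_RECEIVER_PORT = 2203
PICKLE_RECEIVER_PORT = 2204
CACHE_QUERY_PORT = 7202
```

Each instance is started with its name (`carbon-cache.py --instance=a start`, etc.), and graphite-web's CARBONLINK_HOSTS needs to list all of the cache query ports so reads can find in-memory points.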