This is the code for building up a local UVE cache in the AlarmGenerator.
We will use this cache to calculate UVE changes, so that a post-agg UVE
Stream can be generated in Kafka (this is not done yet).

This commit also has optimizations for UVE reads in the AlarmGenerator:
- AG uses get_uve instead of multi_uve_get
- AG uses cfilt to get only the type which has changed (as per the
  pre-agg UVE Stream)
- UVEServer uses redis pipelines to reduce round-trips to redis
- UVEServer no longer creates a new redis client for every UVE read
  operation

If a failure takes place while reading a UVE, the OpServer can ignore
the failure and rely on eventual consistency for clients. But AlarmGen
must remember the UVE keys so that they are processed again.

A state-compression mechanism is provided for UVE processing: we
accumulate changed UVEs in a working set (not a queue), and process the
working set periodically.

This was tested in a single-box setup with 4000 UVEs:

  contrail-stats --table AlarmgenUpdate.keys --where "name=*" \
    --select keys.key "SUM(keys.count)" --last 1m | wc

The system was processing 25000 UVE updates per minute.

  contrail-stats --table SandeshMessageStat.msg_info --where "name=*" \
    --select msg_info.level "SUM(msg_info.messages)" \
    --sort "SUM(msg_info.messages)" --last 1m

  contrail-stats --table AlarmgenUpdate.keys --where "name=*" \
    --select "SUM(keys.count)" --last 1m

The Kafka consumers in AlarmGen were able to process UVEs as fast as the
collector generated updates. Due to state compression, UVE processing
was able to keep up as well.

  /usr/share/kafka# bin/kafka-consumer-offset-checker.sh \
    --zookeeper 127.0.0.1:2181 --topic uve-0 --group workers

Change-Id: I82335d5de2fc040ffc2ba0b22037fe5879c5ea99
Partial-Bug: 1428271
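The state-compression idea above (a working set rather than a queue of
changed UVEs) can be sketched roughly as follows. This is a minimal
illustration with hypothetical names (UVEWorkingSet, add, drain), not
the actual AlarmGen implementation:

```python
# Sketch of state compression for UVE processing (hypothetical names,
# not the real AlarmGen code). Changed UVE keys are accumulated in a
# set, so repeated updates to the same UVE between processing runs
# collapse into a single pending entry.

class UVEWorkingSet(object):
    def __init__(self):
        self._changed = set()

    def add(self, uve_key):
        # Adding a key that is already pending is a no-op: this is the
        # "compression" -- we remember *that* a UVE changed, not each
        # individual update.
        self._changed.add(uve_key)

    def drain(self):
        # Periodically take the whole working set for processing and
        # start a fresh one. If processing a key fails, the caller can
        # re-add it so it is retried on the next cycle (this is how
        # AlarmGen can remember failed UVE keys, per the text above).
        work, self._changed = self._changed, set()
        return work


ws = UVEWorkingSet()
# Three updates, but only two distinct UVEs remain pending
ws.add("ObjectVNTable:vn1")
ws.add("ObjectVNTable:vn1")
ws.add("ObjectVNTable:vn2")
pending = ws.drain()
assert pending == {"ObjectVNTable:vn1", "ObjectVNTable:vn2"}
assert ws.drain() == set()  # working set starts fresh after a drain
```

Because processing cost is bounded by the number of distinct changed
UVEs rather than the raw update rate, the periodic worker can keep up
even when the collector emits many updates per UVE.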