Executing the pop_closed method successfully removes the Connection from the PoolNode, but it cannot remove it from the Lru.
Steps to reproduce
The existence of this memory leak can be demonstrated with the following test:
```rust
// pingora-pool/src/connection.rs
#[test]
fn test_pop_multi_threads() {
    use env_logger;
    let _ = env_logger::builder()
        .is_test(true)
        .filter_level(log::LevelFilter::Debug)
        .try_init();
    let meta = ConnectionMeta::new(101, 1);
    let value = "v1".to_string();
    let cp: Arc<ConnectionPool<String>> = Arc::new(ConnectionPool::new(3));
    // put meta in main thread
    cp.put(&meta, value);
    {
        let cp = cp.clone();
        let meta = meta.clone();
        std::thread::spawn(move || {
            // pop meta in child thread
            cp.pop_closed(&meta);
        })
        .join()
        .unwrap();
    }
}

// Add a log print to the pop_closed method:
fn pop_closed(&self, meta: &ConnectionMeta) {
    // NOTE: which of these should be done first?
    self.pop_evicted(meta);
    let r = self.lru.pop(&meta.id);
    debug!("pop from lru res: {}", r.is_some()); // the added log print
}
```
Console output:

```
running 1 test
[2024-04-20T02:51:48Z DEBUG pingora_pool::connection] evict fd: 1 from key 101
[2024-04-20T02:51:48Z DEBUG pingora_pool::connection] pop from lru res: false
test connection::test_pop_multi_threads ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 9 filtered out; finished in 0.00s
```
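The `pop from lru res: false` result can be reproduced outside of pingora. The sketch below is my own minimal model (using std's `thread_local!` rather than the pool's actual types, which I assume behave analogously to the `ThreadLocal<RefCell<T>>` inside the Lru): a pop on a child thread cannot see an entry inserted on the main thread, so the entry is never removed there.

```rust
use std::cell::RefCell;
use std::collections::HashMap;

thread_local! {
    // Each thread gets its own independent map, analogous to ThreadLocal<RefCell<T>>.
    static LRU: RefCell<HashMap<i32, String>> = RefCell::new(HashMap::new());
}

fn put(id: i32, v: &str) {
    LRU.with(|m| m.borrow_mut().insert(id, v.to_string()));
}

fn pop(id: i32) -> Option<String> {
    LRU.with(|m| m.borrow_mut().remove(&id))
}

fn main() {
    put(101, "v1");
    // The child thread looks at its own (empty) map, so the entry
    // inserted on the main thread is not found and cannot be removed.
    let r = std::thread::spawn(move || pop(101)).join().unwrap();
    assert!(r.is_none()); // mirrors "pop from lru res: false"
    assert_eq!(pop(101), Some("v1".to_string())); // still present on the main thread
}
```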
Currently, the maximum amount of leaked memory depends on the size setting of the ConnectionPool.
If I haven't missed any other important information, I guess the RwLock<ThreadLocal<RefCell<T>>> is mainly there to reduce lock contention. Another solution would be a sharded lock (every connection has its own ID, so this should not be difficult to achieve).
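As a sketch of that sharded-lock idea (hypothetical types, not pingora's actual API): entries are keyed by connection ID and the ID deterministically picks a shard, so any thread can remove any entry while contention is limited to one shard's lock.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

const SHARDS: usize = 8;

// Hypothetical sharded store: the connection id selects the shard,
// so threads touching different shards never contend on the same lock,
// and (unlike a thread-local store) any thread can remove any id.
struct ShardedMap<V> {
    shards: Vec<Mutex<HashMap<i32, V>>>,
}

impl<V> ShardedMap<V> {
    fn new() -> Self {
        Self {
            shards: (0..SHARDS).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    fn shard(&self, id: i32) -> &Mutex<HashMap<i32, V>> {
        &self.shards[(id as usize) % SHARDS]
    }

    fn insert(&self, id: i32, v: V) {
        self.shard(id).lock().unwrap().insert(id, v);
    }

    fn remove(&self, id: i32) -> Option<V> {
        self.shard(id).lock().unwrap().remove(&id)
    }
}

fn main() {
    let map = Arc::new(ShardedMap::new());
    map.insert(101, "v1".to_string());
    // A child thread can still remove the entry inserted by the main thread.
    let m = map.clone();
    let removed = std::thread::spawn(move || m.remove(101)).join().unwrap();
    assert_eq!(removed, Some("v1".to_string()));
}
```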
Your observation is correct. This is by design: we trade memory for execution efficiency. I wouldn't call it a memory leak, because the memory is still tracked in the LRU and will be reclaimed when the LRU eventually evicts it; the code just doesn't free it right away.
At the time we implemented this logic, we did not know of a way to do caching without lock contention. Now we have things like TinyUFO, so we will optimize this in the future.
Describe the bug

The Lru of the connection pool is implemented as RwLock<ThreadLocal<RefCell<T>>>. Please see pingora/pingora-pool/src/lru.rs, lines 42 to 50 in 845c30d. Because of the ThreadLocal, each thread can only see the Lru entries it inserted itself. A pooled connection, however, may be removed from its PoolNode by the idle_poll async task (such as the release_http_session method in v2.rs), which can run on a different thread than the one that put the connection into the pool. Please see pingora/pingora-core/src/connectors/mod.rs, lines 240 to 243, and pingora/pingora-pool/src/connection.rs, lines 218 to 222, both in 845c30d. In that case, executing the pop_closed method successfully removes the Connection from the PoolNode, but it cannot remove it from the Lru.

There are use cases in the RocksDB project showing how to use a sharded cache; please refer to: https://github.com/facebook/rocksdb/blob/master/cache/sharded_cache.cc