RedisSlidingWindowRateLimiter does not provide IdleDuration #61

Open
hacst opened this issue Jun 3, 2023 · 4 comments

Comments

hacst (Contributor) commented Jun 3, 2023

The current implementation always returns TimeSpan.Zero for RateLimiter.IdleDuration. When using a partitioned rate limiter, this means the rate limiters created for the partitions never get cleaned up and will accumulate.

I think this is true for all rate limiters in this repo; however, as it is the only one I am using right now, I have only looked at the sliding window rate limiter in detail so far. For that one, the most reasonable approach to me seems to be to accept some imprecision: track the last expireat value set in the RedisSlidingWindowManager and report an estimated idle duration based on it. This is under the assumption that if a manager ever deletes a partition that was not actually fully idle, it is no real problem to have it re-created by the factory; the actual limit is still in Redis after all.

I could create a pull request for the Redis sliding window rate limiter if this approach is acceptable.

This would not fully match what the interface wants, though. The documentation says:

Specifies how long the RateLimiter has had all permits available.

A better estimate or a precise answer could of course be obtained by reaching out to Redis, but performing potentially blocking operations there seems problematic to me.
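
For illustration, here is a minimal sketch of that idea. The type and member names are hypothetical, not the repo's actual internals: the point is simply to record a timestamp whenever the limiter touches Redis (i.e. whenever a new expireat would be set) and to report the elapsed time as an estimated idle duration.

using System;
using System.Threading;

// Hypothetical helper, not part of the library: tracks the last time a
// partition's limiter touched Redis and exposes an estimated idle duration.
internal sealed class LastActivityTracker
{
    private long _lastActivityTicks = DateTimeOffset.UtcNow.UtcTicks;

    // Call whenever the manager runs its Redis script (i.e. whenever a new
    // expireat value is set for the partition).
    public void MarkActivity() =>
        Interlocked.Exchange(ref _lastActivityTicks, DateTimeOffset.UtcNow.UtcTicks);

    // Estimated idle duration; imprecise, but sufficient for the partitioned
    // rate limiter's cleanup heuristic, since a prematurely removed partition
    // is simply re-created by the factory and the real limit stays in Redis.
    public TimeSpan EstimatedIdleDuration =>
        TimeSpan.FromTicks(DateTimeOffset.UtcNow.UtcTicks - Interlocked.Read(ref _lastActivityTicks));
}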

dlxeon commented Mar 27, 2024

I faced the same memory issue with RedisTokenBucketRateLimiter. I use the client IP address as the partition key, and with ~100k unique IP addresses per day this quickly grows not only memory utilization but also CPU utilization.
[screenshot: memory and CPU utilization growth]

As a solution I subclassed RedisTokenBucketRateLimiter and overrode AcquireAsyncCore and IdleDuration to return the time elapsed since the limiter was last used.

cristipufu (Owner) commented

@dlxeon can you please open a PR?

dlxeon commented Mar 27, 2024

My solution is simple and not ideal; it would need to be improved for generic use in the library.
Basically, I update a "last active" timestamp in the constructor, AttemptAcquireCore, and AcquireAsyncCore. IdleDuration returns TimeSpan.Zero while AcquireAsyncCore is in progress, and otherwise returns the time elapsed since the last activity.

public class IdleDurationRedisTokenBucketRateLimiter<TKey>(TKey partitionKey, TimeProvider timeProvider, RedisTokenBucketRateLimiterOptions options)
    : RedisTokenBucketRateLimiter<TKey>(partitionKey, options)
{
    // Timestamp of the last activity; null while an async acquire is in flight.
    private long? _lastActiveTimestamp = timeProvider.GetTimestamp();

    protected override RateLimitLease AttemptAcquireCore(int permitCount)
    {
        _lastActiveTimestamp = timeProvider.GetTimestamp();
        return base.AttemptAcquireCore(permitCount);
    }

    protected override async ValueTask<RateLimitLease> AcquireAsyncCore(int permitCount, CancellationToken cancellationToken)
    {
        // Mark the limiter as busy for the duration of the async acquire.
        _lastActiveTimestamp = null;
        try
        {
            return await base.AcquireAsyncCore(permitCount, cancellationToken);
        }
        finally
        {
            _lastActiveTimestamp = timeProvider.GetTimestamp();
        }
    }

    public override TimeSpan? IdleDuration
    {
        // Report zero while busy, otherwise the time elapsed since the last activity.
        get => _lastActiveTimestamp == null ? TimeSpan.Zero : timeProvider.GetElapsedTime(_lastActiveTimestamp.Value);
    }
}

There are a couple of things to improve:

  1. If the same limiter instance handles multiple active requests, IdleDuration may not return TimeSpan.Zero even though the limiter is actually active.
  2. The code is .NET 8 only, as it uses the TimeProvider abstraction for unit testing.

To handle multiple simultaneous requests on the same rate limiter instance, I think I can add a thread-safe counter of active requests. Below is an untested idea; I'll test it later this week.

// Assumes two fields on the class: a non-nullable last-activity timestamp
// and a counter of acquires currently in flight.
private long _lastActiveTimestamp = timeProvider.GetTimestamp();
private int _activeRequestsCount;

protected override async ValueTask<RateLimitLease> AcquireAsyncCore(int permitCount, CancellationToken cancellationToken)
{
    _lastActiveTimestamp = timeProvider.GetTimestamp();
    Interlocked.Increment(ref _activeRequestsCount);
    try
    {
        return await base.AcquireAsyncCore(permitCount, cancellationToken);
    }
    finally
    {
        Interlocked.Decrement(ref _activeRequestsCount);
        _lastActiveTimestamp = timeProvider.GetTimestamp();
    }
}

public override TimeSpan? IdleDuration
{
    // Busy while any acquire is in flight; otherwise report time since last activity.
    get => _activeRequestsCount > 0 ? TimeSpan.Zero : timeProvider.GetElapsedTime(_lastActiveTimestamp);
}
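
For context, a hedged usage sketch of how such a subclass could be plugged into a PartitionedRateLimiter, so that the built-in cleanup can dispose partition limiters that report a non-zero IdleDuration. The option property names follow the library's README and the Redis connection setup is assumed; adjust to your configuration.

using System;
using System.Threading.RateLimiting;
using StackExchange.Redis;

// Assumed connection; replace with your own Redis endpoint.
var connectionMultiplexer = await ConnectionMultiplexer.ConnectAsync("localhost");

// One limiter per client IP; idle partitions can be disposed by the
// PartitionedRateLimiter once they report a non-zero IdleDuration.
var limiter = PartitionedRateLimiter.Create<string, string>(clientIp =>
    RateLimitPartition.Get(clientIp, key =>
        new IdleDurationRedisTokenBucketRateLimiter<string>(
            key,
            TimeProvider.System,
            new RedisTokenBucketRateLimiterOptions
            {
                // Option names as shown in the library's README.
                ConnectionMultiplexerFactory = () => connectionMultiplexer,
                TokenLimit = 100,
                TokensPerPeriod = 100,
                ReplenishmentPeriod = TimeSpan.FromSeconds(1),
            })));

using RateLimitLease lease = await limiter.AcquireAsync("203.0.113.7");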

dlxeon commented Apr 1, 2024

> @dlxeon can you please open a PR?

Done. #157
