feat: add local token cache for cluster #3381
Open
absolute8511 wants to merge 3 commits into alibaba:1.8 from absolute8511:support-local-token-cache
Conversation
Change-Id: I6b1ece200cd6b2db181f570ab7207325f12fc12b
absolute8511 force-pushed the support-local-token-cache branch from 086b3a8 to 109c3b4 on April 23, 2024 12:47
Change-Id: I110062ea817430e8e52779d5c86804f5a476e13f
Change-Id: I86e004d9af1a70011dc9efd630735e4ee9b495e6
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@             Coverage Diff              @@
##                1.8     #3381     +/-  ##
============================================
+ Coverage     45.89%    46.36%    +0.46%
- Complexity     2145      2181       +36
============================================
  Files           431       431
  Lines         12903     13039      +136
  Branches       1727      1749       +22
============================================
+ Hits           5922      6045      +123
- Misses         6279      6284        +5
- Partials        702       710        +8

☔ View full report in Codecov by Sentry.
LearningGp added the to-review (To review) and area/cluster-flow (Issues or PRs related to cluster flow control) labels on May 29, 2024
Describe what this PR does / why we need it
By default, the cluster limiter has to request the token server for every request. This increases latency and puts heavy load on the token server. To address this performance issue in cluster mode, this PR proposes a local token cache.
Does this pull request fix one issue?
fix #3382
Describe how you did it
To reduce requests to the server, we add a background prefetch job that periodically checks the local tokens and prefetches a batch of tokens when necessary. When a user request arrives, it first checks the local tokens.
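The check-local-first flow with background prefetch could be sketched roughly as below. This is a minimal illustration only; the class and method names (`LocalTokenCache`, `TokenServerClient`, the batch size and low watermark) are assumptions for the sketch, not the PR's actual API.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a thread-safe local token cache. User requests take
// the fast path (tryAcquire); a background job calls prefetchIfNeeded
// periodically to top the cache up from the token server in batches.
public class LocalTokenCache {
    private final AtomicInteger localTokens = new AtomicInteger(0);
    private final int prefetchBatch;   // how many tokens to fetch at once
    private final int lowWatermark;    // refill when fewer tokens remain

    public LocalTokenCache(int prefetchBatch, int lowWatermark) {
        this.prefetchBatch = prefetchBatch;
        this.lowWatermark = lowWatermark;
    }

    /** Fast path: serve the request from locally cached tokens if possible. */
    public boolean tryAcquire() {
        while (true) {
            int current = localTokens.get();
            if (current <= 0) {
                return false; // cache empty; caller falls back to the token server
            }
            if (localTokens.compareAndSet(current, current - 1)) {
                return true;
            }
        }
    }

    /** Run by the background prefetch job: top up when below the watermark. */
    public void prefetchIfNeeded(TokenServerClient client) {
        if (localTokens.get() < lowWatermark) {
            int granted = client.requestTokens(prefetchBatch);
            localTokens.addAndGet(granted);
        }
    }

    /** Number of tokens currently cached locally. */
    public int availableTokens() {
        return localTokens.get();
    }

    /** Stand-in for the remote token server call (assumed interface). */
    public interface TokenServerClient {
        int requestTokens(int count);
    }
}
```

Since requests only touch an `AtomicInteger` on the fast path, the per-request round trip to the token server is avoided whenever cached tokens remain; the server may grant fewer tokens than the requested batch, which the sketch handles by adding whatever was granted.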
Describe how to verify it
A test case is added to verify it.
Special notes for reviews