
allocatorimpl: log when allocate target excluded due to full disk #118313

Closed
kvoli opened this issue Jan 25, 2024 · 0 comments · Fixed by #124073
Labels
A-kv-distribution Relating to rebalancing and leasing. C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) O-support Originated from a customer P-2 Issues/test failures with a fix SLA of 3 months T-kv KV Team
kvoli commented Jan 25, 2024

Is your feature request related to a problem? Please describe.
We log an allocator error when we are unable to allocate a target due to throttled stores, no stores matching constraints, or not enough stores in the cluster:

error processing replica: ‹0 of 5 live stores are able to take a new replica for the range (2 already have a voter, 0 already have a non-voter); likely not enough nodes in cluster›

It would be helpful to also include information on stores that are currently full, checked here:

Jira issue: CRDB-35675

@kvoli kvoli added C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) A-kv-distribution Relating to rebalancing and leasing. labels Jan 25, 2024
@kvoli kvoli added this to Incoming in KV via automation Jan 25, 2024
@blathers-crl blathers-crl bot added the T-kv KV Team label Jan 25, 2024
@kvoli kvoli added O-support Originated from a customer P-2 Issues/test failures with a fix SLA of 3 months labels Jan 25, 2024
craig bot pushed a commit that referenced this issue May 14, 2024
124073: allocatorimpl: include full store count in allocator error r=andrewbaptist a=kvoli

An allocator error is returned when a new or replacement replica cannot be allocated to a store. The error details how many live stores there are, how many hold existing replicas, and which constraints apply. The error makes it easier to determine the cause of up-replication stalls.

This change also includes in the error the number of live stores that are ineligible due to full disks. The updated error message when at least one store is full:

```
0 of N live stores are able to take a new replica for the range (X full disk, ...)
```

Where N is the number of live stores, of which X have a full disk.

Resolves: #118313
Release note: None

Co-authored-by: Austen McClernon <austen@cockroachlabs.com>
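For illustration, the error format described in the PR could be composed as in the following hedged Go sketch. This is not CockroachDB's actual `allocatorError` implementation; the type name, fields, and wording are assumptions modeled on the error strings quoted above.

```go
package main

import "fmt"

// allocatorError is a hypothetical sketch (not the real CockroachDB type)
// showing how the full-disk count could be folded into the allocator
// error message described in this issue.
type allocatorError struct {
	availableStores   int // stores able to take a new replica
	liveStores        int // N: total live stores
	fullStores        int // X: live stores excluded due to full disk
	existingVoters    int // stores already holding a voter
	existingNonVoters int // stores already holding a non-voter
}

func (e allocatorError) Error() string {
	return fmt.Sprintf(
		"%d of %d live stores are able to take a new replica for the range "+
			"(%d full disk, %d already have a voter, %d already have a non-voter); "+
			"likely not enough nodes in cluster",
		e.availableStores, e.liveStores,
		e.fullStores, e.existingVoters, e.existingNonVoters)
}

func main() {
	// Mirrors the example from the issue: 5 live stores, 2 with full
	// disks, 2 already holding voters, so 0 candidates remain.
	err := allocatorError{liveStores: 5, fullStores: 2, existingVoters: 2}
	fmt.Println(err.Error())
}
```

Surfacing the full-disk count directly in the message means an operator can distinguish a disk-capacity problem from a constraint or liveness problem without extra log digging.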
@craig craig bot closed this as completed in 32168d8 May 14, 2024
blathers-crl bot pushed a commit that referenced this issue May 14, 2024