Fix the write timestamp in atomic GraphQL mutations #2893
base: main
Conversation
// Batch timestamp and nowInSeconds parameters should be unset to avoid using them in
// BatchHandler's `makeParameters`
assertFalse(batch.getParameters().hasTimestamp());
Might be more idiomatic to use assertThat(batch.getParameters().hasTimestamp()).isFalse() here, given that we're already using AssertJ.
Thanks for the PR @marksurnin.
+1 for PR against
@marksurnin maybe you will have to rebase your fork or create a new PR? Right now it's showing a very large diff (around 644 files).
Force-pushed from a0c81df to ccf73c0.
Yep, changing the base created a large diff. There was a large number of merge conflicts, so I ended up resetting my branch with
LGTM
@kathirsvn the coordinator build is failing to download dependencies from DataStax's internal Maven repo. I don't build
@marksurnin This is likely due to access problems: the DSE jar and some of its dependencies (like netty with backported fixes) are only available from the DataStax internal Maven repo. I was hoping we could trigger a re-run and that it would pass, but that does not seem to be the case (I tried and it still appears to fail).
Yes, I just triggered another re-run and it failed. Let me follow up on this.
As per the suggestion from @jeffreyscarpenter, I tried recreating this PR here, and the build step goes through fine now. So it looks like an access issue (i.e. my account seems to have read access to the DataStax internal repo but @marksurnin's account doesn't). Now we are getting the integration test error below.
Probably we will have to read the writetime and TTL as in
@kathirsvn could you please try pushing an empty commit to my branch? Perhaps the Maven repo authentication would work that way. I see that the credentials are read from secrets here, but I don't know where the account's organization is used to check access.
Secondly, I can update the tests to read writetime and TTL via CQL but it's concerning that it's failing with
which is a standard use case of an aggregation function.
@marksurnin I think the issue is that forks by developers without push rights to the original repo cannot access secrets; this is to prevent potential security problems.
@marksurnin I accepted the invitation to your fork and just pushed an empty commit to see if the coordinator build CI job works fine now; let's see.
Regarding the alias
Cool, it looks like the coordinator build job is good now. Let's check the aggregate function problem (i.e. v_writetime) next.
When I run the test
@marksurnin I get this error when I use any function other than what is in the supported aggregation functions list. For example, the query below works:

query getAll {
  foo {
    values {
      k
      cc
      v
      v_count: _bigint_function(name: "count", args: ["v"])
    }
  }
}

Note: I don't remember trying
What this PR does:
Set timestamp and now_in_seconds in batch parameters only if they have been passed via query parameters. com.google.protobuf.Int64Value values in query.proto are serialized to 0 if they are unset, which causes the bug described below.

Which issue(s) this PR fixes:
Fixes #2875

As described in this comment, Stargate currently sets the write timestamp of all @atomic batch mutations to Unix time 0. Consequently, the expiration time is calculated as (Unix time 0 + TTL). Therefore, even with the maximum TTL of 20 years, Cassandra interprets these rows as having already expired, and they cannot be retrieved.
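The failure mode and the fix can be sketched in isolation. This is a minimal, self-contained illustration with hypothetical class and method names, not Stargate's actual BatchHandler code; it only mimics the protobuf behavior where reading an unset Int64Value field yields its default, 0, unless the consumer checks the has-flag first:

```java
public class BatchParametersSketch {

    // Stand-in for the generated protobuf Parameters message: an unset
    // Int64Value field reads back as its default value, 0.
    static final class Parameters {
        final boolean hasTimestamp;
        final long timestamp; // 0 when unset, like com.google.protobuf.Int64Value

        Parameters(boolean hasTimestamp, long timestamp) {
            this.hasTimestamp = hasTimestamp;
            this.timestamp = timestamp;
        }
    }

    // Buggy version: reads the value unconditionally, so an unset timestamp
    // becomes Unix time 0 and every row's expiry is computed from the epoch.
    static long buggyWriteTime(Parameters p) {
        return p.timestamp;
    }

    // Fixed version: only honours the timestamp when the client actually set
    // it, otherwise falls back to the current time.
    static long fixedWriteTime(Parameters p, long nowMicros) {
        return p.hasTimestamp ? p.timestamp : nowMicros;
    }

    public static void main(String[] args) {
        Parameters unset = new Parameters(false, 0L);
        Parameters explicit = new Parameters(true, 1_700_000_000_000_000L);
        long now = 1_800_000_000_000_000L;

        System.out.println(buggyWriteTime(unset));         // 0: rows look long expired
        System.out.println(fixedWriteTime(unset, now));    // falls back to now
        System.out.println(fixedWriteTime(explicit, now)); // keeps the client's value
    }
}
```

With the buggy path, even a 20-year TTL added to write time 0 yields an expiry around 1990, long in the past, which is why the rows are never returned.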
Checklist