Truncate start time of juice stream nested objects #28005

Draft: bdach wants to merge 1 commit into master
Conversation

@bdach (Collaborator) commented Apr 26, 2024

RFC. PRing early, not necessarily ready to be shipped yet.

  • Pending a diffcalc sheet inspection

Intended as at least a partial solution to #26436. In testing, two of the three replays from the issue are FCs with this change, and the third is improved (and not fully fixed).

As it turns out, stable truncates the end time of sliders to integers (https://github.com/peppy/osu-stable-reference/blob/79addff0f5d2a328059d2842d6d2968cfb740956/osu!/GameplayElements/HitObjects/Osu/SliderOsu.cs#L1037), and the start time of all juice stream parts is also truncated to integers (https://github.com/peppy/osu-stable-reference/blob/79addff0f5d2a328059d2842d6d2968cfb740956/osu!/GameplayElements/HitObjects/Fruits/SliderFruits.cs#L86-L166).
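
Roughly, the idea amounts to flooring the start time of each of the juice stream's nested objects down to a whole millisecond, as stable does. A minimal sketch of that idea (this is not the actual diff; where exactly the truncation is applied in the real change may differ):

```csharp
using osu.Game.Rulesets.Objects;

// Sketch only: truncate nested object start times the way stable does.
public static class JuiceStreamTruncation
{
    public static void TruncateNestedStartTimes(HitObject juiceStream)
    {
        foreach (var nested in juiceStream.NestedHitObjects)
        {
            // Truncate (not round) each nested object's start time down to
            // an integer number of milliseconds, matching stable.
            nested.StartTime = (int)nested.StartTime;
        }
    }
}
```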

This matters for replay playback when mappers push limits. For instance, let's take one case from the issue. In one of the maps involved, there was a juice-stream-ending fruit at time 7093.9655 according to lazer, which was also a hyperfruit.

The broken replay on this map included one frame at time 7093, with the catcher position before the hyperdash, and one frame at time 7097, with the catcher position *after* the hyperdash. This meant that the replay handler moved *out of the way* of the hyperfruit at time 7093.9655, having decided that it comes *after* the replay frame that was supposed to be catching it.

(For reference, the relevant replay playback code is here: https://github.com/ppy/osu/blob/3da5831075b187e967ca300dcba0a6761f07d1dd/osu.Game.Rulesets.Catch/Replays/CatchFramedReplayInputHandler.cs#L20-L31. Note the handling of `position`. Any frame with an action is important, which means that as long as any key is held, interpolation will not take place.)

On stable this is not a thing, because the fruit's end time was being truncated to int, therefore moving it back to time 7093 and restoring temporal integrity.
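
To put numbers on it, here is a toy restatement of the ordering problem (the times are from the example above; everything else is illustrative and not real game code):

```csharp
using System;

// Toy restatement of the timing issue described above; not real game code.
double fruitTime = 7093.9655;   // lazer: fractional time of the stream-ending hyperfruit
int frameBefore = 7093;         // replay frame positioned to catch it (pre-hyperdash)
int frameAfter = 7097;          // replay frame already at the post-hyperdash position

// Without truncation the fruit falls strictly between the two frames, so it is
// judged while the catcher is already moving towards the post-hyperdash position.
Console.WriteLine(frameBefore < fruitTime && fruitTime < frameAfter);  // True

// With stable-style truncation the fruit snaps back onto the frame that was
// recorded to catch it, restoring the ordering the replay was written against.
Console.WriteLine((int)fruitTime == frameBefore);                      // True
```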

This probably doesn't matter that much in other rulesets, because the input tolerances in something like osu! are much higher. catch is rather knife-edge, what with mappers doing the "edge dash" / "pixel jump" stuff (tl;dr: placing a circle just barely outside of hyperdash range, so that a perfect normal dash is required to catch it). Thus, this is applied locally to catch for now, until proven necessary to put it elsewhere too.

@bdach (Collaborator, Author) commented Apr 26, 2024

!diffcalc
RULESET=catch


github-actions bot commented Apr 26, 2024

@bdach (Collaborator, Author) commented Apr 26, 2024

Hmmmmmm, there aren't that many maps impacted, although there are a few cases I'd probably want to look at closer (the highest nomod SR swing upwards is +11.8%, the lowest is -33%; anything outside of ±10% I'd probably want to comb over more closely).

Not sure; it might be good to get at least some basic acknowledgement that, from a conceptual standpoint, it's fine to continue pursuing this change before I commit further.

@smoogipoo (Contributor)

!diffcalc
RULESET=catch
SCORE_PROCESSOR_A=ppy/osu-queue-score-statistics#263
SCORE_PROCESSOR_B=ppy/osu-queue-score-statistics#263


github-actions bot commented Apr 26, 2024

@smoogipoo (Contributor)

I feel like I've been very acutely aware of this, at least with respect to catch's conversion, but didn't do it because it led to even more incorrect values. I feel like this probably also has an effect here, but maybe that was where I noticed the issue to a greater extent:
https://github.com/ppy/osu/pull/28005/files#diff-1f327e0d58e51950ac570df64ab0fe4e61a92d015c4c450501567684527d9d72R104

I'd be curious to see how this works combined with #25725 on the test cases in that PR.

@bdach (Collaborator, Author) commented Apr 26, 2024

I did try to see whether that other PR fixes the broken replays from the issue in the OP, but it didn't. I didn't do much checking beyond that; will do so whenever able.

@bdach (Collaborator, Author) commented Apr 29, 2024

> I'd be curious to see how this works combined with #25725 on the test cases in that PR.

The answer is that the two seem to be completely orthogonal / independent of each other.

beatmap master #25725 #28005 #25725 + #28005
1009022 ✔️ ✔️
1392244 ✔️ ✔️
1514076
1653504 ✔️ ✔️
1654305 ✔️ ✔️
1875197 ✔️ ✔️
1962756 ✔️ ✔️
2041893
2042351 ✔️ ✔️
2077578
2165315 ✔️ ✔️
2186317 ✔️ ✔️
2195985 ✔️ ✔️
2256396 ✔️ ✔️
2339801 ✔️ ✔️
2389697 ✔️ ✔️
2638311 ✔️ ✔️
2789383 ✔️ ✔️
3091364 ✔️ ✔️
3335242 ✔️ ✔️
3614534
3620288
3642792 ✔️ ✔️
3652994
3922638 ✔️ ✔️
3942600 ✔️ ✔️
3944076
4044925 ✔️ ✔️
4087902 ✔️ ✔️
556909 ✔️ ✔️
828828 ✔️ ✔️
933615 ✔️ ✔️
991733
Total: master 10/33, #25725 15/33, #28005 10/33, #25725 + #28005 15/33

Which makes sense, given the existing tolerance in conversion tests:

=> Precision.AlmostEquals(StartTime, other.StartTime, conversion_lenience)

I wouldn't expect this fix to change things too much there, as opposed to replays, wherein very strict equality of object start time begins to matter.
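
For context, the comparison in those conversion tests looks roughly like the following (a sketch only; the surrounding struct layout and the exact lenience value are assumptions for illustration, not lifted from the test project):

```csharp
using System;
using osu.Framework.Utils;

// Sketch of the convert-test comparison quoted above; the lenience value and
// surrounding struct are assumptions for illustration.
public struct ConvertValue : IEquatable<ConvertValue>
{
    private const double conversion_lenience = 2;

    public double StartTime;

    public bool Equals(ConvertValue other)
        // 7093.9655 and 7093 are "almost equal" under this tolerance, so the
        // truncation barely moves the convert-test results, unlike replay
        // playback where exact start times matter.
        => Precision.AlmostEquals(StartTime, other.StartTime, conversion_lenience);
}
```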

> I feel like this probably also has an effect here, but maybe that was where I noticed the issue to a greater extent

I suppose truncation would have to be applied here, yes. I'm not sure it matters in the hyperdashing context which this PR is primarily intended to address (can a tiny droplet be hyper?)

> I feel like I've been very acutely aware of this, at least with respect to catch's conversion, but didn't do it because it led to even more incorrect values

Well, if you can remember what that was about exactly, then I'd appreciate it, because it may save me spending however many more hours down this hole.

@smoogipoo (Contributor)

> I suppose truncation would have to be applied here.

It's more that truncation being applied at somewhat of a post-processing stage led to flat-out wrong values, because it's expected to be applied progressively during the event calculations.
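
As a toy illustration of how truncating after the fact vs. truncating progressively can diverge (made-up numbers only; this is not the actual conversion code):

```csharp
using System;
using System.Collections.Generic;

// Toy numbers only; not the actual juice stream conversion code.
double tickInterval = 100.7;

// Truncating once, after the full event times have been computed:
var postProcessed = new List<int>();
double exact = 0;
for (int i = 0; i < 3; i++)
{
    exact += tickInterval;
    postProcessed.Add((int)exact);   // 100, 201, 302
}

// Truncating progressively, as each event time is produced:
var progressive = new List<int>();
int running = 0;
for (int i = 0; i < 3; i++)
{
    running += (int)tickInterval;    // the truncation error compounds: 100, 200, 300
    progressive.Add(running);
}

// The second value already differs (201 vs 200); this is the kind of drift
// the comment above is alluding to.
Console.WriteLine(string.Join(", ", postProcessed));
Console.WriteLine(string.Join(", ", progressive));
```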

> Well, if you can remember what that was about exactly, then I'd appreciate it, because it may save me spending however many more hours down this hole.

Unfortunately not. It's going to be a matter of exploration.
