
hscan should save corrected fixdata #1595

Open · wants to merge 1 commit into master
Conversation

ksanislo

When this PR was merged back in 2017, hscan was updated to look at the new data and present it to stdout; however, the change was never applied to the saved copy (LOCAL.data) of the information.

When this incorrect data is reloaded through hbal, under some conditions the invalid memory information can cause it to silently stop considering otherwise viable instance moves.

Correcting hscan to actually save the fixed data for external use, instead of the unmodified copy, keeps things working as expected.
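The shape of the bug can be sketched roughly as follows. This is a hedged, self-contained illustration, not the actual Ganeti code: the `Node` record, `fixNode` heuristic, and `serialize` function are hypothetical stand-ins for the real `Ganeti.HTools` types and serialisers.

```haskell
-- Hypothetical sketch of the hscan issue: corrected data was printed
-- to stdout, but the *uncorrected* copy was written to LOCAL.data.

-- Simplified stand-in for a node record (real type: Ganeti.HTools.Node).
data Node = Node { name :: String, nMem :: Int, fMem :: Int }
  deriving (Show, Eq)

-- Stand-in for the correction step: if free memory is reported as 0
-- (i.e. missing), estimate it from total memory. Placeholder heuristic
-- only; the real fix-up logic differs.
fixNode :: Node -> Node
fixNode n
  | fMem n == 0 = n { fMem = nMem n `div` 2 }
  | otherwise   = n

-- Stand-in serialiser for the LOCAL.data text format.
serialize :: [Node] -> String
serialize = unlines . map line
  where line n = name n ++ "|" ++ show (nMem n) ++ "|" ++ show (fMem n)

main :: IO ()
main = do
  let raw   = [Node "node1" 4096 0, Node "node2" 4096 1024]
      fixed = map fixNode raw
  -- Pre-fix behaviour: print `fixed` to stdout but write `raw` to disk.
  -- The patch makes hscan serialise `fixed` instead, so a later hbal run
  -- loading the dump sees the same corrected values.
  putStr (serialize fixed)
```

The key point is that once the dump is reloaded, every field looks populated, so the "fix missing data" path never fires again; the only safe place to apply the correction is before writing the file.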

@ksanislo ksanislo changed the title save corrected fixdata hscan should save corrected fixdata May 18, 2021
@rbott
Member

rbott commented Sep 23, 2021

@apoikos @iustin do you have any insights on this? From my rather limited understanding of Haskell, it looks good.

@iustin
Contributor

iustin commented Sep 24, 2021

Hmm, I'm not sure this is the right thing. The basic question is whether a saved data file should have "corrected" or "uncorrected" data baked in. The code as it is before this commit seems to keep "uncorrected" data, and hbal should correct it on the fly as well, via ExtLoader.loadExternalData, no? Is that not happening?

The patch would change things so that saved data is "corrected", but then we would need to make sure all the downstream uses don't apply the correction a second time.

@ksanislo
Author

@iustin The problem is that hbal won't correct data when loading it from a dump.

Please keep in mind that I'm not particularly knowledgeable in Haskell, and it has been 6 months since I looked into this, but in my limited understanding the data gets 'fixed' only when it is missing. If the original uncorrected data is saved, then when that dump gets reloaded in the future, it appears to contain valid data in those fields. Since the saved LOCAL.data file doesn't specify whether nodes are Xen, KVM, or something else, there is no way to determine on reload that a specific datapoint would even need to be changed. Trusting the saved file content happens automatically, so ensuring the corrected data is what gets written into the file seems like the proper choice.

@iustin
Contributor

iustin commented Oct 11, 2021

Thanks for the expanded explanation. You're right, I misunderstood the original commit; it makes more sense to always have "correct" data in the file.

I will have to look again at the current code, since I thought it does attempt to correct the data somehow; it's all, I think, going through the same pipeline.

At least for this commit, the cleanup can be expanded: the cdata argument is no longer used, and thus should be removed from the argument list.
