fix(bridge): support decoding final era1 file & fix broken merkle proofs for partial epochs #1291
Conversation
A few style-related comments. Up to you if you want to apply them.
🚀
@@ -66,21 +67,25 @@ impl Era1 {
pub fn deserialize(buf: &[u8]) -> anyhow::Result<Self> {
    let file = E2StoreFile::deserialize(buf)?;
    ensure!(
        file.entries.len() == ERA1_ENTRY_COUNT,
        // era1 file #0-1895 || era1 file #1896
        file.entries.len() == ERA1_ENTRY_COUNT || file.entries.len() == 21451,
nit: Is it possible (and does it make sense) to combine the file.entries.len() == 21451
check with either the hash, the index number, or the first/last block number, or something like that?
Same in similar checks below.
If I understand correctly, I'm not sure this is the place for that kind of verification.
The hash that this would be verified against does exist in the file path we provide, but there's no guarantee that this file path is always the pointer provided for the file contents:
- you can deserialize from a byte vector, so you wouldn't have a file path
- you can load an era1 file from a different file path
So if there is to be any checking of the hash, it should be done by code/tooling a level above this module.
The same applies to the index number and the first/last block number. This code doesn't know what index/block number it's expected to find, so that validation should also occur a level above. I guess we could do some validation that first_block_number = last_block_number - 8192, but I'm not sure if that's necessary? It's the kind of thing that would be covered if we're checking that the root matches the expected hash a level above.
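The layering argument above could be sketched like this: a caller that knows which epoch it requested does the block-number validation, while the decoder stays agnostic. All names here (`Era1Summary`, `EPOCH_SIZE`, the fields) are illustrative assumptions, not the trin API.

```rust
// Hypothetical caller-level sanity check: the code that knows which epoch it
// asked for validates the block numbers, not the era1 decoder itself.
const EPOCH_SIZE: u64 = 8192; // assumed full-epoch block count

struct Era1Summary {
    first_block_number: u64,
    last_block_number: u64,
}

fn validate_block_span(era1: &Era1Summary, expected_first: u64) -> Result<(), String> {
    if era1.first_block_number != expected_first {
        return Err(format!(
            "expected first block {expected_first}, got {}",
            era1.first_block_number
        ));
    }
    // A full epoch spans exactly EPOCH_SIZE blocks; a partial (final) epoch spans fewer.
    let span = era1.last_block_number - era1.first_block_number + 1;
    if span > EPOCH_SIZE {
        return Err(format!("epoch spans {span} blocks, more than {EPOCH_SIZE}"));
    }
    Ok(())
}
```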
trin-validation/src/accumulator.rs
Outdated
proof.push(B256::from_slice(&hex_decode(
    "0x0020000000000000000000000000000000000000000000000000000000000000",
)?));
let epoch_size: i32 = epoch_acc.len() as i32;
I think something like this would work:
let epoch_size = U256::from(epoch_acc.len());
let epoch_size = B256::from(epoch_size.to_le_bytes());
proof.push(epoch_size);
I would maybe even merge it into 2 lines, but that's less important.
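As a sanity check on the suggestion, a dependency-free sketch (where `length_leaf` is a hypothetical helper) shows that the little-endian encoding of the epoch length 8192 reproduces the hard-coded `0x0020...00` constant from the diff above:

```rust
// Encode the epoch length as a 32-byte little-endian leaf.
// 8192 = 0x2000, so little-endian byte 0 is 0x00 and byte 1 is 0x20,
// matching "0x0020000000...00".
fn length_leaf(epoch_len: u64) -> [u8; 32] {
    let mut leaf = [0u8; 32];
    leaf[..8].copy_from_slice(&epoch_len.to_le_bytes());
    leaf
}
```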
trin-validation/src/accumulator.rs
Outdated
let mut leaves = vec![];
let mut header_difficulty = B256::ZERO;
I think you can extract the header difficulty directly at this point:
let header_difficulty = B256::from(header_record.total_difficulty.to_le_bytes());
And you can also do it for header_record_hash:
let mut header_record_hash = B256::from(eth2_hashing::hash32_concat(
    header_record.block_hash.as_slice(),
    current_difficulty.as_slice(),
));
With the above, you can even remove the loop:
let leaves = epoch_acc
    .iter()
    .map(|record| {
        B256::from(eth2_hashing::hash32_concat(
            record.block_hash.as_slice(),
            record.total_difficulty.as_le_slice(),
        ))
    })
    .collect::<Vec<_>>();
(or you can keep it as a loop, but much simpler)
What was wrong?
Era1 file #1896 is actually an "incomplete" era1 file, so our bridge was unable to deserialize it, as it expected all era1 files to be "complete".
We were also unable to generate valid merkle proofs for partial epochs.
How was it fixed?
Add support for deserializing era1 file 1896, but not other "incomplete" era1 files, since this is the only valid era1 file that we expect to be incomplete.
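The partial-epoch proof issue can be illustrated with a toy merkleization sketch: leaves are zero-padded up to the full epoch width so the tree shape, and hence the proofs, stay consistent. This is not trin code; it uses std's DefaultHasher as a stand-in for a real 32-byte cryptographic hash, and `merkle_root`/`hash_concat` are hypothetical names.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Placeholder 8-byte "hash" standing in for SHA-256 over 32-byte nodes.
fn hash_concat(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    h.write_u64(a);
    h.write_u64(b);
    h.finish()
}

// Merkleize `leaves`, zero-padding up to `target` (a power of two).
// For a partial epoch the leaf count is below the full epoch size, so padding
// keeps the tree the same depth as for a full epoch.
fn merkle_root(leaves: &[u64], target: usize) -> u64 {
    assert!(target.is_power_of_two() && leaves.len() <= target);
    let mut layer: Vec<u64> = leaves.to_vec();
    layer.resize(target, 0); // zero-pad to the full tree width
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            .map(|pair| hash_concat(pair[0], pair[1]))
            .collect();
    }
    layer[0]
}
```

Padding to a smaller width produces a shallower tree and a different root, which is the shape of the bug for partial epochs.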
Tested locally against mainnet-01896-e6ebe562.era1, but the file is too large to include as a test asset.
Fixed broken merkle proof generation and added tests.
To-Do