Internal consistency check failed, generated index block has wrong hash #3202

Open · 1 of 2 tasks
SamiLehtinen opened this issue May 1, 2018 · 15 comments · May be fixed by #5068

@SamiLehtinen

SamiLehtinen commented May 1, 2018

  • I have searched open and closed issues for duplicates.
  • Only "quick browser", not thorough scan.

Environment info

  • Duplicati version: v2.0.3.6-2.0.3.6_canary_2018-04-23
  • Operating system: Ubuntu 16.04.4 Linux & Windows 10
  • Backend: SFTP & FTPS & FILE

Description

Backup sets get broken simply by running a backup. This happens on three totally independent systems with different backends and clients, as well as on one "local / file" backup where the backup data is stored on a second local disk. Something is seriously wrong if this happens.

"Internal consistency check failed, generated index block has wrong hash"

Steps to reproduce

  1. Just run a normal backup.

Debug log

Checking remote backup ...
Listing remote folder ...
Fatal error => System.Exception: Internal consistency check failed, generated index block has wrong hash, 6yil2I1aMnlEYfq7luTpJh8RytKe5Gm7eeUuk0UAELU= vs I5/lRw4QR1CWI8t2ojmy/pgUFgAUcQqr8z+zpybt3rk=
  at Duplicati.Library.Main.Operation.Common.IndexVolumeCreator+<CreateIndexVolume>d__0.MoveNext () <0x409eb6e0 + 0x00b5b> in <filename unknown>:0
--- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () <0x7f58718016d0 + 0x00029> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) <0x7f58717ff6b0 + 0x000a7> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) <0x7f58717ff630 + 0x0006b> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) <0x7f58717ff5e0 + 0x0003a> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter`1[TResult].GetResult () <0x7f58717ff8d0 + 0x00017> in <filename unknown>:0
  at Duplicati.Library.Main.Operation.Backup.RecreateMissingIndexFiles+<>c__DisplayClass1_0+<<Run>b__0>d.MoveNext () <0x409e7740 + 0x006be> in <filename unknown>:0
--- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () <0x7f58718016d0 + 0x00029> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) <0x7f58717ff6b0 + 0x000a7> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) <0x7f58717ff630 + 0x0006b> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) <0x7f58717ff5e0 + 0x0003a> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.GetResult () <0x7f58717ff5c0 + 0x00012> in <filename unknown>:0
  at CoCoL.AutomationExtensions+<RunTask>d__10`1[T].MoveNext () <0x40899140 + 0x00231> in <filename unknown>:0
--- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () <0x7f58718016d0 + 0x00029> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) <0x7f58717ff6b0 + 0x000a7> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) <0x7f58717ff630 + 0x0006b> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) <0x7f58717ff5e0 + 0x0003a> in <filename unknown>:0
  at System.Runtime.CompilerServices.TaskAwaiter.GetResult () <0x7f58717ff5c0 + 0x00012> in <filename unknown>:0
  at Duplicati.Library.Main.Operation.BackupHandler+<RunAsync>d__18.MoveNext () <0x40932000 + 0x01d8e> in <filename unknown>:0

@umgfoin

umgfoin commented May 2, 2018

Confirmed - same problem here with v2.0.3.6-2.0.3.6_canary_2018-04-23 under CentOS 6.9.
++umgfoin

@OronDF343

OronDF343 commented May 2, 2018

Confirmed running 2.0.3.6_canary_2018-04-23 on Windows 7 x64

System.AggregateException: One or more errors occurred. --->
 System.AggregateException: Object reference not set to an instance of an object. --->
 System.NullReferenceException: Object reference not set to an instance of an object.
  at Duplicati.Library.Main.Volumes.BlockVolumeWriter.AddBlock(String hash, Byte[] data, Int32 offset, Int32 size, CompressionHint hint)
  at Duplicati.Library.Main.Operation.Backup.SpillCollectorProcess.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.BackupHandler.<RunMainOperation>d__11.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__18.MoveNext() --- End of inner exception stack trace ---
  at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__18.MoveNext() --- End of inner exception stack trace ---
  at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
  at Duplicati.Library.Main.Controller.<>c__DisplayClass15_0.<Backup>b__0(BackupResults result)
  at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
  at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
  at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue) --->
 (Inner Exception #0) System.AggregateException: Object reference not set to an instance of an object. --->
 System.NullReferenceException: Object reference not set to an instance of an object.
  at Duplicati.Library.Main.Volumes.BlockVolumeWriter.AddBlock(String hash, Byte[] data, Int32 offset, Int32 size, CompressionHint hint)
  at Duplicati.Library.Main.Operation.Backup.SpillCollectorProcess.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.BackupHandler.<RunMainOperation>d__11.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__18.MoveNext() --- End of inner exception stack trace ---
  at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__18.MoveNext() --->
 (Inner Exception #0) System.NullReferenceException: Object reference not set to an instance of an object.
  at Duplicati.Library.Main.Volumes.BlockVolumeWriter.AddBlock(String hash, Byte[] data, Int32 offset, Int32 size, CompressionHint hint)
  at Duplicati.Library.Main.Operation.Backup.SpillCollectorProcess.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.BackupHandler.<RunMainOperation>d__11.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__18.MoveNext()<--- --->
 (Inner Exception #1) System.AggregateException: One or more errors occurred. --->
 System.Exception: Internal consistency check failed, generated index block has wrong hash, +r/f03iZQ6MiICYvSgPKbwXh1yI6z3xwe2SCZPsd1+U= vs 3IZM2YUqOTsxe9PFViBVsquyZnP+TvZda1G3R5nKlnc=
  at Duplicati.Library.Main.Operation.Common.IndexVolumeCreator.<CreateIndexVolume>d__0.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.Common.BackendHandler.<>c__DisplayClass13_0.<<UploadFileAsync>b__1>d.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.Common.BackendHandler.<UploadFileAsync>d__13.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext() --- End of inner exception stack trace --- --->
 (Inner Exception #0) System.Exception: Internal consistency check failed, generated index block has wrong hash, +r/f03iZQ6MiICYvSgPKbwXh1yI6z3xwe2SCZPsd1+U= vs 3IZM2YUqOTsxe9PFViBVsquyZnP+TvZda1G3R5nKlnc=
  at Duplicati.Library.Main.Operation.Common.IndexVolumeCreator.<CreateIndexVolume>d__0.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.Common.BackendHandler.<>c__DisplayClass13_0.<<UploadFileAsync>b__1>d.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.Common.BackendHandler.<UploadFileAsync>d__13.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext()<--- <--- --->
 (Inner Exception #2) System.AggregateException: One or more errors occurred. --->
 System.Exception: Unable to find log in lookup table, this may be caused by attempting to transport call contexts between AppDomains (eg. with remoting calls)
  at Duplicati.Library.Logging.Log.get_CurrentScope()
  at Duplicati.Library.Logging.Log.WriteMessage(LogMessageType type, String tag, String id, Exception ex, String message, Object[] arguments)
  at Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess.<>c.<Run>b__1_3(String rootpath, String path, Exception ex)
  at Duplicati.Library.Utility.Utility.<EnumerateFileSystemEntries>d__23.MoveNext()
  at System.Linq.Enumerable.<SelectManyIterator>d__17`2.MoveNext()
  at Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess.<>c__DisplayClass1_0.<<Run>b__0>d.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext() --- End of inner exception stack trace --- --->
 (Inner Exception #0) System.Exception: Unable to find log in lookup table, this may be caused by attempting to transport call contexts between AppDomains (eg. with remoting calls)
  at Duplicati.Library.Logging.Log.get_CurrentScope()
  at Duplicati.Library.Logging.Log.WriteMessage(LogMessageType type, String tag, String id, Exception ex, String message, Object[] arguments)
  at Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess.<>c.<Run>b__1_3(String rootpath, String path, Exception ex)
  at Duplicati.Library.Utility.Utility.<EnumerateFileSystemEntries>d__23.MoveNext()
  at System.Linq.Enumerable.<SelectManyIterator>d__17`2.MoveNext()
  at Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess.<>c__DisplayClass1_0.<<Run>b__0>d.MoveNext() --- End of stack trace from previous location where exception was thrown ---
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
  at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
  at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext()<--- <--- <--- 

@spoppi

spoppi commented May 16, 2018

Confirmed running 2.0.3.6_canary_2018-04-23 on Fedora Linux 26 x64

Workaround:
Downgrade to 2.0.3.5 using the steps from https://forum.duplicati.com/t/downgrading-reverting-to-a-lower-version/3393

Assumption:
According to the changelog, 2.0.3.6 "adds concurrent processing for the backup", which I assume might interfere with the hash calculation.
Setting the new --concurrency-* options to lower values (i.e. 1) might also help, but this hasn't been tested.

@OronDF343

OronDF343 commented May 17, 2018 via email

@SamiLehtinen
Author

I deleted the whole backup set and database and started clean with the parameter --concurrency-block-hashers=1, but the issue remains. The backup runs once and completes, but the next backup fails. I tried it twice, with the same result both times.

I'm eagerly waiting for the next version to fix this.

I also saw this error message; maybe there's something else going on?
Expected there to be a temporary fileset for synthetic filelist (2, duplicati-b9097119f61414c23a9eadcd54a151892.dblock.zip.aes), but none was found?

@tygill
Contributor

tygill commented Jun 1, 2018

I'm hitting this as well (on a backup I've been trying to repair for a while - I've done several clean recreates, purged broken files... I'm about at the point of just restarting the backup from scratch). I'm also running 2.0.3.6.

Edit: This was the specific error I got:
Internal consistency check failed, generated index block has wrong hash, ELsjADBcRYybjD/W6oD/67/gumcv6XZi6yNPq9EVU3o= vs JskT7wZasvvtI39gJYeeFXXKgwLCIB0EUs9utyiJdd8=
Looking in the database, I see an entry for JskT7w... in both the Block and BlocklistHashes tables. However, I don't see anything for the ELsjAD... hash.

One thing that I do notice in looking at the changes from 2.0.3.5 to 2.0.3.6 is this snippet from the new CreateIndexVolume method:

foreach(var b in await database.GetBlocklistsAsync(blockvolume.ID, options.Blocksize, options.BlockhashSize))
{
    var bh = Convert.ToBase64String(h.ComputeHash(b.Item2, 0, b.Item3));
    if (bh != b.Item1)
        throw new Exception(string.Format("Internal consistency check failed, generated index block has wrong hash, {0} vs {1}", bh, b.Item1));
    w.WriteBlocklist(b.Item1, b.Item2, 0, b.Item3);
}

Looking at GetBlocklistsAsync, it seems like b.Item2 is supposed to already be a hash - but this snippet is re-hashing it to get bh, which is what is apparently not matching b.Item1. @kenkendk - is the ComputeHash call here a bug?

Scratch that - it looks like it's essentially rebuilding the blocklist based on the hashes of the entries, and then validating that the rebuilt block has the expected hash.
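
For context, that relationship can be boiled down to the following standalone sketch (hypothetical names, not Duplicati's actual API): a blocklist is the concatenation of the 32-byte hashes of a file's data blocks, and the blocklist hash stored in the database is the hash of that concatenated sequence - which is exactly what the re-hash in CreateIndexVolume verifies.

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

// Illustrative sketch only (names are hypothetical, not Duplicati's actual types).
static class BlocklistSketch
{
    // Rebuild a blocklist from the base64 block hashes and compute its own hash,
    // which must match the blocklist hash recorded in the database.
    public static string ComputeBlocklistHash(IEnumerable<string> blockHashesBase64)
    {
        var blocklist = new List<byte>();
        foreach (var h in blockHashesBase64)
            blocklist.AddRange(Convert.FromBase64String(h)); // append each 32-byte block hash

        using (var sha256 = SHA256.Create())
            return Convert.ToBase64String(sha256.ComputeHash(blocklist.ToArray()));
    }
}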

@kenkendk
Member

kenkendk commented Jun 4, 2018

The code in there has actually not been updated, and should not be used concurrently. It simply builds an index file, by reading the local database.

My best guess is that something else is using the database at the same time, causing the reader to trip up and return mixed results.

With 2.0.3.6 I removed the old code that was building the index file in parallel with building the dblock file, so this code is now called much more frequently, which is probably what caused the bug to appear.

I will be re-introducing the previous "create-as-you-go" functionality, as many users have reported significant slowdowns while creating the index file from the database.

kenkendk added a commit that referenced this issue Jun 4, 2018
…avoids issues with locks and transactions being prematurely released.

This might help with #3202
@tygill
Contributor

tygill commented Jun 4, 2018

I don't know if it's a race condition / transient issue (though I suppose it could be). I repeated the backup run that gave the inconsistent hash error, and the second time found the exact same hash mismatch, which makes it seem like something repeatable is happening, and race conditions usually aren't repeatable.

I've also been trying to extract the database results and run a mini program that runs the GetBlocklists() method with that explicit input, in the hopes of matching either the reported or actual hash, and so far I haven't been able to, so there's likely something wrong there. I have the list of hashes for the blocklist in question (if you'd like to take a look at them, I can send them in a PM).

As for the commit you added to change the IEnumerables to arrays, I suspect in this case it will make things worse (and depending on how the IEnumerables are being consumed, it might actually point to where the error is coming from). In the case of the GetBlocklistsAsync method, the returned Tuple has a byte[] as the second item, and instead of being a new byte array for each blocklist, it reuses a buffer. With your recent commit, by the time anything starts processing the tuples, the buffer will be the same for every entry in the list. If the bug is being caused by the IEnumerable's results being processed concurrently (e.g., the buffer being updated before the previous tuple has finished being processed), a better fix might be to change GetBlocklists() to return a scoped copy of the buffer, so that the buffer instance each Tuple holds is unique:

// Inside LocalDatabase.GetBlocklists(): accumulate block hashes for the current
// blocklist into a shared buffer, but hand out a private copy with each yield.
string curHash = null;
int index = 0;
byte[] buffer = new byte[blocksize];

using(var rd = cmd.ExecuteReader(sql, volumeid))
    while (rd.Read())
    {
        var blockhash = rd.GetValue(0).ToString();
        if ((blockhash != curHash && curHash != null) || index + hashsize > buffer.Length)
        {
            // Copy the accumulated hashes into a fresh array so the yielded Tuple
            // keeps its contents even after the shared buffer is reused.
            byte[] returnedBuffer = new byte[index];
            Array.Copy(buffer, 0, returnedBuffer, 0, index);
            yield return new Tuple<string, byte[], int>(curHash, returnedBuffer, index);
            curHash = null;
            index = 0;
        }

        var hash = Convert.FromBase64String(rd.GetValue(1).ToString());
        Array.Copy(hash, 0, buffer, index, hashsize);
        curHash = blockhash;
        index += hashsize;
    }

// Flush the final (possibly partial) blocklist.
if (curHash != null)
{
    byte[] returnedBuffer = new byte[index];
    Array.Copy(buffer, 0, returnedBuffer, 0, index);
    yield return new Tuple<string, byte[], int>(curHash, returnedBuffer, index);
}

@kenkendk
Member

kenkendk commented Jun 6, 2018

The new commit did what you expected, and that appears to be the opposite of fixing it.
I will look into the shared byte[] buffer.

@kenkendk
Member

kenkendk commented Jun 7, 2018

With the removal of the shared buffer, all the unit tests now pass. I am still unsure whether this will fix the actual problem or just make it appear less frequently.

If the problem was not caused by a race condition but instead by some kind of invalid data, we should see the problem pop up again with this fix.

@SamiLehtinen
Author

Even before the 2.0.3.6 version I saw a surprising amount of i-file corruption, but b-files and dlists have never been corrupted. It seems likely this has been more or less broken all along.

@connesy

connesy commented Jun 14, 2018

Are there any updates on this issue? I have a client with this exact error, but only on one of many backups.

@SamiLehtinen
Author

Just delete the corrupt file, and the next backup will run auto-repair to replace it with an empty placeholder. That's what I've done. However, I haven't confirmed how this might affect restore; I think I'll need to leave full restore tests running over the weekend.

@duplicatibot

This issue has been mentioned on Duplicati. There might be relevant details there:

https://forum.duplicati.com/t/any-way-to-recover-backup-repair-and-purge-broken-files-dont-help/17048/37

@Jojo-1000
Contributor

Reproduction steps

Thanks to @ts678 for finding this:

  • Create a backup with --blocksize=1KB (to allow smaller files)
  • Create a 32769-byte file of zero bytes (or filled with any text character): this is 1 byte more than will fit in one blocklist (see the arithmetic sketch after these steps)
  • Backup
  • Edit first byte to 1
  • Backup
  • Delete oldest dindex
  • Repair
2023-11-29 15:43:35 -05 - [Error-Duplicati.Library.Main.Operation.RepairHandler-CleanupMissingFileError]: Failed to perform cleanup for missing file: duplicati-i2185398e90964c16a6d1f2c6a88ea3c8.dindex.zip, message: Internal consistency check failed, generated index block has wrong hash, soneqSylq6Xy4YkaGvEb4nkUxIhU2w/ltLuVwTfg8tY= vs FAbgWIHimTZ3ZtMT4mwFVk7JG/ch0xcmvW5G5gaJU5o=
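
For reference, here is the arithmetic behind the 32769-byte file size, as a small standalone sketch (it assumes the default 32-byte SHA-256 block hash):

using System;

class BlocklistMath
{
    // Sketch: why a 32769-byte file is "1 byte more than will fit in one blocklist"
    // when --blocksize=1KB (assumes the default 32-byte SHA-256 block hash).
    static void Main()
    {
        const int blocksize = 1024; // --blocksize=1KB
        const int hashsize = 32;    // SHA-256 hash length in bytes

        int hashesPerBlocklist = blocksize / hashsize;          // 32 hashes fit in one blocklist block
        int bytesPerBlocklist = hashesPerBlocklist * blocksize; // 32 * 1024 = 32768 bytes of file data

        Console.WriteLine("One blocklist covers " + bytesPerBlocklist + " bytes");
        Console.WriteLine("A 32769-byte file needs " + Math.Ceiling(32769.0 / bytesPerBlocklist) +
                          " blocklists, the last holding a single hash");
    }
}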

The general circumstances for this failure are:

  • have a file larger than a single blocklist can hold, but also not evenly divisible (the final blocklist must not be full, otherwise it will just be overwritten multiple times)
  • the file changes at the beginning, but the final blocklist stays the same (however many bytes that is)
  • the index corresponding to the unchanged final blocklist goes missing
  • repair will repeat the blocklist while recreating the index (even multiple times for more than 2 file versions) and append everything, then fail due to the incorrect hash
  • compact also has this bug, but without any checks, so it might cause undetected index file corruption

Core problem

Forum post with more detail

The SQL query in LocalDatabase.GetBlocklists is broken and will repeat the blocklist data in some circumstances:

SELECT "A"."Hash", "C"."Hash" FROM (SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", * FROM  "BlocklistHash","Block" WHERE  "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = ?) A,  "BlocksetEntry" B, "Block" C WHERE "B"."BlocksetID" = "A"."BlocksetID" AND  "B"."Index" >= ("A"."Index" * 32) AND "B"."Index" < (("A"."Index" + 1) * 32) AND "C"."ID" = "B"."BlockID"  ORDER BY "A"."BlocksetID", "B"."Index";

The subquery will fetch the same blocklist multiple times from different file versions (BlocksetID 3 and 8 in this case):

SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", * FROM  "BlocklistHash","Block" WHERE  "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = ?;

BlocksetID	Hash	BlocksetID	Index	Hash	ID	Hash	Size	VolumeID
3	DWypxEXA7TXbM2azclwoTH+Y7rEgA9kbDa4/K3ycNdA=	3	0	DWypxEXA7TXbM2azclwoTH+Y7rEgA9kbDa4/K3ycNdA=	4	DWypxEXA7TXbM2azclwoTH+Y7rEgA9kbDa4/K3ycNdA=	1024	3
3	v106/7c+/S7Gw2rTES3ZM+/tY8Thy//PqI4nWcFE8tg=	3	1	v106/7c+/S7Gw2rTES3ZM+/tY8Thy//PqI4nWcFE8tg=	6	v106/7c+/S7Gw2rTES3ZM+/tY8Thy//PqI4nWcFE8tg=	32	3
8	v106/7c+/S7Gw2rTES3ZM+/tY8Thy//PqI4nWcFE8tg=	8	1	v106/7c+/S7Gw2rTES3ZM+/tY8Thy//PqI4nWcFE8tg=	6	v106/7c+/S7Gw2rTES3ZM+/tY8Thy//PqI4nWcFE8tg=	32	3

The outer query will then get all block hashes for each of these results and append them. If different hashes are interleaved, this causes no problem, but if the same hash appears back to back, the reconstructed blocklist is wrong.

This subquery needs to be reworked to return each hash only once.
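
One possible direction for that rework, sketched below under the assumption that picking a single representative blockset per blocklist hash is sufficient (this is only an illustration, not necessarily the change made in the linked pull request):

-- Sketch only: deduplicate the subquery so each blocklist hash maps to exactly one
-- (BlocksetID, Index) pair, and the outer query reconstructs each blocklist once.
SELECT "A"."Hash", "C"."Hash"
FROM (SELECT "Block"."Hash" AS "Hash",
             MIN("BlocklistHash"."BlocksetID") AS "BlocksetID",
             "BlocklistHash"."Index" AS "Index"
      FROM "BlocklistHash", "Block"
      WHERE "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = ?
      GROUP BY "Block"."Hash", "BlocklistHash"."Index") A,
     "BlocksetEntry" B, "Block" C
WHERE "B"."BlocksetID" = "A"."BlocksetID"
  AND "B"."Index" >= ("A"."Index" * 32)
  AND "B"."Index" < (("A"."Index" + 1) * 32)
  AND "C"."ID" = "B"."BlockID"
ORDER BY "A"."BlocksetID", "B"."Index";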

Jojo-1000 added a commit to Jojo-1000/duplicati that referenced this issue Dec 10, 2023
Closes duplicati#3202

Also add test to ensure that repair works as intended.
Jojo-1000 added a commit to Jojo-1000/duplicati that referenced this issue Dec 12, 2023
Closes duplicati#3202

Also add test to ensure that repair works as intended.