
Invalid header marker #5142

Open

TheGP opened this issue Apr 6, 2024 · 19 comments

@TheGP

TheGP commented Apr 6, 2024

It says "Invalid header marker". Can you make your backup app work?? Wtf is this trash app.

@kenkendk
Member

kenkendk commented Apr 6, 2024

Hi, can you provide some context?

What steps did you perform?
When did the error occur?
Have you done anything to investigate the source of the problem?

@TheGP
Author

TheGP commented Apr 6, 2024

Just try to use your app for some time and it will fail 100%. Why can't you fix stability after 10 years already?

@ts678
Collaborator

ts678 commented Apr 8, 2024

I've been using it for many years without getting this one. I get some others though. If you're willing to provide some answers, Duplicati forum is a better place to handle support requests. Searching older posts for similar ones can help. GitHub search and forum search don't work as well for me as Google search on the site, but they do discover some things related to this situation.

You've probably gotten a bad file on your (unnamed) destination somehow, but you need to talk about it, to determine repairs.

Can't repair after crash #2501 pops out of even a GitHub search, and gives some project history (since you're asking questions).

@TheGP
Author

TheGP commented Apr 9, 2024

I tried to use it on all operating systems, with online storage and local, and it failed 100% of the time with different errors everywhere. Shittiest backup tool I've ever seen!

@ts678
Collaborator

ts678 commented Apr 9, 2024

https://usage-reporter.duplicati.com/ says 65 million backups per year are now being run on it. You can see some of the occasional poor results in Issues and Forum (and I'm sure some aren't posted). You can look at the Issues backlog around you (it exists -- not enough devs), however your not-specifically-described result above is unusually poor. Obviously help is available (preferably on the Forum) if you wish support, but you'll have to get specific about actions and errors.

@ts678
Collaborator

ts678 commented Apr 9, 2024

I tried to use it on all operating systems

Windows is the easiest to set up, as you don't need to install mono (and the right version of it), which Linux and macOS still need (this is being fixed).

Installation in the manual covers various OS types. The Mac coverage is still a bit vague. What does "all operating systems" mean? Can you name a few?

with online storage and local

Local storage has fewer things that can be set up wrong, or go wrong for whatever reason. Testing a small backup as a start helps as well.

it failed 100% of the time with different errors

We need info on when (install, backup, restore, etc.) and how. Out of all those errors, could you share some of what they said?

If you're not going to provide anything specific, preferably in the forum, there's nothing actionable here, so the issue might as well be closed.

I will add "pending user feedback" label, which will make that automatic (eventually) if nothing comes in concerning the situation.

You've had two people try to help, but the lack of info prevents that.

@ts678 ts678 added the pending user feedback needs information from the original poster label Apr 9, 2024
@TheGP
Author

TheGP commented Apr 9, 2024

I don't want to look anywhere; it fails 100% in some time. How much time do you think I have? I will write my own faster than solve your problems. I have tried Windows 11, Mac, and Ubuntu.

https://usage-reporter.duplicati.com/ says 65 million backups per year

So how can a backup tool suddenly have a 30% drop in its stats? If it works correctly, does that just mean it failed for 30% of the users and they repaired it manually the next month? I will definitely believe that! XD Or it is 3 users, and it failed for one of them.

P.S. I was of course kidding about one month; on Windows I think it worked for 1.2 years.
P.P.S. Now it says "Found 1 files that are missing from the remote storage, please run repair". After clicking repair, nothing happens, as usual. Why do you suggest repair if it doesn't work?

@TheGP
Author

TheGP commented Apr 9, 2024

I don't understand how the file can be missing at all. Do you know how to use something like fileExists()? Can you read about transactions or something? It's such basic stuff 🤦

@ts678
Collaborator

ts678 commented Apr 9, 2024

So how can a backup tool suddenly have a 30% drop in its stats?

I suspect a recent change in the servers by @kenkendk, if you mean the recent drop in usage. Below are some 2018-2023 statistics:

[screenshot: 2018-2023 usage statistics from usage-reporter.duplicati.com]

it fails 100% in some time

Thanks for "in some time". So presumably it works fine until it doesn't. Personally I think long-term reliability needs improvement, however it's not a matter of just wanting it. Problems, when they arise, need investigations, so need data, which needs user help. Combine that with a developer shortage, and you get a product which remains in Beta because it's not quite reached Stable yet...

Why do you suggest repair if it doesn't work?

Possibly an old leftover from when Repair tried harder and slower. These days, my suggestion ends in purge-broken-files, assuming it's a dblock file. You can get the type from the file name, e.g. watch About --> Show log --> Live --> Warning. Because that's pretty clunky, the current Canary (eventually the next Beta) also puts such things in the usual job log, which is easier on all of us.

Recovering by purging files is still relevant, but these days you should probably add console-log-level=Information to get the headers (the logging changed). The (old) release note for the Repair change is below (from About --> Changelog), but I think the message should change.

Removed automatic attempts to rebuild dblock files as it is slow and rarely finds all the missing pieces (can be enabled with --rebuild-missing-dblock-files).

If you lost a dindex or dlist file, recovery by Repair is still correct (assuming the database is right): it just replaces the missing file, because the database holds the information on what it expects to see, and it complains when something is missing, though sometimes it's mistaken.
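As a rough illustration of "get the type from the file name": Duplicati volume names embed the type, so a tiny helper can point at the recovery path described above. This is only a hedged sketch; the file name in the example is made up, and the suggestions simply restate the advice in this thread (purge-broken-files for a dblock, Repair for a dindex/dlist), not official tooling.

```python
def suggest_recovery(missing_filename):
    """Guess the Duplicati volume type from its name and suggest a next step.

    Assumes names containing .dblock. / .dindex. / .dlist.; the suggestions
    mirror the advice in this thread, not an official decision table.
    """
    name = missing_filename.lower()
    if ".dblock." in name:
        return "dblock missing: purge-broken-files is the usual path"
    if ".dindex." in name or ".dlist." in name:
        return "dindex/dlist missing: Repair can recreate it from the local database"
    return "unknown type: check About --> Show log --> Live --> Warning for the file name"


print(suggest_recovery("duplicati-bexample.dblock.zip.aes"))  # made-up name
```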

I don't understand how the file can be missing at all. Do you know how to use something like fileExists()?

That last question makes me wonder if you understand the database usage, so let's go to the manual for some background on it:

Database management

Duplicati makes use of a local database for each backup job that contains information about what is stored at the backend.

So Duplicati does know how to tell whether a file exists, and if it doesn't exist when the database says it should, it issues such an error.
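Conceptually, that check is just a comparison between the file list the local database expects and the file list the backend actually returns. A minimal sketch of the idea (plain Python sets; check_remote_consistency and the file names are made up for illustration, not Duplicati's API):

```python
def check_remote_consistency(expected_from_database, remote_list_from_backend):
    """Compare what the local database expects with what the backend actually lists.

    Both arguments are plain sets of file names; this mirrors the idea only,
    not Duplicati's actual implementation.
    """
    expected = set(expected_from_database)
    actual = set(remote_list_from_backend)
    missing = expected - actual  # leads to "files that are missing from the remote storage"
    extra = actual - expected    # unexpected files get reported as well
    return missing, extra


missing, extra = check_remote_consistency(
    {"duplicati-a.dlist.zip.aes", "duplicati-b.dblock.zip.aes"},  # made-up names
    {"duplicati-a.dlist.zip.aes"},
)
print(f"Found {len(missing)} files that are missing from the remote storage")
```

The "Found 1 files that are missing from the remote storage" message quoted earlier corresponds to that missing set being non-empty.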

Can you read about transactions or something?

I'm not a Duplicati developer, just helping out here; however, to do that I'd specifically want to read about transactions in Duplicati.
Channel Pipeline describes the concurrency environment. While the code has different threads looking after their own transaction needs, it's unclear to me whether that's coordinated with the needs of other threads. I'm looking for a design spec on concurrency and transactions.
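For anyone unfamiliar with the term, a "channel pipeline" just means producer/consumer stages connected by channels, each stage running on its own thread. A generic sketch of that pattern (illustrative only, not Duplicati's code; the stage names are hypothetical), which also hints at why per-thread database work would need coordination:

```python
import queue
import threading


def stage(inbox, outbox, work):
    """Generic pipeline stage: pull items from one channel, push results to the next."""
    while (item := inbox.get()) is not None:
        outbox.put(work(item))
    outbox.put(None)  # pass the shutdown signal downstream


blocks, uploads = queue.Queue(), queue.Queue()

# Hypothetical single worker stage; a real pipeline chains several of these.
threading.Thread(target=stage, args=(blocks, uploads, str.upper), daemon=True).start()

for b in ["block1", "block2"]:
    blocks.put(b)
blocks.put(None)

while (done := uploads.get()) is not None:
    print("uploaded", done)
```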

@ts678 ts678 removed the pending user feedback needs information from the original poster label Apr 9, 2024
@TheGP
Author

TheGP commented Apr 9, 2024

@ts678 if you are not involved in development, there is definitely no need to spend time in this conversation. I appreciate your help, but I believe a normal tool should be able to fix the backup itself without such hard debugging.

Thanks for "in some time". So presumably it works fine until it doesn't.

On Ubuntu and Mac it broke within months; Windows lasted the longest, probably because it's just a local disk backup. I just got tired of fixing this stupid tool every time it breaks, so I wanted to post it here. That never happened with CrashPlan, never happened with Backblaze, never happened with my own crontab bash tool. Only this app is special.

which needs user help.

I helped 5-7 years ago; if that's not enough time to fix basic issues, I'm done with helping.

@ts678
Collaborator

ts678 commented Apr 9, 2024

believe a normal tool should be able to fix the backup itself without such hard debugging.

I agree that it can be hard to debug & fix failures. FWIW I had CrashPlan home fail lots. One (not) fun one was auto-uninstall.
Their updater would do an uninstall, then fail the install. I debugged it heavily, but their devs never could get it working right.

Backblaze Personal Backup at least has an easy enough setup that it's hard to get wrong. Still, look at complaints about them.

Here you should eventually get at least an easier way to ID the type of file. If a dblock, it's an easy fix from GUI Commandline. Arguably, Duplicati should ideally not need to use this, but the destinations can and do drop files by themselves sometimes...

I helped 5-7 years ago; if that's not enough time to fix basic issues, I'm done with helping.

I can't find your GitHub history beyond a 2023 issue and a 2019 thumbs-down, but help here means providing info for support, which is sort of the unfortunate fallback for less than perfect software that sometimes needs human judgment and help to fix.

There's probably no instant cure for the long issue backlog (and possibly the backlog isn't complete), but staffing might get better. The original author is back, after many years of absence from the client, and there is a new opportunity to actually hire some help:

Introducing “Duplicati, Inc.” repeats some history I've already covered. It also demands a solid base to build on. I hope we get one.

@TheGP
Author

TheGP commented Apr 10, 2024

Here you should eventually get at least an easier way to ID the type of file. If a dblock, it's an easy fix from GUI Commandline. Arguably, Duplicati should ideally not need to use this, but the destinations can and do drop files by themselves sometimes...

My destinations never dropped the files, neither online nor local disks; it loses its files by itself, maybe the system crashes during file copying or something.

I agree that it can be hard to debug & fix failures. FWIW I had CrashPlan home fail lots. One (not) fun one was auto-uninstall.
Their updater would do an uninstall, then fail the install. I debugged it heavily, but their devs never could get it working right.

Installing or uninstalling doesn't matter so much; the core features matter much more.

less than perfect software

It's software that is guaranteed to fail. It's like installing Windows and expecting it to die within 1-15 months, guaranteed. Or like Photoshop stopping working and corrupting all the files you worked with before. Or like your GitHub profile suddenly corrupting all the info you have so you need to set up a new account. Have you seen such an app? I've seen only one like that. This one.

A complete, guaranteed failure with no ability to use old backups.

@ts678
Collaborator

ts678 commented Apr 10, 2024

Installing or uninstalling doesn't matter so much; the core features matter much more.

Disagree. The core feature is backup, and when it self-uninstalls, that core feature goes away.
I just thought that was an annoying misfeature. Other CrashPlan features also failed. YMMV.

no ability to use old backups.

Not true, generally -- we won't know the situation here until you try. I've given you the steps.

This is getting pointless. Do you wish support? If so, I'll help. You complain of things I can't fix.

@TheGP
Author

TheGP commented Apr 10, 2024

Do you wish support? If so, I'll help.

Nope, it will be a waste of time. The last time I followed the steps, the problem became worse with each step, and we ended up agreeing the backup had to be recreated to fix it.

I just reported a bug basically.

@ts678
Collaborator

ts678 commented Apr 10, 2024

Nope, it will be a waste of time

Then there is probably nothing more I can do for you here. Maybe I'll let the developers decide if this issue needs to stay open. There's no value added by pointing out that there are open issues -- still. GitHub keeps track of that nicely. More on that later.

The last time I followed the steps, the problem became worse with each step, and we ended up agreeing the backup had to be recreated to fix it.

No record of this user name in the forum (maybe it was a different one). There was a 2023 issue where your conclusion was that message should advise a recreate after you damaged the backup by deleting files from its destination. Here's the ending of that:

Deleting backup files on the destination should not cause backup full stop #5026

I just exported the whole backup settings, deleted, and re-imported. Anyway, the reason I bring it up is that it is not a user-friendly approach. It should suggest recreating the backup from scratch by clicking a button, or silently do it itself.

That point is coming up in the current issue, but generally recreating from scratch gets you into "no ability to use old backups" because the old versions get deleted. That's not something to do lightly until all the less disruptive paths have failed.

Generally, issues are support requests (ideally with reproducible steps, but that often isn't possible) or enhancement requests.
Insufficient information tends to add noise but no value, so such clutter gets cleared out to allow work on workable problems.

@TheGP
Author

TheGP commented Apr 10, 2024

No record of this user name in the forum (maybe it was a different one). There was a 2023 issue where your conclusion was that message should advise a recreate after you damaged the backup by deleting files from its destination. Here's the ending of that:

What, are you fact-checking me? Lol

@ts678
Collaborator

ts678 commented Apr 10, 2024

I was looking to see what you're talking about (still not finding it, but I'd believe something happened). Occasionally a fresh start is the way out, but it's definitely something to be avoided, unless the user is consulted, doesn't care, and finds it the easiest way out.

There's actually one useful tidbit here, although it's kind of buried in the noise. The message on Repair should probably guide the user better, especially since Repair is not always the right thing. I suspect the code knows what kind of file(s) it's missing, so it could offer the proper suggestion on what to do. This general wish has come up before -- what to do with a mystifying error message?

Of course, if no issues were ever hit, or they were handled automatically, the issue of user involvement wouldn't exist. We're not there yet.

Currently, there's a little popup, historical forum posts and issues for research (for those so inclined), and (usually) a support volunteer, in this case me, who tries to get information and make a recommendation. Mine still stands. It takes less than a minute.

I'd be willing to open an issue on your behalf about the need for a better error message, but that (and support) is about all I can do. Please let me know if you want that. Developers and testers (present and future) will have to take care of the breakage problems, sometimes following up on the generous troubleshooting clues some reporters can provide. Knowing how things break is key to fixing them.

@kenkendk
Member

kenkendk commented May 3, 2024

I have been looking a bit into the source for this error.
The error message comes from the SharpAESCrypt library, and it is thrown when it tries to read a file that is not an AES-encrypted file.
As far as I can see, there are 2 distinct cases where the error is thrown:

  1. If the file is too short (most likely 0 bytes)
  2. If the data is corrupted/wrong

If we can boil it down to either case, it will be easier to trace.

  • If the problem is empty files, then we can look for that and treat them as not-existing (as the two conditions are similar in this case)

  • If the problem is corrupted data, I have seen issues where the backend will send an error message, like "Permission denied" or "File quarantined".

  • A more serious case is if the file is actually corrupted on storage, in this case there is no remedy as the original data may not exist anymore.

For any of these cases, it is relevant to figure out which backend you are using, so we can look there and try to reproduce.
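As a minimal sketch of how one might triage a suspect file before decryption (assuming the standard AES Crypt container format, whose files are expected to start with the ASCII bytes "AES" plus a version byte; the function name and messages are illustrative, not Duplicati's actual code):

```python
import os

AESCRYPT_MAGIC = b"AES"  # assumption: AES Crypt files start with ASCII "AES" plus a version byte


def classify_downloaded_volume(path):
    """Rough triage of a downloaded file before decryption (illustrative only)."""
    if os.path.getsize(path) == 0:
        return "case 1: empty file (0 bytes)"
    with open(path, "rb") as f:
        header = f.read(4)
    if len(header) < 4:
        return "case 1: file too short"
    if not header.startswith(AESCRYPT_MAGIC):
        # Could be a backend error page ("Permission denied", "File quarantined")
        # stored in place of the real volume, or genuine corruption (case 2).
        return "case 2: not an AES Crypt header (wrong or corrupted data)"
    return "header looks valid; deeper corruption is still possible"
```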

@Jojo-1000
Contributor

A more serious case is if the file is actually corrupted on storage, in this case there is no remedy as the original data may not exist anymore.

I had a case where a file was just null bytes on a local drive, but I was unable to reproduce it, so it might have been from aborting in the debugger. It was just strange that it was not empty, but a normal volume size.
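A quick way one might check a suspect volume for that all-zeroes symptom (an illustrative sketch, not part of Duplicati):

```python
import os


def is_all_null_bytes(path, chunk_size=1 << 20):
    """Return True if a file is non-empty but contains only 0x00 bytes."""
    if os.path.getsize(path) == 0:
        return False  # an empty file is the other case discussed above
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            if chunk.count(0) != len(chunk):
                return False
    return True
```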
