
task: added delete task handler #160

Open · wants to merge 2 commits into main

Conversation

gioelecerati
Member

No description provided.

@gioelecerati gioelecerati requested a review from a team as a code owner February 28, 2023 18:45

codecov bot commented Feb 28, 2023

Codecov Report

Merging #160 (56b2073) into main (aba5f6e) will decrease coverage by 0.58142%.
The diff coverage is 0.00000%.

Additional details and impacted files


@@                 Coverage Diff                 @@
##                main        #160         +/-   ##
===================================================
- Coverage   17.46835%   16.88693%   -0.58142%     
===================================================
  Files             16          17          +1     
  Lines           1975        2043         +68     
===================================================
  Hits             345         345                 
- Misses          1605        1673         +68     
  Partials          25          25                 
Files Coverage Δ
task/runner.go 22.67206% <ø> (ø)
task/delete.go 0.00000% <0.00000%> (ø)

Continue to review full report in Codecov by Sentry.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update aba5f6e...56b2073. Read the comment docs.


Member

@victorges victorges left a comment


Requesting changes due to lack of pagination and the whole error handling story

task/delete.go (resolved)
task/delete.go (resolved)
glog.Errorf("Error listing files in directory %v", directory)
}

totalFiles := files.Files()
Member


This is not the total files, it's only 1 page! You have to handle pagination, e.g.: https://github.com/livepeer/livepeer-infra/pull/1214/files#diff-71a8aa1d4b064228d72ee7fbf0ba12affc84118bac0b08da9ad06125cd5f86deR178 (you don't need the rate limiting).
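
A rough sketch of what draining the pages could look like; the filePage interface here is hypothetical, defined only to keep the example self-contained, and the real type returned by the storage driver may differ:

// filePage is a hypothetical, minimal shape of one page of listing results.
type filePage interface {
	Files() []string
	HasNextPage() bool
	NextPage() (filePage, error)
}

// listAllFiles drains every page before returning, so the caller knows the
// real total number of files up front.
func listAllFiles(first filePage) ([]string, error) {
	var all []string
	for page := first; ; {
		all = append(all, page.Files()...)
		if !page.HasNextPage() {
			return all, nil
		}
		var err error
		if page, err = page.NextPage(); err != nil {
			return all, err
		}
	}
}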

task/delete.go (resolved)
totalFiles := files.Files()

accumulator := NewAccumulator()
tctx.Progress.TrackCount(accumulator.Size, uint64(len(totalFiles)), 0)
Member


We have a problem here due to pagination: the total is not really the total if we do the pagination concurrently with the deletion. One alternative, just to get a correct progress number, would be to list all the files beforehand, accumulating them in memory until there are no more pages, and only then start the deletion process. It would make the deletion take a little longer when there are many pages, but I think it may be worth it anyway.

Otherwise we could try something smarter, like using the asset size or duration to approximate the number of files, but that sounds like over-engineering, so I'd prefer either the pre-listing idea, letting progress go back and forth once there are multiple pages, or not having any progress at all.

WDYT?
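
A sketch of the pre-listing option, reusing listAllFiles from the earlier sketch so the count handed to TrackCount is the real total; firstPage is a placeholder for the first listing result, and the handler's return signature is assumed:

// Gather every page first, then report progress against the true total.
allFiles, err := listAllFiles(firstPage)
if err != nil {
	return nil, err
}
accumulator := NewAccumulator()
tctx.Progress.TrackCount(accumulator.Size, uint64(len(allFiles)), 0)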

Member


Note: the ideal long-term solution might be to store the total size of the asset files in the asset object when they are created or modified. Then here we could just measure progress based on that total size. Not a change for this PR though, as it involves a lot of other code paths.

glog.Errorf("Error deleting file %v: %v (retrying...)", filename, err)
time.Sleep(retryInterval)
}
return fmt.Errorf("failed to delete file %v after %d retries", filename, maxRetries)
Member


If you return an error the whole error group will cancel. You should always return nil and only log the errors instead (if you want to delete as much as possible before failing).
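
A minimal sketch of that shape, where deleteFile is a placeholder for the existing per-file delete-with-retries logic:

// Log and swallow the per-file error so the errgroup keeps working through
// the remaining files instead of cancelling everything on the first failure.
eg.Go(func() error {
	if err := deleteFile(filename); err != nil {
		glog.Errorf("Error deleting file %v: %v", filename, err)
	}
	return nil
})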

Comment on lines +56 to +68
if err := eg.Wait(); err != nil {
glog.Errorf("Error deleting files: %v", err)
}

if ipfs := asset.AssetSpec.Storage.IPFS; ipfs != nil {
err = UnpinFromIpfs(*tctx, ipfs.CID, "cid")
if err != nil {
glog.Errorf("Error unpinning from IPFS %v", ipfs.CID)
}
err = UnpinFromIpfs(*tctx, ipfs.NFTMetadata.CID, "nftMetadataCid")
if err != nil {
glog.Errorf("Error unpinning metadata from IPFS %v", ipfs.NFTMetadata.CID)
}
Member


Actually, thinking further about it: if you never return errors, this task will always succeed and we will never even retry deleting the files. In order to simplify this whole logic (since you still have some missing complexity like pagination), I'd suggest you just return errors for now.

If you do want the failures to be as much partial successes as possible, you can accumulate the intermediate errors in a local slice here, and at the end create a single error in case you found any during the process. I think that would be premature optimization for now though, so I'd rather invest more in the retry logic you implemented here for file deletion, and add it for IPFS unpinning as well.

Can even create a generic helper function to retry another function a few times :D
Geppetto gave me this:

// retry runs f until it succeeds, giving up after maxRetries+1 attempts and
// returning the last error.
func retry(maxRetries int, f func() error) error {
    var err error

    for i := 0; i <= maxRetries; i++ {
        if err = f(); err == nil {
            return nil
        }
    }

    return fmt.Errorf("after %d attempts, last error: %s", maxRetries, err)
}

WDYT?
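
For example, the IPFS unpinning from the snippet above could be wrapped roughly like this, reusing maxRetries from the file-deletion code:

if err := retry(maxRetries, func() error {
	return UnpinFromIpfs(*tctx, ipfs.CID, "cid")
}); err != nil {
	glog.Errorf("Error unpinning from IPFS %v: %v", ipfs.CID, err)
}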

}

return &TaskHandlerOutput{
TaskOutput: &data.TaskOutput{},
Member


Probably still create the Delete output field (empty object), otherwise we have some code in the API that can explode if the task type output is missing.
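
A sketch of what that could look like, assuming the handler returns (*TaskHandlerOutput, error); the Delete field name comes from this comment, but the concrete output type name is a guess:

return &TaskHandlerOutput{
	TaskOutput: &data.TaskOutput{
		// Field name per the comment above; the exact type is assumed.
		Delete: &data.DeleteTaskOutput{},
	},
}, nil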

func UnpinFromIpfs(tctx TaskContext, cid string, filter string) error {
assets, _, err := tctx.lapi.ListAssets(api.ListOptions{
Filters: map[string]interface{}{
filter: cid,
Member


Weird Go syntax, it looks like the field is called filter lol
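
One way to make it read less like a struct field is to build the map separately, keeping the same call from the snippet above (purely a readability tweak):

// The key here is the value of the filter variable, not a field named "filter".
filters := map[string]interface{}{}
filters[filter] = cid
assets, _, err := tctx.lapi.ListAssets(api.ListOptions{Filters: filters})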

task/runner.go Outdated
@@ -35,14 +35,15 @@ var (
// without including extraneous error information from Catalyst
errInvalidVideo = UnretriableError{errors.New("invalid video file codec or container, check your input file against the input codec and container support matrix")}
// TODO(yondonfu): Add link in this error message to a page with the input codec/container support matrix
errProbe = UnretriableError{errors.New("failed to probe or open file, check your input file against the input codec and container support matrix")}
errProbe = UnretriableError{errors.New("failed to probe or open file, check your input file against the input codec and container support matrix")}
Member


I've fixed this on main already so you may get some conflicts
