
Operator torch._ops.aten.linalg_vector_norm.default is not Aten Canonical #3566

Closed
nbansal90 opened this issue May 9, 2024 · 5 comments
Labels
bug Something isn't working
module: exir Issues related to Export IR
module: kernels Issues related to kernel libraries, e.g. portable kernels and optimized kernels
triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@nbansal90

I am trying to create an ExecuTorch program following the steps at Setting Up ExecuTorch. I adapted them to my model successfully, but when I run the export script I get this error:

torch._export.verifier.SpecViolationError: Operator torch._ops.aten.linalg_vector_norm.default is not Aten Canonical.
Looking back at my model, the only operator that could have caused this issue seems to be:
q = torch.nn.functional.normalize(q, dim=-1)
But I am not sure how to deal with this error, and I am stuck at this step.

What could be a probable workaround for this case? Any suggestions/help is appreciated!

Regards!

@cccclai
Contributor

cccclai commented May 10, 2024

It's supposed to be an ATen core op. This pull request should help: pytorch/pytorch#125789. We're trying to land it.

@cccclai cccclai added the bug label May 10, 2024
@mergennachin mergennachin added the triaged, module: exir, and module: kernels labels May 10, 2024
@nbansal90
Author

@cccclai that's great to know! I would appreciate it if this update could be pushed through, as it is blocking my efforts to benchmark my model on an edge device.

Thank you for your prompt response.

@cccclai
Contributor

cccclai commented Jun 3, 2024

Before the PR is merged, a workaround is to call the following ops manually. The same idea applies in pytorch/pytorch#125789:

def decomp_linalg_vector_norm(a, order):
    # Compute the absolute values of the elements of 'a'
    abs_a = torch.abs(a)

    # Compute the sum of the absolute values raised to the power of 'order'
    # (note: torch.pow takes the exponent positionally, not as ord=)
    sum_p = torch.sum(torch.pow(abs_a, order))

    # Compute the order-th root of the sum
    norm_value = torch.pow(sum_p, 1.0 / order)
    return norm_value
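For reference (this check is my addition, not from the thread), the decomposition above can be sanity-checked against torch.linalg.vector_norm on a flat tensor:

```python
import torch

def decomp_linalg_vector_norm(a, order):
    # Vector norm decomposed into core ATen ops: abs, pow, sum, pow
    abs_a = torch.abs(a)
    sum_p = torch.sum(torch.pow(abs_a, order))
    return torch.pow(sum_p, 1.0 / order)

a = torch.randn(8)
# Should agree with the canonical op it replaces
assert torch.allclose(decomp_linalg_vector_norm(a, 2),
                      torch.linalg.vector_norm(a, ord=2), atol=1e-6)
```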

@nbansal90
Author

Sure! I will give it a try.

@nbansal90
Author

nbansal90 commented Jun 11, 2024

def decomp_linalg_vector_norm(a, order=2):
    abs_a = torch.abs(a)
    sum_p = torch.sum(torch.pow(abs_a, order))
    norm_value = torch.pow(sum_p, 1.0 / order)
    # Divide the input by its norm to mimic normalize()
    out = torch.div(a, norm_value)
    return out
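As a sanity check (my addition, assuming the goal is to reproduce F.normalize along the last dimension), a per-row variant of this decomposition can be compared against torch.nn.functional.normalize; the dim/keepdim handling and the eps guard below mirror normalize's defaults:

```python
import torch
import torch.nn.functional as F

def manual_normalize(a, dim=-1, order=2, eps=1e-12):
    # Normalize along `dim` using only core ATen ops; eps guards
    # against division by zero, mirroring F.normalize's default.
    abs_a = torch.abs(a)
    sum_p = torch.sum(torch.pow(abs_a, order), dim=dim, keepdim=True)
    norm_value = torch.pow(sum_p, 1.0 / order)
    return torch.div(a, torch.clamp(norm_value, min=eps))

q = torch.randn(2, 4)
assert torch.allclose(manual_normalize(q), F.normalize(q, dim=-1), atol=1e-6)
```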

This resolves the issue for me. Since there is already a PR looking into adding this op to the core ATen operator set, I am closing this.
