Issues: pytorch/executorch

[v0.2.1] Release Tracker
#3409 opened Apr 29, 2024 by dbort


Issues list

Errors when lowering to edge (labels: module: exir, triaged)
#3659 opened May 17, 2024 by ismaeelbashir03
Llama example build failure on MacOS (labels: bug, high priority, module: build, triage review, triaged)
#3600 opened May 14, 2024 by GregoryComer
Llama2-7B mobile app crashes on Samsung S23 8GB RAM (labels: Android, module: extension, triaged)
#3599 opened May 14, 2024 by salykova
Does llama2 example on Android utilize HTP? (labels: triaged)
#3586 opened May 11, 2024 by CHNtentes
Is Qwen in the roadmap?
#3583 opened May 11, 2024 by DzAvril
Ensure python version is compatible before building wheels (labels: module: build, module: doc, triaged)
#3570 opened May 10, 2024 by Ciao-Wen-Chen
Evaluation results of llama2 with executorch (labels: llm: evaluation, triaged)
#3568 opened May 10, 2024 by l2002924700
Operator torch._ops.aten.linalg_vector_norm.default is not Aten Canonical (labels: bug, module: exir, module: kernels, triaged)
#3566 opened May 9, 2024 by nbansal90
what's the meaning of "Groupwise 4-bit (128)" (labels: module: quantization, module: xnnpack, rfc)
#3559 opened May 9, 2024 by l2002924700
Quantize Llava encoder
#3557 opened May 9, 2024 by iseeyuan
Exporting Llama3's tokenizer (labels: bug, module: doc, triaged)
#3555 opened May 8, 2024 by vifi2021
Support Phi 3 model (labels: high priority, triaged)
#3550 opened May 8, 2024 by iseeyuan
ERROR: Overriding output data pointer allocated by memory plan is not allowed. (labels: bug, partner: qualcomm, triaged)
#3528 opened May 7, 2024 by sunqijie0350
converting llama3 models with added tokens (labels: enhancement)
#3519 opened May 6, 2024 by l3utterfly
kv cache manipulation? (labels: enhancement, feature)
#3518 opened May 6, 2024 by l3utterfly
torch.max(input) fails at XNNPACK runtime (labels: bug, module: xnnpack)
#3516 opened May 6, 2024 by kinghchan