[method.cpp:825] Error setting input 0: 0x10 #3572

Open
mikekgfb opened this issue May 10, 2024 · 2 comments
@mikekgfb
Contributor

As per @ali-khosh -- this appears to be an issue with ExecuTorch model handling, or should we be doing something differently in how we use ET for eval?

When running
python3 torchchat.py eval llama3 --pte-path llama3.pte --limit 5

I get the following error:

Warning: checkpoint path ignored because an exported DSO or PTE path specified
Using device=cpu
Loading model...
Time to load model: 0.03 seconds
[program.cpp:130] InternalConsistency verification requested but not available
config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 665/665 [00:00<00:00, 1.20MB/s]
model.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 548M/548M [00:17<00:00, 32.2MB/s]
generation_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 124/124 [00:00<00:00, 1.01MB/s]
tokenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 26.0/26.0 [00:00<00:00, 68.1kB/s]
vocab.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 1.04M/1.04M [00:00<00:00, 7.75MB/s]
merges.txt: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 456k/456k [00:00<00:00, 51.6MB/s]
tokenizer.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 1.36M/1.36M [00:00<00:00, 5.44MB/s]
/Users/ali/Desktop/torchchat/torchchat/.venv/lib/python3.10/site-packages/datasets/load.py:1486: FutureWarning: The repository for EleutherAI/wikitext_document_level contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/EleutherAI/wikitext_document_level
You can avoid this message in future by passing the argument trust_remote_code=True.
Passing trust_remote_code=True will be mandatory to load this dataset from the next major release of datasets.
warnings.warn(
Downloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████| 10.7k/10.7k [00:00<00:00, 21.4MB/s]
Downloading readme: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 7.78k/7.78k [00:00<00:00, 12.4MB/s]
Repo card metadata block was not found. Setting CardData to empty.
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 4.72M/4.72M [00:00<00:00, 14.9MB/s]
Generating test split: 62 examples [00:00, 2689.69 examples/s]
Generating train split: 629 examples [00:00, 8728.97 examples/s]
Generating validation split: 60 examples [00:00, 8409.35 examples/s]
0%| | 0/5 [00:00<?, ?it/s][tensor_impl.cpp:93] Attempted to resize a static tensor to a new shape at dimension 1 old_size: 1 new_size: 1263
[method.cpp:825] Error setting input 0: 0x10
0%| | 0/5 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/Users/ali/Desktop/torchchat/torchchat/torchchat.py", line 150, in
eval_main(args)
File "/Users/ali/Desktop/torchchat/torchchat/eval.py", line 245, in main
result = eval(
File "/Users/ali/Desktop/torchchat/torchchat/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/ali/Desktop/torchchat/torchchat/eval.py", line 192, in eval
eval_results = evaluate(
File "/Users/ali/Desktop/torchchat/torchchat/.venv/lib/python3.10/site-packages/lm_eval/utils.py", line 402, in _wrapper
return fn(*args, **kwargs)
File "/Users/ali/Desktop/torchchat/torchchat/.venv/lib/python3.10/site-packages/lm_eval/evaluator.py", line 330, in evaluate
resps = getattr(lm, reqtype)(cloned_reqs)
File "/Users/ali/Desktop/torchchat/torchchat/.venv/lib/python3.10/site-packages/lm_eval/models/huggingface.py", line 616, in loglikelihood_rolling
string_nll = self._loglikelihood_tokens(
File "/Users/ali/Desktop/torchchat/torchchat/.venv/lib/python3.10/site-packages/lm_eval/models/huggingface.py", line 787, in _loglikelihood_tokens
self._model_call(batched_inps, **call_kwargs), dim=-1
File "/Users/ali/Desktop/torchchat/torchchat/eval.py", line 145, in _model_call
logits = model_forward(self._model, x, input_pos)
File "/Users/ali/Desktop/torchchat/torchchat/generate.py", line 260, in model_forward
return model(x, input_pos)
File "/Users/ali/Desktop/torchchat/torchchat/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/ali/Desktop/torchchat/torchchat/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in call_impl
return forward_call(*args, **kwargs)
File "/Users/ali/Desktop/torchchat/torchchat/build/model_et.py", line 15, in forward
logits = self.model.forward(forward_inputs)
RuntimeError: method->set_inputs() for method 'forward' failed with error 0x12
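
For context, the log above suggests a shape mismatch: the PTE appears to have been exported with a static [1, 1] token input (one token per decode step), while lm_eval's loglikelihood path pushes the whole context through a single forward call ([1, 1263] here), which set_inputs() then rejects. Below is a minimal sketch of how the eval path could feed the exported model one token at a time instead, assuming the export really is fixed to a single-token input; the helper name is hypothetical and this is illustrative, not the current eval code.

```python
import torch

def model_call_one_token_at_a_time(model, tokens: torch.Tensor) -> torch.Tensor:
    """Illustrative only: score a [1, seq_len] prompt against a PTE whose
    forward() was exported with a static [1, 1] token input."""
    logits = []
    for pos in range(tokens.shape[1]):
        x = tokens[:, pos : pos + 1]                         # [1, 1] slice, matches the static export
        input_pos = torch.tensor([pos], dtype=torch.int64)   # same call signature as model_forward
        logits.append(model(x, input_pos))
    return torch.cat(logits, dim=1)                          # [1, seq_len, vocab_size]
```

Whether the fix belongs in the eval path or in how the model is exported (e.g., with a dynamic sequence dimension) is probably the real question here.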

@mikekgfb
Contributor Author

Please also update T188169161 if this is resolved/mitigated.

@ali-khosh

Still having issues. Updated the task.
