
ValueError: Unable to restore custom object of type _tf_keras_metric. Please make sure that any custom layers are included in the custom_objects arg when calling load_model() and make sure that all layers implement get_config and from_config #224

fidesachates opened this issue Feb 13, 2022 · 1 comment

Describe the bug
When I use the tf2 branch suggested in this issue, I can't run the convert step.

To Reproduce
Steps to reproduce the behavior:

  1. Start a bash shell in the nvidia/cuda:11.5.1-cudnn8-runtime-ubuntu20.04 Docker image
  2. Check out the git repo and branch from the PR above
  3. Run setup.sh
  4. Follow the written documentation in this repo to train a wake word
  5. Run precise-convert as described in the documentation

Expected behavior
A .pb file is created.

Log files

Command output

(.venv) root@0170575a25bb:/precise# precise-convert mywakeword.net
Converting mywakeword.net to mywakeword.tflite ...
2022-02-13 14:03:25.311901: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.331090: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.331215: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.331436: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-02-13 14:03:25.332303: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.332468: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.332591: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.815796: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.815980: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.816121: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.816246: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 6505 MB memory: -> device: 0, name: NVIDIA GeForce RTX 2060 SUPER, pci bus id: 0000:01:00.0, compute capability: 7.5
WARNING:tensorflow:Layer net will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Traceback (most recent call last):
  File "/root/mycroft-precise/.venv/bin/precise-convert", line 33, in <module>
    sys.exit(load_entry_point('mycroft-precise', 'console_scripts', 'precise-convert')())
  File "/root/mycroft-precise/precise/scripts/base_script.py", line 49, in run_main
    script.run()
  File "/root/mycroft-precise/precise/scripts/convert.py", line 38, in run
    self.convert(args.model, args.out.format(model=model_name))
  File "/root/mycroft-precise/precise/scripts/convert.py", line 60, in convert
    model = K.models.load_model(model_path, custom_objects={'weighted_log_loss': weighted_log_loss})
  File "/root/mycroft-precise/.venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/root/mycroft-precise/.venv/lib/python3.8/site-packages/keras/saving/saved_model/load.py", line 1008, in revive_custom_object
    raise ValueError(
ValueError: Unable to restore custom object of type _tf_keras_metric. Please make sure that any custom layers are included in the custom_objects arg when calling load_model() and make sure that all layers implement get_config and from_config.
(.venv) root@0170575a25bb:/precise# ./test.sh
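
Not part of the original report: a possible workaround sketch for the ValueError above. precise-convert only needs the model's architecture and weights, so loading with compile=False may be enough, since that skips restoring the optimizer, loss, and the custom metric that cannot be revived. This is an untested assumption, not code from the tf2 branch, and load_for_conversion is a hypothetical helper name.

from tensorflow import keras as K

def load_for_conversion(model_path: str) -> K.Model:
    # compile=False skips rebuilding the optimizer, loss and metrics, which is
    # the step that raises "Unable to restore custom object of type
    # _tf_keras_metric"; the architecture and weights are still loaded,
    # which is all a TFLite/pb export needs.
    return K.models.load_model(model_path, compile=False)

Alternatively, if the compiled metrics are really needed, the custom metric class would have to be passed in the custom_objects dict alongside weighted_log_loss.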

Environment (please complete the following information):

  • Device type: docker image nvidia/cuda:11.5.1-cudnn8-runtime-ubuntu20.04
  • OS: Ubuntu 20.04

Additional context
Everything is running inside a Docker container with all of the host machine's GPUs exposed to it.

TobiGee commented May 12, 2022

I'm facing the same issue. I'm not using CUDA or Docker, just the plain old CPU version.
