Description
As documented in #5392, running the test for the `tan` operator under TFLite 2.1.0 fails. This issue tracks the failure so that the test can be uncommented at a later date.
The error from running this in the test suite is below.
```
tests/python/frontend/tflite/test_forward.py:810:
tests/python/frontend/tflite/test_forward.py:807: in _test_forward_unary_elemwise
    test_op(np.random.uniform(-10, 10, (3, 2)).astype(np.float32))
tests/python/frontend/tflite/test_forward.py:764: in _test_tan
    return _test_unary_elemwise(math_ops.tan, data)
tests/python/frontend/tflite/test_forward.py:698: in _test_unary_elemwise
    compare_tflite_with_tvm(data, ['in:0'], [in_data], [out])
tests/python/frontend/tflite/test_forward.py:182: in compare_tflite_with_tvm
    tflite_model_buffer = converter.convert()
/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/lite.py:1007: in convert
    **converter_kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/convert.py:457: in toco_convert_impl
    enable_mlir_converter=enable_mlir_converter)
```
With a further message:
```
> raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
E tensorflow.lite.python.convert.ConverterError: See console for info.
E 2020-04-22 14:19:17.631972: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH:
E 2020-04-22 14:19:17.632034: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH:
E 2020-04-22 14:19:17.632043: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
E 2020-04-22 14:19:18.259254: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Tan
E 2020-04-22 14:19:18.259328: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1 operators, 2 arrays (0 quantized)
E 2020-04-22 14:19:18.259416: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1 operators, 2 arrays (0 quantized)
E 2020-04-22 14:19:18.259440: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 1 operators, 2 arrays (0 quantized)
E 2020-04-22 14:19:18.259456: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 1 operators, 2 arrays (0 quantized)
E 2020-04-22 14:19:18.259468: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 1 operators, 2 arrays (0 quantized)
E 2020-04-22 14:19:18.259478: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Identify nearest upsample.: 1 operators, 2 arrays (0 quantized)
E 2020-04-22 14:19:18.259494: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 0 bytes, theoretical optimal value: 0 bytes.
E 2020-04-22 14:19:18.259506: I tensorflow/lite/toco/toco_tooling.cc:456] Estimated count of arithmetic ops: 0 ops, equivalently 0 MACs
E 2020-04-22 14:19:18.259513: I tensorflow/lite/toco/toco_tooling.cc:471] Number of parameters: 0
E 2020-04-22 14:19:18.259697: E tensorflow/lite/toco/toco_tooling.cc:498] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
E and pasting the following:
E
E Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: . Here is a list of operators for which you will need custom implementations: Tan.
E Traceback (most recent call last):
E   File "/usr/local/bin/toco_from_protos", line 8, in <module>
E     sys.exit(main())
E   File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 93, in main
E     app.run(main=execute, argv=[sys.argv[0]] + unparsed)
E   File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/platform/app.py", line 40, in run
E     _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
E   File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
E     _run_main(main, args)
E   File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
E     sys.exit(main(argv))
E   File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 56, in execute
E     enable_mlir_converter)
E Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
E and pasting the following:
E
E Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: . Here is a list of operators for which you will need custom implementations: Tan.
```
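Until the converter supports `Tan`, one way to keep the disabled test visible instead of commenting it out is a pytest `xfail` marker (a sketch; the marker name `tan_not_convertible` is my own):

```python
import pytest

# Hypothetical marker for the disabled case: the test still runs and is
# reported as xfail (or xpass once the converter gains Tan support) instead
# of silently disappearing from the suite.
tan_not_convertible = pytest.mark.xfail(
    reason="TFLite converter in TF 2.1.0 cannot convert tf.math.tan (see #5392)",
    strict=False,  # an unexpected pass should not fail the suite
)
```

Applying `@tan_not_convertible` to `_test_tan` would then surface the case in test reports until it can be re-enabled.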
I have experimented with setting `converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]`, as documented at https://www.tensorflow.org/lite/guide/ops_select.
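For reference, the converter setup I experimented with looks roughly like this (a sketch assuming the TF 1.x-style session API that `test_forward.py` uses; the helper name `convert_with_select_tf_ops` is my own):

```python
import tensorflow as tf

def convert_with_select_tf_ops(sess, in_tensors, out_tensors):
    """Convert a session graph, letting ops with no TFLite builtin
    (such as Tan) fall back to the TensorFlow "Flex" kernels."""
    # from_session lives under compat.v1 on TF 2.x.
    converter = tf.compat.v1.lite.TFLiteConverter.from_session(
        sess, in_tensors, out_tensors)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # use builtins where available
        tf.lite.OpsSet.SELECT_TF_OPS,    # otherwise emit Flex (select TF) ops
    ]
    return converter.convert()
```

Note that a model converted this way needs the Flex delegate at runtime, and the `Tan` op is emitted as a Flex custom op rather than a TFLite builtin, so it would still not be a plain builtin op on the TVM frontend side.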
Regards,
Ramana