
Conversation


@parjong parjong commented Apr 24, 2022

Let's build a .tflite file for state-of-the-art NN models!
[ci skip]

Signed-off-by: Jonghyun Park <parjong@gmail.com>

export_ALBERT.py Outdated
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter._experimental_lower_tensor_list_ops = False

tflite_model = converter.convert()
@parjong commented:

2022-04-24 13:55:05.748163: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1880] Graph contains the following resource op(s), that use(s) resource type. Currently, the resource type is not natively supported in TFLite. Please consider not using the resource type if there are issues with either TFLite converter or TFLite runtime:
Resource ops: SentencepieceOp, SentencepieceTokenizeOp
Details:
        tf.SentencepieceOp() -> (tensor<!tf_type.resource>) : {container = "", device = "", model = ...
        shared_name = "SentenceTokenizerInitializer/SentencepieceOp_load_3", use_node_name_sharing = false}
        tf.SentencepieceTokenizeOp(tensor<!tf_type.resource>, tensor<?x!tf_type.string>, tensor<i32>, tensor<f32>, tensor<i1>, tensor<i1>, tensor<i1>) -> (tensor<?xi32>, tensor<?xi64>) : {Tsplits = i64, device = "", out_type = i32, return_nbest = false}
        tf.StaticRegexReplace(tensor<?x!tf_type.string>) -> (tensor<?x!tf_type.string>) : {device = "", pattern = "\\p{Mn}", replace_global = true, rewrite = ""}
        tf.TensorListFromTensor(tensor<?x!tf_type.variant>, tensor<0xi32>) -> (tensor<!tf_type.variant<tensor<!tf_type.variant>>>) : {device = ""}
        tf.TensorListFromTensor(tensor<?xi64>, tensor<0xi32>) -> (tensor<!tf_type.variant<tensor<i64>>>) : {device = ""}
        tf.TensorListGetItem(tensor<!tf_type.variant<tensor<!tf_type.variant>>>, tensor<i32>, tensor<0xi32>) -> (tensor<!tf_type.variant>) : {device = ""}
        tf.TensorListGetItem(tensor<!tf_type.variant<tensor<i64>>>, tensor<i32>, tensor<0xi32>) -> (tensor<i64>) : {device = ""}
        tf.TensorListReserve(tensor<i32>, tensor<i32>) -> (tensor<!tf_type.variant<tensor<*x!tf_type.variant>>>) : {device = ""}
        tf.TensorListSetItem(tensor<!tf_type.variant<tensor<*x!tf_type.variant>>>, tensor<i32>, tensor<!tf_type.variant>) -> (tensor<!tf_type.variant<tensor<*x!tf_type.variant>>>) : {device = ""}
        tf.TensorListStack(tensor<!tf_type.variant<tensor<*x!tf_type.variant>>>, tensor<0xi32>) -> (tensor<?x!tf_type.variant>) : {device = "", num_elements = -1 : i64}
See instructions: https://www.tensorflow.org/lite/guide/ops_select
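For reference, the converter configuration from the diff above can be written as a self-contained sketch. The function name and docstring are mine; the flag values match the diff. Note that enabling select TF ops does not fix the resource-op warning above — SentencepieceOp still needs runtime support.

```python
import tensorflow as tf

def convert_with_select_ops(model):
    """Convert a Keras model to TFLite, allowing TF (flex) kernels for
    ops that have no TFLite builtin. Sketch based on the diff above."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # prefer native TFLite kernels
        tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF kernels (flex ops)
    ]
    # Keep tf.TensorList* ops instead of lowering them, as in the diff.
    converter._experimental_lower_tensor_list_ops = False
    return converter.convert()  # the .tflite flatbuffer, as bytes
```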


@parjong parjong force-pushed the draft/export_tflite_model branch 2 times, most recently from d3d898e to 44ef833 on April 24, 2022 05:25

converter = tf.lite.TFLiteConverter.from_keras_model(embedding_model)

tflite_model = converter.convert()
@parjong commented:

I ran this script under WSL2 with a 4 GB memory constraint, and it failed as shown below.

2022-04-24 14:25:55.503549: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Found untraced functions such as restored_function_body, restored_function_body, restored_function_body, restored_function_body, restored_function_body while saving (showing 5 of 3339). These functions will not be directly callable after loading.
Killed
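One possible mitigation for the OOM kill above — untested here, so treat it as an assumption: convert from a SavedModel on disk rather than from the in-memory Keras object, which can lower the converter's peak memory compared to from_keras_model. A minimal sketch:

```python
import tensorflow as tf

def convert_via_saved_model(model, export_dir, signatures=None):
    """Sketch: serialize the model to disk first, then convert from the
    SavedModel path. This avoids holding the live Keras object and the
    converter's working set in memory at once; it may or may not be
    enough under a 4 GB constraint."""
    tf.saved_model.save(model, export_dir, signatures=signatures)
    converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
    return converter.convert()
```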

@parjong parjong force-pushed the draft/export_tflite_model branch from 44ef833 to 3c466cf on April 24, 2022 05:55
@parjong parjong force-pushed the draft/export_tflite_model branch from 3c466cf to 264c625 on April 24, 2022 05:58
f.write(tflite_model)
# def export_to_tflite: END

export_to_tflite(create_encode_model(), 'models.mine/ALBERT.tflite')
@parjong commented:

It is possible to export the encode module as a .tflite model, but the resulting model includes flex ops (tf.Einsum):

Flex ops: FlexEinsum
Details:
        tf.Einsum(tensor<?x12x?x128xf32>, tensor<?x?x12x64xf32>) -> (tensor<?x?x12x64xf32>) : {device = "", equation = "acbe,aecd->abcd"}
        tf.Einsum(tensor<?x?x128xf32>, tensor<128x768xf32>) -> (tensor<?x?x768xf32>) : {device = "", equation = "...x,xy->...y"}
        tf.Einsum(tensor<?x?x12x64xf32>, tensor<12x64x768xf32>) -> (tensor<?x?x768xf32>) : {device = "", equation = "abcd,cde->abe"}
        tf.Einsum(tensor<?x?x12x64xf32>, tensor<?x?x12x64xf32>) -> (tensor<?x12x?x?xf32>) : {device = "", equation = "aecd,abcd->acbe"}
        tf.Einsum(tensor<?x?x3072xf32>, tensor<3072x768xf32>) -> (tensor<?x?x768xf32>) : {device = "", equation = "abc,cd->abd"}
        tf.Einsum(tensor<?x?x768xf32>, tensor<768x12x64xf32>) -> (tensor<?x?x12x64xf32>) : {device = "", equation = "abc,cde->abde"}
        tf.Einsum(tensor<?x?x768xf32>, tensor<768x3072xf32>) -> (tensor<?x?x3072xf32>) : {device = "", equation = "abc,cd->abd"}
See instructions: https://www.tensorflow.org/lite/guide/ops_select
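The surrounding fragments suggest a helper along these lines. The body between the fragments is not shown in the diff, so everything except the final f.write call is a reconstruction; create_encode_model is assumed to return a Keras model, and the converter flags mirror the earlier diff:

```python
import os
import tensorflow as tf

def export_to_tflite(model, path):
    """Reconstructed sketch: convert a Keras model and write the .tflite
    flatbuffer to `path`. The actual script body may differ."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,
        tf.lite.OpsSet.SELECT_TF_OPS,  # required here: the model uses FlexEinsum
    ]
    tflite_model = converter.convert()
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    with open(path, "wb") as f:
        f.write(tflite_model)
```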
