I'm using `from_onnx` to convert my model
dense_h32_w32_c3_sNone_pNone_kNone.pt_quant.onnx.zip to TVM Relay. It looks like `b_scale` in `QLinearMatMul` expects a scalar scale, not a vector of per-channel scales.
I'm not sure whether this is a problem with ONNX exporting a vector of scales for the weights, or whether `QLinearMatMul` should support scale vectors.
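For context, per-channel weight quantization produces one scale per output channel rather than a single scalar, which would explain the vector. A minimal NumPy sketch of what such an exported dense weight might look like (all names and values below are illustrative, not taken from the attached model):

```python
import numpy as np

# Hypothetical per-channel quantized dense weight: int8 values with one
# float scale per output channel (20 channels, matching b_scale's shape).
rng = np.random.default_rng(0)
q_weight = rng.integers(-128, 128, size=(3, 20), dtype=np.int8)
b_scale = rng.uniform(0.01, 0.1, size=(20,)).astype(np.float32)  # shape (20,)
b_zero_point = np.zeros((20,), dtype=np.int8)

# Per-channel dequantization broadcasts one scale across each column.
deq = (q_weight.astype(np.float32) - b_zero_point.astype(np.float32)) * b_scale
print(deq.shape)  # (3, 20)
```

A converter that only handles per-tensor quantization would try to squeeze `b_scale` down to a scalar, which fails for a `(20,)` vector.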
Info about the model:
- Input: (batch, channel, height, width) -> (1, 3, 32, 32)
- Global Average pool: (batch, channel, 1, 1) -> (1, 3, 1, 1)
- Reshape: (batch, channel) -> (1, 3)
- Dense layer weight: (channel, 20) -> (3, 20)

The dense layer is where the error occurs.
If you inspect `b_scale` in a debugger, you will see it has shape (20,).
Expected behavior
The model dense_h32_w32_c3_sNone_pNone_kNone.pt_quant.onnx.zip converts to TVM Relay via `from_onnx` without errors.
Actual behavior
```
assert num_elem == 1, "Cannot squeeze tensor shape {} to scalar form.".format(x_shape)
E AssertionError: Cannot squeeze tensor shape (20,) to scalar form.
```
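The failing check can be reproduced outside TVM. A minimal sketch, where `squeeze_to_scalar` is a hypothetical stand-in for the converter's internal check (only the assertion message is taken from the traceback above):

```python
import numpy as np

def squeeze_to_scalar(x):
    # Hypothetical stand-in for TVM's check: only a one-element
    # tensor can be squeezed down to a scalar.
    num_elem = int(np.prod(x.shape))
    assert num_elem == 1, "Cannot squeeze tensor shape {} to scalar form.".format(x.shape)
    return float(x.reshape(()))

# Per-tensor scale: a single element squeezes fine.
per_tensor_scale = np.array([0.5], dtype=np.float32)
print(squeeze_to_scalar(per_tensor_scale))  # 0.5

# Per-channel scale: 20 elements trigger the same AssertionError.
per_channel_scale = np.full((20,), 0.5, dtype=np.float32)
try:
    squeeze_to_scalar(per_channel_scale)
except AssertionError as e:
    print(e)  # Cannot squeeze tensor shape (20,) to scalar form.
```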