Drop in the following script and simply change the ONNX model path onnx_path:
from pprint import pprint
import onnxruntime

onnx_path = "originalpool/output.onnx"
# onnx_path = "custompool/output.onnx"
provider = "CPUExecutionProvider"
onnx_session = onnxruntime.InferenceSession(onnx_path, providers=[provider])

print("----------------- Inputs -----------------")
input_tensors = onnx_session.get_inputs()  # returns a list, since a model may have multiple inputs
for input_tensor in input_tensors:
    input_info = {
        "name" : input_tensor.name,
        "type" : input_tensor.type,
        "shape": input_tensor.shape,
    }
    pprint(input_info)

print("----------------- Outputs -----------------")
output_tensors = onnx_session.get_outputs()  # returns a list, since a model may have multiple outputs
for output_tensor in output_tensors:
    output_info = {
        "name" : output_tensor.name,
        "type" : output_tensor.type,
        "shape": output_tensor.shape,
    }
    pprint(output_info)
Note that if the ONNX model's input shape is fixed, the script's output is:
----------------- Inputs -----------------
{'name': 'x',
'shape': [1, 3, 224, 224],
'type': 'tensor(float)'}
----------------- Outputs -----------------
{'name': 'bilinear_interp_v2_7.tmp_0',
'shape': [1, 2, 224, 224],
'type': 'tensor(float)'}
If, on the other hand, the ONNX model's input shape is dynamic (not fixed), the script's output is:
----------------- Inputs -----------------
{'name': 'x',
'shape': [None, 3, None, None],
'type': 'tensor(float)'}
----------------- Outputs -----------------
{'name': 'bilinear_interp_v2_7.tmp_0',
'shape': [None, 2, None, None],
'type': 'tensor(float)'}
i.e. the shape contains None for every dynamic dimension (batch size, height, and width here).
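Before feeding data to such a model, those None entries have to be replaced with concrete sizes. The helper below is a minimal sketch of that step; concretize_shape is an illustrative name, not an onnxruntime API, and it assumes dynamic dimensions show up as non-integers (None or symbolic strings) in the shape returned by get_inputs():

from typing import List, Union

def concretize_shape(shape: List[Union[int, str, None]], default: int = 1) -> List[int]:
    """Replace dynamic (non-integer) dimensions with a concrete default value."""
    return [d if isinstance(d, int) else default for d in shape]

# Fixed shape: returned unchanged.
print(concretize_shape([1, 3, 224, 224]))        # [1, 3, 224, 224]
# Dynamic shape: None entries become the default.
print(concretize_shape([None, 3, None, None]))   # [1, 3, 1, 1]

The resulting list can then be used to allocate a dummy input (e.g. a float32 array of that shape) and passed to onnx_session.run(None, {input_tensor.name: array}) for a smoke test.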