Merged
6 changes: 3 additions & 3 deletions docs/deploy/hls.rst
@@ -64,11 +64,11 @@ We use two python scripts for this tutorial.

tgt="sdaccel"

-fadd = tvm.runtime.load("myadd.so")
+fadd = tvm.runtime.load_module("myadd.so")
if os.environ.get("XCL_EMULATION_MODE"):
-    fadd_dev = tvm.runtime.load("myadd.xclbin")
+    fadd_dev = tvm.runtime.load_module("myadd.xclbin")
else:
-    fadd_dev = tvm.runtime.load("myadd.awsxclbin")
+    fadd_dev = tvm.runtime.load_module("myadd.awsxclbin")
fadd.import_module(fadd_dev)

ctx = tvm.context(tgt, 0)
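The emulation-vs-hardware dispatch in the hls.rst hunk above can be sketched as a small helper. The filenames (`myadd.xclbin`, `myadd.awsxclbin`) and the `XCL_EMULATION_MODE` check come from the tutorial; the helper function itself is hypothetical, not part of TVM:

```python
import os

def select_device_binary(env=None):
    """Pick the FPGA device binary that the hls.rst snippet would pass to
    tvm.runtime.load_module: the emulator xclbin when XCL_EMULATION_MODE is
    set, otherwise the AWS-specific awsxclbin."""
    env = os.environ if env is None else env
    if env.get("XCL_EMULATION_MODE"):
        return "myadd.xclbin"
    return "myadd.awsxclbin"
```

The returned path would then be loaded with `tvm.runtime.load_module` and imported into the host module via `fadd.import_module`, exactly as the snippet shows.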
2 changes: 1 addition & 1 deletion docs/dev/introduction_to_module_serialization.rst
@@ -53,7 +53,7 @@ Let us build one ResNet-18 workload for GPU as an example first.
resnet18_lib.export_library(path_lib)

# load it back
-loaded_lib = tvm.runtime.load(path_lib)
+loaded_lib = tvm.runtime.load_module(path_lib)
assert loaded_lib.type_key == "library"
assert loaded_lib.imported_modules[0].type_key == "cuda"

6 changes: 3 additions & 3 deletions docs/dev/relay_bring_your_own_codegen.rst
@@ -905,7 +905,7 @@ We also need to register this function to enable the corresponding Python API:
TVM_REGISTER_GLOBAL("module.loadbinary_examplejson")
.set_body_typed(ExampleJsonModule::LoadFromBinary);

-The above registration means when users call ``tvm.runtime.load(lib_path)`` API and the exported library has an ExampleJSON stream, our ``LoadFromBinary`` will be invoked to create the same customized runtime module.
+The above registration means when users call ``tvm.runtime.load_module(lib_path)`` API and the exported library has an ExampleJSON stream, our ``LoadFromBinary`` will be invoked to create the same customized runtime module.

In addition, if you want to support module creation directly from an ExampleJSON file, you can also implement a simple function and register a Python API as follows:

@@ -930,7 +930,7 @@ In addition, if you want to support module creation directly from an ExampleJSON
*rv = ExampleJsonModule::Create(args[0]);
});

-It means users can manually write/modify an ExampleJSON file, and use Python API ``tvm.runtime.load("mysubgraph.examplejson", "examplejson")`` to construct a customized module.
+It means users can manually write/modify an ExampleJSON file, and use Python API ``tvm.runtime.load_module("mysubgraph.examplejson", "examplejson")`` to construct a customized module.

*******
Summary
@@ -954,7 +954,7 @@ In summary, here is a checklist for you to refer:
* ``Run`` to execute a subgraph.
* Register a runtime creation API.
* ``SaveToBinary`` and ``LoadFromBinary`` to serialize/deserialize customized runtime module.
-* Register ``LoadFromBinary`` API to support ``tvm.runtime.load(your_module_lib_path)``.
+* Register ``LoadFromBinary`` API to support ``tvm.runtime.load_module(your_module_lib_path)``.
* (optional) ``Create`` to support customized runtime module construction from subgraph file in your representation.

* An annotator to annotate a user Relay program to make use of your compiler and runtime (TBA).
2 changes: 1 addition & 1 deletion rust/tvm/examples/resnet/src/build_resnet.py
@@ -112,7 +112,7 @@ def download_img_labels():
def test_build(build_dir):
""" Sanity check with random input"""
graph = open(osp.join(build_dir, "deploy_graph.json")).read()
-    lib = tvm.runtime.load(osp.join(build_dir, "deploy_lib.so"))
+    lib = tvm.runtime.load_module(osp.join(build_dir, "deploy_lib.so"))
params = bytearray(open(osp.join(build_dir,"deploy_param.params"), "rb").read())
input_data = tvm.nd.array(np.random.uniform(size=data_shape).astype("float32"))
ctx = tvm.cpu()
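Because this change renames the loader entry point, code that must run against both pre- and post-rename TVM builds can feature-detect the API. This shim is a hypothetical compatibility pattern, not something the PR adds:

```python
def load_module_compat(runtime_mod, path, fmt=""):
    """Prefer the new tvm.runtime.load_module entry point, falling back to
    the deprecated tvm.runtime.load on builds that predate this rename.

    runtime_mod is the tvm.runtime namespace (passed in so the shim is
    testable without TVM installed)."""
    loader = getattr(runtime_mod, "load_module", None)
    if loader is None:
        loader = runtime_mod.load  # older TVM builds
    return loader(path, fmt) if fmt else loader(path)
```

In application code this would be called as ``load_module_compat(tvm.runtime, "deploy_lib.so")``, keeping a single call site valid across TVM versions.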