[DOC] More detailed installation instruction #262
Merged
Conversation
tqchen pushed a commit to tqchen/tvm that referenced this pull request on May 26, 2018
tqchen pushed a commit to tqchen/tvm that referenced this pull request on Jul 6, 2018
sergei-mironov pushed a commit to sergei-mironov/tvm that referenced this pull request on Aug 8, 2018
vinx13 pushed a commit to vinx13/tvm that referenced this pull request on Mar 9, 2022
gigiblender pushed a commit to gigiblender/tvm that referenced this pull request on Jan 19, 2023
Previously, when a Relay function contained a Call that used a Tuple directly as an argument, as in the example below:

```
%25 = (%23, %24) /* ty=(Tensor[(1, 160), float32], Tensor[(1, 160), float32]) */;
%26 = concatenate(%25, axis=-1) /* ty=Tensor[(1, 320), float32] */;
```

our Relay translator was unable to generate the corresponding CallTIR, because the translator always assumed that each argument of a Call maps to a single tensor (see the code below: the translator passes the Relax variable `new_args[-1]` directly to the function `te_tensors`, which translates a Var to a single tensor).

https://github.com/tlc-pack/relax/blob/60e9a01cdfdd013945790fc03d5abad29b8a7c0b/python/tvm/relax/testing/relay_translator.py#L124
https://github.com/tlc-pack/relax/blob/60e9a01cdfdd013945790fc03d5abad29b8a7c0b/src/relax/ir/emit_te.h#L56-L61

In fact, a Relax variable may correspond to a Tuple of tensors, which was not taken into consideration before, and this case can lead to an error in `TETensor` when the tensors are created. This PR therefore fixes the issue by examining the Relax variable before tensor creation for Relay Call arguments: if an argument has a Tuple shape and TupleType, we break the tuple variable down, emit a TupleGetItem for each field, and create a tensor for each field.
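For illustration, here is a minimal, self-contained Python sketch of the tuple-unpacking step described above. It is not the actual translator code: `Var`, `TupleGetItem`, and `expand_call_args` are hypothetical stand-ins that only model the control flow, where a tuple-typed argument is broken into one per-field access (and hence one tensor per field), while plain tensor arguments pass through unchanged.

```python
# Hypothetical stand-ins for the Relax IR objects; not the real TVM classes.
from dataclasses import dataclass
from typing import List, Optional, Union


@dataclass
class Var:
    name: str
    # None for a plain tensor argument; a list of per-field shapes for a tuple.
    tuple_field_shapes: Optional[List[tuple]] = None

    def is_tuple(self) -> bool:
        return self.tuple_field_shapes is not None


@dataclass
class TupleGetItem:
    tuple_var: Var
    index: int


def expand_call_args(args: List[Var]) -> List[Union[Var, TupleGetItem]]:
    """Model the fix: break each tuple-typed argument into one
    TupleGetItem per field, so every expanded entry maps to a
    single tensor; plain tensor arguments are kept as-is."""
    expanded: List[Union[Var, TupleGetItem]] = []
    for arg in args:
        if arg.is_tuple():
            for i, _shape in enumerate(arg.tuple_field_shapes):
                expanded.append(TupleGetItem(arg, i))
        else:
            expanded.append(arg)
    return expanded


# Example mirroring the Relay snippet above: %25 is a 2-tuple of
# (1, 160) tensors feeding a concatenate call.
t25 = Var("t25", tuple_field_shapes=[(1, 160), (1, 160)])
print(expand_call_args([t25]))
# -> [TupleGetItem(tuple_var=..., index=0), TupleGetItem(tuple_var=..., index=1)]
```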
gigiblender pushed a commit to gigiblender/tvm that referenced this pull request on Jan 19, 2023
…pache#316) This PR removes the `global_symbol` linkage added by the Relay Translator. It also fixes previously unaddressed review comments on apache#262. All tests pass locally, and I believe it is safe to merge this PR directly.
junrushao pushed a commit to junrushao/tvm that referenced this pull request on Feb 8, 2023
junrushao pushed a commit to junrushao/tvm that referenced this pull request on Feb 8, 2023
yelite pushed a commit to yelite/tvm that referenced this pull request on Feb 17, 2023
yelite pushed a commit to yelite/tvm that referenced this pull request on Feb 17, 2023
MasterJH5574 pushed a commit to MasterJH5574/tvm that referenced this pull request on Aug 17, 2025
junrushao added a commit to junrushao/tvm that referenced this pull request on Nov 14, 2025
Upstream : https://github.com/apache/tvm-ffi.git
Branch   : main
New HEAD : ae346ec92a3c386f1376064ae086aae72947c329
Subject  : [DTYPE] Align bool parsing to align with DLPack (apache#262)
Author   : Tianqi Chen <tqchen@users.noreply.github.com>
Date     : 2025-11-14T18:40:40-05:00
Delta    : 1 commit(s) since 7f3f8726156a
Compare  : apache/tvm-ffi@7f3f872...ae346ec

This commit updates the tvm-ffi submodule to the latest upstream HEAD.
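As context for the `[DTYPE]` subject line above: DLPack represents a boolean as type code `kDLBool` (6) with, by convention, 8 storage bits per lane. The sketch below only illustrates that convention; it is not the tvm-ffi implementation, and the function name `parse_dtype` and the accepted string format are assumptions.

```python
# Hypothetical dtype-string parser; not the actual tvm-ffi code.
from typing import NamedTuple

# DLPack type codes (from dlpack.h).
kDLInt, kDLUInt, kDLFloat, kDLBool = 0, 1, 2, 6


class DLDataType(NamedTuple):
    code: int
    bits: int
    lanes: int


def parse_dtype(s: str) -> DLDataType:
    """Parse a dtype string such as 'int32', 'float32', or 'bool'.

    The point being illustrated: 'bool' maps to kDLBool with 8
    storage bits per lane, which is DLPack's convention for booleans.
    """
    if s == "bool":
        return DLDataType(kDLBool, 8, 1)
    for prefix, code in (("uint", kDLUInt), ("int", kDLInt), ("float", kDLFloat)):
        if s.startswith(prefix):
            return DLDataType(code, int(s[len(prefix):]), 1)
    raise ValueError(f"unrecognized dtype string: {s}")


assert parse_dtype("bool") == DLDataType(kDLBool, 8, 1)
assert parse_dtype("float32") == DLDataType(kDLFloat, 32, 1)
```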