[REFACTOR][TYPE] Remove un-necessary var sub-field in TypeVars #4615
Conversation
…nd TypeVar Currently, we use a tvm::Var to represent a placeholder for shapes in generic types. This is not necessary for GlobalTypeVar (as we never parameterize by shape var), and is a bit twisted for TypeVar. As we move to a unified type system, we want to break the dependency of the base TypeVar (which is shared across the languages) on the expression. Note that it is fine for TensorType to depend on Expr. One alternative to embedding the Var would be to introduce a TypeVarExpr, which can wrap a TypeVar as an Expr. However, that alternative won't be natural until we migrate the types to the global scope. Luckily, we have not yet started to depend heavily on the shape parameterization. This PR removes the tvm::Var from the type vars. We will follow up with another PR to migrate the types to a base location. After that, we should be able to use the more elegant approach via TypeVarExpr.
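To make the shape of the change concrete, here is a rough, hypothetical Python model of the before/after (the real definitions are TVM C++ node classes; `Var`, `TypeVarOld`, and `TypeVarNew` below are illustrative names, not TVM API):

```python
from dataclasses import dataclass

# Hypothetical, simplified model of the change -- not actual TVM code.

@dataclass(frozen=True)
class Var:
    """Stand-in for tvm::Var, an expression-level variable."""
    name: str

@dataclass(frozen=True)
class TypeVarOld:
    """Before: the type variable embedded an expression-level Var,
    making the base type system depend on Expr."""
    var: Var
    kind: str

@dataclass(frozen=True)
class TypeVarNew:
    """After: only a name hint and a kind; no dependency on Expr."""
    name_hint: str
    kind: str

old = TypeVarOld(Var("n"), kind="ShapeVar")
new = TypeVarNew("n", kind="ShapeVar")
```

The point of the sketch: after the change, a type variable is identified purely at the type level, so the base type definitions no longer pull in the expression hierarchy.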
zhiics
left a comment
lgtm
Having types that use exprs is called dependent typing. Typically, instead of a TypeVarExpr, people unify the "type", "kind", and "expr" data types into one single datatype. Why don't we merge type and expr? It would, e.g., make attributes not special anymore; they would become first-class citizens, so one could very easily define a function in tvm that basically calls another function. Right now that is not possible unless you do it in C++, because you have to fill out the type relation and do the dispatching yourself there.
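As a toy illustration of the "types that use exprs" point (a hypothetical sketch, not TVM's representation): a tensor type whose shape mentions a symbolic variable is a type that depends on an expression.

```python
from dataclasses import dataclass
from typing import Tuple, Union

# Hypothetical sketch: a type whose shape mentions expression-level
# variables is a (restricted) dependent type.

@dataclass(frozen=True)
class IntImm:
    """A concrete integer dimension."""
    value: int

@dataclass(frozen=True)
class ShapeVar:
    """A symbolic dimension, i.e. an expression-level variable."""
    name: str

Expr = Union[IntImm, ShapeVar]  # the expression fragment types may mention

@dataclass(frozen=True)
class TensorType:
    shape: Tuple[Expr, ...]  # the type depends on expressions
    dtype: str

# TensorType((n, 4), float32): the type mentions the expr-level var "n".
t = TensorType((ShapeVar("n"), IntImm(4)), "float32")
```

In a fully unified design, types, kinds, and exprs would be one datatype, so `TensorType` itself would just be another expression.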
@MarisaKirisame I think we could start a separate discussion thread in the forum for the type system design. My previous takeaway with @jroesch was that we don't want to introduce full dependent types. Currently, the Expr used in TensorType is not part of the Relay expression language, and only works on integer arithmetic to express shape relations.
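To illustrate what "integer arithmetic to express shape relations" means in practice (a hypothetical sketch; `matmul_shape` is an illustrative function, not TVM API): a shape relation constrains the integer dimensions of inputs and outputs without full dependent typing.

```python
# Hypothetical sketch: the Expr inside TensorType is restricted to
# integer arithmetic over dimensions, used to relate input and output
# shapes -- not arbitrary Relay expressions.

def matmul_shape(a_shape, b_shape):
    """Shape relation for matmul: (n, k) x (k, m) -> (n, m).

    Dimensions may be concrete ints or symbolic names; the relation
    only checks/propagates them, it never evaluates general code.
    """
    n, k1 = a_shape
    k2, m = b_shape
    if k1 != k2:
        raise TypeError(f"inner dimensions disagree: {k1} vs {k2}")
    return (n, m)
```

Because the relation is plain integer/symbol bookkeeping, it can be solved during type checking without the type system depending on the full expression language.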