Adding initial SVE support to TVM #8655
Conversation
Prototype containing initial VLA and predication implementation
@tqchen @jcf94 @junrushao1994 you may also be interested
huajsj left a comment
Thanks @mbaret.
# ctx = remote.context(target)
# # launch the kernel.
# n = nn
# a = tvm.nd.array(np.random.uniform(size=(n + base, stride)).astype(A.dtype), ctx)
# c = tvm.nd.array(np.zeros((n, stride), dtype=C.dtype), ctx)
# f(a, c)
# tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy()[base:] + 1)
remove
# ctx = remote.context(target)
# # launch the kernel.
# a = tvm.nd.empty((n,), A.dtype, ctx).copyfrom(np.random.uniform(size=(n, lanes)))
# c = tvm.nd.empty((n,), C.dtype, ctx)
# f(a, c)
# tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + 1)
remove
# ctx = remote.context(target)
# # launch the kernel.
# n = nn
# a = tvm.nd.array(np.random.uniform(size=(n + base)).astype(A.dtype), ctx)
# c = tvm.nd.array(np.zeros(n, dtype=C.dtype), ctx)
# f(a, c)
# tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy()[::-1][:n])
remove
}

// All pattern
int all_pattern = 31;
use macro?
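One possible shape for this suggestion is sketched below; the value 31 is the SVE "ALL" predicate pattern (all elements active), and the constant name is illustrative rather than taken from the PR. A `constexpr` constant is used instead of a preprocessor macro, which is the more idiomatic C++ choice:

```cpp
// Hypothetical named constant for the SVE "ALL" predicate pattern.
// In the Arm/LLVM encoding, pattern value 31 selects every element.
constexpr int kSVEPredPatternAll = 31;

// ...
int all_pattern = kSVEPredPatternAll;
```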
// auto const str_to_parse = os.str();
// auto pos = str_to_parse.find("x");
// auto stem = str_to_parse.substr(0, pos);
remove
// DataType(int code, int bits) {
//   data_.code = static_cast<uint8_t>(code);
//   data_.bits = static_cast<uint8_t>(bits);
//   is_scalable_ = true;
//   std::cout << bits << std::endl;
//   data_.lanes = uint16_t(128) / static_cast<uint16_t>(8);  // minimal lanes
//
////   if (code == kBFloat) {
////     ICHECK_EQ(bits, 16);
////   }
// }
remove
bool is_scalable() const { return is_scalable_; }

DataType with_scalable_lanes() const {
  int min_num_lanes = 128 / bits();
Use a macro? Also, does this only support a vector size of 128 bits, or should it be configurable between 128 and 2048?
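One way to address both halves of this comment is sketched below, with hypothetical constant names. The architectural SVE vector length ranges from 128 to 2048 bits in 128-bit increments, but since the lane count stored here is the count at the minimum length, a single named constant for the 128-bit minimum may be sufficient:

```cpp
// Hypothetical constants; the SVE vector length is implementation-defined,
// a multiple of 128 bits up to 2048 bits.
constexpr int kSVEMinVectorBits = 128;
constexpr int kSVEMaxVectorBits = 2048;

// Lane count of a scalable vector at the minimum vector length; the real
// lane count at runtime is an unknown multiple of this value.
inline int MinScalableLanes(int element_bits) {
  return kSVEMinVectorBits / element_bits;
}
```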
bool is_vector_bool() const { return is_vector() && bits() == 1; }
/*! \return whether type is a Void type. */
bool is_void() const { return code() == DataType::kHandle && bits() == 0 && lanes() == 0; }
bool is_scalable() const { return is_scalable_; }
Add doxygen comments.
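For example, the new accessor could be documented in the same doxygen style as the neighbouring declarations (wording illustrative):

```cpp
/*! \return Whether the type is a scalable vector type, i.e. its lane count
 *          is a runtime multiple of the minimum lane count (as with Arm SVE). */
bool is_scalable() const { return is_scalable_; }
```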
bool operator==(const DataType& other) const {
  return data_.code == other.data_.code && data_.bits == other.data_.bits &&
         data_.lanes == other.data_.lanes;
         data_.lanes == other.data_.lanes; // && is_scalable_ == other.is_scalable_;
This seems to be no different from the original code. Is the commented-out `is_scalable_ == other.is_scalable_` clause the intended logic?
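For reference, a sketch of the comparison with the scalability flag included, which appears to be the intent of the commented-out clause; whether two otherwise identical types that differ only in scalability should compare equal is a design question for this PR:

```cpp
bool operator==(const DataType& other) const {
  return data_.code == other.data_.code && data_.bits == other.data_.bits &&
         data_.lanes == other.data_.lanes && is_scalable_ == other.is_scalable_;
}
```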
bool is_void() const { return code() == DataType::kHandle && bits() == 0 && lanes() == 0; }
bool is_scalable() const { return is_scalable_; }

DataType with_scalable_lanes() const {
How does this function cooperate with is_scalable_? What happens if is_scalable_ is false but this function gets called?
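A minimal sketch of one possible answer, reusing the hypothetical kSVEMinVectorBits constant from the earlier sketch; it makes the call a no-op when the type is already scalable, but the intended semantics are for the PR author to confirm:

```cpp
DataType with_scalable_lanes() const {
  // Already scalable: return the type unchanged (sketch; intended semantics TBD).
  if (is_scalable_) return *this;
  // Lane count at the minimum 128-bit SVE vector length.
  int min_num_lanes = kSVEMinVectorBits / bits();
  DataType scalable(code(), bits(), min_num_lanes);
  scalable.is_scalable_ = true;
  return scalable;
}
```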
This PR appears to be out of date; please feel free to reopen it if this is not the case. As part of the new year we are attempting to triage the project's open pull requests to ensure that code which is ready for review and/or merging receives adequate attention. Thanks again for your contribution, and feel free to reach out to discuss these changes.
Prototype for the addition of the Arm architecture's Scalable Vector Extension (SVE) to TVM, containing an initial VLA (vector-length agnostic) and predication implementation, based on earlier work by Giuseppe Rossini.
The RFC can be found here
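For context, a minimal usage sketch of the prototype API quoted in the review above; `with_scalable_lanes()` and `is_scalable()` are additions from this PR, not part of upstream TVM:

```cpp
#include <tvm/runtime/data_type.h>

void ScalableTypeExample() {
  using tvm::runtime::DataType;
  DataType f32 = DataType::Float(32);                // ordinary fixed-width type
  DataType scalable_f32 = f32.with_scalable_lanes();  // prototype API from this PR
  // lanes() now holds the minimal lane count (128 / 32 = 4); the actual
  // hardware vector length is a runtime multiple of this.
  bool scalable = scalable_f32.is_scalable();        // true under this prototype
  (void)scalable;
}
```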