[TOPI] [Hexagon] Batch flatten slice op initial version #11522
Conversation
Hi @Lunderberg, @cconvey: Could you please review this PR?
Lunderberg left a comment:
Overall looks good, just some general comments and nitpicks.
Thank you @Lunderberg for your review. I have addressed the comments.
Right now CI is failing because v69 support is not available in LLVM. Hopefully #11539 will address this, and then the tests will pass in CI.
tests/python/contrib/test_hexagon/test_batch_flatten.py::TestBatchFlatten::test_batch_flatten[float16-input_shape3-n11c-1024c-1d-input_axis_sep0-nc-1d-output_axis_sep0] 'hexagonv69' is not a recognized processor for this target (ignoring processor)
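For context, the "hexagonv69" processor name in that message comes from the Hexagon target's -mcpu setting, so LLVM has to know about the v69 CPU for codegen to succeed. A minimal sketch (assuming the standard tvm.target.hexagon helper) of how that target is constructed:

# Sketch only: build a Hexagon v69 target; its -mcpu value is "hexagonv69",
# which is the processor name LLVM complains about in the failure above.
import tvm

target = tvm.target.hexagon("v69")
print(target)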
CI is passing now.
Force-pushed from 7c273c6 to 9338ce3.
Hi @Lunderberg, @cconvey: Could you please review this PR for any more comments?
@abhikran-quic: Apologies for being slow to reply. I'm hoping to review this early next week if that's helpful.
Force-pushed from 199bd85 to e24bbd6.
Force-pushed from e24bbd6 to 89479ba.
Hi @Lunderberg @cconvey @mehrdadh, |
Reviewing now.
Lunderberg left a comment:
Took one more read-through after the latest commits and found a couple more nitpicky changes.
abhikran-quic left a comment:
Thank you @Lunderberg and @jverma-quic for your comments. I've fixed them in the latest commit.
Force-pushed from 7c1a83b to 2713dbe.
Lunderberg left a comment:
Thank you for making the changes, and LGTM!
cconvey left a comment:
LGTM! One minor comment, but feel free to ignore it. We can revisit the issue later if necessary.
batch_flatten_func = te.create_prim_func([inp, out])
sch = tir.Schedule(batch_flatten_func, debug_mask="all")
compute = sch.get_block("compute")
I'm a bit suspicious about assuming that there's a block named "compute", as I don't see any promises in the documentation about the name and what it represents. But making assumptions like this seems somewhat idiomatic within TVM, so IMHO it's okay enough.
I completely agree with you and don't like it either. But there doesn't seem to be a way to specify a different block name for TOPI-defined ops. Please correct me if I'm wrong.
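As a side note, one possible way to avoid hard-coding the name is to enumerate the child blocks instead of looking "compute" up by name. A sketch only, continuing from the snippet quoted above and assuming the prim func has exactly one leaf block under the root block (named "root"):

# Sketch: pick up the single leaf block generically instead of by name.
# Assumes `sch` is the tir.Schedule created in the snippet above.
root = sch.get_block("root")
blocks = sch.get_child_blocks(root)
assert len(blocks) == 1, "expected exactly one compute block"
compute = blocks[0]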
Since I'm reusing the batch_flatten compute from TOPI, the block name "compute" comes up. Sharing the schedule below:
@main = primfn(var_A: handle, var_compute: handle) -> ()
  attr = {"global_symbol": "main", "tir.noalias": True}
  buffers = {A: Buffer(A_1: Pointer(global float16), float16, [1, 1, 1, 2, 1024], [], axis_separators=[4]),
             compute: Buffer(compute_1: Pointer(global float16), float16, [1, 2, 1024], [], axis_separators=[2])}
  buffer_map = {var_A: A, var_compute: compute} {
  block([], "root") {
    tir.reads([])
    tir.writes([])
    for (i0: int32, 0, 1) {
      for (i1_0_0: int32, 0, 1) {
        for (i1_0_1: int32, 0, 1) {
          for (i1_1_0: int32, 0, 2) {
            for (i1_1_1_0: int32, 0, 16) {
              for (i1_1_1_1: int32, 0, 64) "vectorized" {
                block([1, 2048], "compute") as [i, j] {
                  bind(i, 0)
                  bind(j, (((i1_1_0*1024) + (i1_1_1_0*64)) + i1_1_1_1))
                  tir.reads([A[0, 0, 0, (j / 1024), (j % 1024)]])
                  tir.writes([compute[0, (j / 1024), (j % 1024)]])
                  compute[0, (j / 1024), (j % 1024)] = A[0, 0, 0, (j / 1024), (j % 1024)]
                }
              }
            }
          }
        }
      }
    }
  }
}
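For reference, a rough sketch (not necessarily the exact schedule in this PR) of how a loop nest like the one above can be produced, using shapes and split factors that mirror the printed dump:

# Rough sketch only: reproduce a similar loop structure by splitting the
# flattened axis and vectorizing the innermost 64-element chunk.
import tvm
from tvm import te, tir, topi

inp = te.placeholder((1, 1, 1, 2, 1024), dtype="float16", name="A")
out = topi.nn.flatten(inp)  # batch flatten -> shape (1, 2048), block "compute"

sch = tir.Schedule(te.create_prim_func([inp, out]), debug_mask="all")
compute = sch.get_block("compute")
i, j = sch.get_loops(compute)
jo, ji = sch.split(j, factors=[None, 1024])   # 2 x 1024
jio, jii = sch.split(ji, factors=[None, 64])  # 16 x 64
sch.vectorize(jii)
print(sch.mod.script())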
👍
Force-pushed from 2713dbe to 0514097.
@Lunderberg, @mehrdadh, @cconvey: Could you please merge this PR? I've fixed merge conflicts.
Gentle reminder! Please help merge this PR. I want to raise another PR that depends on this one.
@abhikran-quic please resolve the conflict by rebasing with
@mehrdadh: I have resolved merge conflicts. Could you please review this?
* [TOPI] [Hexagon] Batch flatten slice op initial version
* Fix lint errors
* Fix more lint errors
* Fix lint warnings
* Fix review comments
* Update tests to use util functions
* Update __init__.py
* Fix review comments
This patch adds the initial Python implementation of the batch flatten slice op for Hexagon.
Slice ops make certain assumptions about the input and output dimensions and are expected to be called after the original op has been sliced according to those dimensions at the graph level.
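To illustrate that contract, a hypothetical sketch (the helper name and the fixed 5-D crop shape below are made up for this example, not the ones added by this PR): the slice op's compute can hard-code the layout it expects because the graph-level slicing guarantees the input already has that shape.

# Hypothetical illustration of the slice-op contract.
import tvm
from tvm import te, topi

def batch_flatten_slice_compute(inp: te.Tensor) -> te.Tensor:
    # The caller guarantees `inp` is already the pre-sliced crop, so the
    # compute only has to flatten the non-batch axes.
    assert len(inp.shape) == 5, "expects the pre-sliced 5-D input layout"
    return topi.nn.flatten(inp)

a = te.placeholder((1, 1, 1, 2, 1024), dtype="float16", name="A")
out = batch_flatten_slice_compute(a)  # shape (1, 2048)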
cc @Lunderberg @cconvey @mehrdadh