Conversation

multiverstack-intellif (Contributor) commented on Sep 13, 2022:

Currently, cache_read/cache_write requires the schedule to be a stage pipeline, but this restriction is theoretically unnecessary. When there is a WAR dependency, the target of cache_read can be specified via the consumer_blocks parameter. This also allows cache_write to handle a write-after-read on the same buffer.
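
A minimal sketch of the intended usage, assuming the tvm.tir.Schedule API with the consumer_blocks parameter; the PrimFunc, buffer, and block names below are illustrative and not taken from this PR:

import tvm
from tvm.script import tir as T

# Illustrative PrimFunc: buffer A is read by both block "B" and block "C".
@T.prim_func
def two_readers(a: T.handle, b: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (128,), "float32")
    B = T.match_buffer(b, (128,), "float32")
    C = T.match_buffer(c, (128,), "float32")
    for i in T.serial(128):
        with T.block("B"):
            vi = T.axis.spatial(128, i)
            B[vi] = A[vi] + 1.0
    for i in T.serial(128):
        with T.block("C"):
            vi = T.axis.spatial(128, i)
            C[vi] = A[vi] * 2.0

sch = tvm.tir.Schedule(two_readers)
# Only block "C" is listed in consumer_blocks, so only "C" is redirected to
# the new cache buffer; block "B" keeps reading A directly. The buffer index
# 0 refers to A, the only buffer block "C" reads.
sch.cache_read(sch.get_block("C"), 0, "global",
               consumer_blocks=[sch.get_block("C")])
print(sch.mod.script())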

cc @Hzfengsy @junrushao1994 @wrongtest-intellif

vi = T.axis.S(128, i * 16 + j)
B[vi] = A[vi] + 1.0

for i in T.grid(128):
Member:

Why do we need to change this test case? A common practice is not to change existing test cases if possible.

Contributor Author:

Thanks, indeed. But consider a case that reads, writes, and then reads the same buffer (R-W-R), where we want to cache_read only the second read: the first read should be ignored. This PR's solution is to specify the second reader in consumer_blocks and to ignore any reader that is not in consumer_blocks. For this test case, block("B") is therefore ignored. Since the purpose of the case is to cache_read block("C"), the cache block seems more reasonable placed next to block("C"). Any better solution is welcome.
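
An illustrative sketch of the R-W-R case described above (buffer and block names here are hypothetical, not the PR's actual test case): block "R1" reads A, block "W" writes A, and block "R2" reads A again; listing only "R2" in consumer_blocks lets cache_read skip the earlier reader.

import tvm
from tvm.script import tir as T

@T.prim_func
def rwr(a: T.handle, b: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (128,), "float32")
    B = T.match_buffer(b, (128,), "float32")
    C = T.match_buffer(c, (128,), "float32")
    for i in T.serial(128):
        with T.block("R1"):  # first read of A
            vi = T.axis.spatial(128, i)
            B[vi] = A[vi] + 1.0
    for i in T.serial(128):
        with T.block("W"):   # write of A (WAR with respect to "R1")
            vi = T.axis.spatial(128, i)
            A[vi] = B[vi] * 2.0
    for i in T.serial(128):
        with T.block("R2"):  # second read of A, the one we want to cache
            vi = T.axis.spatial(128, i)
            C[vi] = A[vi] + 3.0

sch = tvm.tir.Schedule(rwr)
# The cache stage serves only "R2"; "R1" is ignored because it is not in
# consumer_blocks, so the cache block can be placed next to "R2".
sch.cache_read(sch.get_block("R2"), 0, "global",
               consumer_blocks=[sch.get_block("R2")])
print(sch.mod.script())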

Member:

cc @jwfromm since he is the author of the test case

Hzfengsy merged commit fbb500e into apache:main on Sep 24, 2022.
xinetzone pushed a commit to daobook/tvm referencing this pull request on Nov 25, 2022:

[TIR][Schedule] Relax cache read/write's restriction and fix unexpected behavior (apache#12766)

Co-authored-by: Min Chen <chen.min@intellif.com>