[RFC] CodeGenAArch64 backend with Scalable Vector Extension (SVE) #94
Conversation
This RFC is to add CodeGenAArch64 backend with SVE.
There is more context around where this is going in the meta-RFC :)
Most comments are summarized in the follow-up conversations.
> With SVE enabled, this TIR would further be lowered to LLVM: […]
Based on this description, it seems the proposed approach is:
- pattern-match a fixed vectorization (lanes=5)
- raise it back to an SVE pattern (with vscale, and lanes != 5)
- codegen

One concern is that the code can be simplified under the assumption lanes=5 during the lowering phase, and that simplification may not hold for the general case.

Edit: After thinking a bit more, I now think the above concern can be addressed by clarifying a strict set of raising rules, so feel free to ignore this.
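For concreteness, here is a minimal sketch of the type-level half of that raise-then-codegen step, assuming LLVM 11 or later; the helper name is made up for illustration and is not part of the RFC. A Ramp that would lower to a fixed vector such as `<4 x float>` is instead given the scalable type `<vscale x 4 x float>`:

```cpp
#include "llvm/IR/DerivedTypes.h"

// Fixed-width lowering:   <4 x float>
// Raised (SVE) lowering:  <vscale x 4 x float>
llvm::VectorType* RaiseToScalable(llvm::FixedVectorType* fixed_ty) {
  // Keep the element type and the minimum lane count; the hardware then
  // scales the actual register width by the runtime vscale.
  return llvm::ScalableVectorType::get(fixed_ty->getElementType(),
                                       fixed_ty->getNumElements());
}
```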
Thanks @ekalda. It is great to see us having conversations on bringing in SVE. The main question we likely want to resolve is what TIR spec, containing the SVE info, goes into codegen. Three alternatives have been discussed so far:

A0: Loop with annotation, but body as scalar:

```
for (i: int32, 0, 20; i, annotation={"VLA"}) {
  C_2[i] = A_2[i] + B_2[i];
}
```

A1: Vectorized loop with a constant vector factor:

```
for (i: int32, 0, 20; i) {
  C_2[ramp(i, 0, 5)] = A_2[ramp(i, 0, 5)] + B_2[ramp(i, 0, 5)];
}
```

A2: Vectorized loop with some form of TIR representation for an SVE vector:

```
for (i: int32, 0, 20; i) {
  C_2[ramp(i, 0, vscale)] = A_2[ramp(i, 0, vscale)] + B_2[ramp(i, 0, vscale)];
}
```

This would involve updates to the Ramp node in TIR; see Discussion below.

Discussion

The above three alternatives are meant to set the stage for discussion. We compare A0, A1 and A2 here to set up context for follow-ups; they do not need to block this RFC.

This RFC proposes A1. Because it is a proposed change to codegen only, it does not change TIR. If A1 can be implemented robustly, then I think it is a positive step (close to the S0 type of change we had in other conversations), even if we want to do things in several stages (with follow-up S1 changes).

The main question of discussion is how we can implement A1 robustly. Turning specialized code back into general code is a form of raising (from the special case to the general one), so it would be good to add a high-level description of the pattern-matching and conversion rules. For some background: initially I thought that there might be some traps when the code contains specializations to the lane count, but after thinking a bit more I find that my initial counterexample is actually fine under A1, so I am more convinced of this approach. It would be good to add some clarification along the following lines: we would only apply the SVE generalization if the code satisfies the following pattern.
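The pattern conditions themselves are not preserved above. Purely as an illustration of the kind of raising rule being asked for, here is a hypothetical predicate written against TVM's C++ TIR types; the function name and the exact conditions are not from the RFC:

```cpp
#include <tvm/tir/expr.h>

// Hypothetical raising rule: only a unit-stride Ramp with a constant,
// positive lane count is considered safe to generalize to a scalable
// access; gathers and non-unit strides keep the fixed-width lowering.
bool IsRaisableToSVE(const tvm::tir::RampNode* ramp) {
  const auto* stride = ramp->stride.as<tvm::IntImmNode>();
  if (stride == nullptr || stride->value != 1) return false;
  // Any surrounding code that has folded in the concrete lane count would
  // also have to be rejected; that check is out of scope for this sketch.
  return ramp->lanes > 1;
}
```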
Thanks for your input and suggestions @tqchen, much appreciated! I added a paragraph about pattern matching TIR, see if it makes sense.

Yes, this RFC proposes the A1 change. A2-style TIR intrinsics are in the plan further down the line; they would let us expose SVE capabilities to the core compiler, so we could explore a larger space of optimisations. The decision to enable SVE initially just at the TIR->LLVM boundary came from the realisation that we can generate perfectly valid SVE from just looking at the TIR, without having to modify it.

I have spent some time playing around with the current LLVM codegen, and I think you make a very good point about robustness. I have been looking at simple vectorized loads and stores ("simple" meaning that the stride is 1 and that the index expression is a Ramp node, not a complex non-linear calculation with a Ramp as a leaf node). The main challenge I currently see is that while the index itself is 1D at the point of code generation, the loop nest generally isn't, so I have to figure out the right loop bound to change from the base of the Ramp node. It seems to me that we have to do some sort of analysis pass just before the codegen to collect that info (a possible shape is sketched below). It would have been nice to generate the SVE LLVM directly "as we go" during the LLVM codegen, but it seems that we generate LLVM with the loop bounds fixed before we visit the loop body (so before we discover the Ramp nodes), and we can't change the bounds afterwards. I think doing an analysis pass would help with robustness, since we can gather as much information from the TIR graph as we need.

I haven't worked a lot with LLVM backends, so I am interested in hearing any thoughts/suggestions.
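As a rough sketch of such an analysis pass, assuming TVM's C++ visitor API, with the class and member names hypothetical: walk the TIR once before codegen and record which variables feed the bases of unit-stride Ramp nodes, so the codegen can tell which enclosing loop bounds may be rewritten for VLA:

```cpp
#include <tvm/tir/expr.h>
#include <tvm/tir/stmt_functor.h>
#include <unordered_set>

// Hypothetical pre-codegen analysis: collect every variable that appears
// in the base of a unit-stride Ramp. An enclosing loop that defines one of
// these variables is a candidate for the VLA bound rewrite.
class RampBaseVarCollector : public tvm::tir::StmtExprVisitor {
 public:
  std::unordered_set<const tvm::tir::VarNode*> ramp_base_vars;

 private:
  void VisitExpr_(const tvm::tir::RampNode* op) final {
    const auto* stride = op->stride.as<tvm::IntImmNode>();
    if (stride != nullptr && stride->value == 1) {
      // Record all variables referenced by the Ramp base expression.
      tvm::tir::PostOrderVisit(op->base, [this](const tvm::ObjectRef& n) {
        if (const auto* v = n.as<tvm::tir::VarNode>()) {
          ramp_base_vars.insert(v);
        }
      });
    }
    StmtExprVisitor::VisitExpr_(op);
  }
};
```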
Thanks @ekalda, I don't have further comments at this point.