Add top-level CMake kernels target (#12806)
Conversation
Stack from ghstack (oldest at bottom):

See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12806.

Note: Links to docs will display an error until the docs builds have been completed. As of commit 42c53a6 with merge base ce9da63, 1 new job failure.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
```cmake
check_conflicting_options_on(
  IF_ON EXECUTORCH_SELECT_OPS_YAML CONFLICTS_WITH
  EXECUTORCH_SELECT_OPS_LIST EXECUTORCH_SELECT_OPS_MODEL
)
```
I mean in the future it should be composable. I'm OK with having this for now.
@GregoryComer Should this also check for a conflict with the OPS_FROM_MODEL API?
Oh I see it there, later down.
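For reference, a helper like the `check_conflicting_options_on` call above could be implemented roughly as follows. This is a hypothetical sketch, not the implementation from this PR (whose body is not shown in the diff); it assumes the options being compared are set as ordinary cache variables:

```cmake
# Hypothetical sketch: if the option named by IF_ON is set, fail configuration
# when any of the options listed after CONFLICTS_WITH is also set.
function(check_conflicting_options_on)
  cmake_parse_arguments(ARG "" "IF_ON" "CONFLICTS_WITH" ${ARGN})
  if(${ARG_IF_ON})
    foreach(other IN LISTS ARG_CONFLICTS_WITH)
      if(${other})
        message(
          FATAL_ERROR
          "${ARG_IF_ON} conflicts with ${other}; set at most one of them."
        )
      endif()
    endforeach()
  endif()
endfunction()
```

Since `if(<variable-name>)` is true for any non-empty, non-false value, the same sketch works for both boolean and string-valued options.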
```cmake
if(NOT EXECUTORCH_SELECT_OPS_YAML STREQUAL ""
   OR NOT EXECUTORCH_SELECT_OPS_LIST STREQUAL ""
   OR NOT EXECUTORCH_SELECT_OPS_MODEL STREQUAL ""
)
```
Thank you for adding this!
@GregoryComer can you remove these options: https://github.com/pytorch/executorch/blob/main/examples/selective_build/CMakeLists.txt#L55 and let it use the ones you added?
```cmake
LIB_NAME
"executorch_selected_kernels"
KERNEL_LIBS
"portable_kernels"
```
Thanks for adding this to top-level cmake!
So, it seems executorch_selected_kernels is a selective build library for portable_kernels only? If a user wants selective build for optimized or custom kernel libs, they should either:
- Update here (for test purposes, I guess).
- Follow the current flow and create their own selective build lib.
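For the second option, the current flow in examples/selective_build chains ExecuTorch's codegen helpers. A rough sketch of what a user's own selective build lib might look like (function names are taken from the existing example; exact arguments, op names, and yaml paths shown here are illustrative and may differ between versions):

```cmake
# Sketch of a user-defined selective build library, assuming the helpers from
# ExecuTorch's Codegen.cmake are available. Op list and yaml path are examples.
gen_selected_ops(
  LIB_NAME "my_selected_ops"
  ROOT_OPS "aten::add.out,aten::mul.out"  # hypothetical op list
)
generate_bindings_for_kernels(
  LIB_NAME "my_selected_ops"
  FUNCTIONS_YAML ${EXECUTORCH_ROOT}/kernels/portable/functions.yaml
)
gen_operators_lib(
  LIB_NAME "my_selected_ops"
  KERNEL_LIBS portable_kernels  # swap in optimized/custom kernel libs here
  DEPS executorch_core
)
```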
Yeah, as a follow-up, I intend to enable optimized (and portable_optimized) kernels. To enable selective build for other libraries (quantized ops, llm ops, etc.), I should just be able to use the merge_yaml functionality to generate a combined op yaml, right?
For users' custom op libs, I think they'll probably need to define their own targets, as we can't access their targets from here (unless we rewrote this as a macro/function, but we can look at that separately).
Backend op libs are another case that needs more thinking through, as well. But hopefully this should be a good starting point.
I've created a follow-up task to track this: #12948. Will do this shortly.
Add a top-level kernels CMake target, which includes all configured kernel libraries. I validated this change by building a simple runner against the executorch_kernels target and verifying that it was able to build and run MobileNet v2, with and without selective build. Once the changes land in ExecuTorch, this will be validated long-term in executorch-examples CI.
There are a few follow-ups needed, which I will handle shortly. I still need to add merging logic for the YAML files to generate a single selective library that includes quantized and LLM ops, when configured to build them.
This is done in the context of top-level CMake targets, tracked in #12293.
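From a consumer's perspective, the new target could be used roughly like this. A minimal sketch, assuming ExecuTorch is installed as a CMake package and that `executorch_kernels` is the target name described above (`my_runner` and `main.cpp` are placeholders):

```cmake
# Hypothetical consumer CMakeLists.txt fragment: linking against the single
# top-level kernels target pulls in all configured kernel libraries, instead
# of listing portable/optimized/quantized kernel libs individually.
cmake_minimum_required(VERSION 3.24)
project(my_runner_example)

find_package(executorch REQUIRED)

add_executable(my_runner main.cpp)
target_link_libraries(my_runner PRIVATE executorch_kernels)
```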