[Build] Run pyright against modified files (#14)
Conversation
    set -ex

    python python/tools/run_modified_files.py --include 'python/*.py' -- pyright
We can have errors in unmodified files due to modifications elsewhere, can't we?
Is it particularly slow to run everywhere? If not, I'd keep things simple and run it everywhere, or parallelize instead.
It's super fast; speed is not an issue. The ~1500 errors that pop up when we run it everywhere are.
See https://genesis-ai-company.slack.com/archives/C08JT0YHM2A/p1748974351128939 for more context
Does it keep checking files that were changed (and therefore fixed) previously, but have not been changed again in the current commit/PR?
It checks files that have been modified relative to main.
It won't check files that were modified previously in main but not since.
Basically, it protects against breaking new files and will gradually enforce pyright on files as we modify them (and once a file is fixed, the only way to unfix it would be to modify it again, I think?).
I think you can introduce typing errors in a file by modifying a dependency in another file (e.g., a type it uses that is defined elsewhere). And once a file is fixed, I think we should keep running pyright on it.
An alternative may be to maintain a list of exclusions in the pyright configuration from which we'd gradually remove files, as done in gs-core.
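For concreteness, the gs-core-style alternative described above would be an exclude list in the pyright configuration that shrinks over time as files get fixed; a minimal sketch of what a pyrightconfig.json might look like (the excluded paths are hypothetical, not actual taichi files):

```json
{
  "include": ["python"],
  "exclude": [
    "python/taichi/lang/legacy_module.py",
    "python/taichi/examples/**"
  ]
}
```

With this approach, pyright runs over everything by default, and enforcement spreads by deleting entries from `exclude` rather than by tracking git modifications.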
Right now, that would be a list of roughly 1500 files... Baby steps. We don't even know whether we can fix a single file yet.
I agree with Charles. I'm not a huge fan of the approach of only checking modified files. But maybe we should remove all this confusing/obfuscated logic once the Taichi repo is finally clean.
For sure, yeah. I just can't do everything all at once :)
> that would be a list of like 1500 files...

We can also add full directory exclusions if that is more manageable.
    pip3 install scikit-build

    pip install pyright pybind11-stubgen
Can we configure this in a pyproject.toml with a dev dependency group so that everyone can use it to set up local dev as well?
I don't think we can use pyproject.toml, though perhaps you know how to configure taichi so we can? I would love to migrate to pyproject.toml.
But yes, we should probably add it to requirements_dev.txt. I'll do that.
Added to requirements_dev.txt.
Hmmm, I guess since I'm running a full build anyway, there's not much excuse for not removing it from the pyright script. I'll try doing that.
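For reference, if a pyproject.toml migration does happen later, these tools could live in an optional dev extra instead of requirements_dev.txt; a minimal sketch (the extra name is an assumption):

```toml
[project.optional-dependencies]
dev = [
    "pyright",
    "pybind11-stubgen",
]
```

Developers would then set up a local environment with `pip install -e ".[dev]"`.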
    import fnmatch

    def get_changed_files(include_pattern: str) -> List[str]:
I have mixed feelings about adding this complexity. If it's too slow to run everywhere, couldn't we scope actions (e.g., by directory) and trigger only on directory changes? That would seem like a more robust and transparent solution to me.
See other thread.
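A rough sketch of what `get_changed_files` could look like, assuming the script diffs against the merge base with origin/main (the exact `git diff` invocation and the `filter_changed` helper are assumptions for illustration, not the actual implementation; the pure filtering step is split out so it can be tested without a git checkout):

```python
import fnmatch
import subprocess
from typing import List


def filter_changed(files: List[str], include_pattern: str) -> List[str]:
    """Keep only paths matching the --include glob, e.g. 'python/*.py'."""
    return [f for f in files if fnmatch.fnmatch(f, include_pattern)]


def get_changed_files(include_pattern: str) -> List[str]:
    # Diff against the merge base (the "..." notation) so commits that
    # landed on main after the branch point are not reported as modified.
    # --diff-filter=d drops deleted files, which pyright cannot check.
    out = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=d", "origin/main...HEAD"],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
    changed = [line for line in out.splitlines() if line]
    return filter_changed(changed, include_pattern)
```

The filtered list would then be appended to whatever command follows `--` (here, `pyright`).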
    steps:
      # looks like we need to first build the dll...
      # easiest way to do that is to build the wheel
      # at some point, we should probably have a separate wheel building thing
I did this for Jiminy in the past. You can find how to do this here.
Yeah, let's have that as a separate PR, please. One step at a time.
    build:
      runs-on: ubuntu-22.04
      steps:
        # looks like we need to first build the dll...
Actually, you should build the DLL, then generate the stubs to get typing working, which is basically what you did, and yes, it is absolutely necessary. Note that you should bundle the stubs with the wheel so that the end user (really, a developer using taichi) gets a proper typing experience out of the box. I did this before for Jiminy here.
OK, so the .pyi bit should be part of the wheel build? Sounds good. Thanks!
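For reference, a typical pybind11-stubgen flow builds the extension first and then emits .pyi files from the importable module; a sketch along those lines (the wheel path, module path, and output directory are assumptions, not taichi's actual layout):

```
# Install the freshly built wheel so the compiled extension is importable.
pip install dist/*.whl

# Generate .pyi stubs from the installed module; bundling them into the
# wheel gives end users typing support out of the box.
pybind11-stubgen taichi._lib -o python/
```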
CharlesMasson left a comment:
As discussed in the comments, I don't find the way we currently run pyright very practical, but if we want to address that in a subsequent PR, we can move forward.
Convert MatrixField.to_torch / to_numpy to thin calls into _interop.get_zerocopy_torch / get_zerocopy_numpy. Adds a single _matrix_view_shape() helper that returns (expected_shape, as_vector); both to_torch and to_numpy now go through it instead of duplicating the n / m / keep_dims branching (closes review #13, #14).

Same wins as the ScalarField migration:
- _zerocopy_cache as cached_property (closes review #17)
- automatic registration with pyquadrants.cache_holders (closes #18)
- copy=True now zerocopies + clones (closes #15, #16, #21)
- to_numpy(copy=False) goes through numpy.from_dlpack directly, no torch round-trip (closes #6)
- Apple Metal double-sync via the centralised helpers (closes #1, #22, #23)

The redundant reshape after zerocopy stays (DLPack returns the field's flat n-d shape; numpy/torch users expect the matrix dims appended), but it's now in one place per method instead of two.
Issue: #
Brief Summary
Run pyright against modified files