[Doc] Check markup links#7
Conversation
I've created a ticket at https://linear.app/genesis-ai-company/issue/CMP-22/fix-broken-links to fix the plethora of broken links.

Let's merge this as-is and fix the links later? (I would kind of prefer the broken links to make the test yellow rather than red, but anyway.)

Actually, let's make it check modified files only, so at least we stabilize the links and don't break new ones.
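The "check modified files only" idea could be wired up roughly like this. This is a hypothetical sketch: it assumes `lychee` as the link checker and `origin/main` as the base ref, neither of which is confirmed by this thread; the actual workflow may use a different tool or base.

```shell
# Hypothetical sketch: run the link checker only on markdown files
# modified on this branch, so existing broken links don't fail CI
# but new ones are caught.
base="${BASE_REF:-origin/main}"
changed=$(git diff --name-only "$base"...HEAD -- '*.md' 2>/dev/null || true)
if [ -z "$changed" ]; then
  echo "no modified markdown files"
elif command -v lychee >/dev/null 2>&1; then
  # lychee is an assumption; substitute whatever checker the CI uses
  lychee --no-progress $changed
else
  echo "lychee not installed; skipping"
fi
```

The `... || true` guard keeps the step green when the base ref is unavailable (e.g. a shallow clone), which matches the "yellow rather than red" preference above.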
> | `ti.simt.warp.active_mask` | `__activemask` |
> | `ti.simt.warp.sync` | `__syncwarp` |
>
> See [Taichi's API reference](https://docs.taichi-lang.org/api/taichi/lang/simt/warp/#module-taichi.lang.simt.warp)
Why remove this documentation? Just use the right link: https://docs.taichi-lang.org/docs/simt
wait, doesn't that simply link to this doc?
Perhaps I should also simply remove the changes to this file from this PR? (since I modified the config to only check changed files anyway)
I think it's better to fix the broken links!
There are a lot of broken links, hence why I'm only checking modified files. See the previous run: https://github.com/genesis-company/taichi/actions/runs/15408489192/job/43355671280. I've created a ticket to fix these (https://linear.app/genesis-ai-company/issue/CMP-22/fix-broken-links), but I don't think we have time to spend 7-8 hours fixing them right now, in this PR.
If someone modifies a doc, then 1. they have some context and expertise on that doc, and 2. the scope is one single doc, so it's much more manageable for them to fix that doc at that time, I feel. This PR means that:
- we don't break any additional links
- we will gradually fix links as and when we modify pages
Ok, I was not aware of that. In this case, I'm in favour of keeping the broken links and updating them progressively instead of deleting part of the doc.
> wait, doesn't that simply link to this doc?
You are right, so for this one I would just remove the sentence as you did initially, my bad x)
But you can do this later if you want.
> To avoid such implicit casting, you can manually cast your operands to
> desired types, using `ti.cast`. Please see
> [Default precisions](#default-precisions) for more details on
Why remove this documentation?
Because I couldn't find the right link :)
Maybe I should simply remove the changes to this file from this PR?
Let's remove this one. I cannot find it either.
The closest I could find is this one: https://docs.taichi-lang.org/docs/master/global_settings#going-high-precision
I think it would be nice to reword the sentence a bit and replace the link with this one.
Following your other comment, this could be done in another PR (one day in the future :p).
Removed this file from the change in f5b017c.
Thanks!
Introduce the helper machinery that the per-class `to_torch` / `to_numpy` methods will migrate to in subsequent commits. Existing public symbols (`can_zerocopy`, `dlpack_to_torch`, `invalidate_zerocopy_cache`, `current_arch_is_cpu`) are preserved as deprecated shims so the in-tree pre-rework callers continue to work; they will be removed once every call site is migrated.

New surface:
- `_ZerocopyCache`: per-instance container with two independent slots (torch tensor + numpy ndarray), each filled lazily on first access via `torch.utils.dlpack.from_dlpack` and `numpy.from_dlpack` respectively. Numpy zero-copy now bypasses torch entirely (closes review #6).
- `make_zerocopy_cache_if_supported(owner, ...)`: constructs a cache when zero-copy is supported and registers `owner` with `pyquadrants.cache_holders` so invalidation is wired automatically (closes review #18).
- `get_zerocopy_torch` / `get_zerocopy_numpy`: thin entry points that implement the always-zerocopy-then-clone semantic (closes review #15, #16, #21) and the Apple Metal double-sync (`qd.sync()` on read AND `torch.mps.synchronize()` after `.clone()`/`.to()`; closes review #1, #22, #23).

Also applies the small lints from the review:
- Module-level constant for the torch>2.9.1 MPS `bytes_offset` probe; drops the pointless `lru_cache` wrapper around a zero-arg helper (closes review #2).
- ASCII '...' instead of the Unicode horizontal ellipsis '\u2026' in the docstring (closes review #3).
- Top-level imports for numpy and torch (try/except for the no-torch CI case); no per-call lazy imports in the new code path (closes review #7, #9).

The deprecated shim still does what the existing per-class methods expect; the new helpers are torch-clean. `cache_holders` is still empty until the next commits register `Ndarray` / `ScalarField` / `MatrixField`; this commit alone is a no-op behaviourally.
Issue: #
Brief Summary
Check markup links (this is not part of existing upstream checks; I'm adding it)
copilot:summary
Walkthrough
copilot:walkthrough