Merged
2 changes: 1 addition & 1 deletion .github/workflows/clang_tidy.yml
Original file line number Diff line number Diff line change
@@ -17,7 +17,7 @@ jobs:
# to do this, we need to run cmake with the following options:
# -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
# I'm fairly sure that we don't need to run the actual build,
# but it's not obvious to me how to do this. So, I'm just going
# but it's not obvious to me how to do this. So, I'm just going
# to run a full build for now, and we can FIXME this later.
- name: Prerequisites for build
run: |
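On the FIXME above: CMake writes compile_commands.json at configure time, so a configure-only step may be enough for clang-tidy. A sketch of such a workflow step (untested here; generated headers can still require building some targets):

```yaml
# Sketch only: configure-only step to produce compile_commands.json
# without running the full build.
- name: Generate compile database
  run: cmake -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
```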
19 changes: 19 additions & 0 deletions .github/workflows/linters.yml
@@ -0,0 +1,19 @@
name: Linters, using linux
on:
workflow_dispatch:
pull_request:
branches:
- main
jobs:
build:
name: Linters
runs-on: ubuntu-22.04
steps:
- name: Python check
uses: actions/setup-python@v4
with:
python-version: '3.10'
- uses: actions/checkout@v4
- name: Run linters
run: |
bash .github/workflows/scripts_new/linters.sh
Contributor

Same again.

@CharlesMasson CharlesMasson Jun 5, 2025

Ruff, for instance, has an official, well-maintained GitHub Action. I think it'd make sense to use it, so that what we have to maintain is minimal.
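For reference, a minimal sketch of what using that action could look like (the `astral-sh/ruff-action` name, version tag, and arguments are assumptions to verify against the official repository):

```yaml
# Sketch only -- not a definitive setup; check Ruff's documentation
# for the current action name and release tag.
name: Ruff
on:
  pull_request:
    branches:
      - main
jobs:
  ruff:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/ruff-action@v3
        with:
          args: check --select I .
```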

3 changes: 2 additions & 1 deletion .github/workflows/linux_x86.yml
@@ -1,11 +1,12 @@
name: Linux x86 on-demand
name: Linux x86
on:
pull_request:
branches:
- main
workflow_dispatch:
jobs:
build:
name: Linux x86 Build
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
4 changes: 3 additions & 1 deletion .github/workflows/scripts/ti_build/vulkan.py
@@ -19,7 +19,9 @@
def setup_vulkan():
u = platform.uname()
if u.system == "Linux":
url = f"https://sdk.lunarg.com/sdk/download/{VULKAN_VERSION}/linux/vulkansdk-linux-x86_64-{VULKAN_VERSION}.tar.xz"
url = (
f"https://sdk.lunarg.com/sdk/download/{VULKAN_VERSION}/linux/vulkansdk-linux-x86_64-{VULKAN_VERSION}.tar.xz"
)
prefix = get_cache_home() / f"vulkan-{VULKAN_VERSION}"

download_dep(url, prefix, strip=1)
31 changes: 31 additions & 0 deletions .github/workflows/scripts_new/linters.sh
@@ -0,0 +1,31 @@
#!/bin/bash

set -ex

python -V
pwd
ls
uname -a

# python + C++
# =============

pip install pre-commit
pre-commit run -a --show-diff

# python
# ======

pip install pyright
# Need to deal with C++ linkage issues first (and possibly
# some other things), before we can turn on pyright

pip install isort
What do you think of using ruff? We already use it in gs-core, and it's the most robust and complete option. We could enable only the isort rules for now if we want to limit ourselves to that.

Collaborator Author

I like ruff

Collaborator Author

I think we should merge all CI PRs first, before running a ruff pass, since that will conflict with the whole world...

Collaborator Author

(possibly excluding this one, I suppose 🤔)

Collaborator Author

Well... I suppose most of the changes are to either C++ or CI, which are both orthogonal to ruff 🤔

I think we could start using ruff now and only enable isort rules instead of migrating later, but up to you.

@CharlesMasson CharlesMasson Jun 9, 2025

One advantage I see with ruff is that it has well-maintained pre-commit hooks and GitHub Actions, so I think it'd allow removing some code in this PR, especially since it would also allow removing black and pylint in addition to isort.

Collaborator Author

Oh interesting, I didn't realize ruff check --select I was an option. Looking into that. Thanks!

Yes, but I'd favor configuring in pyproject.toml or ruff.toml, so that configuration is done in a single place and more transparently.

Collaborator Author

Yup, good point. Thanks! 🙌
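Pulling the thread together, a sketch of the Ruff pre-commit hooks mentioned above (repo URL, rev, and hook ids are assumptions to verify against the Ruff docs before adopting):

```yaml
# Sketch only -- pin rev to a current release before adopting.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4  # assumption: replace with the latest tagged release
    hooks:
      - id: ruff         # linting; rule selection can subsume isort/pylint
      - id: ruff-format  # formatting; can replace black
```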

# TODO: run isort on all python files, and commit those, then
# uncomment the following line:
# isort --check-only --diff python

# C++
# ===

# TODO: figure out how to run clang-tidy
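On the single-place configuration discussed in the thread above, a sketch limited to import sorting (rule selection is an assumption; the line length mirrors the existing black setting):

```toml
# pyproject.toml (sketch) -- enable only Ruff's isort-compatible rules
[tool.ruff]
line-length = 120

[tool.ruff.lint]
select = ["I"]  # "I" = import-sorting (isort) rules only, for now
```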
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -3,15 +3,15 @@ ci:
autoupdate_commit_msg: '[misc] Update pre-commit hooks'

default_language_version:
python: python3.12
python: python3.10
duburcqa marked this conversation as resolved.

exclude: ^((tests/python/test_exception)\.py$|external/)
repos:
- repo: https://github.com/psf/black
rev: 25.1.0
hooks:
- id: black
language_version: python3.12
language_version: python3.10
args: ['-l', '120']

- repo: https://github.com/pre-commit/mirrors-clang-format
2 changes: 2 additions & 0 deletions python/taichi/tools/vtk.py
@@ -22,6 +22,8 @@ def write_vtk(scalar_field, filename):
zcoords = np.array([0, 1])
elif dimensions == 3:
zcoords = np.arange(0, field_shape[2])
else:
duburcqa marked this conversation as resolved.
raise ValueError("dimensions should be 2 or 3")
gridToVTK(
filename,
x=np.arange(0, field_shape[0]),
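The guard added above can be exercised in isolation; a minimal sketch in which the function and variable names are simplified stand-ins for the ones in `write_vtk`:

```python
import numpy as np

def z_coords(field_shape):
    # Mirrors the dimension check added above: 2-D fields get a unit
    # z-extent, 3-D fields get one coordinate per layer, and anything
    # else raises instead of silently leaving zcoords undefined.
    dimensions = len(field_shape)
    if dimensions == 2:
        return np.array([0, 1])
    if dimensions == 3:
        return np.arange(0, field_shape[2])
    raise ValueError("dimensions should be 2 or 3")
```

Without the `else` branch, a 1-D or 4-D field would fall through and hit a NameError later, which is what the new ValueError makes explicit.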
75 changes: 37 additions & 38 deletions python/tools/markdown_link_check.py
@@ -6,106 +6,103 @@

error_found = False # Track if any errors are found


def check_markdown_links(file_path, base_dir=None):
"""
Check all links in a Markdown file, including anchor references.

Args:
file_path: Path to the Markdown file
base_dir: Base directory for relative links (defaults to file's directory)
"""
global error_found
if base_dir is None:
base_dir = os.path.dirname(os.path.abspath(file_path))
with open(file_path, 'r', encoding='utf-8') as f:

with open(file_path, "r", encoding="utf-8") as f:
content = f.read()

# Find all links and image references
link_pattern = r'\[.*?\]\((.*?)\)|!\[.*?\]\((.*?)\)'
link_pattern = r"\[.*?\]\((.*?)\)|!\[.*?\]\((.*?)\)"
matches = re.findall(link_pattern, content)

# Combine both capturing groups (links and images)
links = [match[0] or match[1] for match in matches if match[0] or match[1]]

for link in links:
parsed = urlparse(link)

# Skip mailto and external links
if parsed.scheme in ('http', 'https', 'mailto'):
if parsed.scheme in ("http", "https", "mailto"):
print(f"[-] External link (not checked): {link}")
continue

# Handle anchor-only links
if not parsed.path and parsed.fragment:
check_anchor(file_path, parsed.fragment)
continue

# Handle relative paths
if not parsed.scheme and not parsed.netloc:
full_path = os.path.normpath(os.path.join(base_dir, parsed.path))

# Check if file exists
if not os.path.exists(full_path):
print(f"❌ Broken link: {link} (File not found: {full_path})")
error_found = True
continue

# Check anchor in local file
if parsed.fragment:
if full_path.endswith('.md'):
if full_path.endswith(".md"):
check_anchor(full_path, parsed.fragment)
else:
# For non-markdown files, we can't check anchors
print(f"⚠️ Anchor in non-Markdown file (not checked): {link}")


def check_anchor(md_file_path, anchor):
"""
Check if an anchor exists in a Markdown file.

Args:
md_file_path: Path to the Markdown file
anchor: Anchor to check (without #)
"""
global error_found
try:
with open(md_file_path, 'r', encoding='utf-8') as f:
with open(md_file_path, "r", encoding="utf-8") as f:
content = f.read()

# Improved anchor cleaning: remove non-alphanum except hyphens, collapse multiple hyphens, strip hyphens
def clean_anchor(s):
s = s.lower().replace(' ', '-')
s = re.sub(r'[^a-z0-9\-]', '', s)
s = re.sub(r'-+', '-', s)
s = s.strip('-')
s = s.lower().replace(" ", "-")
s = re.sub(r"[^a-z0-9\-]", "", s)
s = re.sub(r"-+", "-", s)
s = s.strip("-")
return s

normalized_anchor = clean_anchor(anchor)

# Pattern for Markdown headers
header_pattern = r'^#+\s+(.*)$'
header_pattern = r"^#+\s+(.*)$"

found = False
available_anchors = []
for line in content.split('\n'):
for line in content.split("\n"):
match = re.match(header_pattern, line)
if match:
header_text = match.group(1)
anchor_dash = clean_anchor(header_text)
anchor_underscore = re.sub(r'[^a-z0-9\-]', '', header_text.lower().replace(' ', '_'))
anchor_nospace = re.sub(r'[^a-z0-9\-]', '', header_text.replace(' ', ''))
anchor_raw = re.sub(r'[^a-z0-9\-]', '', header_text)
possible_anchors = [
anchor_dash,
anchor_underscore,
anchor_nospace,
anchor_raw
]
anchor_underscore = re.sub(r"[^a-z0-9\-]", "", header_text.lower().replace(" ", "_"))
anchor_nospace = re.sub(r"[^a-z0-9\-]", "", header_text.replace(" ", ""))
anchor_raw = re.sub(r"[^a-z0-9\-]", "", header_text)
possible_anchors = [anchor_dash, anchor_underscore, anchor_nospace, anchor_raw]
available_anchors.append(anchor_dash)
if normalized_anchor in possible_anchors:
found = True
break

if not found:
print(f"❌ Broken anchor: #{anchor} in {md_file_path}")
print(f" Available anchors in this file:")
@@ -115,18 +112,20 @@ def clean_anchor(s):
except Exception as e:
print(f"⚠️ Error checking anchor #{anchor} in {md_file_path}: {str(e)}")


def find_markdown_files(root_dir):
"""
Recursively find all .md files under root_dir.
"""
md_files = []
for dirpath, _, filenames in os.walk(root_dir):
for filename in filenames:
if filename.lower().endswith('.md'):
if filename.lower().endswith(".md"):
md_files.append(os.path.join(dirpath, filename))
return md_files

if __name__ == '__main__':

if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Check Markdown links in a directory or a single Markdown file.")
parser.add_argument("path", help="Path to the root directory or a Markdown file")
args = parser.parse_args()
@@ -139,7 +138,7 @@ def find_markdown_files(root_dir):
if not md_files:
print(f"No Markdown files found in {input_path}")
exit(0)
elif os.path.isfile(input_path) and input_path.lower().endswith('.md'):
elif os.path.isfile(input_path) and input_path.lower().endswith(".md"):
md_files = [input_path]
else:
print(f"Error: {input_path} is not a directory or a Markdown (.md) file.")
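The slug logic reformatted above is easy to exercise on its own; a minimal sketch of the primary (hyphen-based) normalization path from `clean_anchor`:

```python
import re

def clean_anchor(s):
    # GitHub-style anchor slug: lowercase, spaces to hyphens, drop any
    # other punctuation, collapse runs of hyphens, trim edge hyphens.
    s = s.lower().replace(" ", "-")
    s = re.sub(r"[^a-z0-9\-]", "", s)
    s = re.sub(r"-+", "-", s)
    return s.strip("-")

print(clean_anchor("Getting Started: Install & Run"))
# -> getting-started-install-run
```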
22 changes: 12 additions & 10 deletions taichi/program/sparse_solver.cpp
@@ -221,17 +221,17 @@ void CuSparseSolver::reorder(const CuSparseMatrix &A) {
assert(nullptr != h_csr_val_B_);
assert(nullptr != h_map_B_from_A_);

CUDADriver::get_instance().memcpy_device_to_host(h_csr_row_ptr_B_, d_csrRowPtrA,
sizeof(int) * (rowsA + 1));
CUDADriver::get_instance().memcpy_device_to_host(h_csr_col_ind_B_, d_csrColIndA,
sizeof(int) * nnzA);
CUDADriver::get_instance().memcpy_device_to_host(
h_csr_row_ptr_B_, d_csrRowPtrA, sizeof(int) * (rowsA + 1));
CUDADriver::get_instance().memcpy_device_to_host(
h_csr_col_ind_B_, d_csrColIndA, sizeof(int) * nnzA);
CUDADriver::get_instance().memcpy_device_to_host(h_csrValA, d_csrValA,
sizeof(float) * nnzA);

// compute h_Q_
CUSOLVERDriver::get_instance().csSpXcsrsymamdHost(cusolver_handle_, rowsA,
nnzA, descr_, h_csr_row_ptr_B_,
h_csr_col_ind_B_, h_Q_);
CUSOLVERDriver::get_instance().csSpXcsrsymamdHost(
cusolver_handle_, rowsA, nnzA, descr_, h_csr_row_ptr_B_, h_csr_col_ind_B_,
h_Q_);
CUDADriver::get_instance().malloc((void **)&d_Q_, sizeof(int) * colsA);
CUDADriver::get_instance().memcpy_host_to_device((void *)d_Q_, (void *)h_Q_,
sizeof(int) * (colsA));
@@ -255,9 +255,11 @@ void CuSparseSolver::reorder(const CuSparseMatrix &A) {
sizeof(int) * (rowsA + 1));
CUDADriver::get_instance().malloc((void **)&d_csr_col_ind_B_,
sizeof(int) * nnzA);
CUDADriver::get_instance().malloc((void **)&d_csr_val_B_, sizeof(float) * nnzA);
CUDADriver::get_instance().memcpy_host_to_device(
(void *)d_csr_row_ptr_B_, (void *)h_csr_row_ptr_B_, sizeof(int) * (rowsA + 1));
CUDADriver::get_instance().malloc((void **)&d_csr_val_B_,
sizeof(float) * nnzA);
CUDADriver::get_instance().memcpy_host_to_device((void *)d_csr_row_ptr_B_,
(void *)h_csr_row_ptr_B_,
sizeof(int) * (rowsA + 1));
CUDADriver::get_instance().memcpy_host_to_device(
(void *)d_csr_col_ind_B_, (void *)h_csr_col_ind_B_, sizeof(int) * nnzA);
CUDADriver::get_instance().memcpy_host_to_device(
6 changes: 3 additions & 3 deletions taichi/program/sparse_solver.h
@@ -97,12 +97,12 @@ class CuSparseSolver : public SparseSolver {
int *d_Q_{nullptr};
int *h_csr_row_ptr_B_{nullptr}; /* <int> n+1 */
int *h_csr_col_ind_B_{nullptr}; /* <int> nnzA */
float *h_csr_val_B_{nullptr}; /* <float> nnzA */
float *h_csr_val_B_{nullptr}; /* <float> nnzA */
int *h_map_B_from_A_{nullptr}; /* <int> nnzA */
int *d_csr_row_ptr_B_{nullptr}; /* <int> n+1 */
int *d_csr_col_ind_B_{nullptr}; /* <int> nnzA */
float *d_csr_val_B_{nullptr}; /* <float> nnzA */
// NOLINTEND
float *d_csr_val_B_{nullptr}; /* <float> nnzA */
// NOLINTEND
public:
CuSparseSolver();
explicit CuSparseSolver(SolverType solver_type) : solver_type_(solver_type) {
8 changes: 4 additions & 4 deletions taichi/python/export_lang.cpp
@@ -350,11 +350,11 @@ void export_lang(py::module &m) {
.def("insert_snode_access_flag", &ASTBuilder::insert_snode_access_flag)
.def("reset_snode_access_flag", &ASTBuilder::reset_snode_access_flag);

auto device_capability_config = py::class_<DeviceCapabilityConfig>(
m, "DeviceCapabilityConfig");
auto device_capability_config =
py::class_<DeviceCapabilityConfig>(m, "DeviceCapabilityConfig");

auto compiled_kernel_data = py::class_<CompiledKernelData>(
m, "CompiledKernelData");
auto compiled_kernel_data =
py::class_<CompiledKernelData>(m, "CompiledKernelData");

py::class_<Program>(m, "Program")
.def(py::init<>())