
cmake(pt): robustly detect PyTorch CXX11 ABI via Python; unify comments #4879

Closed
OutisLi wants to merge 5 commits into deepmodeling:devel from OutisLi:pr/fix_cxx_abi

Conversation

@OutisLi
Collaborator

@OutisLi OutisLi commented Aug 11, 2025

cmake(pt): robustly detect PyTorch CXX11 ABI via Python; unify comments

Summary

  • Replace fragile CMake-side parsing of TORCH_CXX_FLAGS with a robust, runtime-accurate ABI detection using Python: torch.compiled_with_cxx11_abi().
  • Unify and streamline comments in the modified block to match the rest of the file (concise English, no banner-style markers).
  • Preserve the existing behavior for ABI mismatch handling with TensorFlow and for defaulting the macro definition when unset.

Motivation

  • Parsing TORCH_CXX_FLAGS can be unreliable for wheel-based distributions and certain environments where the flag may be absent or misleading.
  • Querying torch.compiled_with_cxx11_abi() from Python reflects the actual build/runtime setting of the installed PyTorch, reducing false negatives/positives.
  • Improve readability and consistency of the CMake file’s comments.
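The fragility of flag parsing can be illustrated with a small sketch (plain Python; the flags strings are made up for illustration, not real TORCH_CXX_FLAGS values):

```python
import re

def parse_abi_from_flags(flags: str):
    """Mimic the old CMake-side regex parse of the compile flags.

    Returns 0 or 1 when -D_GLIBCXX_USE_CXX11_ABI is present, and None
    when it is absent -- the case that made the old detection fragile.
    """
    m = re.search(r"-D_GLIBCXX_USE_CXX11_ABI=([01])", flags)
    return int(m.group(1)) if m else None

# Explicit flag: parsing works.
print(parse_abi_from_flags("-D_GLIBCXX_USE_CXX11_ABI=1 -fPIC"))  # 1
# Wheel-style flags without the macro: parsing silently yields nothing.
print(parse_abi_from_flags("-O2 -fPIC"))  # None
```

By contrast, torch.compiled_with_cxx11_abi() asks the installed binary directly, so there is no string to mis-parse.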

Changes

  • Find Python interpreter and invoke:
    • env PYTHONPATH=${Python_SITEARCH} ${Python_EXECUTABLE} -c "import torch; print(int(torch.compiled_with_cxx11_abi()))"
    • Capture result as OP_CXX_ABI_PT and log a clear status message.
  • When OP_CXX_ABI was previously defined (e.g., via TensorFlow), keep the existing mismatch handling:
    • Fatal error when building non-Python interfaces only.
    • Otherwise, enable compatibility build and set OP_CXX_ABI_COMPAT.
  • When OP_CXX_ABI was not set, default to the PyTorch-detected value and add -D_GLIBCXX_USE_CXX11_ABI=${OP_CXX_ABI}.
  • Keep the logic for collecting PyTorch_LIBRARY_PATH, includes, and optional torch.libs link directory for wheel installs.
  • Normalize comments (English, concise) in the touched area.
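Put together, the steps above amount to a configure-time block along these lines (a hedged sketch following the PR description; variable names such as OP_CXX_ABI and OP_CXX_ABI_PT are taken from the summary, not from the final diff):

```cmake
find_package(Python COMPONENTS Interpreter REQUIRED)
execute_process(
  COMMAND ${Python_EXECUTABLE} -c
          "import torch; print(int(torch.compiled_with_cxx11_abi()))"
  OUTPUT_VARIABLE OP_CXX_ABI_PT
  RESULT_VARIABLE _pt_abi_result
  OUTPUT_STRIP_TRAILING_WHITESPACE)
if(NOT _pt_abi_result EQUAL 0)
  message(FATAL_ERROR
    "Failed to detect PyTorch CXX11 ABI; ensure 'torch' is importable "
    "by ${Python_EXECUTABLE}")
endif()
message(STATUS "Detected PyTorch CXX11 ABI: ${OP_CXX_ABI_PT}")

if(DEFINED OP_CXX_ABI)
  # e.g. TensorFlow already fixed the ABI: keep the existing mismatch handling
  if(NOT OP_CXX_ABI STREQUAL OP_CXX_ABI_PT)
    # fatal for non-Python-only builds, compatibility build otherwise
    set(OP_CXX_ABI_COMPAT ${OP_CXX_ABI_PT})
  endif()
else()
  # no prior ABI: default to the PyTorch-detected value
  set(OP_CXX_ABI ${OP_CXX_ABI_PT})
  add_definitions(-D_GLIBCXX_USE_CXX11_ABI=${OP_CXX_ABI})
endif()
```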

Compatibility

  • No change to the intended build outcomes; only the detection method is updated.
  • Retains the prior mismatch semantics and messages.
  • Should be more reliable across Linux/macOS/Windows and for wheel-based installs.

Testing

  • Configure with -DENABLE_PYTORCH=ON and build; expect a log line like:
    • -- Detecting PyTorch CXX11 ABI via Python API...
    • -- Detected PyTorch CXX11 ABI: 0|1
  • Validate both scenarios:
    • With TensorFlow enabled so that OP_CXX_ABI is pre-set (mismatch branches).
    • Without TensorFlow so that we default to the PyTorch-detected ABI.
  • Wheel install case: ensure link_directories(${PyTorch_LIBRARY_PATH}/../../torch.libs) remains effective.

Notes for Reviewers

  • The detection step adds a Python invocation during configure time; the project already requires Python for related paths, and this call is lightweight.
  • If torch is not importable in the active Python, the error message is explicit and actionable.

Changelog

  • Build: switch PyTorch CXX11 ABI detection to Python API and unify comments in source/CMakeLists.txt.

Summary by CodeRabbit

  • New Features
    • Automatic detection and use of PyTorch ABI and install paths to simplify setup and building.
  • Bug Fixes
    • Better handling of ABI mismatches with clearer status/warning messages and safer fallbacks to prevent build failures.
    • More reliable linking to PyTorch libraries, including wheel-style installs.
  • Chores
    • Build configuration readability improved with minor formatting refinements.

@coderabbitai
Contributor

coderabbitai bot commented Aug 11, 2025

📝 Walkthrough

Walkthrough

Replaces flag-parsing ABI detection with Python-driven detection (torch.compiled_with_cxx11_abi), adds Python-based PyTorch prefix and library path discovery, extends ABI mismatch handling and compatibility flags, and ensures PyTorch include/library paths are appended to backend paths. Changes are contained to CMake integration logic.

Changes

Cohort / File(s): CMake integration (single file): source/CMakeLists.txt
Summary: Replaces ABI parsing of TORCH_CXX_FLAGS with Python torch.compiled_with_cxx11_abi() detection and fallback parsing of -D_GLIBCXX_USE_CXX11_ABI; adds Python-based discovery of PyTorch CMake prefix when building Python-related targets; resolves Torch target location and appends PyTorch include/library paths to backend paths; derives OP_CXX_ABI_PT, compares with existing OP_CXX_ABI and implements fatal/error/warning/compatibility handling and DEEPMD_BUILD_COMPAT_CXXABI toggling; sets -D_GLIBCXX_USE_CXX11_ABI when defaulting OP_CXX_ABI; adds link_directories handling for wheel scenarios; minor formatting and newline separation adjustments.

Sequence Diagram(s)

sequenceDiagram
    participant CMake as CMakeLists.txt
    participant Python as python
    participant Torch as torch (python package)

    CMake->>Python: run script to import torch (if BUILD_CPP_IF/USE_PT_PYTHON_LIBS and conditions)
    alt python+torch available
        Python->>Torch: call compiled_with_cxx11_abi()
        Torch-->>Python: return ABI (true/false)
        Python-->>CMake: return OP_CXX_ABI_PT and prefix (site-packages / cmake path)
        CMake->>CMake: set OP_CXX_ABI_PT, append CMAKE_PREFIX_PATH, derive library path, update BACKEND_LIBRARY_PATH
    else fallback
        CMake->>CMake: fallback parse -D_GLIBCXX_USE_CXX11_ABI from CMAKE_CXX_FLAGS
    end

    CMake->>CMake: compare OP_CXX_ABI (if set) with OP_CXX_ABI_PT
    alt mismatch & not BUILD_PY_IF
        CMake-->>User: FATAL_ERROR (abort)
    else mismatch & BUILD_PY_IF
        alt BUILD_CPP_IF false
            CMake-->>User: STATUS about mismatch (non-fatal)
        else BUILD_CPP_IF true
            CMake-->>User: WARNING about mismatch
            CMake->>CMake: enable C++ OP build, disable PT C++ libs, set DEEPMD_BUILD_COMPAT_CXXABI ON, set OP_CXX_ABI_COMPAT=OP_CXX_ABI_PT
        end
    else match
        CMake->>CMake: set DEEPMD_BUILD_COMPAT_CXXABI OFF
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~30 minutes

Suggested labels

Core, OP, C++

Suggested reviewers

  • wanghan-iapcm

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🔭 Outside diff range comments (1)
source/CMakeLists.txt (1)

402-411: Refactor to use the imported Torch::Torch target and remove deprecated LOCATION

  • File: source/CMakeLists.txt (lines 402–411)
-  # get torch library directory from target "torch"
-  get_target_property(_TORCH_LOCATION torch LOCATION)
-  get_filename_component(PyTorch_LIBRARY_PATH ${_TORCH_LOCATION} DIRECTORY)
+  # get torch library directory from imported target Torch::Torch
+  get_target_property(_TORCH_LOCATION Torch::Torch IMPORTED_LOCATION)
+  if(NOT _TORCH_LOCATION)
+    # handle multi-config generators
+    get_target_property(_TORCH_LOCATION Torch::Torch IMPORTED_LOCATION_${CMAKE_BUILD_TYPE})
+  endif()
+  if(_TORCH_LOCATION)
+    get_filename_component(PyTorch_LIBRARY_PATH "${_TORCH_LOCATION}" DIRECTORY)
+  else()
+    message(WARNING "Unable to resolve Torch::Torch IMPORTED_LOCATION; BACKEND_LIBRARY_PATH may be incomplete.")
+  endif()
  • Replace the global link_directories() call with a guarded target_link_directories() (requires CMake 3.13+), or simply rely on the imported target’s link information. For the wheel layout:
if(USE_PT_PYTHON_LIBS OR BUILD_PY_IF)
  if(EXISTS "${PyTorch_LIBRARY_PATH}/../../torch.libs")
    target_link_directories(<backend_target> PRIVATE
      "${PyTorch_LIBRARY_PATH}/../../torch.libs"
    )
  endif()
endif()
🧹 Nitpick comments (6)
source/CMakeLists.txt (6)

323-328: Fix variable expansion typo in fatal error message

"${PYTORCH_CMAKE_PREFIX_PATH_RESULT_VAR}" is missing the opening brace in the message and will print literally, obscuring the real exit code.

-        "Cannot determine PyTorch CMake prefix path, error code: $PYTORCH_CMAKE_PREFIX_PATH_RESULT_VAR}, error message: ${PYTORCH_CMAKE_PREFIX_PATH_ERROR_VAR}"
+        "Cannot determine PyTorch CMake prefix path, error code: ${PYTORCH_CMAKE_PREFIX_PATH_RESULT_VAR}, error message: ${PYTORCH_CMAKE_PREFIX_PATH_ERROR_VAR}"

307-315: Python discovery redundancy and scope

You’re calling find_package(Python COMPONENTS Interpreter REQUIRED) in two nearby blocks. Not harmful, but redundant. You can rely on the first call or wrap subsequent usage with if(NOT Python_Interpreter_FOUND). Minor cleanup only.


340-347: Gate ABI detection to relevant platforms to reduce noise

_GLIBCXX_USE_CXX11_ABI matters on libstdc++ (primarily Linux). On Windows/macOS this is usually irrelevant. Consider gating detection and definition under if(UNIX AND NOT APPLE) to avoid confusing logs and unnecessary defines on non-Linux platforms.
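For instance, the suggested gating could look like this (an illustrative sketch, not code from the PR):

```cmake
if(UNIX AND NOT APPLE)
  # libstdc++ dual ABI only matters here; run the Python-based detection
else()
  # libc++ / MSVC: the macro has no effect, so skip detection to keep logs clean
  message(STATUS "Skipping _GLIBCXX_USE_CXX11_ABI detection on this platform")
endif()
```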


307-331: Add ENV error capture for cmake_prefix_path probe and avoid hard failure in mixed setups

For the cmake_prefix_path probe, consider capturing ERROR_VARIABLE (you already do) and downgrading to STATUS/WARNING when USE_PT_PYTHON_LIBS is off, so libtorch-only builds don’t fail here. This aligns with the new ABI fallback approach.


425-428: Defaulting OP_CXX_ABI to 1 is fine; ensure the macro is defined consistently

You set the default value here; ensure -D_GLIBCXX_USE_CXX11_ABI=${OP_CXX_ABI} is applied once globally after this point if not already added in earlier branches.
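One way to guarantee a single, final definition (illustrative only; the default of 1 follows the nitpick above):

```cmake
if(NOT DEFINED OP_CXX_ABI)
  set(OP_CXX_ABI 1) # fallback default when no backend decided the ABI
endif()
add_definitions(-D_GLIBCXX_USE_CXX11_ABI=${OP_CXX_ABI})
```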


1-3: Silence CMP0144 policy warning (dev warning in CI)

To address the policy warning seen in CI, set the policy explicitly near the top:

# after cmake_minimum_required(...)
if(POLICY CMP0144)
  cmake_policy(SET CMP0144 NEW)
endif()
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c7d8da6 and 222d9ee.

📒 Files selected for processing (1)
  • source/CMakeLists.txt (3 hunks)
🧰 Additional context used
🪛 GitHub Actions: Test Python
source/CMakeLists.txt

[error] 361-361: CMake configuration failed: Failed to detect PyTorch CXX11 ABI. Please ensure 'torch' is installed and accessible in the current Python environment ('/home/runner/.cache/uv/builds-v0/.tmpWuw0dT/bin/python').


[warning] 302-302: CMake Warning (dev): CMP0144 is not set: find_package uses upper-case _ROOT variables. Run 'cmake --help-policy CMP0144' for policy details. Use the cmake_policy command to set the policy and suppress.

🔇 Additional comments (1)
source/CMakeLists.txt (1)

332-339: C++ standard bumps look fine

Bumping C++ standard based on Torch version remains intact. No issues.

Comment on lines +348 to +354
execute_process(
  COMMAND env PYTHONPATH=${Python_SITEARCH} ${Python_EXECUTABLE} -c
          "import torch; print(int(torch.compiled_with_cxx11_abi()))"
  OUTPUT_VARIABLE DETECTED_PT_ABI
  RESULT_VARIABLE pt_abi_result
  OUTPUT_STRIP_TRAILING_WHITESPACE)

🛠️ Refactor suggestion

⚠️ Potential issue

Make Python call portable and resilient (no 'env', avoid Python_SITEARCH dependency)

  • Using env in execute_process(COMMAND ...) breaks on Windows. Launch through ${CMAKE_COMMAND} -E env instead (execute_process has no ENVIRONMENT option).
  • Python_SITEARCH is not guaranteed when only the Interpreter component is found. It can be empty and isn’t needed to import torch from the active interpreter.
-  execute_process(
-    COMMAND env PYTHONPATH=${Python_SITEARCH} ${Python_EXECUTABLE} -c
-            "import torch; print(int(torch.compiled_with_cxx11_abi()))"
-    OUTPUT_VARIABLE DETECTED_PT_ABI
-    RESULT_VARIABLE pt_abi_result
-    OUTPUT_STRIP_TRAILING_WHITESPACE)
+  execute_process(
+    COMMAND ${Python_EXECUTABLE} -c "import torch; print(int(torch.compiled_with_cxx11_abi()))"
+    OUTPUT_VARIABLE DETECTED_PT_ABI
+    RESULT_VARIABLE pt_abi_result
+    ERROR_VARIABLE pt_abi_err
+    OUTPUT_STRIP_TRAILING_WHITESPACE
+    # If you still need to influence sys.path, do it portably:
+    # COMMAND ${CMAKE_COMMAND} -E env "PYTHONPATH=${Python_SITEARCH}" ${Python_EXECUTABLE} ...
+  )

Optional: If you want to prefer the active interpreter’s environment, drop PYTHONPATH entirely (as above). If you keep it, gate it with if(DEFINED Python_SITEARCH).


Contributor

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 0

🔭 Outside diff range comments (1)
source/CMakeLists.txt (1)

307-330: Fix error-message variable expansion; keep the prefix-path discovery but fail with a clear, correct message

  • The error message at Line 326 uses shell-style $VAR instead of CMake’s ${VAR} and has a stray closing brace. It will print literally or misformat.
  • The rest of this block looks good; keeping a hard error here is reasonable since USE_PT_PYTHON_LIBS explicitly opts into Python-wheel linkage.

Apply this diff:

-  if(NOT ${PYTORCH_CMAKE_PREFIX_PATH_RESULT_VAR} EQUAL 0)
-    message(
-      FATAL_ERROR
-        "Cannot determine PyTorch CMake prefix path, error code: $PYTORCH_CMAKE_PREFIX_PATH_RESULT_VAR}, error message: ${PYTORCH_CMAKE_PREFIX_PATH_ERROR_VAR}"
-    )
-  endif()
+  if(NOT ${PYTORCH_CMAKE_PREFIX_PATH_RESULT_VAR} EQUAL 0)
+    message(
+      FATAL_ERROR
+        "Cannot determine PyTorch CMake prefix path. "
+        "error code: ${PYTORCH_CMAKE_PREFIX_PATH_RESULT_VAR}, "
+        "error message: ${PYTORCH_CMAKE_PREFIX_PATH_ERROR_VAR}"
+    )
+  endif()
♻️ Duplicate comments (3)
source/CMakeLists.txt (3)

377-416: Always emit the final -D_GLIBCXX_USE_CXX11_ABI after ABI resolution (handles mismatch path too)

Currently, add_definitions is emitted in “match” and “no prior ABI” branches, but not when there is a mismatch and BUILD_PY_IF is ON (compat build). Emit the macro once after the decision so all targets consistently see OP_CXX_ABI.

Apply this minimal change:

     else()
       # ABI matches; no compat build needed
       set(DEEPMD_BUILD_COMPAT_CXXABI OFF)
-      add_definitions(-D_GLIBCXX_USE_CXX11_ABI=${OP_CXX_ABI})
     endif()
   else()
     # no prior ABI; default to PyTorch ABI
     set(OP_CXX_ABI ${OP_CXX_ABI_PT})
-    add_definitions(-D_GLIBCXX_USE_CXX11_ABI=${OP_CXX_ABI})
   endif()
+  # Ensure the macro is defined for our code based on final OP_CXX_ABI
+  if(DEFINED OP_CXX_ABI)
+    add_definitions(-D_GLIBCXX_USE_CXX11_ABI=${OP_CXX_ABI})
+  endif()

348-354: Make the Python call portable; drop ‘env’, guard PYTHONPATH if still needed

Using the external “env” program breaks on Windows, and Python_SITEARCH may be unset when only the Interpreter component is found. Prefer a direct call; if PYTHONPATH must be set, launch through ${CMAKE_COMMAND} -E env (execute_process has no ENVIRONMENT option).

Apply this diff:

-  execute_process(
-    COMMAND env PYTHONPATH=${Python_SITEARCH} ${Python_EXECUTABLE} -c
-            "import torch; print(int(torch.compiled_with_cxx11_abi()))"
-    OUTPUT_VARIABLE DETECTED_PT_ABI
-    RESULT_VARIABLE pt_abi_result
-    OUTPUT_STRIP_TRAILING_WHITESPACE)
+  execute_process(
+    COMMAND ${Python_EXECUTABLE} -c "import torch; print(int(torch.compiled_with_cxx11_abi()))"
+    OUTPUT_VARIABLE DETECTED_PT_ABI
+    RESULT_VARIABLE pt_abi_result
+    ERROR_VARIABLE pt_abi_err
+    OUTPUT_STRIP_TRAILING_WHITESPACE
+    # If you still need to influence sys.path, use the portable launcher:
+    # COMMAND ${CMAKE_COMMAND} -E env "PYTHONPATH=${Python_SITEARCH}" ${Python_EXECUTABLE} ...
+  )

355-375: Improve fallback: prefer TORCH_CXX_FLAGS when available; include error diagnostics

Great that you no longer hard-fail. Prefer parsing TORCH_CXX_FLAGS first (when set by Torch package), then fall back to CMAKE_CXX_FLAGS, and include the Python error output to aid debugging.

Apply this diff:

-  else()
-    # Python torch not available -> warn and default
-    string(REGEX MATCH "-D_GLIBCXX_USE_CXX11_ABI=([01])" _m
-                 "${CMAKE_CXX_FLAGS}")
-    if(CMAKE_MATCH_1)
-      set(OP_CXX_ABI_PT "${CMAKE_MATCH_1}")
-      message(
-        STATUS "Parsed PyTorch CXX11 ABI from CMAKE_CXX_FLAGS: ${OP_CXX_ABI_PT}"
-      )
-    else()
-      message(
-        WARNING
-          "Could not detect PyTorch CXX11 ABI (torch import failed with code=${pt_abi_result}). "
-          "Defaulting OP_CXX_ABI_PT=1.")
-      set(OP_CXX_ABI_PT 1)
-    endif()
-  endif()
+  else()
+    # Python torch not available -> fall back to flags or default
+    if(DEFINED TORCH_CXX_FLAGS)
+      string(REGEX MATCH "-D_GLIBCXX_USE_CXX11_ABI=([01])" _pt "${TORCH_CXX_FLAGS}")
+      if(CMAKE_MATCH_1)
+        set(OP_CXX_ABI_PT "${CMAKE_MATCH_1}")
+        message(STATUS "Detected PyTorch CXX11 ABI from TORCH_CXX_FLAGS: ${OP_CXX_ABI_PT}")
+      endif()
+    endif()
+    if(NOT DEFINED OP_CXX_ABI_PT)
+      string(REGEX MATCH "-D_GLIBCXX_USE_CXX11_ABI=([01])" _cc "${CMAKE_CXX_FLAGS}")
+      if(CMAKE_MATCH_1)
+        set(OP_CXX_ABI_PT "${CMAKE_MATCH_1}")
+        message(STATUS "Parsed PyTorch CXX11 ABI from CMAKE_CXX_FLAGS: ${OP_CXX_ABI_PT}")
+      endif()
+    endif()
+    if(NOT DEFINED OP_CXX_ABI_PT)
+      message(WARNING
+        "Could not detect PyTorch CXX11 ABI (torch import failed with code=${pt_abi_result}; "
+        "stderr: ${pt_abi_err}). Defaulting OP_CXX_ABI_PT=1.")
+      set(OP_CXX_ABI_PT 1)
+    endif()
+  endif()
🧹 Nitpick comments (1)
source/CMakeLists.txt (1)

413-420: Modernize target/file path retrieval and guard wheel link dir

  • LOCATION is legacy; prefer IMPORTED_LOCATION for imported targets (generator expressions such as $<TARGET_FILE:torch> are not evaluated at configure time, so they cannot feed get_filename_component).
  • Guard the torch.libs link directory to avoid noisy warnings when it doesn’t exist.

Apply this diff:

-  get_target_property(_TORCH_LOCATION torch LOCATION)
-  get_filename_component(PyTorch_LIBRARY_PATH ${_TORCH_LOCATION} DIRECTORY)
+  # LOCATION is legacy; query IMPORTED_LOCATION on the imported target instead
+  get_target_property(_TORCH_LOCATION torch IMPORTED_LOCATION)
+  get_filename_component(PyTorch_LIBRARY_PATH "${_TORCH_LOCATION}" DIRECTORY)
   list(APPEND BACKEND_LIBRARY_PATH ${PyTorch_LIBRARY_PATH})
   list(APPEND BACKEND_INCLUDE_DIRS ${TORCH_INCLUDE_DIRS})
   if(USE_PT_PYTHON_LIBS OR BUILD_PY_IF)
     # when libtorch.so is in a wheel
-    link_directories(${PyTorch_LIBRARY_PATH}/../../torch.libs)
+    if(EXISTS "${PyTorch_LIBRARY_PATH}/../../torch.libs")
+      link_directories(${PyTorch_LIBRARY_PATH}/../../torch.libs)
+    endif()
   endif()
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 222d9ee and cdda058.

📒 Files selected for processing (1)
  • source/CMakeLists.txt (3 hunks)
🔇 Additional comments (2)
source/CMakeLists.txt (2)

332-338: C++ standard bump logic for Torch versions looks good

Conditionally raising CMAKE_CXX_STANDARD based on Torch version is correct and minimal.


340-347: Python discovery and status logging are fine

Reusing find_package(Python ... Interpreter) here is OK; message aids diagnostics.

@codecov

codecov bot commented Aug 11, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (devel@c7d8da6). Learn more about missing BASE report.
⚠️ Report is 83 commits behind head on devel.

Additional details and impacted files
@@           Coverage Diff            @@
##             devel    #4879   +/-   ##
========================================
  Coverage         ?   84.34%           
========================================
  Files            ?      702           
  Lines            ?    68583           
  Branches         ?     3573           
========================================
  Hits             ?    57848           
  Misses           ?     9595           
  Partials         ?     1140           

☔ View full report in Codecov by Sentry.

@OutisLi OutisLi closed this Aug 25, 2025
@OutisLi OutisLi deleted the pr/fix_cxx_abi branch October 20, 2025 01:46