FoundationPose-CPP (Windows)

First, thanks to the original author. Since my deployment environment is Windows, I modified the original repo to run on Windows 10/11. Inference environment: CUDA 11.8 + cuDNN 8.9 + TensorRT 8.6.1.6.

1. Basic Dependencies

For convenience, use vcpkg. Refer to the official documentation for usage:

git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.bat
./vcpkg.exe install assimp opencv4 eigen3 gtest glog 
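Once these packages are installed, they are consumed through the vcpkg toolchain file passed to CMake in step 3. A sketch of the corresponding `find_package` calls in a `CMakeLists.txt` (target names follow each package's standard vcpkg exports; `my_app` is a placeholder target):

```cmake
find_package(OpenCV REQUIRED)
find_package(Eigen3 CONFIG REQUIRED)
find_package(glog CONFIG REQUIRED)
find_package(GTest CONFIG REQUIRED)
find_package(assimp CONFIG REQUIRED)

target_link_libraries(my_app PRIVATE
  ${OpenCV_LIBS}
  Eigen3::Eigen
  glog::glog
  GTest::gtest
  assimp::assimp)
```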

2. CV-CUDA -> ppl.cv

CV-CUDA currently (2025-07-30) does not support Windows. The main modification in this branch is using ppl.cv instead. Compile ppl.cv:

git clone https://github.com/openppl/ppl.cv.git
cd ppl.cv
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=../install ^
-DPYBIND11_NOPYTHON=ON -DPPLCV_USE_CUDA=ON -DPPLCV_USE_MSVC_STATIC_RUNTIME=OFF
cmake --build . --config Release --target ALL_BUILD
cmake --build . --config Release --target install
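After installation, the `-Dpplcv_DIR` option in step 3 points `find_package(pplcv)` at the install tree. A minimal sketch of consuming it in CMake (the exported variable names are assumptions; check `pplcvConfig.cmake` under `install/lib/cmake/ppl` for the exact names):

```cmake
# pplcv_DIR should point at <install>/lib/cmake/ppl
find_package(pplcv REQUIRED)
target_include_directories(my_app PRIVATE ${PPLCV_INCLUDE_DIRECTORIES})
target_link_libraries(my_app PRIVATE ${PPLCV_LIBRARIES})
```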

3. Compile This Repository

git clone https://github.com/wawatt/foundationpose_cpp_windows.git
cd foundationpose_cpp
git submodule init
git submodule update

Remove the GCC/Clang-only warning flags `-Wextra -Wdeprecated` (not recognized by MSVC) from the CMake files in `easy_deploy_tool\deploy_core` and `easy_deploy_tool\inference_core\trt_core`.
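Alternatively, instead of deleting the flags, they can be guarded so the same CMake files still build on Linux with GCC/Clang; a minimal sketch:

```cmake
# Only add GCC/Clang-style warning flags when not compiling with MSVC.
if(NOT MSVC)
  add_compile_options(-Wextra -Wdeprecated)
endif()
```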

cd foundationpose_cpp
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=D:/build/vcpkg/scripts/buildsystems/vcpkg.cmake ^
-DENABLE_TENSORRT=ON -DTRT_DIR=E:/ENVs/TensorRT-8.6.1.6 ^
-Dpplcv_DIR="E:/ENVs/ppl.cv/install/lib/cmake/ppl" ^
-DCMAKE_CUDA_ARCHITECTURES="86;89"
cmake --build . --config Release --target ALL_BUILD
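`CMAKE_CUDA_ARCHITECTURES="86;89"` targets Ampere (RTX 30xx) and Ada (RTX 40xx) GPUs; set it to your own GPU's compute capability. One way to check it, assuming a recent driver that exposes the `compute_cap` query field:

```shell
#!/usr/bin/env bash
# Print each GPU's compute capability (e.g. "8.6" -> pass "86" to CMake).
query_compute_cap() {
  nvidia-smi --query-gpu=compute_cap --format=csv,noheader
}

if command -v nvidia-smi >/dev/null 2>&1; then
  query_compute_cap
else
  echo "nvidia-smi not found on PATH"
fi
```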

4. Convert ONNX to TensorRT Models

Script
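The referenced script is not reproduced here, but TensorRT's bundled `trtexec` tool (under `TensorRT-8.6.1.6/bin`) can perform the conversion. A hedged sketch, assuming the ONNX files are named `refine_model.onnx` and `score_model.onnx`; check the script for the actual file names and any dynamic-shape arguments it passes:

```shell
#!/usr/bin/env bash
# Convert one ONNX model to an fp16 TensorRT engine with trtexec.
convert_onnx() {
  local model="$1"   # path without extension, e.g. models/refine_model
  trtexec --onnx="${model}.onnx" --saveEngine="${model}.engine" --fp16
}

if command -v trtexec >/dev/null 2>&1; then
  convert_onnx models/refine_model
  convert_onnx models/score_model
else
  echo "trtexec not on PATH; add TensorRT-8.6.1.6/bin first"
fi
```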

5. Run

cd foundationpose_cpp_windows
"build/bin/Release/simple_tests.exe" --gtest_filter=foundationpose_test.test

The text below is kept unmodified from the original README.

About this project

This project is adapted from nvidia-isaac-pose-estimation, with simplified dependencies. It enables inference using ONNX models exported from the Python implementation of FoundationPose, making deployment and application straightforward.

Note: This repository contains only the code for the FoundationPose component. The complete 6D pose estimation pipeline also relies on object masks, which can be generated by algorithms such as SAM. For reference implementations and optimized inference of MobileSAM and NanoSAM, please visit EasyDeploy.

Update LOG

[2025.04] Decoupled Register and Track processes; Output poses under mesh coordinates, providing mesh_loader interfaces for external extension. Related PR.

[2025.03] Aligned rendering process with the original Python implementation, supporting rendering without texture input. Related PR.

[2025.03] Added support for Jetson Orin platform with one-click Docker environment setup. See link.

Features

  1. Removed complex environment setup and dependency issues from the original project, enabling easy integration with other projects.

  2. Encapsulated the FoundationPose algorithm behind a simple interface, supporting dynamic-sized image input for flexible usage. Provided tutorial scripts for generating 3D object models using BundleSDF.

  3. 🔥 Supports Jetson Orin development boards (Orin-NX-16GB).

Demo

Test results on public mustard dataset:

[GIF] foundationpose(fp16) Register test result
[GIF] foundationpose(fp16) Track test result

Performance on nvidia-4060-8G and i5-12600kf:

nvidia-4060-8G                 fps   cpu    gpu mem
foundationpose(fp16)-Register  2.8   100%   6.5GB
foundationpose(fp16)-Track     220   100%   5.8GB

Performance on jetson-orin-nx-16GB:

jetson-orin-nx-16GB            fps   cpu   mem_total
foundationpose(fp16)-Register  0.6   15%   5.6GB (5.5GB on GPU)
foundationpose(fp16)-Track     100   60%   5.1GB (5.0GB on GPU)

Usage

Environment Setup

  1. Clone the repository:

    git clone git@github.com:zz990099/foundationpose_cpp.git
    cd foundationpose_cpp
    git submodule init
    git submodule update
  2. Build using Docker:

    cd ${foundationpose_cpp}
    bash easy_deploy_tool/docker/easy_deploy_startup.sh
    # Select `jetson` -> `trt10_u2204`/`trt8_u2204` (`trt8_u2004` not supported)
    bash easy_deploy_tool/docker/into_docker.sh

Model Conversion

  1. Download ONNX models from Google Drive and place them in /workspace/models/.

  2. Convert models:

    cd /workspace
    bash tools/cvt_onnx2trt.bash

Build Project

  1. Compile the project:

    cd /workspace
    mkdir build && cd build
    cmake -DENABLE_TENSORRT=ON ..
    make -j

Run Demo

Public Dataset Demo (mustard)

  1. Download and extract the dataset to /workspace/test_data/ from here.

  2. Run tests:

    cd /workspace/build
    ./bin/simple_tests --gtest_filter=foundationpose_test.test

Custom 3D Model Generation

  1. Refer to Generating 3D Models with BundleSDF.

  2. Modify paths in /workspace/simple_tests/src/test_foundationpose.cpp for your data and rebuild.

  3. Run tests:

    cd /workspace/build
    ./bin/simple_tests --gtest_filter=foundationpose_test.test
  4. Results for Register and Track processes will be saved in /workspace/test_data/.

References

For any questions, feel free to raise an issue or contact 771647586@qq.com.
