diff --git a/README.md b/README.md
index 8c73898506..e3a165f058 100644
--- a/README.md
+++ b/README.md
@@ -73,7 +73,6 @@ Development wheels are available as nightlies, please update `--extra-index-url`
 pip install --pre \
     --extra-index-url=https://pypi.nvidia.com \
     --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
-    nvidia-cuda-runtime-cu12=12.9.* \
     cuopt-server-cu12==25.10.* cuopt-sh-client==25.10.*
 ```
@@ -82,7 +81,6 @@ For CUDA 13.x:
 ```bash
 pip install \
     --extra-index-url=https://pypi.nvidia.com \
-    nvidia-cuda-runtime==13.0.* \
     cuopt-server-cu13==25.10.* cuopt-sh-client==25.10.*
 ```
@@ -91,7 +89,6 @@ Development wheels are available as nightlies, please update `--extra-index-url`
 pip install --pre \
     --extra-index-url=https://pypi.nvidia.com \
     --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
-    nvidia-cuda-runtime==13.0.* \
     cuopt-server-cu13==25.10.* cuopt-sh-client==25.10.*
 ```
@@ -115,13 +112,13 @@ Users can pull the cuOpt container from the NVIDIA container registry.
 ```bash
 # For CUDA 12.x
-docker pull nvidia/cuopt:latest-cuda12.9-py312
+docker pull nvidia/cuopt:latest-cuda12.9-py3.13

 # For CUDA 13.x
-docker pull nvidia/cuopt:latest-cuda13.0-py312
+docker pull nvidia/cuopt:latest-cuda13.0-py3.13
 ```

-Note: The ``latest`` tag is the latest stable release of cuOpt. If you want to use a specific version, you can use the ``-cuda12.9-py312`` or ``-cuda13.0-py312`` tag. For example, to use cuOpt 25.5.0, you can use the ``25.5.0-cuda12.8-py312`` or ``25.5.0-cuda13.0-py312`` tag. Please refer to `cuOpt dockerhub page `_ for the list of available tags.
+Note: The ``latest`` tag is the latest stable release of cuOpt. If you want to use a specific version, you can use the ``-cuda12.9-py3.13`` or ``-cuda13.0-py3.13`` tag. For example, to use cuOpt 25.10.0, you can use the ``25.10.0-cuda12.9-py3.13`` or ``25.10.0-cuda13.0-py3.13`` tag. Please refer to `cuOpt dockerhub page `_ for the list of available tags.

 More information about the cuOpt container can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/quick-start.html#container-from-docker-hub).
diff --git a/ci/docker/Dockerfile b/ci/docker/Dockerfile
index e7f023040b..d7629fcd0c 100644
--- a/ci/docker/Dockerfile
+++ b/ci/docker/Dockerfile
@@ -62,9 +62,7 @@ RUN \
         --extra-index-url https://pypi.anaconda.org/rapidsai-wheels-nightly/simple \
         --no-cache-dir \
         "cuopt-server-${cuda_suffix}==${CUOPT_VER}" \
-        "cuopt-sh-client==${CUOPT_VER}" \
-        "cuda-toolkit[cudart]==${cuda_major_minor}.*" \
-        ${nvidia_cuda_runtime_pkg} && \
+        "cuopt-sh-client==${CUOPT_VER}" && \
     python -m pip list

 # Remove gcc to save space, gcc was required for building psutils
diff --git a/dependencies.yaml b/dependencies.yaml
index 5abb3197e8..d153a02915 100644
--- a/dependencies.yaml
+++ b/dependencies.yaml
@@ -750,12 +750,12 @@ dependencies:
           cuda: "12.*"
           use_cuda_wheels: "true"
         packages:
-          - cuda-toolkit[cublas,curand,cusolver,cusparse,nvtx]==12.*
+          - cuda-toolkit[cublas,cudart,curand,cusolver,cusparse,nvtx]==12.*
       - matrix:
           cuda: "13.*"
           use_cuda_wheels: "true"
         packages:
-          - cuda-toolkit[cublas,curand,cusolver,cusparse,nvtx]==13.*
+          - cuda-toolkit[cublas,cudart,curand,cusolver,cusparse,nvtx]==13.*
       # if use_cuda_wheels=false is provided, do not add dependencies on any CUDA wheels
       # (e.g. for DLFW and pip devcontainers)
       - matrix:
@@ -765,6 +765,7 @@ dependencies:
       # (just as a source of documentation, as this populates pyproject.toml in source control)
       - matrix:
         packages:
+          - nvidia-cudart
           - nvidia-cublas
           - nvidia-curand
           - nvidia-cusparse
diff --git a/docs/cuopt/source/cuopt-c/quick-start.rst b/docs/cuopt/source/cuopt-c/quick-start.rst
index 4ae4e1ef8c..c3da7449ff 100644
--- a/docs/cuopt/source/cuopt-c/quick-start.rst
+++ b/docs/cuopt/source/cuopt-c/quick-start.rst
@@ -19,14 +19,10 @@ This wheel is a Python wrapper around the C++ library and eases installation and
    pip uninstall cuopt-thin-client

    # CUDA 13
-   pip install --extra-index-url=https://pypi.nvidia.com \
-       'nvidia-cuda-runtime==13.0.*' \
-       'libcuopt-cu13==25.10.*'
+   pip install --extra-index-url=https://pypi.nvidia.com 'libcuopt-cu13==25.10.*'

    # CUDA 12
-   pip install --extra-index-url=https://pypi.nvidia.com \
-       'nvidia-cuda-runtime-cu12==12.9.*' \
-       'libcuopt-cu12==25.10.*'
+   pip install --extra-index-url=https://pypi.nvidia.com 'libcuopt-cu12==25.10.*'

 .. note::
@@ -36,12 +32,10 @@ This wheel is a Python wrapper around the C++ library and eases installation and

    # CUDA 13
    pip install --pre --extra-index-url=https://pypi.nvidia.com --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
-       'nvidia-cuda-runtime==13.0.*' \
        'libcuopt-cu13==25.10.*'

    # CUDA 12
    pip install --pre --extra-index-url=https://pypi.nvidia.com --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
-       'nvidia-cuda-runtime-cu12==12.9.*' \
        'libcuopt-cu12==25.10.*'

 Conda
diff --git a/docs/cuopt/source/cuopt-python/quick-start.rst b/docs/cuopt/source/cuopt-python/quick-start.rst
index 65acea5db6..616b994d1a 100644
--- a/docs/cuopt/source/cuopt-python/quick-start.rst
+++ b/docs/cuopt/source/cuopt-python/quick-start.rst
@@ -13,14 +13,10 @@ pip
 .. code-block:: bash

    # CUDA 13
-   pip install --extra-index-url=https://pypi.nvidia.com \
-       'nvidia-cuda-runtime==13.0.*' \
-       'cuopt-cu13==25.10.*'
+   pip install --extra-index-url=https://pypi.nvidia.com 'cuopt-cu13==25.10.*'

    # CUDA 12
-   pip install --extra-index-url=https://pypi.nvidia.com \
-       'nvidia-cuda-runtime-cu12==12.9.*' \
-       'cuopt-cu12==25.10.*'
+   pip install --extra-index-url=https://pypi.nvidia.com 'cuopt-cu12==25.10.*'

 .. note::
@@ -30,12 +26,10 @@ pip

    # CUDA 13
    pip install --pre --extra-index-url=https://pypi.nvidia.com --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
-       'nvidia-cuda-runtime==13.0.*' \
        'cuopt-cu13==25.10.*'

    # CUDA 12
    pip install --pre --extra-index-url=https://pypi.nvidia.com --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
-       'nvidia-cuda-runtime-cu12==12.9.*' \
        'cuopt-cu12==25.10.*'
@@ -63,19 +57,19 @@ NVIDIA cuOpt is also available as a container from Docker Hub:

 .. code-block:: bash

-   docker pull nvidia/cuopt:latest-cuda12.9-py3.12
+   docker pull nvidia/cuopt:latest-cuda12.9-py3.13

 .. note::

-   The ``latest`` tag is the latest stable release of cuOpt. If you want to use a specific version, you can use the ``-cuda12.9-py3.12`` tag. For example, to use cuOpt 25.5.0, you can use the ``25.5.0-cuda12.9-py3.12`` tag. Please refer to `cuOpt dockerhub page `_ for the list of available tags.
+   The ``latest`` tag is the latest stable release of cuOpt. If you want to use a specific version, you can use the ``-cuda12.9-py3.13`` tag. For example, to use cuOpt 25.10.0, you can use the ``25.10.0-cuda12.9-py3.13`` tag. Please refer to `cuOpt dockerhub page `_ for the list of available tags.

 .. note::

-   The nightly version of cuOpt is available as ``[VERSION]a-cuda12.9-py3.12`` tag. For example, to use cuOpt 25.8.0a, you can use the ``25.8.0a-cuda12.9-py3.12`` tag.
+   The nightly version of cuOpt is available as ``[VERSION]a-cuda12.9-py3.13`` tag. For example, to use cuOpt 25.10.0a, you can use the ``25.10.0a-cuda12.9-py3.13`` tag.
    Also the cuda version and python version might change in the future. Please refer to `cuOpt dockerhub page `_ for the list of available tags.

 The container includes both the Python API and self-hosted server components. To run the container:

 .. code-block:: bash

-   docker run --gpus all -it --rm nvidia/cuopt:latest-cuda12.9-py3.12 /bin/bash
+   docker run --gpus all -it --rm nvidia/cuopt:latest-cuda12.9-py3.13 /bin/bash

 This will start an interactive session with cuOpt pre-installed and ready to use.
diff --git a/docs/cuopt/source/cuopt-server/quick-start.rst b/docs/cuopt/source/cuopt-server/quick-start.rst
index 1e6fce235b..782feac834 100644
--- a/docs/cuopt/source/cuopt-server/quick-start.rst
+++ b/docs/cuopt/source/cuopt-server/quick-start.rst
@@ -29,13 +29,11 @@ pip

    # CUDA 13
    pip install --pre --extra-index-url=https://pypi.nvidia.com --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
-       'nvidia-cuda-runtime==13.0.*' \
        'cuopt-server-cu13==25.10.*' \
        'cuopt-sh-client==25.10.*'

    # CUDA 12
    pip install --pre --extra-index-url=https://pypi.nvidia.com --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
-       'nvidia-cuda-runtime-cu12==12.9.*' \
        'cuopt-server-cu12==25.10.*' \
        'cuopt-sh-client==25.10.*'
@@ -59,19 +57,19 @@ NVIDIA cuOpt is also available as a container from Docker Hub:

 .. code-block:: bash

-   docker pull nvidia/cuopt:latest-cuda12.9-py3.12
+   docker pull nvidia/cuopt:latest-cuda12.9-py3.13

 .. note::

-   The ``latest`` tag is the latest stable release of cuOpt. If you want to use a specific version, you can use the ``-cuda12.9-py3.12`` tag. For example, to use cuOpt 25.5.0, you can use the ``25.5.0-cuda12.9-py3.12`` tag. Please refer to `cuOpt dockerhub page `_ for the list of available tags.
+   The ``latest`` tag is the latest stable release of cuOpt. If you want to use a specific version, you can use the ``-cuda12.9-py3.13`` tag. For example, to use cuOpt 25.10.0, you can use the ``25.10.0-cuda12.9-py3.13`` tag. Please refer to `cuOpt dockerhub page `_ for the list of available tags.

 The container includes both the Python API and self-hosted server components. To run the container:

 .. code-block:: bash

-   docker run --gpus all -it --rm -p 8000:8000 -e CUOPT_SERVER_PORT=8000 nvidia/cuopt:latest-cuda12.9-py3.12
+   docker run --gpus all -it --rm -p 8000:8000 -e CUOPT_SERVER_PORT=8000 nvidia/cuopt:latest-cuda12.9-py3.13

 .. note::

-   The nightly version of cuOpt is available as ``[VERSION]a-cuda12.9-py3.12`` tag. For example, to use cuOpt 25.8.0a, you can use the ``25.8.0a-cuda12.9-py3.12`` tag.
+   The nightly version of cuOpt is available as ``[VERSION]a-cuda12.9-py3.13`` tag. For example, to use cuOpt 25.10.0a, you can use the ``25.10.0a-cuda12.9-py3.13`` tag.
    Also the cuda version and python version might change in the future. Please refer to `cuOpt dockerhub page `_ for the list of available tags.

 .. note:: Make sure you have the NVIDIA Container Toolkit installed on your system to enable GPU support in containers. See the `installation guide `_ for details.
diff --git a/python/libcuopt/pyproject.toml b/python/libcuopt/pyproject.toml
index 3bbefc2c17..93271addf9 100644
--- a/python/libcuopt/pyproject.toml
+++ b/python/libcuopt/pyproject.toml
@@ -46,6 +46,7 @@ dependencies = [
     "cuopt-mps-parser==25.10.*,>=0.0.0a0",
     "librmm==25.10.*,>=0.0.0a0",
     "nvidia-cublas",
+    "nvidia-cudart",
     "nvidia-curand",
     "nvidia-cusolver",
     "nvidia-cusparse",