diff --git a/README.md b/README.md
index 301296c393..4ca1e3aef2 100644
--- a/README.md
+++ b/README.md
@@ -63,6 +63,8 @@ For CUDA 12.x:
 pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.5.* cuopt-sh-client==25.5.* nvidia-cuda-runtime-cu12==12.8.*
 ```
+Development wheels are available as nightlies. To install the latest nightly packages, update `--extra-index-url` to `https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/`.
+
 ### Conda
 
 cuOpt can be installed with conda (via [miniforge](https://github.com/conda-forge/miniforge)) from the `nvidia` channel:
@@ -74,19 +76,22 @@ Users who are used to conda env based workflows would benefit with conda packages
 For CUDA 12.x:
 ```bash
 conda install -c rapidsai -c conda-forge -c nvidia \
-    cuopt-server=25.05 cuopt-sh-client=25.05 python=3.12 cuda-version=12.8
+    cuopt-server=25.05.* cuopt-sh-client=25.05.* python=3.12 cuda-version=12.8
 ```
 
 We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD
-of our latest development branch.
+of our latest development branch. Just replace `-c rapidsai` with `-c rapidsai-nightly`.
 
 ### Container
 
 Users can pull the cuOpt container from the NVIDIA container registry.
 
 ```bash
-docker pull nvidia/cuopt:25.5.0-cuda12.8-py312
+docker pull nvidia/cuopt:latest-cuda12.8-py312
 ```
+
+Note: The `latest` tag points to the latest stable release of cuOpt. To use a specific version, use a tag of the form `<version>-cuda12.8-py312`; for example, to use cuOpt 25.5.0, use the `25.5.0-cuda12.8-py312` tag. See the [cuOpt Docker Hub page](https://hub.docker.com/r/nvidia/cuopt/tags) for the list of available tags.
+
 More information about the cuOpt container can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/quick-start.html#container-from-docker-hub).
 
 Users who are using cuOpt for quick testing or research can use the cuOpt container.
 Alternatively, users who are planning to plug cuOpt into their workflow as a service can also start quickly with the cuOpt container, but they are required to build security layers around the service to safeguard it from untrusted users.
diff --git a/docs/cuopt/source/cuopt-python/quick-start.rst b/docs/cuopt/source/cuopt-python/quick-start.rst
index 3fd72d58c5..50d494318b 100644
--- a/docs/cuopt/source/cuopt-python/quick-start.rst
+++ b/docs/cuopt/source/cuopt-python/quick-start.rst
@@ -17,6 +17,10 @@ For CUDA 12.x:
 
    pip install --extra-index-url=https://pypi.nvidia.com cuopt-cu12==25.5.* nvidia-cuda-runtime-cu12==12.8.*
 
+.. note::
+   For development wheels, which are available as nightlies, update ``--extra-index-url`` to ``https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/``.
+
+
 Conda
 -----
@@ -29,6 +33,9 @@ For CUDA 12.x:
 
    conda install -c rapidsai -c conda-forge -c nvidia \
        cuopt=25.05.* python=3.12 cuda-version=12.8
 
+.. note::
+   For development conda packages, which are available as nightlies, replace ``-c rapidsai`` with ``-c rapidsai-nightly``.
+
 Container
 ---------
@@ -37,13 +44,16 @@ NVIDIA cuOpt is also available as a container from Docker Hub:
 
 .. code-block:: bash
 
-   docker pull nvidia/cuopt:25.5.0-cuda12.8-py312
+   docker pull nvidia/cuopt:latest-cuda12.8-py312
+
+.. note::
+   The ``latest`` tag points to the latest stable release of cuOpt. To use a specific version, use a tag of the form ``<version>-cuda12.8-py312``; for example, to use cuOpt 25.5.0, use the ``25.5.0-cuda12.8-py312`` tag. Refer to the `cuOpt Docker Hub page <https://hub.docker.com/r/nvidia/cuopt/tags>`_ for the list of available tags.
 
 The container includes both the Python API and self-hosted server components. To run the container:
 
 .. code-block:: bash
 
-   docker run --gpus all -it --rm nvidia/cuopt:25.5.0-cuda12.8-py312
+   docker run --gpus all -it --rm nvidia/cuopt:latest-cuda12.8-py312
 
 This will start an interactive session with cuOpt pre-installed and ready to use.
diff --git a/docs/cuopt/source/cuopt-server/quick-start.rst b/docs/cuopt/source/cuopt-server/quick-start.rst
index 770d36559c..5fad1d5228 100644
--- a/docs/cuopt/source/cuopt-server/quick-start.rst
+++ b/docs/cuopt/source/cuopt-server/quick-start.rst
@@ -14,6 +14,8 @@ For CUDA 12.x:
 
    pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.5.* cuopt-sh-client==25.5.* nvidia-cuda-runtime-cu12==12.8.*
 
+.. note::
+   For development wheels, which are available as nightlies, update ``--extra-index-url`` to ``https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/``.
 
 Conda
 -----
@@ -27,6 +29,9 @@ For CUDA 12.x:
 
    conda install -c rapidsai -c conda-forge -c nvidia \
        cuopt-server=25.05.* cuopt-sh-client=25.05.* python=3.12 cuda-version=12.8
 
+.. note::
+   For development conda packages, which are available as nightlies, replace ``-c rapidsai`` with ``-c rapidsai-nightly``.
+
 Container from Docker Hub
 -------------------------
@@ -35,13 +40,16 @@ NVIDIA cuOpt is also available as a container from Docker Hub:
 
 .. code-block:: bash
 
-   docker pull nvidia/cuopt:25.5.0-cuda12.8-py312
+   docker pull nvidia/cuopt:latest-cuda12.8-py312
+
+.. note::
+   The ``latest`` tag points to the latest stable release of cuOpt. To use a specific version, use a tag of the form ``<version>-cuda12.8-py312``; for example, to use cuOpt 25.5.0, use the ``25.5.0-cuda12.8-py312`` tag. Refer to the `cuOpt Docker Hub page <https://hub.docker.com/r/nvidia/cuopt/tags>`_ for the list of available tags.
 
 The container includes both the Python API and self-hosted server components. To run the container:
 
 .. code-block:: bash
 
-   docker run --gpus all -it --rm -p 8000:8000 -e CUOPT_SERVER_PORT=8000 nvidia/cuopt:25.5.0-cuda12.8-py312 /bin/bash -c "python3 -m cuopt_server.cuopt_service"
+   docker run --gpus all -it --rm -p 8000:8000 -e CUOPT_SERVER_PORT=8000 nvidia/cuopt:latest-cuda12.8-py312 /bin/bash -c "python3 -m cuopt_server.cuopt_service"
 
 .. note::
    Make sure you have the NVIDIA Container Toolkit installed on your system to enable GPU support in containers. See the `installation guide <https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html>`_ for details.
 
diff --git a/docs/cuopt/source/index.rst b/docs/cuopt/source/index.rst
index 44f50db163..38b585f3a7 100644
--- a/docs/cuopt/source/index.rst
+++ b/docs/cuopt/source/index.rst
@@ -60,6 +60,16 @@ Command Line Interface (cuopt-cli)
 
    Command Line Interface Overview
 
+========================================
+Third-Party Modeling Languages
+========================================
+.. toctree::
+   :maxdepth: 4
+   :caption: Third-Party Modeling Languages
+   :name: Third-Party Modeling Languages
+
+   thirdparty_modeling_languages/index.rst
+
 =============
 Resources
 =============
diff --git a/docs/cuopt/source/introduction.rst b/docs/cuopt/source/introduction.rst
index d3878d3eaa..5eb537ce2f 100644
--- a/docs/cuopt/source/introduction.rst
+++ b/docs/cuopt/source/introduction.rst
@@ -118,6 +118,10 @@ cuOpt supports the following APIs:
 - `Linear Programming (LP) - Server `_
 - `Mixed Integer Linear Programming (MILP) - Server `_
 - `Routing (TSP, VRP, and PDP) - Server `_
+- Third-party modeling languages
+  - `AMPL `_
+  - `PuLP `_
+
 
 ==================================
 Installation Options
diff --git a/docs/cuopt/source/lp-features.rst b/docs/cuopt/source/lp-features.rst
index 7d0adab21c..9fa3b3fd99 100644
--- a/docs/cuopt/source/lp-features.rst
+++ b/docs/cuopt/source/lp-features.rst
@@ -7,6 +7,12 @@ Availability
 
 The LP solver can be accessed in the following ways:
 
+- **Third-Party Modeling Languages**: cuOpt's LP and MILP solvers can be called directly from the following third-party modeling languages. This allows you to leverage GPU acceleration while keeping your existing optimization workflow in these modeling languages.
+
+  Supported modeling languages:
+  - AMPL
+  - PuLP
+
 - **C API**: A native C API that provides direct low-level access to cuOpt's LP capabilities, enabling integration into any application or system that can interface with C.
 - **As a Self-Hosted Service**: cuOpt's LP solver can be deployed as a service in your own infrastructure, enabling you to maintain full control while integrating it into your existing systems.
diff --git a/docs/cuopt/source/lp-milp-settings.rst b/docs/cuopt/source/lp-milp-settings.rst
index d7800e14cb..6a5309a570 100644
--- a/docs/cuopt/source/lp-milp-settings.rst
+++ b/docs/cuopt/source/lp-milp-settings.rst
@@ -48,7 +48,7 @@ Solution File
 Note: the default value is ``""`` and no solution file is written.
 
 User Problem File
-^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^
 
 ``CUOPT_USER_PROBLEM_FILE`` controls the name of a file where cuOpt should write the user problem.
 Note: the default value is ``""`` and no user problem file is written.
diff --git a/docs/cuopt/source/milp-features.rst b/docs/cuopt/source/milp-features.rst
index 97a5729ac4..9168047f2b 100644
--- a/docs/cuopt/source/milp-features.rst
+++ b/docs/cuopt/source/milp-features.rst
@@ -7,6 +7,12 @@ Availability
 
 The MILP solver can be accessed in the following ways:
 
+- **Third-Party Modeling Languages**: cuOpt's LP and MILP solvers can be called directly from the following third-party modeling languages. This allows you to leverage GPU acceleration while keeping your existing optimization workflow in these modeling languages.
+
+  Supported modeling languages:
+  - AMPL
+  - PuLP
+
 - **C API**: A native C API that provides direct low-level access to cuOpt's MILP solver, enabling integration into any application or system that can interface with C.
 - **As a Self-Hosted Service**: cuOpt's MILP solver can be deployed in your own infrastructure, enabling you to maintain full control while integrating it into your existing systems.
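The modeling-language access added above can be sketched with a tiny PuLP MILP. This is a hedged sketch, not part of the patch: the `"CUOPT"` solver key is an assumption about PuLP builds that ship cuOpt support, so the code probes for it and falls back to PuLP's bundled CBC solver when cuOpt is unavailable, which lets the same model run (on CPU) anywhere PuLP is installed.

```python
import pulp

# Small MILP: maximize 3x + 2y subject to 4x + 3y <= 12, x and y nonnegative integers.
prob = pulp.LpProblem("tiny_milp", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0, cat=pulp.LpInteger)
y = pulp.LpVariable("y", lowBound=0, cat=pulp.LpInteger)
prob += 3 * x + 2 * y        # objective
prob += 4 * x + 3 * y <= 12  # single resource constraint

# Prefer cuOpt if this PuLP build knows about it; otherwise use the bundled CBC
# solver. The solver key "CUOPT" is an assumption, not confirmed by this patch.
try:
    solver = pulp.getSolver("CUOPT")
    if not solver.available():
        solver = pulp.PULP_CBC_CMD(msg=False)
except Exception:
    solver = pulp.PULP_CBC_CMD(msg=False)

prob.solve(solver)
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

The optimum is 9 (x = 3, y = 0) regardless of which backend actually solves it; only the solver object changes, which is the point of the modeling-language route.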
diff --git a/docs/cuopt/source/thirdparty_modeling_languages/index.rst b/docs/cuopt/source/thirdparty_modeling_languages/index.rst
new file mode 100644
index 0000000000..8a5024e9ea
--- /dev/null
+++ b/docs/cuopt/source/thirdparty_modeling_languages/index.rst
@@ -0,0 +1,17 @@
+===============================
+Third-Party Modeling Languages
+===============================
+
+
+--------------------------
+AMPL Support
+--------------------------
+
+AMPL can be used with near-zero code changes: simply switch to cuOpt as the solver to solve linear and mixed-integer programming problems. Please refer to the `AMPL documentation `_ for more information.
+
+--------------------------
+PuLP Support
+--------------------------
+
+PuLP can be used with near-zero code changes: simply switch to cuOpt as the solver to solve linear and mixed-integer programming problems.
+Please refer to the `PuLP documentation `_ for more information. Also, see the example notebook in the `cuopt-examples `_ repository.
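The "switch the solver" claim in the new page can be illustrated with a small LP. As with the MILP sketch, this is commentary rather than part of the patch: the model is written once in plain PuLP, and only the solver-selection line would change to target cuOpt. The `"CUOPT"` solver name is an assumption; the sketch falls back to CBC so it runs wherever PuLP is installed.

```python
import pulp

# Small LP: minimize 2x + 3y subject to x + y >= 10, with x, y >= 0.
prob = pulp.LpProblem("tiny_lp", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0)
prob += 2 * x + 3 * y  # objective
prob += x + y >= 10    # demand constraint

# The only cuOpt-specific line: pick the solver. "CUOPT" is assumed to be the
# registered name in PuLP builds with cuOpt support; everything else is unchanged.
try:
    solver = pulp.getSolver("CUOPT")
    if not solver.available():
        solver = pulp.PULP_CBC_CMD(msg=False)
except Exception:
    solver = pulp.PULP_CBC_CMD(msg=False)

prob.solve(solver)
print(pulp.value(x), pulp.value(y), pulp.value(prob.objective))
```

The model and constraints are untouched by the solver choice; the optimum (objective 20 at x = 10, y = 0) is the same either way, which is what "near-zero code changes" means in practice.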