Installation scripts for AI applications using ROCm on Linux.
> [!NOTE]
> From version 10.0, the script is distribution-independent thanks to the use of Podman. All you need is a correctly configured Podman and amdgpu.
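A quick way to check both is to run a throwaway container with the GPU device nodes mapped in (a minimal sketch; the `alpine` image is just an example):

```bash
# Confirm Podman itself works.
podman info

# Map the AMD GPU device nodes into a throwaway container.
# If this lists /dev/kfd and the /dev/dri entries, passthrough is working.
podman run --rm --device /dev/kfd --device /dev/dri docker.io/library/alpine ls -l /dev/kfd /dev/dri
```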
> [!IMPORTANT]
> All models and applications are tested on a GPU with 24GB of VRAM. Some applications may not work on GPUs with less VRAM.
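If you are unsure how much VRAM your card has, `rocm-smi` can report it (assuming the ROCm tools are available on your system):

```bash
# Show total and used VRAM for each detected GPU.
rocm-smi --showmeminfo vram
```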
| Name | Info |
|---|---|
| CPU | AMD Ryzen 9 9950X3D |
| GPU | AMD Radeon 7900XTX |
| RAM | 64GB DDR5 6600MHz |
| Motherboard | Gigabyte X870 AORUS ELITE WIFI7 (BIOS F8) |
| OS | Debian 13.2 |
| Kernel | 6.12.57+deb13-amd64 |
| Name | Links | Additional information |
|---|---|---|
| KoboldCPP | https://github.com/YellowRoseCx/koboldcpp-rocm | Supports GGML and GGUF models. |
| Text generation web UI | https://github.com/oobabooga/text-generation-webui https://github.com/ROCm/bitsandbytes.git https://github.com/turboderp/exllamav2 | 1. Supports ExLlamaV2, llama.cpp and Transformers. 2. If you are using Transformers, it is recommended to use the sdpa option instead of flash_attention_2. |
| SillyTavern | https://github.com/SillyTavern/SillyTavern | |
| llama.cpp | https://github.com/ggerganov/llama.cpp | 1. Put model.gguf into the llama.cpp folder. 2. In the run.sh file, change the values of GPU offload layers and context size to match your model (see the example below). |
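For reference, a minimal `run.sh` for llama.cpp could look like the sketch below; the model name and values are placeholders, and the binary name depends on how you built llama.cpp (e.g. `llama-server` or `llama-cli`):

```bash
#!/bin/bash
# -ngl: number of layers offloaded to the GPU (lower this if you run out of VRAM)
# -c:   context size in tokens
./llama-server -m model.gguf -ngl 99 -c 8192
```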
| Name | Links | Additional information |
|---|---|---|
| WhisperSpeech web UI | https://github.com/Mateusz-Dera/whisperspeech-webui | Install and run WhisperSpeech web UI first. |
| Name | Links | Additional information |
|---|---|---|
| ComfyUI | https://github.com/comfyanonymous/ComfyUI | Workflow templates are in the workflows folder. |
| Name | Links | Additional information |
|---|---|---|
| ACE-Step | https://github.com/ace-step/ACE-Step | |
| Name | Links | Additional information |
|---|---|---|
| WhisperSpeech web UI | https://github.com/Mateusz-Dera/whisperspeech-webui https://github.com/collabora/WhisperSpeech | |
| F5-TTS | https://github.com/SWivid/F5-TTS | Remember to select voice. |
| Matcha-TTS | https://github.com/shivammehta25/Matcha-TTS | |
| Dia | https://github.com/nari-labs/dia https://github.com/tralamazza/dia/tree/optional-rocm-cuda | The script uses the optional-rocm-cuda fork by tralamazza. |
| KaniTTS | https://github.com/nineninesix-ai/kani-tts | If you want to change the default model, edit the kanitts/config.py file. |
| Name | Links | Additional information |
|---|---|---|
| PartCrafter | https://github.com/wgsxm/PartCrafter | Adds a simple custom UI. Uses a modified version of PyTorch Cluster for ROCm: https://github.com/Mateusz-Dera/pytorch_cluster_rocm. |
| TRELLIS-AMD | https://github.com/CalebisGross/TRELLIS-AMD | GLB export takes 5-10 minutes. The mesh preview may show grey, but the actual export works correctly. |
1. Install Podman.

> [!NOTE]
> If you are using Debian 13.2, you can use `sudo apt-get update && sudo apt-get -y install podman podman-compose qemu-system` (this should also work on Ubuntu 24.04).
2. Make sure that /dev/dri and /dev/kfd are accessible.

```bash
ls /dev/dri
ls /dev/kfd
```

> [!IMPORTANT]
> Your distribution must have amdgpu configured.
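One way to confirm the amdgpu driver is loaded:

```bash
# The module should appear in the loaded-module list.
lsmod | grep amdgpu
```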
3. Make sure that your user has permissions for the video and render groups.

```bash
sudo usermod -aG video,render $USER
```

> [!IMPORTANT]
> If your user was not already in these groups, you need to reboot after this step.
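After rebooting, you can verify that the group membership took effect:

```bash
# 'video' and 'render' should appear in the output.
groups "$USER"
```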
4. Clone the repository.

```bash
git clone https://github.com/Mateusz-Dera/ROCm-AI-Installer.git
```

5. Run the installer.

```bash
./install.sh
```

6. Set variables.
> [!NOTE]
> By default, the script is configured for the AMD Radeon 7900XTX. For other cards and architectures, edit `GFX` and `HSA_OVERRIDE_GFX_VERSION`.
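As a rough sketch, the 7900XTX (RDNA3) corresponds to `gfx1100` and `11.0.0`, while an RDNA2 card such as the RX 6800 would use `gfx1030` and `10.3.0`; these values are illustrative, and the exact variable format in the script may differ:

```bash
# Defaults for the AMD Radeon 7900XTX (RDNA3):
GFX=gfx1100
HSA_OVERRIDE_GFX_VERSION=11.0.0

# Example for an RDNA2 card such as the RX 6800 (verify for your GPU):
# GFX=gfx1030
# HSA_OVERRIDE_GFX_VERSION=10.3.0
```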
7. Create a container if you are upgrading or running the script for the first time.
8. Install the applications of your choice.
9. Go to the application folder and run:
```bash
./run.sh
```

> [!NOTE]
> Everything is configured to start from the host side (you don't need to enter the container).

To check if the container is running:

```bash
podman ps
```

If the container is not running, start it with:

```bash
podman start rocm
```

To enter the container's bash shell:

```bash
podman exec -it rocm bash
```

To stop and remove the container:

```bash
podman stop rocm
podman rm rocm
```

Or force remove (stop and remove in one command):

```bash
podman rm -f rocm
```