diff --git a/.gitignore b/.gitignore
index 3ce0cf6879..9afd1b1944 100644
--- a/.gitignore
+++ b/.gitignore
@@ -17,7 +17,6 @@ STRU_READIN_ADJUST.cif
 build
 dist
 .idea
-toolchain.tar.gz
 time.json
 *.pyc
 __pycache__
diff --git a/toolchain/README.md b/toolchain/README.md
index 1336e67573..7c35a15d4a 100644
--- a/toolchain/README.md
+++ b/toolchain/README.md
@@ -1,6 +1,6 @@
 # The ABACUS Toolchain
 
-Version 2024.3
+Version 2025.1
 
 ## Author
 
@@ -27,12 +27,13 @@ and give setup files that you can use to compile ABACUS.
 - [x] Support for [LibRI](https://github.com/abacusmodeling/LibRI) by submodule or automatic installation from github.com (but installed LibRI via `wget` seems to have some problem, please be cautious)
 - [x] A mirror station by Bohrium database, which can download CEREAL, LibNPY, LibRI and LibComm by `wget` in China Internet.
 - [x] Support for GPU compilation, users can add `-DUSE_CUDA=1` in builder scripts.
+- [x] Support for the AMD compiler `AOCC` and math lib `AOCL` (not fully complete due to flang and an AOCC-ABACUS compilation error)
 - [ ] Change the downloading url from cp2k mirror to other mirror or directly downloading from official website. (doing)
+- [ ] Support a JSON or YAML configuration file for toolchain, which can be easily modified by users.
 - [ ] A better README and Detail markdown file.
 - [ ] Automatic installation of [DEEPMD](https://github.com/deepmodeling/deepmd-kit).
 - [ ] Better compliation method for ABACUS-DEEPMD and ABACUS-DEEPKS.
 - [ ] Modulefile generation scripts.
-- [ ] Support for AMD compiler and math lib like `AOCL` and `AOCC`
 
 ## Usage Online & Offline
 
@@ -49,6 +50,8 @@ There are also well-modified script to run *install_abacus_toolchain.sh* for `gn
 > ./toolchain_gnu.sh
 # for intel-mkl
 > ./toolchain_intel.sh
+# for amd aocc-aocl
+> ./toolchain_amd.sh
 # for intel-mkl-mpich
 > ./toolchain_intel-mpich.sh
 ```
@@ -94,7 +97,7 @@ The above station will be updated handly but one should notice that the version
 If one want to install ABACUS by toolchain OFFLINE,
 one can manually download all the packages from [cp2k-static/download](https://www.cp2k.org/static/downloads)
 or official website and put them in *build* directory by formatted name
-like *fftw-3.3.10.tar.gz*, or *openmpi-5.0.5.tar.bz2*,
+like *fftw-3.3.10.tar.gz*, or *openmpi-5.0.6.tar.bz2*,
 then run this toolchain.
 All package will be detected and installed automatically.
 Also, one can install parts of packages OFFLINE and parts of packages ONLINE
@@ -109,19 +112,23 @@ just by using this toolchain
 
 The needed dependencies version default:
 
-- `cmake` 3.30.0
+- `cmake` 3.31.2
 - `gcc` 13.2.0 (which will always NOT be installed, But use system)
-- `OpenMPI` 4.1.6 (5.0.5 can be used but have some problem in OpenMP parallel computation in ELPA)
-- `MPICH` 4.2.2
+- `OpenMPI` 5.0.6 (OpenMPI 5 works well but may have compatibility problems; users can manually downgrade to version 4 in the toolchain scripts)
+- `MPICH` 4.3.0
 - `OpenBLAS` 0.3.28 (Intel toolchain need `get_vars.sh` tool from it)
 - `ScaLAPACK` 2.2.1 (a developing version)
 - `FFTW` 3.3.10
-- `LibXC` 6.2.2
-- `ELPA` 2024.05.001
+- `LibXC` 7.0.0
+- `ELPA` 2025.01.001
 - `CEREAL` 1.3.2
 - `RapidJSON` 1.1.0
 
-And Intel-oneAPI need user or server manager to manually install from Intel.
-[Intel-oneAPI](https://www.intel.cn/content/www/cn/zh/developer/tools/oneapi/toolkits.html)
+And:
+- Intel-oneAPI needs to be installed manually from Intel by the user or the server administrator.
+- - [Intel-oneAPI](https://www.intel.cn/content/www/cn/zh/developer/tools/oneapi/toolkits.html)
+- AMD AOCC-AOCL needs to be installed manually from AMD by the user or the server administrator.
+- - [AOCC](https://www.amd.com/zh-cn/developer/aocc.html)
+- - [AOCL](https://www.amd.com/zh-cn/developer/aocl.html)
 
 Dependencies below are optional, which is NOT installed by default:
@@ -130,7 +137,7 @@ Dependencies below are optional, which is NOT installed by default:
 
 - `LibRI` 0.2.0
 - `LibComm` 0.1.1
 
-Users can install them by using `--with-*=install` in toolchain*.sh, which is `no` in default.
+Users can install them by using `--with-*=install` in toolchain*.sh, which defaults to `no`. Users can also specify the absolute path of a package by `--with-*=path/to/package` in toolchain*.sh to let the toolchain use that package.
 
 > Notice: LibRI, LibComm and Libnpy is on actively development, you should check-out the package version when using this toolchain. Also, LibRI and LibComm can be installed by github submodule, that is also work for libnpy, which is more recommended.
 
 Users can easily compile and install dependencies of ABACUS
@@ -151,6 +158,8 @@ If compliation is successful, a message will be shown like this:
 > ./build_abacus_gnu.sh
 > To build ABACUS by intel-toolchain, just use:
 > ./build_abacus_intel.sh
+> To build ABACUS by amd-toolchain in gcc-aocl, just use:
+> ./build_abacus_gnu-aocl.sh
 > or you can modify the builder scripts to suit your needs.
 ```
 
@@ -180,11 +189,70 @@ or you can also do it in a more completely way:
 
 ## Common Problems and Solutions
 
-### LibRI and LibComm for EXX
+### Intel-oneAPI problem
+
+#### OneAPI 2025.0 problem
+
+Generally, OneAPI 2025.0 can compile the basic functions of ABACUS, but one will encounter compatibility problems with some dependencies.
Here are the workarounds:
+- related to RapidJSON:
+- - do not use RapidJSON in your toolchain,
+- - or use the master branch of [RapidJSON](https://github.com/Tencent/rapidjson)
+- related to LibRI: do not use LibRI, or downgrade your OneAPI.
+
+#### ELPA problem via Intel-oneAPI toolchain in AMD server
+
+The default compilers for Intel-oneAPI are `icpx` and `icx`, which will cause problems when compiling ELPA on an AMD server. (This issue needs further investigation.)
+
+The best way is to change `icpx` to `icpc` and `icx` to `icc`. Users can change this in *toolchain_intel.sh* via `--with-intel-classic=yes`.
+
+Notice: `icc` and `icpc` from the Intel Classic Compiler are not provided in Intel-oneAPI 2024.0 and newer versions, and Intel-oneAPI 2023.2.0 can be found via the QE website. For Intel-oneAPI 2023.2.0 you need to download the Base Toolkit for MKL and the HPC Toolkit for MPI and the compilers, while for Intel-oneAPI 2024.x only the HPC Toolkit is needed.
+
+You can get Intel-oneAPI from the [QE-managed website](https://pranabdas.github.io/espresso/setup/hpc/#installing-intel-oneapi-libraries), and use these commands to get the Intel oneAPI Base Toolkit and HPC Toolkit:
+```shell
+wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/992857b9-624c-45de-9701-f6445d845359/l_BaseKit_p_2023.2.0.49397_offline.sh
+wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/0722521a-34b5-4c41-af3f-d5d14e88248d/l_HPCKit_p_2023.2.0.49440_offline.sh
+```
+
+Related discussion here: [#4976](https://github.com/deepmodeling/abacus-develop/issues/4976)
+
+#### link problem in early 2023 version oneAPI
+
+Sometimes Intel-oneAPI has problems linking `mpirun`,
+which always shows up with the 2023.2.0 version of MPI in Intel-oneAPI.
+Running `source /path/to/setvars.sh` or installing another version of IntelMPI may help.
+
+This is fixed in the 2024.0.0 version of Intel-oneAPI,
+and will not occur in Intel-MPI before 2021.10.0 (Intel-oneAPI before 2023.2.0).
+
+More problems and possible solutions can be found in [#2928](https://github.com/deepmodeling/abacus-develop/issues/2928)
+
+### AMD AOCC-AOCL problem
+
+You cannot use AOCC to compile ABACUS now, see [#5982](https://github.com/deepmodeling/abacus-develop/issues/5982).
+
+However, using AOCC-AOCL to compile the dependencies is permitted and usually boosts ABACUS efficiency. But you need to avoid `flang` while compiling ELPA. The toolchain avoids `flang` by default, and you can manually enable `flang` by setting `--with-flang=yes` in `toolchain_amd.sh` to give it a try.
 
-- GCC toolchain with OpenMPI cannot compile LibComm v0.1.1 due to the different MPI variable type from MPICH and IntelMPI, see discussion here [#5033](https://github.com/deepmodeling/abacus-develop/issues/5033), you can switch to GCC-MPICH or Intel toolchain
+Notice: ABACUS built via GCC-AOCL in the AOCC-AOCL toolchain does not support DeePKS, DeePMD or LibRI.
+
+### OpenMPI problem
+
+#### in EXX and LibRI
+
+- GCC toolchain with OpenMPI cannot compile LibComm v0.1.1 due to the different MPI variable type from MPICH and IntelMPI; see the discussion in [#5033](https://github.com/deepmodeling/abacus-develop/issues/5033). You can try the newest branch of LibComm by
+```
+git clone https://gitee.com/abacus_dft/LibComm -b MPI_Type_Contiguous_Pool
+```
+or pull the newest master branch of LibComm
+```
+git clone https://github.com/abacusmodeling/LibComm
+```
+Yet another option is switching to the GCC-MPICH or Intel toolchain.
 - It is recommended to use Intel toolchain if one wants to include EXX feature in ABACUS, which can have much better performance and can use more than 16 threads in OpenMP parallelization to accelerate the EXX process.
 
+#### OpenMPI-v5
+
+OpenMPI version 5 has huge updates, leading to compatibility problems.
If one wants to use OpenMPI version 4 (4.1.6), one can specify `--with-4th-openmpi=yes` in *toolchain_gnu.sh*
+
 ### GPU version of ABACUS
 
 For GPU version of ABACUS (do not GPU version installer of ELPA, which is still doing work), add following options in build*.sh:
@@ -242,26 +310,6 @@ When you encounter problem like `GLIBCXX_3.4.29 not found`, it is sure that your
 
 After my test, you need `gcc`>11.3.1 to enable deepmd feature in ABACUS.
 
-### Intel-oneAPI problem
-
-#### ELPA problem via Intel-oneAPI toolchain in AMD server
-
-The default compiler for Intel-oneAPI is `icpx` and `icx`, which will cause problem when compling ELPA in AMD server. (Which is a problem and needed to have more check-out)
-
-The best way is to change `icpx` to `icpc`, `icx` to `icc`. user can manually change it in toolchain*.sh via `--with-intel-classic=yes`
-
-Notice: `icc` and `icpc` from Intel Classic Compiler of Intel-oneAPI is not supported for 2024.0 and newer version. And Intel-OneAPI 2023.2.0 can be found in website. See discussion here [#4976](https://github.com/deepmodeling/abacus-develop/issues/4976)
-
-#### link problem in early 2023 version oneAPI
-
-Sometimes Intel-oneAPI have problem to link `mpirun`,
-which will always show in 2023.2.0 version of MPI in Intel-oneAPI.
-Try `source /path/to/setvars.sh` or install another version of IntelMPI may help.
- -which is fixed in 2024.0.0 version of Intel-oneAPI, -And will not occur in Intel-MPI before 2021.10.0 (Intel-oneAPI before 2023.2.0) - -More problem and possible solution can be accessed via [#2928](https://github.com/deepmodeling/abacus-develop/issues/2928) ## Advanced Installation Usage diff --git a/toolchain/build_abacus_gnu-aocl.sh b/toolchain/build_abacus_gnu-aocl.sh new file mode 100755 index 0000000000..3ab0ce97fd --- /dev/null +++ b/toolchain/build_abacus_gnu-aocl.sh @@ -0,0 +1,82 @@ +#!/bin/bash +#SBATCH -J build +#SBATCH -N 1 +#SBATCH -n 16 +#SBATCH -o install.log +#SBATCH -e install.err +# JamesMisaka in 2025.03.09 + +# Build ABACUS by amd-openmpi toolchain + +# module load openmpi aocc aocl + +ABACUS_DIR=.. +TOOL=$(pwd) +INSTALL_DIR=$TOOL/install +source $INSTALL_DIR/setup +cd $ABACUS_DIR +ABACUS_DIR=$(pwd) +#AOCLhome=/opt/aocl # user can specify this parameter + +BUILD_DIR=build_abacus_gnu +rm -rf $BUILD_DIR + +PREFIX=$ABACUS_DIR +ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu +CEREAL=$INSTALL_DIR/cereal-1.3.2/include/cereal +LIBXC=$INSTALL_DIR/libxc-7.0.0 +RAPIDJSON=$INSTALL_DIR/rapidjson-1.1.0/ +# LAPACK=$AOCLhome/lib +# SCALAPACK=$AOCLhome/lib +# FFTW3=$AOCLhome +# LIBRI=$INSTALL_DIR/LibRI-0.2.1.0 +# LIBCOMM=$INSTALL_DIR/LibComm-0.1.1 +# LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch +# LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include +# DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem + +# if clang++ have problem, switch back to g++ + +cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \ + -DCMAKE_CXX_COMPILER=clang++ \ + -DMPI_CXX_COMPILER=mpicxx \ + -DELPA_DIR=$ELPA \ + -DCEREAL_INCLUDE_DIR=$CEREAL \ + -DLibxc_DIR=$LIBXC \ + -DENABLE_LCAO=ON \ + -DENABLE_LIBXC=ON \ + -DUSE_OPENMP=ON \ + -DUSE_ELPA=ON \ + -DENABLE_RAPIDJSON=ON \ + -DRapidJSON_DIR=$RAPIDJSON \ +# -DLAPACK_DIR=$LAPACK \ +# -DSCALAPACK_DIR=$SCALAPACK \ +# -DFFTW3_DIR=$FFTW3 \ +# -DENABLE_DEEPKS=1 \ +# -DTorch_DIR=$LIBTORCH \ +# -Dlibnpy_INCLUDE_DIR=$LIBNPY \ +# 
-DENABLE_LIBRI=ON \
+# -DLIBRI_DIR=$LIBRI \
+# -DLIBCOMM_DIR=$LIBCOMM \
+# -DDeePMD_DIR=$DEEPMD \
+
+# if one wants to include deepmd, your system gcc version should be >= 11.3.0 for glibc requirements
+
+cmake --build $BUILD_DIR -j `nproc`
+cmake --install $BUILD_DIR 2>/dev/null
+
+# generate abacus_env.sh
+cat << EOF > "${TOOL}/abacus_env.sh"
+#!/bin/bash
+source $INSTALL_DIR/setup
+export PATH="${PREFIX}/bin":\${PATH}
+EOF
+
+# generate information
+cat << EOF
+========================== usage =========================
+Done!
+To use the installed ABACUS version
+You need to source ${TOOL}/abacus_env.sh first !
+==========================================================
+EOF
\ No newline at end of file
diff --git a/toolchain/build_abacus_gnu.sh b/toolchain/build_abacus_gnu.sh
index e6a9798fd0..27328c7eec 100755
--- a/toolchain/build_abacus_gnu.sh
+++ b/toolchain/build_abacus_gnu.sh
@@ -4,8 +4,7 @@
 #SBATCH -n 16
 #SBATCH -o install.log
 #SBATCH -e install.err
-# install ABACUS with libxc and deepks
-# JamesMisaka in 2023.08.31
+# JamesMisaka in 2025.03.09
 
 # Build ABACUS by gnu-toolchain
 
@@ -24,16 +23,16 @@ rm -rf $BUILD_DIR
 PREFIX=$ABACUS_DIR
 LAPACK=$INSTALL_DIR/openblas-0.3.28/lib
 SCALAPACK=$INSTALL_DIR/scalapack-2.2.1/lib
-ELPA=$INSTALL_DIR/elpa-2024.05.001/cpu
+ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu
 FFTW3=$INSTALL_DIR/fftw-3.3.10
 CEREAL=$INSTALL_DIR/cereal-1.3.2/include/cereal
-LIBXC=$INSTALL_DIR/libxc-6.2.2
+LIBXC=$INSTALL_DIR/libxc-7.0.0
 RAPIDJSON=$INSTALL_DIR/rapidjson-1.1.0/
 # LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
 # LIBCOMM=$INSTALL_DIR/LibComm-0.1.1
 # LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch
 # LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include
-# DEEPMD=$HOME/apps/anaconda3/envs/deepmd
+# DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem
 
 cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
 -DCMAKE_CXX_COMPILER=g++ \
@@ -57,8 +56,6 @@ cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
 # -DLIBRI_DIR=$LIBRI \
 # -DLIBCOMM_DIR=$LIBCOMM \
 # -DDeePMD_DIR=$DEEPMD \
-# -DTensorFlow_DIR=$DEEPMD \
-
 #
 #
add mkl env for libtorch to link
 # if one want to install libtorch, mkl should be load in build process
 
diff --git a/toolchain/build_abacus_intel-mpich.sh b/toolchain/build_abacus_intel-mpich.sh
index 6ed443668e..59e93967ae 100755
--- a/toolchain/build_abacus_intel-mpich.sh
+++ b/toolchain/build_abacus_intel-mpich.sh
@@ -4,13 +4,12 @@
 #SBATCH -n 16
 #SBATCH -o install.log
 #SBATCH -e install.err
-# build and install ABACUS with libxc, also can with deepks and deepmd
-# JamesMisaka in 2023.08.31
+# JamesMisaka in 2025.03.09
 
 # Build ABACUS by intel-toolchain with mpich
 
 # module load mkl compiler
-# source path/to/vars.sh
+# source path/to/setvars.sh
 
 ABACUS_DIR=..
 TOOL=$(pwd)
@@ -23,15 +22,15 @@ BUILD_DIR=build_abacus_intel-mpich
 rm -rf $BUILD_DIR
 
 PREFIX=$ABACUS_DIR
-ELPA=$INSTALL_DIR/elpa-2024.05.001/cpu
+ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu
 CEREAL=$INSTALL_DIR/cereal-1.3.2/include/cereal
-LIBXC=$INSTALL_DIR/libxc-6.2.2
+LIBXC=$INSTALL_DIR/libxc-7.0.0
 RAPIDJSON=$INSTALL_DIR/rapidjson-1.1.0/
 # LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch
 # LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include
 # LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
 # LIBCOMM=$INSTALL_DIR/LibComm-0.1.1
-# DEEPMD=$HOME/apps/anaconda3/envs/deepmd
+# DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem
 
 cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
 -DCMAKE_CXX_COMPILER=icpx \
@@ -53,7 +52,6 @@ cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
 # -DLIBRI_DIR=$LIBRI \
 # -DLIBCOMM_DIR=$LIBCOMM \
 # -DDeePMD_DIR=$DEEPMD \
-# -DTensorFlow_DIR=$DEEPMD \
 
 # if one want's to include deepmd, your system gcc version should be >= 11.3.0 for glibc requirements
 
diff --git a/toolchain/build_abacus_intel.sh b/toolchain/build_abacus_intel.sh
index 064c9c8bf8..a2ef7dd8b0 100755
--- a/toolchain/build_abacus_intel.sh
+++ b/toolchain/build_abacus_intel.sh
@@ -4,13 +4,12 @@
 #SBATCH -n 16
 #SBATCH -o install.log
 #SBATCH -e install.err
-# install ABACUS with libxc and deepks
-# JamesMisaka in 2023.08.22
+# JamesMisaka
in 2025.03.09
 
 # Build ABACUS by intel-toolchain
 
 # module load mkl compiler mpi
-# source path/to/vars.sh
+# source path/to/setvars.sh
 
 ABACUS_DIR=..
 TOOL=$(pwd)
@@ -23,15 +22,15 @@ BUILD_DIR=build_abacus_intel
 rm -rf $BUILD_DIR
 
 PREFIX=$ABACUS_DIR
-ELPA=$INSTALL_DIR/elpa-2024.05.001/cpu
+ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu
 CEREAL=$INSTALL_DIR/cereal-1.3.2/include/cereal
-LIBXC=$INSTALL_DIR/libxc-6.2.2
+LIBXC=$INSTALL_DIR/libxc-7.0.0
 RAPIDJSON=$INSTALL_DIR/rapidjson-1.1.0/
 # LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch
 # LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include
 # LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
 # LIBCOMM=$INSTALL_DIR/LibComm-0.1.1
-# DEEPMD=$HOME/apps/anaconda3/envs/deepmd
+# DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem
 
 # if use deepks and deepmd
 cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
@@ -54,7 +53,6 @@ cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
 # -DLIBRI_DIR=$LIBRI \
 # -DLIBCOMM_DIR=$LIBCOMM \
 # -DDeePMD_DIR=$DEEPMD \
-# -DTensorFlow_DIR=$DEEPMD \
 
 cmake --build $BUILD_DIR -j `nproc`
diff --git a/toolchain/install_abacus_toolchain.sh b/toolchain/install_abacus_toolchain.sh
index 68836133ed..2ed465f646 100755
--- a/toolchain/install_abacus_toolchain.sh
+++ b/toolchain/install_abacus_toolchain.sh
@@ -59,10 +59,11 @@
 USAGE:
 
 $(basename $SCRIPT_NAME) [options]
 
-Or a more RECOMMENDED way is to use it by pre-setting workflow scripts:
+A MORE RECOMMENDED way is to use the pre-set workflow scripts:
 > gcc-openmpi-openblas environments: toolchain_gnu.sh
 > intel-mkl-mpi environments: toolchain_intel.sh
 > intel-mpich environments: toolchain_intel_mpich.sh
+> AMD environments: toolchain_amd.sh [in development]
 
 OPTIONS:
 
@@ -148,14 +149,18 @@ The --with-PKG options follow the rules:
 --with-PKG The option keyword alone will be equivalent to
 --with-PKG=install
 
-  --with-gcc The GCC compiler to use to compile ABACUS.
+  --with-gcc Use the GNU compiler to build ABACUS.
Default = system
-  --with-intel Use the Intel compiler to compile ABACUS.
+  --with-intel Use the Intel compiler to build ABACUS.
   Default = system
   --with-intel-classic Use the classic Intel compiler (icc, icpc, ifort) to compile ABACUS.
   Default = no
   --with-ifx Use the new Intel Fortran compiler ifx instead of ifort to compile dependence of ABACUS, along with mpiifx (if --with-intel-classic=no)
   Default = yes
+  --with-amd Use the AMD compiler to build ABACUS.
+  Default = system
+  --with-flang Use flang as the Fortran compiler in the AMD toolchain, which may lead to problems and efficiency loss in ELPA
+  Default = no
   --with-cmake Cmake utilities
   Default = install
   --with-openmpi OpenMPI, important if you want a parallel version of ABACUS.
@@ -180,6 +185,10 @@ The --with-PKG options follow the rules:
 it replaces the FFTW library. If the ScaLAPACK component is found,
 it replaces the one specified by --with-scalapack.
 Default = system
+  --with-aocl AMD Optimizing CPU Libraries, which provide LAPACK, BLAS, FFTW and ScaLAPACK.
+  ScaLAPACK and FFTW from AOCL can be used directly by setting --with-scalapack=system and --with-fftw=system if AOCL is in the system environment.
+  Related scripts are in development to incorporate ScaLAPACK and FFTW once and for all.
+  Default = system
   --with-openblas OpenBLAS is a free high performance LAPACK and BLAS library, the successor to GotoBLAS.
   Default = install
@@ -233,9 +242,9 @@
 EOF
 
 # PACKAGE LIST: register all new dependent tools and libs here.
Order # is important, the first in the list gets installed first # ------------------------------------------------------------------------ -tool_list="gcc intel cmake" +tool_list="gcc intel amd cmake" mpi_list="mpich openmpi intelmpi" -math_list="mkl openblas" +math_list="mkl aocl openblas" lib_list="fftw libxc scalapack elpa cereal rapidjson libtorch libnpy libri libcomm" package_list="${tool_list} ${mpi_list} ${math_list} ${lib_list}" # ------------------------------------------------------------------------ @@ -263,6 +272,9 @@ with_scalapack="__INSTALL__" if [ "${MKLROOT}" ]; then export MATH_MODE="mkl" with_mkl="__SYSTEM__" +elif [ "${AOCLhome}" ]; then + export MATH_MODE="aocl" + with_aocl="__SYSTEM__" else export MATH_MODE="openblas" fi @@ -288,6 +300,8 @@ if (command -v mpiexec > /dev/null 2>&1); then elif (mpiexec --version 2>&1 | grep -s -q "Intel"); then echo "MPI is detected and it appears to be Intel MPI" with_gcc="__DONTUSE__" + with_amd="__DONTUSE__" + with_aocl="__DONTUSE__" with_intel="__SYSTEM__" with_intelmpi="__SYSTEM__" export MPI_MODE="intelmpi" @@ -314,9 +328,11 @@ export intel_classic="no" # and will lead to problem in force calculation # but icx is recommended by intel compiler # option: --with-intel-classic can change it to yes/no -# zhaoqing by 2023.08 +# JamesMisaka by 2023.08 export intelmpi_classic="no" -export with_ifx="yes" +export with_ifx="yes" # whether ifx is used in oneapi +export with_flang="no" # whether flang is used in aocc +export openmpi_4th="no" # whether openmpi downgrade export GPUVER="no" export MPICH_DEVICE="ch4" export TARGET_CPU="native" @@ -336,6 +352,8 @@ if [ "${CRAY_LD_LIBRARY_PATH}" ]; then export MPI_MODE="mpich" # set default value for some installers appropriate for CLE with_gcc="__DONTUSE__" + with_amd="__DONTUSE__" + with_aocl="__DONTUSE__" with_intel="__DONTUSE__" with_fftw="__SYSTEM__" with_scalapack="__DONTUSE__" @@ -373,7 +391,9 @@ while [ $# -ge 1 ]; do --install-all) # set all package to the default 
installation status for ii in ${package_list}; do - if [ "${ii}" != "intel" ] && [ "${ii}" != "intelmpi" ]; then + if [ "${ii}" != "intel" ] && + [ "${ii}" != "intelmpi" ] && + [ "${ii}" != "amd" ]; then eval with_${ii}="__INSTALL__" fi done @@ -408,6 +428,12 @@ while [ $# -ge 1 ]; do cray) export MATH_MODE="cray" ;; + aocl) + export MATH_MODE="aocl" + with_aocl="__SYSTEM__" + with_fftw="__SYSTEM__" + with_scalapack="__SYSTEM__" + ;; mkl) export MATH_MODE="mkl" ;; @@ -416,7 +442,7 @@ while [ $# -ge 1 ]; do ;; *) report_error ${LINENO} \ - "--math-mode currently only supports mkl, and openblas as options" + "--math-mode currently only supports mkl, aocl, openblas and cray as options" ;; esac ;; @@ -496,6 +522,9 @@ while [ $# -ge 1 ]; do export MPI_MODE=mpich fi ;; + --with-4th-openmpi*) + openmpi_4th=$(read_with "${1}" "no") # default new openmpi + ;; --with-openmpi*) with_openmpi=$(read_with "${1}") if [ "${with_openmpi}" != "__DONTUSE__" ]; then @@ -514,12 +543,21 @@ while [ $# -ge 1 ]; do --with-intel-mpi-clas*) intelmpi_classic=$(read_with "${1}" "no") # default new intel mpi compiler ;; - --with-intel*) + --with-intel*) # must be read after items above with_intel=$(read_with "${1}" "__SYSTEM__") ;; --with-ifx*) with_ifx=$(read_with "${1}" "yes") # default yes ;; + --with-amd*) + with_amd=$(read_with "${1}" "__SYSTEM__") + ;; + --with-flang*) + with_flang=$(read_with "${1}" "no") + ;; + --with-aocl*) + with_aocl=$(read_with "${1}" "__SYSTEM__") + ;; --with-libxc*) with_libxc=$(read_with "${1}") ;; @@ -590,9 +628,17 @@ export ENABLE_CRAY="${enable_cray}" # ------------------------------------------------------------------------ # Compiler conflicts if [ "${with_intel}" != "__DONTUSE__" ] && [ "${with_gcc}" = "__INSTALL__" ]; then - echo "You have chosen to use the Intel compiler, therefore the installation of the GCC compiler will be skipped." + echo "You have chosen to use the Intel compiler, therefore the installation of the GNU compiler will be skipped." 
with_gcc="__SYSTEM__" fi +if [ "${with_amd}" != "__DONTUSE__" ] && [ "${with_gcc}" = "__INSTALL__" ]; then + echo "You have chosen to use the AMD compiler, therefore the installation of the GNU compiler will be skipped." + with_gcc="__SYSTEM__" +fi +if [ "${with_amd}" != "__DONTUSE__" ] && [ "${with_intel}" != "__DONTUSE__" ]; then + report_error "You have chosen to use the AMD and the Intel compiler. Select only one compiler." + exit 1 +fi # MPI library conflicts if [ "${MPI_MODE}" = "no" ]; then if [ "${with_scalapack}" != "__DONTUSE__" ]; then @@ -606,7 +652,7 @@ if [ "${MPI_MODE}" = "no" ]; then else # if gcc is installed, then mpi needs to be installed too if [ "${with_gcc}" = "__INSTALL__" ]; then - echo "You have chosen to install the GCC compiler, therefore MPI libraries have to be installed too" + echo "You have chosen to install the GNU compiler, therefore MPI libraries have to be installed too" case ${MPI_MODE} in mpich) with_mpich="__INSTALL__" @@ -842,6 +888,8 @@ To build ABACUS by gnu-toolchain, just use: ./build_abacus_gnu.sh To build ABACUS by intel-toolchain, just use: ./build_abacus_intel.sh +To build ABACUS by amd-toolchain in gcc-aocl, just use: + ./build_abacus_gnu-aocl.sh or you can modify the builder scripts to suit your needs. """ EOF diff --git a/toolchain/scripts/VERSION b/toolchain/scripts/VERSION index bce1d6fc01..d50de5efde 100644 --- a/toolchain/scripts/VERSION +++ b/toolchain/scripts/VERSION @@ -1,2 +1,2 @@ # version file to force a rebuild of the entire toolchain -VERSION="2024.3" +VERSION="2025.1" \ No newline at end of file diff --git a/toolchain/scripts/stage0/install_amd.sh b/toolchain/scripts/stage0/install_amd.sh new file mode 100755 index 0000000000..c79aca5381 --- /dev/null +++ b/toolchain/scripts/stage0/install_amd.sh @@ -0,0 +1,106 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. 
+# shellcheck disable=all + +# Last Update in 2025-0308 +# NOTICE: flang cannot be used when compiling ELPA + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=${0} +SCRIPT_DIR="$(cd "$(dirname "${SCRIPT_NAME}")/.." && pwd -P)" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_amd" ] && rm "${BUILDDIR}/setup_amd" + +AMD_CFLAGS="" +AMD_LDFLAGS="" +AMD_LIBS="" +mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_amd}" in + __INSTALL__) + echo "==================== Installing the AMD compiler ======================" + echo "__INSTALL__ is not supported; please install the AMD compiler manually" + exit 1 + ;; + __SYSTEM__) + echo "==================== Finding AMD compiler from system paths ====================" + check_command clang "amd" && CC="$(realpath $(command -v clang))" || exit 1 + check_command clang++ "amd" && CXX="$(realpath $(command -v clang++))" || exit 1 + if [ "${with_flang}" = "yes" ]; then + check_command flang "amd" && FC="$(realpath $(command -v flang))" || exit 1 + else + check_command gfortran "gcc" && FC="gfortran" || exit 1 + add_lib_from_paths GCC_LDFLAGS "libgfortran.*" ${LIB_PATHS} + fi + F90="${FC}" + F77="${FC}" + ;; + __DONTUSE__) + # Nothing to do + ;; + *) + echo "==================== Linking AMD compiler to user paths ====================" + pkg_install_dir="${with_amd}" + check_dir "${pkg_install_dir}/bin" + check_dir "${pkg_install_dir}/lib" + check_dir "${pkg_install_dir}/include" + check_command ${pkg_install_dir}/bin/clang "amd" && CC="${pkg_install_dir}/bin/clang" || exit 1 + check_command ${pkg_install_dir}/bin/clang++ "amd" && CXX="${pkg_install_dir}/bin/clang++" || exit 1 + if [ "${with_flang}" = "yes" ]; then + check_command ${pkg_install_dir}/bin/flang "amd" && FC="${pkg_install_dir}/bin/flang" || exit 1 + else + check_command 
gfortran "gcc" && FC="$(command -v gfortran)" || exit 1 + fi + F90="${FC}" + F77="${FC}" + AMD_CFLAGS="-I'${pkg_install_dir}/include'" + AMD_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; +esac +if [ "${with_amd}" != "__DONTUSE__" ]; then + echo "CC is ${CC}" + [ $(realpath $(command -v clang) | grep -e aocc-compiler) ] || echo "Check the AMD C compiler path" + echo "CXX is ${CXX}" + [ $(realpath $(command -v clang++) | grep -e aocc-compiler) ] || echo "Check the AMD C++ compiler path" + echo "FC is ${FC}" + if [ "${with_flang}" = "yes" ]; then + [ $(realpath $(command -v flang) | grep -e aocc-compiler) ] || echo "Check the AMD Fortran compiler path" + else + [ $(realpath $(command -v gfortran) | grep -e aocc-compiler) ] || echo "Check the GNU Fortran compiler path" + fi + cat << EOF > "${BUILDDIR}/setup_amd" +export CC="${CC}" +export CXX="${CXX}" +export FC="${FC}" +export F90="${F90}" +export F77="${F77}" +EOF + if [ "${with_amd}" != "__SYSTEM__" ]; then + cat << EOF >> "${BUILDDIR}/setup_amd" +prepend_path PATH "${pkg_install_dir}/bin" +prepend_path LD_LIBRARY_PATH "${pkg_install_dir}/lib" +prepend_path LD_RUN_PATH "${pkg_install_dir}/lib" +prepend_path LIBRARY_PATH "${pkg_install_dir}/lib" +prepend_path CPATH "${pkg_install_dir}/include" +EOF + fi + cat << EOF >> "${BUILDDIR}/setup_amd" +export AMD_CFLAGS="${AMD_CFLAGS}" +export AMD_LDFLAGS="${AMD_LDFLAGS}" +export AMD_LIBS="${AMD_LIBS}" +EOF + cat "${BUILDDIR}/setup_amd" >> ${SETUPFILE} +fi + +load "${BUILDDIR}/setup_amd" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "amd" diff --git a/toolchain/scripts/stage0/install_cmake.sh b/toolchain/scripts/stage0/install_cmake.sh index b16c9719b2..c0c16fc4a0 100755 --- a/toolchain/scripts/stage0/install_cmake.sh +++ b/toolchain/scripts/stage0/install_cmake.sh @@ -3,7 +3,7 @@ # TODO: Review and if possible fix shellcheck errors. 
# shellcheck disable=all -# Last Update in 2024-0811 +# Last Update in 2025-0308 [ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" @@ -21,13 +21,13 @@ cd "${BUILDDIR}" case "${with_cmake}" in __INSTALL__) echo "==================== Installing CMake ====================" - cmake_ver="3.30.0" + cmake_ver="3.31.2" if [ "${OPENBLAS_ARCH}" = "arm64" ]; then cmake_arch="linux-aarch64" - cmake_sha256="daa89552fd9102fb70399b31b5605c4f61125023bbbed947757a7b53ce36c4d0" + cmake_sha256="85cc81f782cd8b5ac346e570ad5cfba3bdbe5aa01f27f7ce6266c4cef93342550" elif [ "${OPENBLAS_ARCH}" = "x86_64" ]; then cmake_arch="linux-x86_64" - cmake_sha256="1a5969fe81fea6e5220d053d9d3e3754cbc85be07d2d428bebdcfe87137971a9" + cmake_sha256="b81cf3f4892683133f330cd7c016c28049b5725617db24ca8763360883545d34" else report_error ${LINENO} \ "cmake installation for ARCH=${ARCH} is not supported. You can try to use the system installation using the flag --with-cmake=system instead." diff --git a/toolchain/scripts/stage0/install_gcc.sh b/toolchain/scripts/stage0/install_gcc.sh index 7ea611acdb..dc8bb9e1d4 100755 --- a/toolchain/scripts/stage0/install_gcc.sh +++ b/toolchain/scripts/stage0/install_gcc.sh @@ -10,7 +10,7 @@ SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." 
&& pwd -P)"
 
 # gcc 13 is good
 gcc_ver="13.2.0"
 gcc_sha256="8cb4be3796651976f94b9356fa08d833524f62420d6292c5033a9a26af315078"
-# use gcc 14 with caution
+# use gcc 14 with caution, it is not compatible with GPU-ABACUS
 #gcc_ver="14.2.0"
 #gcc_sha256="7d376d445f93126dc545e2c0086d0f647c3094aae081cdb78f42ce2bc25e7293"
 
diff --git a/toolchain/scripts/stage0/install_stage0.sh b/toolchain/scripts/stage0/install_stage0.sh
index a398fdc0fa..7c2eb2568c 100755
--- a/toolchain/scripts/stage0/install_stage0.sh
+++ b/toolchain/scripts/stage0/install_stage0.sh
@@ -5,6 +5,7 @@
 ./scripts/stage0/install_gcc.sh
 ./scripts/stage0/install_intel.sh
+./scripts/stage0/install_amd.sh
 ./scripts/stage0/setup_buildtools.sh
 ./scripts/stage0/install_cmake.sh
 
diff --git a/toolchain/scripts/stage0/setup_buildtools.sh b/toolchain/scripts/stage0/setup_buildtools.sh
index 5172574a22..924282ed41 100755
--- a/toolchain/scripts/stage0/setup_buildtools.sh
+++ b/toolchain/scripts/stage0/setup_buildtools.sh
@@ -3,7 +3,7 @@
 # TODO: Review and if possible fix shellcheck errors.
 # shellcheck disable=all
 
-# Last Update in 2023-0901
+# Last Update in 2025-0310
 
 [ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0
 SCRIPT_DIR="$(cd "$(dirname "${SCRIPT_NAME}")/.."
&& pwd -P)" @@ -27,19 +27,15 @@ if [ "${with_intel}" != "__DONTUSE__" ]; then CFLAGS="-O2 -fPIC -fp-model=precise -funroll-loops -g -qopenmp -qopenmp-simd -traceback" if [ "${TARGET_CPU}" = "native" ]; then CFLAGS="${CFLAGS} -xHost" - elif [ "${TARGET_CPU}" = "generic" ]; then - CFLAGS="${CFLAGS} -mtune=${TARGET_CPU}" else - CFLAGS="${CFLAGS} -march=${TARGET_CPU} -mtune=${TARGET_CPU}" + CFLAGS="${CFLAGS} -mtune=${TARGET_CPU}" fi FFLAGS="${CFLAGS}" +elif [ "${with_amd}" != "__DONTUSE__" ]; then + CFLAGS="-O2 -fPIC -fopenmp -g -mtune=${TARGET_CPU}" + FFLAGS="${CFLAGS}" else - CFLAGS="-O2 -fPIC -fno-omit-frame-pointer -fopenmp -g" - if [ "${TARGET_CPU}" = "generic" ]; then - CFLAGS="${CFLAGS} -mtune=generic ${TSANFLAGS}" - else - CFLAGS="${CFLAGS} -march=${TARGET_CPU} -mtune=${TARGET_CPU} ${TSANFLAGS}" - fi + CFLAGS="-O2 -fPIC -fno-omit-frame-pointer -fopenmp -g -mtune=${TARGET_CPU} ${TSANFLAGS}" FFLAGS="${CFLAGS} -fbacktrace" fi CXXFLAGS="${CFLAGS}" @@ -47,7 +43,7 @@ F77FLAGS="${FFLAGS}" F90FLAGS="${FFLAGS}" FCFLAGS="${FFLAGS}" -if [ "${with_intel}" == "__DONTUSE__" ]; then +if [ "${with_intel}" == "__DONTUSE__" ] && [ "${with_amd}" == "__DONTUSE__" ]; then export CFLAGS="$(allowed_gcc_flags ${CFLAGS})" export FFLAGS="$(allowed_gfortran_flags ${FFLAGS})" export F77FLAGS="$(allowed_gfortran_flags ${F77FLAGS})" @@ -55,7 +51,7 @@ if [ "${with_intel}" == "__DONTUSE__" ]; then export FCFLAGS="$(allowed_gfortran_flags ${FCFLAGS})" export CXXFLAGS="$(allowed_gxx_flags ${CXXFLAGS})" else - # TODO Check functions for allowed Intel compiler flags + # TODO Check functions for allowed Intel or AMD compiler flags export CFLAGS export FFLAGS export F77FLAGS diff --git a/toolchain/scripts/stage1/install_mpich.sh b/toolchain/scripts/stage1/install_mpich.sh index dd89fa8e1f..cd561519fb 100755 --- a/toolchain/scripts/stage1/install_mpich.sh +++ b/toolchain/scripts/stage1/install_mpich.sh @@ -3,7 +3,7 @@ # TODO: Review and if possible fix shellcheck errors. 
# shellcheck disable=all -# Last Update in 2024-0912 +# Last Update in 2025-0308 [ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" @@ -12,8 +12,8 @@ SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" # mpich_sha256="17406ea90a6ed4ecd5be39c9ddcbfac9343e6ab4f77ac4e8c5ebe4a3e3b6c501" # mpich_ver="4.1.2" # mpich_sha256="3492e98adab62b597ef0d292fb2459b6123bc80070a8aa0a30be6962075a12f0" -mpich_ver="4.2.2" -mpich_sha256="883f5bb3aeabf627cb8492ca02a03b191d09836bbe0f599d8508351179781d41" +mpich_ver="4.3.0" +mpich_sha256="5e04132984ad83cab9cc53f76072d2b5ef5a6d24b0a9ff9047a8ff96121bcc63" mpich_pkg="mpich-${mpich_ver}.tar.gz" source "${SCRIPT_DIR}"/common_vars.sh diff --git a/toolchain/scripts/stage1/install_openmpi.sh b/toolchain/scripts/stage1/install_openmpi.sh index 99adfc5bd9..ab65a89553 100755 --- a/toolchain/scripts/stage1/install_openmpi.sh +++ b/toolchain/scripts/stage1/install_openmpi.sh @@ -3,15 +3,20 @@ # TODO: Review and if possible fix shellcheck errors. # shellcheck disable=all -# Last Update in 2024-0912 +# Last Update in 2025-0308 +# Default version changed to OpenMPI 5 +# users can fall back to OpenMPI 4 via the toolchain run scripts [ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." 
&& pwd -P)" -#openmpi_ver="5.0.5" -#openmpi_sha256="6588d57c0a4bd299a24103f4e196051b29e8b55fbda49e11d5b3d32030a32776" -openmpi_ver="4.1.6" -openmpi_sha256="f740994485516deb63b5311af122c265179f5328a0d857a567b85db00b11e415" +if [ "${openmpi_4th}" = "yes" ]; then + openmpi_ver="4.1.6" + openmpi_sha256="f740994485516deb63b5311af122c265179f5328a0d857a567b85db00b11e415" +else + openmpi_ver="5.0.6" + openmpi_sha256="bd4183fcbc43477c254799b429df1a6e576c042e74a2d2f8b37d537b2ff98157" +fi openmpi_pkg="openmpi-${openmpi_ver}.tar.bz2" source "${SCRIPT_DIR}"/common_vars.sh @@ -74,6 +79,7 @@ case "${with_openmpi}" in ./configure CFLAGS="${CFLAGS}" \ --prefix=${pkg_install_dir} \ --libdir="${pkg_install_dir}/lib" \ + --with-libevent=internal \ ${EXTRA_CONFIGURE_FLAGS} \ > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log make -j $(get_nprocs) > make.log 2>&1 || tail -n ${LOG_LINES} make.log @@ -101,7 +107,5 @@ check_command mpifort "openmpi" && MPIFC="$(command -v mpifort)" || exit 1 MPIFORT="${MPIFC}" MPIF77="${MPIFC}" - # Fortran code in ABACUS is built via the mpifort wrapper, but we may need additional - # libraries and linker flags for C/C++-based MPI codepaths, pull them in at this point. OPENMPI_CFLAGS="$(mpicxx --showme:compile)" OPENMPI_LDFLAGS="$(mpicxx --showme:link)" diff --git a/toolchain/scripts/stage2/install_aocl.sh b/toolchain/scripts/stage2/install_aocl.sh new file mode 100755 index 0000000000..51bcc8c4dd --- /dev/null +++ b/toolchain/scripts/stage2/install_aocl.sh @@ -0,0 +1,92 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +# Last Update in 2025-0308 + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." 
&& pwd -P)" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_aocl" ] && rm "${BUILDDIR}/setup_aocl" + +AOCL_CFLAGS="" +AOCL_LDFLAGS="" +AOCL_LIBS="" +AOCL_ROOT="" +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_aocl}" in + __INSTALL__) + echo "==================== Installing AOCL ====================" + report_error ${LINENO} "To install AOCL, please contact your system administrator." + exit 1 + ;; + __SYSTEM__) + echo "==================== Finding AOCL from system paths ====================" + check_lib -lblis "AOCL" + check_lib -lflame "AOCL" + AOCL_LIBS="-lblis -lflame" + add_include_from_paths AOCL_CFLAGS "blis.h" $INCLUDE_PATHS + add_lib_from_paths AOCL_LDFLAGS "libblis.*" $LIB_PATHS + add_include_from_paths AOCL_CFLAGS "lapack.h" $INCLUDE_PATHS + add_lib_from_paths AOCL_LDFLAGS "libflame.*" $LIB_PATHS + ;; + __DONTUSE__) ;; + + *) + echo "==================== Linking AOCL to user paths ====================" + pkg_install_dir="$with_aocl" + check_dir "${pkg_install_dir}/include" + check_dir "${pkg_install_dir}/lib" + AOCL_CFLAGS="-I'${pkg_install_dir}/include'" + AOCL_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + AOCL_LIBS="-lblis -lflame" + ;; +esac +if [ "$with_aocl" != "__DONTUSE__" ]; then + if [ "$with_aocl" != "__SYSTEM__" ]; then + cat << EOF > "${BUILDDIR}/setup_aocl" +prepend_path LD_LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path LD_RUN_PATH "$pkg_install_dir/lib" +prepend_path LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path PKG_CONFIG_PATH "$pkg_install_dir/lib/pkgconfig" +prepend_path CMAKE_PREFIX_PATH "$pkg_install_dir" +prepend_path CPATH "$pkg_install_dir/include" +export LD_LIBRARY_PATH="$pkg_install_dir/lib:"\${LD_LIBRARY_PATH} +export LD_RUN_PATH="$pkg_install_dir/lib:"\${LD_RUN_PATH} +export 
LIBRARY_PATH="$pkg_install_dir/lib:"\${LIBRARY_PATH} +export CPATH="$pkg_install_dir/include:"\${CPATH} +export PKG_CONFIG_PATH="$pkg_install_dir/lib/pkgconfig:"\${PKG_CONFIG_PATH} +export CMAKE_PREFIX_PATH="$pkg_install_dir:"\${CMAKE_PREFIX_PATH} +export AOCL_ROOT=${pkg_install_dir} +EOF + cat "${BUILDDIR}/setup_aocl" >> $SETUPFILE + fi + cat << EOF >> "${BUILDDIR}/setup_aocl" +export AOCL_ROOT="${pkg_install_dir}" +export AOCL_CFLAGS="${AOCL_CFLAGS}" +export AOCL_LDFLAGS="${AOCL_LDFLAGS}" +export AOCL_LIBS="${AOCL_LIBS}" +export MATH_CFLAGS="\${MATH_CFLAGS} ${AOCL_CFLAGS}" +export MATH_LDFLAGS="\${MATH_LDFLAGS} ${AOCL_LDFLAGS}" +export MATH_LIBS="\${MATH_LIBS} ${AOCL_LIBS}" +export PKG_CONFIG_PATH="${pkg_install_dir}/lib/pkgconfig" +export CMAKE_PREFIX_PATH="${pkg_install_dir}" +prepend_path PKG_CONFIG_PATH "$pkg_install_dir/lib/pkgconfig" +prepend_path CMAKE_PREFIX_PATH "$pkg_install_dir" +EOF +fi + +load "${BUILDDIR}/setup_aocl" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "aocl" diff --git a/toolchain/scripts/stage2/install_mathlibs.sh b/toolchain/scripts/stage2/install_mathlibs.sh index ed78957882..3ad3aef336 100755 --- a/toolchain/scripts/stage2/install_mathlibs.sh +++ b/toolchain/scripts/stage2/install_mathlibs.sh @@ -25,6 +25,10 @@ case "$MATH_MODE" in "${SCRIPTDIR}"/stage2/install_mkl.sh "${with_mkl}" load "${BUILDDIR}/setup_mkl" ;; + aocl) + "${SCRIPTDIR}"/stage2/install_aocl.sh "${with_aocl}" + load "${BUILDDIR}/setup_aocl" + ;; openblas) "${SCRIPTDIR}"/stage2/install_openblas.sh "${with_openblas}" load "${BUILDDIR}/setup_openblas" diff --git a/toolchain/scripts/stage3/install_elpa.sh b/toolchain/scripts/stage3/install_elpa.sh index c077097f36..01e7980810 100755 --- a/toolchain/scripts/stage3/install_elpa.sh +++ b/toolchain/scripts/stage3/install_elpa.sh @@ -3,14 +3,16 @@ # TODO: Review and if possible fix shellcheck errors. 
# shellcheck disable=all -# Last Update in 2024-0811 +# Last Update in 2025-0308 [ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" # From https://elpa.mpcdf.mpg.de/software/tarball-archive/ELPA_TARBALL_ARCHIVE.html -elpa_ver="2024.05.001" -elpa_sha256="9caf41a3e600e2f6f4ce1931bd54185179dade9c171556d0c9b41bbc6940f2f6" +# elpa_ver="2024.05.001" +# elpa_sha256="9caf41a3e600e2f6f4ce1931bd54185179dade9c171556d0c9b41bbc6940f2f6" +elpa_ver="2025.01.001" +elpa_sha256="3ef0c6aed9a3e05db6efafe6e14d66eb88b2a1354d61e765b7cde0d3d5f3951e" source "${SCRIPT_DIR}"/common_vars.sh @@ -102,6 +104,8 @@ case "$with_elpa" in mkdir -p "build_${TARGET}" cd "build_${TARGET}" + if [ "${with_amd}" != "__DONTUSE__" ]; then + echo "AMD compiler detected, enabling special configure options" ../configure --prefix="${pkg_install_dir}/${TARGET}/" \ --libdir="${pkg_install_dir}/${TARGET}/lib" \ --enable-openmp=${enable_openmp} \ @@ -110,21 +114,51 @@ --disable-c-tests \ --disable-cpp-tests \ ${config_flags} \ - --enable-nvidia-gpu=$([ "$TARGET" = "nvidia" ] && echo "yes" || echo "no") \ + --enable-nvidia-gpu-kernels=$([ "$TARGET" = "nvidia" ] && echo "yes" || echo "no") \ --with-cuda-path=${CUDA_PATH:-${CUDA_HOME:-/CUDA_HOME-notset}} \ - --with-NVIDIA-GPU-compute-capability=$([ "$TARGET" = "nvidia" ] && echo "sm_$ARCH_NUM" || echo "sm_35") \ + --with-NVIDIA-GPU-compute-capability=$([ "$TARGET" = "nvidia" ] && echo "sm_$ARCH_NUM" || echo "sm_75") \ CUDA_CFLAGS="-std=c++14 -allow-unsupported-compiler" \ OMPI_MCA_plm_rsh_agent=/bin/false \ FC=${MPIFC} \ CC=${MPICC} \ CXX=${MPICXX} \ CPP="cpp -E" \ - FCFLAGS="${FCFLAGS} ${MATH_CFLAGS} ${SCALAPACK_CFLAGS} -ffree-line-length-none ${AVX_flag} ${FMA_flag} ${SSE4_flag} ${AVX512_flags} -fno-lto" \ + FCFLAGS="${FCFLAGS} ${MATH_CFLAGS} ${SCALAPACK_CFLAGS} ${AVX_flag} ${FMA_flag} ${SSE4_flag} ${AVX512_flags} -fno-lto" \ CFLAGS="${CFLAGS} ${MATH_CFLAGS} 
${SCALAPACK_CFLAGS} ${AVX_flag} ${FMA_flag} ${SSE4_flag} ${AVX512_flags} -fno-lto" \ CXXFLAGS="${CXXFLAGS} ${MATH_CFLAGS} ${SCALAPACK_CFLAGS} ${AVX_flag} ${FMA_flag} ${SSE4_flag} ${AVX512_flags} -fno-lto" \ - LDFLAGS="-Wl,--allow-multiple-definition -Wl,--enable-new-dtags ${MATH_LDFLAGS} ${SCALAPACK_LDFLAGS} ${cray_ldflags}" \ + LDFLAGS="${MATH_LDFLAGS} ${SCALAPACK_LDFLAGS} ${cray_ldflags} -lstdc++" \ LIBS="${SCALAPACK_LIBS} $(resolve_string "${MATH_LIBS}" "MPI")" \ > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log + # remove unsupported compile option in libtool + sed -i ./libtool \ + -e 's/\\$wl-soname //g' \ + -e 's/\\$wl--whole-archive\\$convenience \\$wl--no-whole-archive//g' \ + -e 's/\\$wl\\$soname //g' + else + ../configure --prefix="${pkg_install_dir}/${TARGET}/" \ + --libdir="${pkg_install_dir}/${TARGET}/lib" \ + --enable-openmp=${enable_openmp} \ + --enable-shared=yes \ + --enable-static=yes \ + --disable-c-tests \ + --disable-cpp-tests \ + ${config_flags} \ + --enable-nvidia-gpu-kernels=$([ "$TARGET" = "nvidia" ] && echo "yes" || echo "no") \ + --with-cuda-path=${CUDA_PATH:-${CUDA_HOME:-/CUDA_HOME-notset}} \ + --with-NVIDIA-GPU-compute-capability=$([ "$TARGET" = "nvidia" ] && echo "sm_$ARCH_NUM" || echo "sm_75") \ + CUDA_CFLAGS="-std=c++14 -allow-unsupported-compiler" \ + OMPI_MCA_plm_rsh_agent=/bin/false \ + FC=${MPIFC} \ + CC=${MPICC} \ + CXX=${MPICXX} \ + CPP="cpp -E" \ + FCFLAGS="${FCFLAGS} ${MATH_CFLAGS} ${SCALAPACK_CFLAGS} ${AVX_flag} ${FMA_flag} ${SSE4_flag} ${AVX512_flags} -fno-lto" \ + CFLAGS="${CFLAGS} ${MATH_CFLAGS} ${SCALAPACK_CFLAGS} ${AVX_flag} ${FMA_flag} ${SSE4_flag} ${AVX512_flags} -fno-lto" \ + CXXFLAGS="${CXXFLAGS} ${MATH_CFLAGS} ${SCALAPACK_CFLAGS} ${AVX_flag} ${FMA_flag} ${SSE4_flag} ${AVX512_flags} -fno-lto" \ + LDFLAGS="-Wl,--allow-multiple-definition -Wl,--enable-new-dtags ${MATH_LDFLAGS} ${SCALAPACK_LDFLAGS} ${cray_ldflags} -lstdc++" \ + LIBS="${SCALAPACK_LIBS} $(resolve_string "${MATH_LIBS}" "MPI")" \ + > configure.log 
2>&1 || tail -n ${LOG_LINES} configure.log + fi make -j $(get_nprocs) > make.log 2>&1 || tail -n ${LOG_LINES} make.log make install > install.log 2>&1 || tail -n ${LOG_LINES} install.log cd .. diff --git a/toolchain/scripts/stage3/install_libxc.sh b/toolchain/scripts/stage3/install_libxc.sh index 40cc371d41..04c96b7aad 100755 --- a/toolchain/scripts/stage3/install_libxc.sh +++ b/toolchain/scripts/stage3/install_libxc.sh @@ -3,13 +3,15 @@ # TODO: Review and if possible fix shellcheck errors. # shellcheck disable=all -# Last Update in 2023-0901 +# Last Update in 2025-0309 [ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" -libxc_ver="6.2.2" -libxc_sha256="a0f6f1bba7ba5c0c85b2bfe65aca1591025f509a7f11471b4cd651a79491b045" +# libxc_ver="6.2.2" +# libxc_sha256="a0f6f1bba7ba5c0c85b2bfe65aca1591025f509a7f11471b4cd651a79491b045" +libxc_ver="7.0.0" +libxc_sha256="e9ae69f8966d8de6b7585abd9fab588794ada1fab8f689337959a35abbf9527d" source "${SCRIPT_DIR}"/common_vars.sh source "${SCRIPT_DIR}"/tool_kit.sh source "${SCRIPT_DIR}"/signal_trap.sh @@ -30,22 +32,33 @@ case "$with_libxc" in pkg_install_dir="${INSTALLDIR}/libxc-${libxc_ver}" #pkg_install_dir="${HOME}/lib/libxc/${libxc_ver}-gcc8" install_lock_file="$pkg_install_dir/install_successful" + libxc_pkg="libxc-${libxc_ver}.tar.bz2" if verify_checksums "${install_lock_file}"; then echo "libxc-${libxc_ver} is already installed, skipping it." 
else - if [ -f libxc-${libxc_ver}.tar.gz ]; then - echo "libxc-${libxc_ver}.tar.gz is found" + if [ -f ${libxc_pkg} ]; then + echo "${libxc_pkg} is found" else - download_pkg_from_ABACUS_org "${libxc_sha256}" "libxc-${libxc_ver}.tar.gz" + #download_pkg_from_ABACUS_org "${libxc_sha256}" "${libxc_pkg}" + libxc_url="https://gitlab.com/libxc/libxc/-/archive/${libxc_ver}/${libxc_pkg}" + download_pkg_from_url "${libxc_sha256}" "${libxc_pkg}" "${libxc_url}" fi echo "Installing from scratch into ${pkg_install_dir}" [ -d libxc-${libxc_ver} ] && rm -rf libxc-${libxc_ver} - tar -xzf libxc-${libxc_ver}.tar.gz + tar -xjf ${libxc_pkg} cd libxc-${libxc_ver} # using cmake method to install libxc is neccessary for abacus - mkdir build && cd build - cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=${pkg_install_dir} \ - -DBUILD_SHARED_LIBS=YES -DCMAKE_INSTALL_LIBDIR=lib -DENABLE_FORTRAN=ON -DENABLE_PYTHON=OFF -DBUILD_TESTING=NO .. \ + mkdir build + cd build + cmake \ + -DCMAKE_BUILD_TYPE=Release \ + -DCMAKE_INSTALL_PREFIX=${pkg_install_dir} \ + -DBUILD_SHARED_LIBS=YES \ + -DCMAKE_INSTALL_LIBDIR=lib \ + -DCMAKE_VERBOSE_MAKEFILE=ON \ + -DENABLE_FORTRAN=ON \ + -DENABLE_PYTHON=OFF \ + -DBUILD_TESTING=OFF .. \ > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log make -j $(get_nprocs) > make.log 2>&1 || tail -n ${LOG_LINES} make.log make install > install.log 2>&1 || tail -n ${LOG_LINES} install.log diff --git a/toolchain/toolchain_amd.sh b/toolchain/toolchain_amd.sh new file mode 100755 index 0000000000..b8055176c6 --- /dev/null +++ b/toolchain/toolchain_amd.sh @@ -0,0 +1,37 @@ +#!/bin/bash +#SBATCH -J install +#SBATCH -N 1 +#SBATCH -n 16 +#SBATCH -o compile.log +#SBATCH -e compile.err + +# JamesMisaka in 2023-09-16 +# install abacus dependency by amd-toolchain +# one can use mpich or openmpi. +# openmpi will be faster, but may be incompatible in some cases. 
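+# Example usage of this script (a hypothetical sketch: the module names below
+# are site-specific, and it is assumed AOCC/AOCL are already in the environment):
+#   module load aocc aocl
+#   ./toolchain_amd.sh
+# or, submitted through SLURM using the #SBATCH header above:
+#   sbatch toolchain_amd.sh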
+# libtorch and libnpy are for deepks support, which can be set to =no +# if you want to run EXX calculation, you should set --with-libri=install +# mpich (and intel toolchain) is recommended for EXX support + +./install_abacus_toolchain.sh \ +--with-amd=system \ +--math-mode=aocl \ +--with-intel=no \ +--with-gcc=no \ +--with-openmpi=install \ +--with-cmake=install \ +--with-scalapack=system \ +--with-libxc=install \ +--with-fftw=system \ +--with-elpa=install \ +--with-cereal=install \ +--with-rapidjson=install \ +--with-libtorch=no \ +--with-libnpy=no \ +--with-libri=no \ +--with-libcomm=no \ +--with-4th-openmpi=no \ +--with-flang=no \ +| tee compile.log +# if you want to use OpenMPI version 4, set --with-4th-openmpi=yes +# flang is not recommended at this stage \ No newline at end of file diff --git a/toolchain/toolchain_gnu.sh b/toolchain/toolchain_gnu.sh index 26e4b71c55..bf5be6a129 100755 --- a/toolchain/toolchain_gnu.sh +++ b/toolchain/toolchain_gnu.sh @@ -13,8 +13,11 @@ # if you want to run EXX calculation, you should set --with-libri=install # mpich (and intel toolchain) is recommended for EXX support -./install_abacus_toolchain.sh --with-openmpi=install \ ---with-intel=no --with-gcc=system \ +./install_abacus_toolchain.sh \ +--with-gcc=system \ +--with-intel=no \ +--with-openblas=install \ +--with-openmpi=install \ --with-cmake=install \ --with-scalapack=install \ --with-libxc=install \ @@ -26,4 +29,6 @@ --with-libnpy=no \ --with-libri=no \ --with-libcomm=no \ -| tee compile.log \ No newline at end of file +--with-4th-openmpi=no \ +| tee compile.log +# if you want to use OpenMPI version 4, set --with-4th-openmpi=yes \ No newline at end of file diff --git a/toolchain/toolchain_intel-mpich.sh b/toolchain/toolchain_intel-mpich.sh index 059ef541dd..1f50679f1a 100755 --- a/toolchain/toolchain_intel-mpich.sh +++ b/toolchain/toolchain_intel-mpich.sh @@ -13,8 +13,10 @@ # module load mkl compiler ./install_abacus_toolchain.sh \ ---with-intel=system --math-mode=mkl \ 
---with-gcc=no --with-mpich=install \ +--with-intel=system \ +--math-mode=mkl \ +--with-gcc=no \ +--with-mpich=install \ --with-cmake=install \ --with-scalapack=no \ --with-libxc=install \ diff --git a/toolchain/toolchain_intel.sh b/toolchain/toolchain_intel.sh index e5298c570d..d12afc919d 100755 --- a/toolchain/toolchain_intel.sh +++ b/toolchain/toolchain_intel.sh @@ -14,8 +14,10 @@ # module load mkl mpi compiler ./install_abacus_toolchain.sh \ ---with-intel=system --math-mode=mkl \ ---with-gcc=no --with-intelmpi=system \ +--with-intel=system \ +--math-mode=mkl \ +--with-gcc=no \ +--with-intelmpi=system \ --with-cmake=install \ --with-scalapack=no \ --with-libxc=install \