diff --git a/content/authors/det/_index.md b/content/authors/det/_index.md deleted file mode 100644 index 05ee5733..00000000 --- a/content/authors/det/_index.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -# Display name -title: Deb Triant - -# Username (this should match the folder name) -authors: -- det - -# Is this the primary user of the site? -superuser: false - -# Role/position -role: Research Computing Scientist - -# Organizations/Affiliations -organizations: -- name: University of Virginia Research Computing - url: "https://www.rc.virginia.edu" - - -interests: -- Bioinformatics -- HPC -- Research - ---- diff --git a/content/notes/rio-intro/01-hpcuva.md b/content/notes/rio-intro/01-hpcuva.md index 0293843f..bdba33bc 100644 --- a/content/notes/rio-intro/01-hpcuva.md +++ b/content/notes/rio-intro/01-hpcuva.md @@ -17,7 +17,7 @@ These systems are used to process and store public data, internal use data, or s **Rio (High Security)** -Rio is designed for processing and storing HSD such as HIPAA, FERPA, dbGAP, etc. +Rio is designed for processing and storing HSD such as HIPAA, FERPA, Controlled Access Data (CAD), etc. Rio is not CUI certified yet; use the Ivy VM instead. diff --git a/content/notes/rio-intro/03-ivy-rio.md b/content/notes/rio-intro/03-ivy-rio.md index 4cd04e1f..48848fbf 100644 --- a/content/notes/rio-intro/03-ivy-rio.md +++ b/content/notes/rio-intro/03-ivy-rio.md @@ -22,5 +22,5 @@ Ivy Linux VMs can serve as a frontend for accessing the Rio HPC system (availabl **Rio** -Rio is used for large-scale analysis of HIPAA, FERPA, and dbGaP data. +Rio is used for large-scale analysis of HIPAA, FERPA, and CAD. 
diff --git a/content/notes/rio-intro/05-access.md b/content/notes/rio-intro/05-access.md index 0e8ac048..8d26ee91 100644 --- a/content/notes/rio-intro/05-access.md +++ b/content/notes/rio-intro/05-access.md @@ -7,7 +7,7 @@ menu: rio-intro: --- -Principal Investigators (PIs) can request access by submitting an Ivy Linux VM request through the [services app](https://services.rc.virginia.edu/). +Principal Investigators (PIs) can request access by submitting an Ivy Linux VM request through the [services app](https://services.rc.virginia.edu/). Research staff may apply for an exception. diff --git a/content/notes/rio-intro/07-userprovisioning.md b/content/notes/rio-intro/07-userprovisioning.md index 0adb564c..0fe9ffcd 100644 --- a/content/notes/rio-intro/07-userprovisioning.md +++ b/content/notes/rio-intro/07-userprovisioning.md @@ -15,6 +15,4 @@ Once a project is approved, a PI and her/his researchers must sign a RUDA (one f Each researcher must also complete the _[High Security Awareness Training (HSAT)](http://in.virginia.edu/hsat-training)_. This must be completed to permit each researcher access to the HSVPN filter. -> More information on security training and HSVPN access can be found [here](https://www.rc.virginia.edu/userinfo/ivy/) on the RC website. - diff --git a/content/notes/rio-intro/08-connecting.md b/content/notes/rio-intro/08-connecting.md index d9d75395..c25c23f5 100644 --- a/content/notes/rio-intro/08-connecting.md +++ b/content/notes/rio-intro/08-connecting.md @@ -7,11 +7,11 @@ menu: rio-intro: --- -Before connecting to your VM, you must install the following on your personal machine/device. - * Install the Cisco AnyConnect Secure Mobility Client. - * Install Opswat. - * Install Duo MFA on personal smartphone. - * See [here on the RC website](https://www.rc.virginia.edu/userinfo/ivy/) for details. +Before connecting to your VM, you must install the following on your workstation. + * Cisco AnyConnect Secure Mobility Client. + * Opswat. 
+ * Duo MFA on personal smartphone. + * See [here on the RC website](https://www.rc.virginia.edu/userinfo/ivy/) for details. There are two ways to connect to your Linux VM: diff --git a/content/notes/rio-intro/09-access-ssh.md b/content/notes/rio-intro/09-access-ssh.md index 04c85249..3d686281 100644 --- a/content/notes/rio-intro/09-access-ssh.md +++ b/content/notes/rio-intro/09-access-ssh.md @@ -16,7 +16,7 @@ Steps: ```bash ssh [mst3k@10.xxx.xxx.xxx](mailto:mst3k@10.xxx.xxx.xxx) ``` -, where `mst3k` is replaced with your user ID and the `x`'s are replaced with your VM's IP address (given in the Services app). +`mst3k` is replaced with your user ID and the `x`'s are replaced with your VM's IP address (given in the Services app). {{< figure src=/notes/rio-intro/img/rio-intro_5.png alt="Screenshot showing a terminal window in MobaXterm with the command ssh mst3k@10.xxx.xxx.xxx entered, demonstrating how a user connects to their Rio VM over SSH using their user ID and VM IP address. The command line interface displays a green and yellow bar with the date, time, and file path /home/mobaxterm." width=90% height=90% >}} diff --git a/content/notes/rio-intro/11-access-browser.md b/content/notes/rio-intro/11-access-browser.md index 168760e3..2034b4b4 100644 --- a/content/notes/rio-intro/11-access-browser.md +++ b/content/notes/rio-intro/11-access-browser.md @@ -12,11 +12,10 @@ Steps: 1. Start the HSVPN 2. Open a web browser and enter the IP address for your VM (e.g., `https://10.xxx.xxx.xxx`). If you get a warning message, you may need to click on Advanced Settings and/or a Connect Anyway option, depending on your web browser. -3. Upon login, you will see a selection of different graphical applications that you can use. -{{< figure src=/notes/rio-intro/img/rio-intro_6.jpg caption="Graphical application options available when accessing a Rio VM through a web browser: JupyterHub, RStudio, and FastX Desktop." 
alt="Screenshot showing the login interface for accessing a Rio virtual machine through a web browser. Three application icons are displayed — JupyterHub, RStudio Server, and FastX Desktop — representing the graphical tools users can launch on their VM after connecting via HTTPS." width=90% height=90% >}} +Logging in will take you to a web application called Open OnDemand (OOD). + -These applications run on the VM itself and do not use HPC resources. diff --git a/content/notes/rio-intro/12-VM-storage.md b/content/notes/rio-intro/12-VM-storage.md new file mode 100644 index 00000000..da59765e --- /dev/null +++ b/content/notes/rio-intro/12-VM-storage.md @@ -0,0 +1,26 @@ +--- +title: VM Storage +date: 2025-11-12-03:53:56Z +type: docs +weight: 653 +menu: + rio-intro: +--- + +Storage is split across two sets of directories. + +1. Personal directory + +The personal directory for individual data storage is mounted under `/home/$USER`. Only a small amount of space is allotted for `/home` directories per VM. Individual home folders are personal and not shareable. + +2. Shared storage space + +Shared storage space is mounted under `/standard/ivy-hip-name`, where `ivy-hip-name` is replaced by your Ivy project's Grouper group name. + +The default Research Standard Storage allocation is 1TB. PIs can ask for more when first requesting the Ivy project. Storage can be resized using our [Storage Request Form](https://www.rc.virginia.edu/form/storage/). + +### Rio Caveat + +One caveat to consider when working in the Rio environment is that `/home` is not mounted on Rio compute nodes. This means that if your compute jobs reference any files in your `/home` storage, they will not be found. Ivy VMs not using Rio can ignore this caveat. + +For Rio, we recommend working exclusively out of the VM's Research Standard storage. Individual user subdirectories are automatically created under `/standard` to organize the space.
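The recommendation above can be sketched in a few shell lines; `ivy-hip-name` and the user ID `mst3k` are the placeholder names used throughout these notes, not real values:

```shell
# Build the recommended working path under shared storage instead of /home.
# "ivy-hip-name" and "mst3k" are placeholders from these notes.
GROUP="ivy-hip-name"
USERID="mst3k"
WORKDIR="/standard/${GROUP}/${USERID}"
echo "$WORKDIR"   # job scripts should reference files under this path, not /home
```

Because Rio compute nodes cannot see `/home`, every input and output path in a Rio job script should start with `/standard/`.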
\ No newline at end of file diff --git a/content/notes/rio-intro/17-transferring-data.md b/content/notes/rio-intro/13-transferring-data.md similarity index 85% rename from content/notes/rio-intro/17-transferring-data.md rename to content/notes/rio-intro/13-transferring-data.md index 6c8a70a4..a349d2a8 100644 --- a/content/notes/rio-intro/17-transferring-data.md +++ b/content/notes/rio-intro/13-transferring-data.md @@ -11,7 +11,7 @@ menu: **UVA IVY-DTN** is the official collection for moving files into High-Security Research Standard Storage. -To transfer data to High-Security Research Standard Storage, please see the special instructions [here on the RC website](https://www.rc.virginia.edu/userinfo/ivy/). +To transfer data to High-Security Research Standard Storage, please see the special Globus instructions [here on the RC website](https://www.rc.virginia.edu/userinfo/ivy/). > Ensure that you are __NOT__ connected to the HSVPN. Data transfer will not work if you are connected to the HSVPN. diff --git a/content/notes/rio-intro/14-OOD.md b/content/notes/rio-intro/14-OOD.md new file mode 100644 index 00000000..589c08f6 --- /dev/null +++ b/content/notes/rio-intro/14-OOD.md @@ -0,0 +1,13 @@ +--- +title: Open OnDemand +date: 2025-11-12-03:53:56Z +type: docs +weight: 855 +menu: + rio-intro: +--- + +Open OnDemand is a web application that allows for HPC access through a graphical browser window. + + +{{< figure src=/notes/rio-intro/img/OOD.png alt="Screenshot of the Open OnDemand web interface. The top navigation bar shows menu options: 'Jobs,' 'Clusters,' 'Interactive Apps,' and 'My Interactive Sessions.' The center of the page displays the Open OnDemand logo with the text: 'OnDemand provides an integrated, single access point for all of your HPC resources.'" caption="Features of the application are accessible using the top gray bar: Jobs, Clusters, and Interactive Apps."
width=90% height=90% >}} \ No newline at end of file diff --git a/content/notes/rio-intro/14-fastx.md b/content/notes/rio-intro/14-fastx.md deleted file mode 100644 index fb1673e5..00000000 --- a/content/notes/rio-intro/14-fastx.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: Working with Files Using FastX -date: 2025-11-12-03:53:56Z -type: docs -weight: 700 -menu: - rio-intro: ---- - -FastX Desktop is a great tool for managing files and launching applications with a graphical user interface. - -{{< figure src=/notes/rio-intro/img/rio-intro_7.jpg caption="FastX Dashboard" alt="Screenshot of the FastX dashboard showing the “My Sessions” window with options to launch a MATE desktop environment or a terminal session. A sidebar includes buttons for launching a session and managing bookmarks." width=80% height=80% >}} - -Once signed-in, you can select either a "MATE" (GUI Linux Desktop) or "Terminal" session - -MATE sessions provide a terminal (black box in menu bar) and a file browser (white box) in menu bar. The CAJA file browser can help with management and navigation of the filesystem. - -{{< figure src=/notes/rio-intro/img/rio-intro_8.jpg caption="Example of the MATE Desktop environment on a Rio VM, showing the CAJA file browser open to the user’s home directory." alt="Screenshot of the MATE Desktop environment with the CAJA file browser window open. The desktop background shows icons for “Computer,” “egg3xa’s Home,” and “Trash.” The file browser displays folders such as Desktop, Documents, Downloads, Music, Pictures, Public, R, Templates, and an untitled Jupyter notebook file." width=90% height=90% >}} - -More information on using FastX can be found on our [website](https://www.rc.virginia.edu/userinfo/hpc/logintools/fastx/) and in our [Intro to HPC slides](https://learning.rc.virginia.edu/notes/hpc-intro/connecting_to_the_system/connecting_fastx_desktop/). 
- - - - diff --git a/content/notes/rio-intro/15-active-jobs.md b/content/notes/rio-intro/15-active-jobs.md new file mode 100644 index 00000000..d02a247b --- /dev/null +++ b/content/notes/rio-intro/15-active-jobs.md @@ -0,0 +1,13 @@ +--- +title: Active Jobs +date: 2025-11-12-03:53:56Z +type: docs +weight: 855 +menu: + rio-intro: + parent: Open OnDemand +--- + +The Jobs tab on the OOD menu bar contains the Active Jobs page. This shows all jobs currently running or queued on all partitions for all users. You can filter the view to **All Jobs** or **Your Jobs**. + +The Filter search bar allows you to narrow jobs by user, queue, job name, job ID, or job status (running, queued, completed). More job details can be viewed using the dropdown to the left of a given job. Completed jobs will be visible only for a short while as they are exiting. \ No newline at end of file diff --git a/content/notes/rio-intro/15-file-editing.md b/content/notes/rio-intro/15-file-editing.md deleted file mode 100644 index 2f34aaf6..00000000 --- a/content/notes/rio-intro/15-file-editing.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: File Editing in Linux VM -date: 2025-11-12-03:53:56Z -type: docs -weight: 750 -menu: - rio-intro: ---- - -Editing files in a Linux virtual machine can be done using several text editors, depending on whether you are working in a terminal window or within the desktop environment. - -When working in a Terminal Window, use command-line editors such as `vi`, `emacs`, or `pluma`. - -When working in the Desktop environment, you can use the Pluma Text Editor.
- diff --git a/content/notes/rio-intro/16-clusters.md b/content/notes/rio-intro/16-clusters.md new file mode 100644 index 00000000..ba76ba37 --- /dev/null +++ b/content/notes/rio-intro/16-clusters.md @@ -0,0 +1,11 @@ +--- +title: Clusters +date: 2025-11-12-03:53:56Z +type: docs +weight: 860 +menu: + rio-intro: + parent: Open OnDemand +--- + +The Clusters tab on the OOD menu bar opens a new browser tab with a Linux command line interface for shell access. This gives you terminal access to the frontend Ivy VM, similar to logging in via SSH. Here, you can manipulate files, install software packages for Python & R, and submit command line jobs. Running processes here does not utilize Rio compute node resources. \ No newline at end of file diff --git a/content/notes/rio-intro/16-storage.md b/content/notes/rio-intro/16-storage.md deleted file mode 100644 index 5e2dd1c2..00000000 --- a/content/notes/rio-intro/16-storage.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: Storing Data -date: 2025-11-12-03:53:56Z -type: docs -weight: 800 -menu: - rio-intro: ---- - -Virtual machines do not come with any significant disk storage of their own. - -A PI specifies the storage space they would like to have when requesting access to Ivy. - -The default allocation is of 1TB of Research Standard Storage. - -This storage is mounted under `/data/ivy-hip-name`, where `ivy-hip-name` is replaced by the name of your Ivy project's Grouper group name. - -Storage can be resized upon request. Submit your request [here on the RC website](https://www.rc.virginia.edu/form/storage/). - -JupyterLab, RStudio, Rio, and FastX can all save data into the shared storage at `/data/ivy-hip-name`. 
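From the Clusters shell described above, Python and R packages can be installed at the user level without administrator rights; a hedged sketch, with the live command showing where user-level Python installs land and the package names purely illustrative:

```shell
# Show the base directory for user-level Python installs (typically ~/.local).
python3 -m site --user-base

# Illustrative install commands; package names are examples, not from the notes.
# pip install --user biopython
# Rscript -e 'install.packages("data.table")'
```

Anything installed this way lives in your own account and, like other shell activity on the frontend VM, does not consume Rio compute node resources.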
- - diff --git a/content/notes/rio-intro/17-interactive-apps.md b/content/notes/rio-intro/17-interactive-apps.md new file mode 100644 index 00000000..13557a80 --- /dev/null +++ b/content/notes/rio-intro/17-interactive-apps.md @@ -0,0 +1,17 @@ +--- +title: Interactive Apps +date: 2025-11-12-03:53:56Z +type: docs +weight: 865 +menu: + rio-intro: + parent: Open OnDemand +--- + +Open OnDemand offers GUI applications to interactively run jobs on Rio compute nodes. Job submissions are managed by a resource manager called Slurm. + +The following applications are available: +- JupyterLab +- RStudio Server +- FastX Desktop + diff --git a/content/notes/rio-intro/18-preinstalled-software.md b/content/notes/rio-intro/18-preinstalled-software.md deleted file mode 100644 index 54706995..00000000 --- a/content/notes/rio-intro/18-preinstalled-software.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Preinstalled Software -date: 2025-11-12-03:53:56Z -type: docs -weight: 900 -menu: - rio-intro: ---- - -Every Virtual Machine comes with a base installation of software by default. See [here on the RC website](https://www.rc.virginia.edu/userinfo/ivy/ivy-linux-sw/) for a list of preinstalled software and how to request new software installations. - -Additional software packages are pre-approved and available for installation upon request. Software not already approved will need to undergo a security evaluation to determine if it is suitable for a High Security environment. - -Python and R packages are available to users through the normal `pip`, `conda`, and `CRAN` library installation methods. - -Access to installed software is done using the `lmod` Module System. 
- - diff --git a/content/notes/rio-intro/18-requesting-session.md b/content/notes/rio-intro/18-requesting-session.md new file mode 100644 index 00000000..c084749f --- /dev/null +++ b/content/notes/rio-intro/18-requesting-session.md @@ -0,0 +1,39 @@ +--- +title: Requesting a Session +date: 2025-11-12-03:53:56Z +type: docs +weight: 900 +menu: + rio-intro: + parent: Open OnDemand +--- + +To request an interactive session, select one of the applications from the dropdown. This will prompt you to fill in the desired resources for your interactive job. + +{{< figure src=/notes/rio-intro/img/jupyterlab.png alt="Open OnDemand JupyterLab launch form showing fields: Partition set to Standard, 1 hour, 1 core, 6 GB memory, allocation hpc_build, and a Launch button." width=45% height=45% >}} + +The above example is for starting a JupyterLab session. + +### Choosing Resource Requests + +**Partition:** Standard or GPU (only select GPU if you are using GPU-enabled code). + +**Time:** Time limit for your session. Once reached, your session will terminate along with any running code. This cannot be adjusted once set. + +**Cores:** The number of CPU cores, used for parallel processing. Your code must be modified to take advantage of multiple cores. + +**Memory:** A good rule of thumb is to request 2 to 3 times the size of the data that you are reading in or generating. + +**Optional Slurm Option:** This field can be used to pass any other Slurm option. + +### Queueing the Session + +Once you've filled out the resource request form, click the Launch button at the bottom to queue your job. While queued, the job will wait for the resources you've asked for to become available. Requests for more resources (more cores, more memory, more time) may wait longer. + +{{< figure src=/notes/rio-intro/img/queue.png alt="Open OnDemand interface showing a queued JupyterLab session with creation time, one-hour request, session ID link, and a Delete button."
caption="Example of a JupyterLab session waiting in the queue on Open OnDemand." width=90% height=90% >}} + +### Launching the Session + +When the job has started, the status will change from Queued to Running. A button will appear to open a browser, launching the interactive session. + +{{< figure src=/notes/rio-intro/img/runningjob.png alt="Open OnDemand interface showing a running JupyterLab session. The display includes the host node name, creation time, remaining time, session ID link, a Delete button, and a blue 'Connect to Jupyter' button." caption="Example of a JupyterLab session that has started running and is ready to connect in Open OnDemand." width=90% height=90% >}} diff --git a/content/notes/rio-intro/19-interactive-desktop.md b/content/notes/rio-intro/19-interactive-desktop.md new file mode 100644 index 00000000..7a2ea06e --- /dev/null +++ b/content/notes/rio-intro/19-interactive-desktop.md @@ -0,0 +1,23 @@ +--- +title: Interactive Desktop +date: 2025-11-12-03:53:56Z +type: docs +weight: 950 +menu: + rio-intro: + parent: Open OnDemand +--- + +The Interactive Desktop application launches a virtual desktop session. + +The Desktop app is a great tool for managing files and launching applications with a graphical user interface. + +A Firefox browser is available and can be used to browse the web. The browser must be launched from a terminal with the +network proxy set. 
Use the following commands in a terminal: +```bash +export HTTPS_PROXY=http://figgis-s.hpc.virginia.edu:8080 + +export HTTP_PROXY=http://figgis-s.hpc.virginia.edu:8080 + +firefox +``` \ No newline at end of file diff --git a/content/notes/rio-intro/19-modules.md b/content/notes/rio-intro/19-modules.md deleted file mode 100644 index acb8d479..00000000 --- a/content/notes/rio-intro/19-modules.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Modules -date: 2025-11-12-03:53:56Z -type: docs -weight: 950 -menu: - rio-intro: - parent: Preinstalled Software ---- - -Any application software that you want to use will need to be loaded with the `module load` command. - -For example: -```bash -module load apptainer/1.3.1 -``` -You will need to load the module any time that you create a new shell. This includes every time that you log out and back in as well as every time that you run a batch job on a compute node. - -**Module Commands** - -Some basic commands to use `lmod`: - -`module avail` lists every module avaiable to load. - -`module key ` searches for modules in a specified category (i.e. `module key bio`). - -`module spider ` prints information about the software including different offered versions. - -`module load ` loads the desired software. - -`module load ` loads a specific version of a module. - -`module unload ` unloads the desired software. - -`module list` prints all currently loaded modules. - -`module purge` unloads all modules. 
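A quick sanity check, using the same proxy host as the snippet above, confirms the variables are exported and visible to child processes before the browser is launched:

```shell
# Export the proxy settings and confirm they are set in the environment.
export HTTPS_PROXY=http://figgis-s.hpc.virginia.edu:8080
export HTTP_PROXY=$HTTPS_PROXY
env | grep -E '^HTTPS?_PROXY='   # both variables should be listed
```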
diff --git a/content/notes/rio-intro/24-support.md b/content/notes/rio-intro/20-officehours.md similarity index 100% rename from content/notes/rio-intro/24-support.md rename to content/notes/rio-intro/20-officehours.md diff --git a/content/notes/rio-intro/20-slurm.md b/content/notes/rio-intro/20-slurm.md deleted file mode 100644 index d88cbfdf..00000000 --- a/content/notes/rio-intro/20-slurm.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: Slurm -date: 2025-11-12-03:53:56Z -type: docs -weight: 1000 -menu: - rio-intro: ---- - -**Slurm** manages the hardware resources on the cluster (e.g. compute nodes/cpu cores, compute memory, etc.). - -Slurm allows you to request resources within the cluster to run your code. It is used for submitting jobs to compute nodes from an access point (Ivy VM). - -Frontends are intended for editing, compiling, and very short test runs. - -Production jobs go to the compute nodes through the resources manager. - -#### Example Slurm Script - -A Slurm script is a bash script with Slurm directives (#SBATCH) and command-line instructions for running your program. - -```bash -#!/bin/bash -#SBATCH --nodes=1 #total number of nodes for the job -#SBATCH --ntasks-per-node=1 #how many copies of code to run -#SBATCH --time=1-12:00:00 #amount of time for the whole job -#SBATCH --partition=standard #the queue/partition to run on -#SBATCH --account=myGroupName #the account/allocation to use - -module purge -module load goolf R #load modules that my job needs -Rscript myProg.R #command-line execution of my job -``` - -#### Submitting a Slurm Job - -To submit the Slurm command file to the queue, use the `sbatch` command at the command line prompt. - -`/home` only exists on VM. Compute nodes do not have access to `/home`. - -Job submission needs to be done from `/standard/storage/`. - -It is recommended to include the complete file paths in the script. 
For example, if the script on the previous page is in a file named `job_script.slurm`, we can submit it as follows: -```bash -[jus2yw@ivy-tst-rc-1 ivy-tst-rc]$ sbatch job_script.slurm -``` - -Output: -```plaintext -Submitted batch job 18316 -``` - -#### Submitting an Interactive Slurm Job - - -To submit an interactive Slurm job to the queue, use the `ijob` command at the command line prompt. - -For example, if you want to run an interactive application on a compute node in the standard queue using one cpu, we can submit it as follows: -```bash --bash-4.1$ ijob –c 1 –p standard –A MyGroupName -t 06:00:00 -``` -Output: -```plaintext -salloc: Pending job allocation 21640112 -salloc: job 21640112 queued and waiting for resources salloc: job 21640112 has been allocated resources -salloc: Granted job allocation 21640112 srun: Step created for job 21640112 -udc-aw34-21c0-teh1m$ -``` - -Slurm documentation: - * [https://www.rc.virginia.edu/userinfo/hpc/slurm/](https://www.rc.virginia.edu/userinfo/hpc/slurm/) - * [https://slurm.schedmd.com/](https://slurm.schedmd.com/) \ No newline at end of file diff --git a/content/notes/rio-intro/21-partitions.md b/content/notes/rio-intro/21-partitions.md deleted file mode 100644 index c9785db5..00000000 --- a/content/notes/rio-intro/21-partitions.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Partitions (Queues) -date: 2025-11-12-03:53:56Z -type: docs -weight: 1050 -menu: - rio-intro: - parent: Slurm ---- - -Rio has several partitions (or queues) for job submissions. - -You will need to specify a partition when you submit a job. - -To see the partitions that are available to you, type `qlist` at the command-line prompt. - - -{{< figure src=/notes/rio-intro/img/rio-intro_9.png caption="Command output" alt="Screenshot of terminal output displaying Slurm partition information. 
The table lists four queues—neo, neo-gpu, standard, and gpu—along with their total and free cores, running and pending jobs (all zero), and time limits of seven days for most partitions and three days for the GPU queue." width=75% height=75% >}} - -> The `neo` partitions are exclusive to a specific group and are not for general users of Rio. - diff --git a/content/notes/rio-intro/22-jobstatus.md b/content/notes/rio-intro/22-jobstatus.md deleted file mode 100644 index 9ee4cf92..00000000 --- a/content/notes/rio-intro/22-jobstatus.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -title: Checking Job Status -date: 2025-11-12-03:53:56Z -type: docs -weight: 1400 -menu: - rio-intro: - parent: Slurm ---- - -To display the status of only your _active_ jobs, type: `squeue –u `. - -The `squeue` command will show pending jobs and running jobs, but not failed, canceled or completed job. - -```bash --bash-4.2$ squeue –u mst3k -``` -Output: - -{{< table >}} -| JOBID | PARTITION | NAME | USER | ST | TIME | NODES | NODELIST(REASON) | -| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | -| 18316 | standard | job_sci | mst3k | R | 1:45 | 1 | udc-aw38-34-l | -{{< /table >}} - -A job's status is indicated by: - -`PD` – pending - -`R` – running - -`CG` – exiting - - -To display the status of all jobs, type: `sacct –S `. 
- -```bash -[jus2yw@ivy-tst-rc-1 ivy-tst-rc]$ sacct –S 2025-01-01 -``` -Output: -{{< table >}} -| JobID | JobName | Partition | Account | AllocCPUS | State | ExitCode| -| :-: | :-: | :-: | :-: | :-: | :-: | :-: | -| 3104009 | RAxML_NoC+ | standard | hpc_build | 20 | COMPLETED | 0:0 | -| 3104009.bat+ | batch | | hpc_build | 20 | COMPLETED | 0:0 | -| 3104009.0 | raxmlHPC-+ | | hpc_build | 20 | COMPLETED | 0:0 | -| 3108537 | sys/dashb+ | gpu | hpc_build | 1 | CANCELLED+ | 0:0 | -| 3108537.bat+ | batch | | hpc_build | 1 | CANCELLED | 0:15 | -| 3108562 | sys/dashb+ | gpu | hpc_build | 1 | TIMEOUT | 0:0 | -| 3108562.bat+ | batch | | hpc_build | 1 | CANCELLED | 0:15 | -| 3109392 | sys/dashb+ | gpu | hpc_build | 1 | TIMEOUT | 0:0 | -| 3109392.bat+ | batch | | hpc_build | 1 | CANCELLED | 0:15 | -| 3112064 | srun | gpu | hpc_build | 1 | FAILED | 1:0 | -| 3112064.0 | bash | | hpc_build | 1 | FAILED | 1:0 | -{{< /table >}} - -The `sacct` command lists all jobs (pending, running, completed, canceled, failed, etc.) since the specified date. - diff --git a/content/notes/rio-intro/23-canceling-job.md b/content/notes/rio-intro/23-canceling-job.md deleted file mode 100644 index 40bea88e..00000000 --- a/content/notes/rio-intro/23-canceling-job.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: Canceling a Job -date: 2025-11-12-03:53:56Z -type: docs -weight: 1450 -menu: - rio-intro: - parent: Slurm ---- - - -To delete a job from the queue, use the `scancel` command with the job ID number at the command line prompt: `-bash-4.2$ scancel 18316` - -To cancel all your jobs, run this command: `-bash-4.2$ scancel –u $USER` - -We use `scontrol` to print information of a running job: `scontrol show job ` - -**More Commands:** - -`sacct` will return accounting information about your job. See [Slurm's](https://slurm.schedmd.com/sacct.html)[ ](https://slurm.schedmd.com/sacct.html)[docuementation](https://slurm.schedmd.com/sacct.html) for a full list of options. 
- * Use the option `–j ` to inspect a particular job. - - -`seff` will return information about the utilization (called the “efficiency”) of core and memory. - diff --git a/content/notes/rio-intro/img/OOD.png b/content/notes/rio-intro/img/OOD.png new file mode 100644 index 00000000..4d354dbf Binary files /dev/null and b/content/notes/rio-intro/img/OOD.png differ diff --git a/content/notes/rio-intro/img/jupyterlab.png b/content/notes/rio-intro/img/jupyterlab.png new file mode 100644 index 00000000..157bcbe1 Binary files /dev/null and b/content/notes/rio-intro/img/jupyterlab.png differ diff --git a/content/notes/rio-intro/img/queue.png b/content/notes/rio-intro/img/queue.png new file mode 100644 index 00000000..a8c04338 Binary files /dev/null and b/content/notes/rio-intro/img/queue.png differ diff --git a/content/notes/rio-intro/img/runningjob.png b/content/notes/rio-intro/img/runningjob.png new file mode 100644 index 00000000..f591390c Binary files /dev/null and b/content/notes/rio-intro/img/runningjob.png differ diff --git a/layouts/partials/site_head.htmlold b/layouts/partials/site_head_old.txt similarity index 100% rename from layouts/partials/site_head.htmlold rename to layouts/partials/site_head_old.txt diff --git a/layouts/partials/widgets/about.html b/layouts/partials/widgets/about.html index e58e8f52..11877405 100644 --- a/layouts/partials/widgets/about.html +++ b/layouts/partials/widgets/about.html @@ -23,14 +23,14 @@
{{ if site.Params.avatar.gravatar }} - Avatar of {{$person_page.Title}} + {{$person_page.Title}} {{ else if $avatar }} {{ $avatar_image := $avatar.Fill "270x270 Center" }} - Avatar of {{$person_page.Title}} + {{$person_page.Title}} {{ end }}
-

{{ $person_page.Title }}

+

{{ $person_page.Title }}

{{ with $person.role }}

{{ . | markdownify | emojify }}

{{ end }} {{ range $person.organizations }}