1 change: 0 additions & 1 deletion src/SUMMARY.md
@@ -37,7 +37,6 @@

- [Parallel Computing](./chapter4/chapter4.md)
- [What is Parallel Computing?](./chapter4/parallel-computing.md)
- [Shared Memory Resources](./chapter4/shared-memory.md)
- [Multithreading](./chapter4/multithreading.md)
- [OpenMP](./chapter4/openmp.md)
- [Challenges](./chapter4/challenges.md)
62 changes: 62 additions & 0 deletions src/chapter4/challenges.md
@@ -1 +1,63 @@
# Challenges

🚧 Under Construction 🏗️

## Task 1 - Parallelise a `for` Loop

Goal: To create the array `[0, 1, 2, ..., 19]`

1. Git clone [HPC-Training-Challenges](https://github.com/MonashDeepNeuron/HPC-Training-Challenges)
2. Go to the directory `challenges/parallel-computing`. Compile `array.c` and execute it. Check the run time of the serial code
3. Add the appropriate `#pragma` directive to parallelise the loop (a sketch follows this list)
4. Compile the code again
5. Run the parallel code and check the improved run time
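
For reference, a minimal sketch of the parallelised loop is below. It assumes `array.c` fills the array in a plain `for` loop; the actual file in the challenge repository may differ. Compile it with `gcc -fopenmp`.

```c
#include <omp.h>
#include <stdio.h>

#define N 20

int main(void) {
    int array[N];

    // Split the loop iterations across the available threads
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        array[i] = i;
    }

    for (int i = 0; i < N; i++) {
        printf("%d ", array[i]);
    }
    printf("\n");
    return 0;
}
```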

## Task 2 - Run Task 1 on the HPC Cluster

1. Check the available partitions with `show_cluster`
2. Modify `RunHello.sh` (a sketch of the relevant lines follows this list)
3. `sbatch RunHello.sh`
4. `cat slurm-<job_id>.out` and check the run time

> You can also use [Strudel Web](https://beta.desktop.cvl.org.au/login) to run the script without `sbatch`
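
For reference, the key lines of a `RunHello.sh`-style script might look like the sketch below. The partition and executable names are placeholders, and the actual script in the repository may differ:

```bash
#!/bin/bash
#SBATCH --ntasks=1                 # run a single process
#SBATCH --cpus-per-task=4          # cores for that process = OpenMP threads
#SBATCH --partition=<partition>    # placeholder: pick one from show_cluster

# Match the OpenMP thread count to the requested cores
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
time ./array                       # assumed executable from Task 1
```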

## Task 3 - Reduction Clause

Goal: To find the sum of the array elements

1. Compile `reduction.c` and execute it. Check the run time
2. Add the appropriate `#pragma` directive with a reduction clause (see the sketch below)
3. Compile `reduction.c` again
4. Run the parallel code and check the improved run time. Make sure you get the same result as the serial code

> Run `module load gcc` to use a newer version of GCC if you get an error mentioning something like `-std=c99`
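
A minimal sketch of the reduction, assuming `reduction.c` sums an array in a plain loop (names and sizes here are illustrative):

```c
#include <stdio.h>

#define N 1000

int main(void) {
    int array[N];
    long sum = 0;

    for (int i = 0; i < N; i++) {
        array[i] = i;
    }

    // Each thread accumulates its own partial sum; OpenMP combines
    // the partial sums into `sum` when the loop finishes
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        sum += array[i];
    }

    printf("Sum: %ld\n", sum);  // should match the serial result (499500)
    return 0;
}
```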

## Task 4 - Private Clause

Goal: To square each value in the array and find the sum of the squares

1. Compile `private.c` and execute it. Check the run time. You will need to `#include` the standard header `<math.h>` and link it (e.g. `gcc -fopenmp -o private private.c -lm`)
2. Add the appropriate `#pragma` directive with a private clause (see the sketch below)
3. Compile `private.c` again
4. Run the parallel code and check the improved run time
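
A minimal sketch of the private clause, assuming `private.c` uses a scratch variable for the squared value (again, names are illustrative):

```c
#include <math.h>
#include <stdio.h>

#define N 1000

int main(void) {
    double array[N];
    double squared;
    double sum = 0.0;

    for (int i = 0; i < N; i++) {
        array[i] = i;
    }

    // private(squared) gives each thread its own copy of the scratch
    // variable; reduction(+:sum) combines the per-thread sums
    #pragma omp parallel for private(squared) reduction(+:sum)
    for (int i = 0; i < N; i++) {
        squared = pow(array[i], 2);
        sum += squared;
    }

    printf("Sum of squares: %f\n", sum);
    return 0;
}
```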

## Task 5 - Calculate Pi Using the Monte Carlo Algorithm

Goal: To estimate the value of pi via simulation

- No instructions for this task. Use what you have learnt in the previous tasks to parallelise the code!
- You should get a result close to pi (3.1415…)

A short explanation of the Monte Carlo algorithm:

[YouTube Video: Monte Carlo Simulation](https://www.youtube.com/watch?v=7ESK5SaP-bc&ab_channel=MarbleScience)

![Monte Carlo](imgs/Monte%20Carlo.png)
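
If you get stuck, a sketch of one possible parallel estimator is below. Note that `rand()` is not thread-safe, so this sketch uses the POSIX `rand_r()` with a per-thread seed; your own approach may differ:

```c
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const long total = 100000000;
    long inside = 0;

    #pragma omp parallel reduction(+:inside)
    {
        // Give every thread its own random seed
        unsigned int seed = omp_get_thread_num() + 1;

        #pragma omp for
        for (long i = 0; i < total; i++) {
            double x = (double)rand_r(&seed) / RAND_MAX;
            double y = (double)rand_r(&seed) / RAND_MAX;
            if (x * x + y * y <= 1.0) {
                inside++;  // point landed inside the quarter circle
            }
        }
    }

    // quarter circle area / unit square area = pi / 4
    printf("Estimated pi: %f\n", 4.0 * (double)inside / (double)total);
    return 0;
}
```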

## Bonus - Laplace Equation to Calculate the Temperature of a Square Plate

- Modify `laplace2d.c`
- Use the `Makefile` to compile the code
- Make the program as fast as you can

A brief outline of the Laplace equation algorithm:

![Laplace equation algorithm](imgs/Pasted%20image%2020230326142826.png)
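
As a starting point, the core of a Jacobi-style sweep for the 2D Laplace equation might look like this sketch; `laplace2d.c` in the repository may structure it differently:

```c
#include <math.h>

#define N 128

// One Jacobi sweep: each interior point becomes the average of its
// four neighbours. Returns the largest change, so the caller can
// iterate sweeps until the value drops below a tolerance.
double sweep(double A[N][N], double Anew[N][N]) {
    double max_err = 0.0;

    #pragma omp parallel for reduction(max:max_err)
    for (int i = 1; i < N - 1; i++) {
        for (int j = 1; j < N - 1; j++) {
            Anew[i][j] = 0.25 * (A[i + 1][j] + A[i - 1][j] +
                                 A[i][j + 1] + A[i][j - 1]);
            max_err = fmax(max_err, fabs(Anew[i][j] - A[i][j]));
        }
    }
    return max_err;
}
```
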
Binary file added src/chapter4/imgs/4 Parallel Computing OpenMP.gif
Binary file added src/chapter4/imgs/Hybrid Parallel Programming.png
Binary file added src/chapter4/imgs/Monte Carlo.png
Binary file added src/chapter4/imgs/OpenMP and Directive.png
Binary file added src/chapter4/imgs/Parallel Computing Example.png
Binary file added src/chapter4/imgs/Pasted image 20230325113147.png
Binary file added src/chapter4/imgs/Pasted image 20230325113254.png
Binary file added src/chapter4/imgs/Pasted image 20230325113303.png
Binary file added src/chapter4/imgs/Pasted image 20230325113312.png
Binary file added src/chapter4/imgs/Shared Memory Architecture.png
Binary file added src/chapter4/imgs/Slurm Architecture.png
Binary file added src/chapter4/imgs/Thread vs Processes.png
Binary file added src/chapter4/imgs/Threads Visualisation.png
Binary file added src/chapter4/imgs/Time Command.png
Binary file added src/chapter4/imgs/Top Command.png
Binary file added src/chapter4/imgs/sbatch Command.png
Binary file added src/chapter4/imgs/show_cluster Command.png
Binary file added src/chapter4/imgs/squeue Command.png
66 changes: 65 additions & 1 deletion src/chapter4/multithreading.md
@@ -1 +1,65 @@
# Multithreading
# Multithreading on HPC

## Thread vs Process

![Thread vs Processes](imgs/Thread%20vs%20Processes.png)

When a computer runs a program, your source code is loaded into RAM and a process is started.
A **process** is a collection of code, memory, data and other resources.
A process runs in a unique address space, so two processes cannot see each other's memory.

A **thread** is a sequence of code that is executed inside the scope of a **process**. You can (usually) have multiple **threads** executing concurrently within the same process.
**Threads** can view the memory (i.e. variables) of other threads within the same process.

A **multiprocessing** system has two or more processors, whereas **multithreading** is a program execution technique that allows a single process to have multiple threads executing concurrently.

## Architecture of an HPC Cluster (Massive)

![Slurm Architecture](imgs/Slurm%20Architecture.png)

The key in HPC is to write parallel code that utilises multiple nodes at the same time. Essentially, the more computers you use, the faster your application runs.

## Using Massive

### Find Available Partition

Command:
```bash
show_cluster
```

![show_cluster Command](imgs/show_cluster%20Command.png)

Before you run your job, it’s important to check the available resources.

`show_cluster` is a good command for checking the available resources such as CPU and memory. Make sure to also check the status of the nodes, so that your jobs get started without waiting.

### Sending Jobs

Command:
```bash
#SBATCH --flag=value
```

![sbatch Command](imgs/sbatch%20Command.png)

Here is an example of a shell script for running a multithreaded job. The `#SBATCH` directives specify the resources, and then the script runs the executable named `hello`.

`--ntasks` specifies how many processes to run. `--cpus-per-task` is fairly self-explanatory: it specifies how many CPU cores each process needs, and this will be the number of threads used in the job. Make sure to also specify which partition you are using.
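
Putting those flags together, a script along these lines would run `hello` on 4 cores. The partition name is a placeholder, and the script in the image may differ:

```bash
#!/bin/bash
#SBATCH --job-name=hello           # name shown by squeue
#SBATCH --ntasks=1                 # number of processes to run
#SBATCH --cpus-per-task=4          # cores per process = threads in the job
#SBATCH --partition=<partition>    # placeholder: choose a real partition

# Use as many OpenMP threads as the cores we requested
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./hello
```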

### Monitor Jobs

Command:
```bash
squeue
# or
squeue -u <username>
```

![squeue Command](imgs/squeue%20Command.png)

After you have submitted your job, you can use the command `squeue` to monitor it.
You can see the status of your job to check whether it is pending or running, and also how long it has been running since the job started.
123 changes: 122 additions & 1 deletion src/chapter4/openmp.md
@@ -1 +1,122 @@
# OpenMP
# Parallel Computing with OpenMP

## What is OpenMP

OpenMP, which stands for Open Multi-Processing, is an API for writing multithreaded applications.

It has a set of compiler directives and library routines for parallel applications, and it greatly simplifies writing multi-threaded code in Fortran, C and C++.

Just a few lines of additional code can make your application parallel.

OpenMP uses a shared memory architecture. It assumes all code runs on a single server.

## Threads

![Threads Visualisation](imgs/Threads%20Visualisation.png)

A thread of execution is the smallest sequence of instructions that can be managed independently by an operating system.

In a parallel region, multiple threads are spawned and utilise the cores of the CPU.

> Only one thread exists in a serial region

## OpenMP Compiler Directives

Recall compiler directives in C; particularly the `#pragma` directive. These can be used to create custom functionality for a compiler and enable specialized features in-code.

`#pragma` is a preprocessor directive that is used to provide additional information to the compiler beyond the standard language syntax. It allows programmers to give hints or directives to the compiler, which the compiler can use to optimize the code or to use specific compiler features or extensions.

The `#pragma` directive is followed by a keyword that specifies the type of pragma and any additional parameters or options that are needed. For example, the `#pragma omp` directive is used in OpenMP parallel programming to provide hints to the compiler about how to parallelize code. Here are some examples of `#pragma` directives:
- `#pragma once`: This is a commonly used pragma in C and C++ header files to ensure that the header file is included only once in a compilation unit. This can help to prevent errors that can occur when the same header file is included multiple times.
- `#pragma message`: This pragma is used to emit a compiler message during compilation. This can be useful for providing additional information to the programmer or for debugging purposes.
- `#pragma warning`: This pragma is used to control compiler warnings. It can be used to turn specific warnings on or off, or to change the severity of warnings.
- `#pragma pack`: This pragma is used to control structure packing in C and C++. It can be used to specify the alignment of structure members, which can affect the size and layout of structures in memory.
- `#pragma optimize`: This pragma is used to control code optimization. It can be used to specify the level of optimization, or to turn off specific optimizations that may be causing problems.

It is important to note that `#pragma` directives are compiler-specific, meaning that different compilers may interpret them differently or may not support certain directives at all. It is important to check the documentation for a specific compiler to understand how it interprets `#pragma` directives.

OpenMP provides a set of `#pragma` directives that can be used to specify the parallelization of a particular loop or section of code. For example, the `#pragma omp parallel` directive is used to start a parallel region, where multiple threads can execute the code concurrently. The `#pragma omp for` directive is used to parallelize a loop, with each iteration of the loop being executed by a different thread.

Here's an example of how `#pragma` directives can be used with OpenMP to parallelize a simple loop:

```c
#include <omp.h>
#include <stdio.h>

int main() {
    int i;
    #pragma omp parallel for
    for (i = 0; i < 10; i++) {
        printf("Thread %d executing iteration %d\n", omp_get_thread_num(), i);
    }
    return 0;
}
```

Use `gcc -fopenmp` to compile your code when you use `#pragma`

## Compile OpenMP

1. Add `#include <omp.h>` if you are using OpenMP functions
2. Run `gcc -fopenmp -o hello hello.c`
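
For example, a minimal `hello.c` (the file name is assumed here) that compiles with that command:

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    // Each thread in the parallel region prints its own ID
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```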

## How it works

![OpenMP and Directive](imgs/OpenMP%20and%20Directive.png)
[Source](https://www.researchgate.net/figure/OpenMP-API-The-master-thread-is-indicated-with-T-0-while-inside-the-parallel-region_fig3_329536624)

Here is an example of how `#pragma omp parallel` behaves:

- The function starts in a serial region
- At the line `#pragma omp parallel`, a group of threads is spawned to create a parallel region inside the braces
- At the end of the braces, the program goes back to serial computing

## Running "Hello World" on Multi-threads

> If you're unsure about the difference between **multi-threading** and **multi-processing**, check the page [here](multithreading.md)

**Drawing in Serial (Left) vs Parallel (Right)**
![](imgs/4%20Parallel%20Computing%20OpenMP.gif)

Drawing in serial, we place one pixel at a time, so the drawing takes a long time. On the right-hand side, if we choose to load and place four pixels down simultaneously, we get the picture faster. However, during the execution it can be hard to make out what the final image will be, given we don't know which pixel will be placed where in each execution step.

Now this is obviously a fairly abstract analogy for exactly what is happening under the hood. However, if we go back to the diagram containing zones of multiple threads and serial zones, some parts of a program must be serial: if this program went further and drew a happy face and then a frowning face, drawing both at the same time would not be useful to the program. Yes, it would be drawn faster, but the final image would not make sense or achieve the goal of the program.

## How many threads? You can dynamically change it

**`omp_set_num_threads()` Library Function**
The value is set inside the program, so you need to recompile the program to change it.

**`OMP_NUM_THREADS` Environment Variable**

```bash
export OMP_NUM_THREADS=4
./hello
```

The operating system maps the threads to available hardware. You would not normally want to exceed the number of cores/processors available to you.
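
As a sketch, the library-function approach looks like this; the call must come before the parallel region it is meant to affect:

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    omp_set_num_threads(4);  // hard-coded: recompile to change

    #pragma omp parallel
    {
        // Report the team size from a single thread
        if (omp_get_thread_num() == 0) {
            printf("Running with %d threads\n", omp_get_num_threads());
        }
    }
    return 0;
}
```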

## Measuring Performance

The commands `top` and `htop` let you look inside a running process. As you can see from the image below, they show the CPU usage.

![Top Command](imgs/Top%20Command.png)

The command `time` checks the overall performance of the code.

![Time Command](imgs/Time%20Command.png)

By running this command, you get real time, user time and system time.

**Real** is wall clock time: the time from start to finish of the call. This includes any overhead.

**User** is the amount of CPU time spent outside the kernel within the process.

**Sys** is the amount of CPU time spent in the kernel within the process.

**User** time + **Sys** time will tell you how much actual CPU time your process used.

## More Features of OpenMP

- [YouTube Video: Introduction to OpenMP](https://www.youtube.com/watch?v=iPb6OLhDEmM&list=PLLX-Q6B8xqZ8n8bwjGdzBJ25X2utwnoEG&index=11)
- [YouTube Video: Data environment - \#pragma omp parallel private](https://www.youtube.com/watch?v=dlrbD0mMMcQ&list=PLLX-Q6B8xqZ8n8bwjGdzBJ25X2utwnoEG&index=17)
- [YouTube Video: Parallel Loops - \#omp parallel for reduction()](https://www.youtube.com/watch?v=iPb6OLhDEmM&list=PLLX-Q6B8xqZ8n8bwjGdzBJ25X2utwnoEG&index=11)
42 changes: 41 additions & 1 deletion src/chapter4/parallel-computing.md
@@ -1 +1,41 @@
# What is Parallel Computing?
# Introduction to Parallel Computing

## What is Parallel Computing?

Parallel computing is about executing the instructions of a program simultaneously.

One of the core values of computing is the breaking down of a big problem into smaller, easier-to-solve problems (or at least smaller problems).

In some cases, the steps required to solve the problem can be executed simultaneously (in parallel) rather than sequentially (in order).

A supercomputer is not just about fast processors; it is multiple processors working together simultaneously. Therefore it makes sense to utilise parallel computing in an HPC environment, given the access to large numbers of processors.

![Running Processes in Parallel](imgs/Running%20Processes%20in%20Parallel.png)

An example of parallel computing looks like this.

![Parallel Computing Example](imgs/Parallel%20Computing%20Example.png)

Here there is an array containing the numbers 0 to 999, and the program increments each value by 1. Comparing the serial code on the left with the parallel code on the right, the parallel code utilises 4 cores of a CPU, so we can expect approximately a 4x speed-up over using 1 core. What we are seeing is how the same code can in fact execute faster, as four times as many elements can be updated in the time one would be.
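
As a sketch, the difference between the two versions might look like this (the figure's exact code may differ):

```c
#define N 1000

// Serial: one core walks the whole array
void increment_serial(int array[N]) {
    for (int i = 0; i < N; i++) {
        array[i] += 1;
    }
}

// Parallel: with 4 threads, each updates roughly N/4 elements at once
void increment_parallel(int array[N]) {
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        array[i] += 1;
    }
}
```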

## Parallel Computing Memory Architectures

Parallel computing has various memory architectures.

### Shared Memory Architecture:

In a shared memory architecture, multiple CPUs run on the same server and share the same memory. OpenMP uses this model.

![Shared Memory Architecture](imgs/Shared%20Memory%20Architecture.png)

### Distributed Memory Architecture:

In a distributed memory architecture, CPU and memory are bundled together in each node, and the nodes work by communicating with each other. A message passing protocol called MPI is used in this model.

![Distributed Memory Architecture](imgs/Distributed%20Memory%20Architecture.png)

### Hybrid Parallel Programming:

For High Performance Computing (HPC) applications, OpenMP is combined with MPI. This is often referred to as Hybrid Parallel Programming.

![Hybrid Parallel Programming](imgs/Hybrid%20Parallel%20Programming.png)
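
A minimal hybrid sketch is below, assuming an MPI installation and compilation with something like `mpicc -fopenmp hybrid.c`:

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    // MPI handles communication between nodes
    // (typically one process per node)...
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // ...while OpenMP threads share memory within each process
    #pragma omp parallel
    {
        printf("Rank %d, thread %d\n", rank, omp_get_thread_num());
    }

    MPI_Finalize();
    return 0;
}
```
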
1 change: 0 additions & 1 deletion src/chapter4/shared-memory.md

This file was deleted.