---
title: Run the Galveston Island Beach and Dune Simulation
description: Learn how to run a real-world XBeach simulation on Inductiva.AI and scale it in the cloud.
seo:
  title: Run a real-world XBeach simulation on Inductiva.AI
  description: Learn how to run the Galveston Island XBeach simulation on Inductiva.AI and scale it in the cloud.
---

This tutorial walks you through running a high-fidelity XBeach simulation using
the Inductiva API, based on a real-world dataset that requires
significant computational resources.

## Objective
We will run and scale the `Galveston Island` use case from the [GRIIDC repository](https://data.griidc.org/data/HI.x833.000:0001), a research data platform maintained by Texas A&M University-Corpus Christi’s Harte Research Institute for Gulf of Mexico Studies.

## Prerequisites
1. Download the dataset titled "XBeach model setup and results for beach and dune enhancement scenarios on Galveston Island, Texas", available [here](https://data.griidc.org/data/HI.x833.000:0001#individual-files).
To reduce simulation time, update the `params.txt` file with the following changes:
- Set `tstop` to `34560` to shorten the simulation duration.
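XBeach reads its configuration from `params.txt` as simple `key = value` lines, so you can make the edit in any text editor. If you prefer to apply it programmatically, a small helper along these lines works (the helper name `set_param` is ours, not part of XBeach or Inductiva):

```python
import re
from pathlib import Path


def set_param(params_path, key, value):
    """Replace (or append) a `key = value` line in an XBeach params.txt file."""
    text = Path(params_path).read_text()
    pattern = re.compile(rf"^\s*{key}\s*=.*$", re.MULTILINE)
    line = f"{key} = {value}"
    if pattern.search(text):
        text = pattern.sub(line, text)  # update the existing entry in place
    else:
        text += "\n" + line + "\n"      # or append it if it is missing
    Path(params_path).write_text(text)


# Example: shorten the simulation duration as described above
# set_param("params.txt", "tstop", 34560)
```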

## Run Your Simulation
Below is the script required to run this simulation using the Inductiva API.

In this example, we use a `c2d-highcpu-56` cloud machine featuring 56 virtual CPUs (vCPUs) and a 20 GB data disk.
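The script follows the usual Inductiva pattern: allocate the cloud machine, run the simulator on your input files, then download the outputs and release the machine. A minimal sketch of that flow is below; treat the input directory name and parameters such as `data_disk_gb` and `sim_config_filename` as assumptions to adapt to your own setup:

```python
def run_simulation(input_dir="galveston_island",
                   machine_type="c2d-highcpu-56"):
    # Requires `pip install inductiva` and a configured API key.
    import inductiva

    # Allocate a spot cloud machine with a 20 GB data disk
    # (`data_disk_gb` is an assumed parameter name; check the API reference).
    cloud_machine = inductiva.resources.MachineGroup(
        provider="GCP",
        machine_type=machine_type,
        data_disk_gb=20,
        spot=True)

    # Run XBeach on the input files; the config filename is an assumption.
    xbeach = inductiva.simulators.XBeach()
    task = xbeach.run(input_dir=input_dir,
                      sim_config_filename="params.txt",
                      on=cloud_machine)

    task.wait()                 # block until the simulation finishes
    cloud_machine.terminate()   # release the machine to stop billing

    task.print_summary()        # timeline and cost breakdown
    task.download_outputs()


if __name__ == "__main__":
    run_simulation()
```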

Learn more about costs at: https://inductiva.ai/guides/basics/how-much-does-it-c

As shown in the "In Progress" line, the part of the timeline that represents the actual execution of the simulation, the core computation time was approximately **1 hour and 17 minutes**.

## Upgrading to Powerful Machines
One of Inductiva’s key advantages is how easily you can scale your simulations to larger, more powerful machines with minimal code changes. Scaling up simply requires updating the `machine_type` parameter when allocating your cloud machine.
## Scaling Your Simulation

You can upgrade to a next-generation cloud machine, increase the number of vCPUs, or do both!
### Upgrading to More Powerful Machines
One of Inductiva’s key advantages is how **easily you can scale your simulations** to larger, more powerful machines with minimal code changes. Scaling up simply requires updating the `machine_type` parameter when allocating your cloud machine.

You can:
- Increase the number of vCPUs,
- Upgrade to next-generation cloud machines,
- Or do both.

Explore the full range of available machines [here](https://console.inductiva.ai/machine-groups/instance-types).

For example, running the simulation on a machine with **more vCPUs**, such as the `c2d-highcpu-112`, reduces runtime from 1 hour and 17 minutes to approximately **47 minutes**, with a modest cost increase to US$0.48.

Using **latest-generation c4d instances** further improves performance, even with fewer vCPUs than comparable c2d machines (48 vs 56 vCPUs and 96 vs 112 vCPUs). Why? **c4d processors are significantly faster per core**, making them ideal when time-to-solution is critical. The trade-off is a higher price per vCPU.

Below is a comparison showing the effect of scaling this simulation:

| Machine Type    | vCPUs | Execution Time | Estimated Cost (USD) |
|-----------------|-------|----------------|----------------------|
| c2d-highcpu-56  | 56    | 1 h 17 min     | 0.40                 |
| c2d-highcpu-112 | 112   | 47 min         | 0.48                 |
| c4d-highcpu-48  | 48    | 1 h 9 min      | 0.97                 |
| c4d-highcpu-96  | 96    | 44 min         | 1.23                 |
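To make the trade-off concrete, here is a quick check of how the c2d rows in the table work out:

```python
# Scaling from c2d-highcpu-56 to c2d-highcpu-112, using the table above.
baseline_min, baseline_usd = 77, 0.40   # c2d-highcpu-56: 1 h 17 min
scaled_min, scaled_usd = 47, 0.48       # c2d-highcpu-112: 47 min

speedup = baseline_min / scaled_min      # how much faster the bigger machine is
cost_ratio = scaled_usd / baseline_usd   # how much more it costs
print(f"{speedup:.2f}x faster for {cost_ratio:.2f}x the cost")
# → 1.64x faster for 1.20x the cost
```

Doubling the vCPU count buys a ~1.6x speedup for only a ~20% cost increase, which is why scaling up is usually worthwhile here.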

### Hyperthreading Considerations
By default, Google Cloud machines have **hyperthreading enabled**, meaning each physical CPU core runs **two hardware threads**.

While hyperthreading can improve throughput in general workloads, in HPC simulations with many threads (30+), it can actually reduce performance due to bandwidth and cache contention.

To disable hyperthreading and use only physical cores, configure the machine group with `threads_per_core=1`:

```python
cloud_machine = inductiva.resources.MachineGroup(
    provider="GCP",
    machine_type="c2d-highcpu-56",
    threads_per_core=1,
    spot=True)
```

Below are the results of the same simulations with hyperthreading disabled (1 thread per core):

| Machine Type    | Threads (active vCPUs) | Execution Time | Estimated Cost (USD) |
|-----------------|------------------------|----------------|----------------------|
| c2d-highcpu-56  | 28                     | 1 h 19 min     | 0.39                 |
| c2d-highcpu-112 | 56                     | 45 min         | 0.40                 |
| c4d-highcpu-96  | 48                     | 37 min         | 1.02                 |
| c4d-highcpu-192 | 96                     | 26 min         | 1.43                 |
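A quick sanity check on those numbers: on the c4d-highcpu-96, disabling hyperthreading cuts the runtime from 44 to 37 minutes while the estimated cost also drops, so the gain is roughly:

```python
# c4d-highcpu-96: 44 min with hyperthreading, 37 min without (tables above)
ht_on_min, ht_off_min = 44, 37
time_saved = (ht_on_min - ht_off_min) / ht_on_min
print(f"{time_saved:.0%} shorter runtime with hyperthreading disabled")
# → 16% shorter runtime with hyperthreading disabled
```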

**Disabling hyperthreading improves performance**, even with fewer threads. The c4d cores still outperform older c2d machines per core, confirming that faster processors make a measurable difference.

## Key Takeaways
- **Scaling is easy:** simply change the machine type; the rest of your code stays the same.
- **c4d machines are faster per core**, even with fewer vCPUs, making them ideal when speed is critical, though at a higher cost.
- **Hyperthreading can slow memory-bound simulations**; disabling it often improves performance.
- **Choose wisely:** use c4d when runtime matters most, and c2d when cost per vCPU is the priority.

Inductiva gives you **all the flexibility of modern cloud HPC without the headaches**: faster results, effortless scaling, and minimal code changes.

It’s that simple! 🚀
