Commit 8b3f0f1

💥 Upgrade to Docusaurus v3
1 parent edbe9e1 commit 8b3f0f1

11 files changed: +3272 −1748 lines changed

README.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ It's talking about configuration references and useful files and links used by B
 
 ---
 
-This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
+This website is built using [Docusaurus 3](https://docusaurus.io/), a modern static website generator.
 
 ### Installation
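For reference on the hunk above: moving from Docusaurus 2 to 3 is applied by bumping the Docusaurus packages in `package.json`. A minimal sketch (version ranges illustrative, assuming the classic preset is used):

```json
{
  "dependencies": {
    "@docusaurus/core": "^3.0.0",
    "@docusaurus/preset-classic": "^3.0.0"
  }
}
```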

docs/github/actions/selfhosted-runners.md

Lines changed: 8 additions & 8 deletions

@@ -21,7 +21,7 @@ All the documentation of **actions-runner-controller** is [here](https://github.
 
 <!-- - Open port and connections with `iptables` (needed by `cert-manager`)
 
-```shell
+```bash
 sudo iptables --insert INPUT --source 0.0.0.0/0 --jump ACCEPT && \
 sudo iptables --insert INPUT --destination 0.0.0.0/0 --jump ACCEPT && \
 sudo iptables --insert FORWARD --source 0.0.0.0/0 --jump ACCEPT && \
@@ -32,13 +32,13 @@ sudo iptables --insert OUTPUT --destination 0.0.0.0/0 --jump ACCEPT
 
 - Install Helm
 
-```shell
+```bash
 curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
 ```
 
 - Install **cert-manager**
 
-```shell
+```bash
 kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml
 ```
 
@@ -56,31 +56,31 @@ The `kubectl apply -f` doesn't work properly. Prefer install with the **actions-
 
 - Then, create a secret containing the `GITHUB_TOKEN`.
 
-```shell
+```bash
 export GITHUB_TOKEN=<your-github-token>
 ```
 
 - Create the **actions-runner-controller** namespace.
 
-```shell
+```bash
 kubectl create namespace actions-runner-system
 ```
 
 - Create Kubernetes secrets containing the `GITHUB_TOKEN`.
 
-```shell
+```bash
 kubectl create secret generic controller-manager \
 -n actions-runner-system \
 --from-literal=github_token=${GITHUB_TOKEN}
 ```
 
 - Install the **actions-runner-controller** using Helm.
 
-```shell
+```bash
 helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
 ```
 
-```shell
+```bash
 helm upgrade --install --namespace actions-runner-system --create-namespace \
 --wait actions-runner-controller actions-runner-controller/actions-runner-controller
 ```
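As a side note on the steps in this diff: once the controller is installed, runners are declared with a `RunnerDeployment` resource. A minimal sketch following the upstream actions-runner-controller docs (the `metadata.name` and `repository` values are placeholders):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner-deployment
spec:
  replicas: 1
  template:
    spec:
      # Placeholder: the repository this self-hosted runner registers against
      repository: your-org/your-repo
```

Applied with `kubectl apply -f`, the controller then spawns the runner pods in the `actions-runner-system` namespace.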

docs/github/actions/semantic-release.md

Lines changed: 2 additions & 2 deletions

@@ -24,13 +24,13 @@ Use the `exec` plugin to store the semantic release `${nextRelease.version}` var
 
 ### Install `@semantic-release/exec` plugin
 
-```shell
+```bash
 yarn add -D @semantic-release/exec
 ```
 
 or using `npm`:
 
-```shell
+```bash
 npm install --save-dev @semantic-release/exec
 ```
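For context on the change above: a minimal `.releaserc.json` sketch using `@semantic-release/exec` to persist `${nextRelease.version}` during the prepare step (the `VERSION` file name is illustrative):

```json
{
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    ["@semantic-release/exec", {
      "prepareCmd": "echo ${nextRelease.version} > VERSION"
    }]
  ]
}
```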

docs/kubernetes/ansible.md

Lines changed: 12 additions & 12 deletions

@@ -9,15 +9,15 @@ How to manage multiple **Kubernetes** nodes with **Ansible**.
 :::info
 Do not forget to change permissions of your ssh keys to allow **Ansible** to connect to them by doing the following:
 
-```shell
+```bash
 sudo chmod -R 600 ~/.ssh/key_name.key
 ```
 
 :::
 
 ## Installation
 
-```shell
+```bash
 sudo apt install software-properties-common -y && \
 sudo add-apt-repository --yes --update ppa:ansible/ansible && \
 sudo apt install ansible -y
@@ -35,7 +35,7 @@ If a host is reinstalled and has a different key in `known_hosts`, this will res
 
 Create a `~/.ansible.cfg` file in user's home directory:
 
-```shell title="~/.ansible.cfg"
+```bash title="~/.ansible.cfg"
 [defaults]
 host_key_checking = False
 ```
@@ -46,7 +46,7 @@ Path to your **Ansible hosts file** is: `/etc/ansible/hosts`.<br/>It is where yo
 
 You can use the following hosts file configuration as example:
 
-```shell
+```bash
 [workers]
 my-first-worker-alias ansible_host=10.10.10.10 ansible_ssh_private_key_file=/home/ubuntu/.ssh/example-0.key
 my-second-worker-alias ansible_host=10.10.10.10 ansible_ssh_private_key_file=/home/ubuntu/.ssh/example-1.key
@@ -59,15 +59,15 @@ Here, `workers` is the name of the group of hosts. **It could be any name you wa
 
 **Ansible** action example:
 
-```shell
+```bash
 ansible all -m ping # ping all hosts in the hosts file (or in the inventory)
 ```
 
-```shell
+```bash
 ansible workers -m service -a "name=httpd state=restarted" # This will restart httpd on all workers.
 ```
 
-```shell
+```bash
 ansible all -a "sudo apt update && sudo apt upgrade -y && sudo apt full-upgrade -y && sudo apt autoremove -y" # This will update all packages on all hosts.
 ```
 
@@ -77,7 +77,7 @@ Argument `-v` is to use the `verbose` mode. It will print the output of the comm
 
 Another example of command to reboot all nodes:
 
-```shell
+```bash
 ansible all -m shell -a "sudo reboot"
 ```
 
@@ -93,7 +93,7 @@ Ansible playbooks are a way to automate the deployment of your infrastructure. I
 
 ### Example command
 
-```shell
+```bash
 ansible-playbook my-playbook.yml
 ansible-playbook my-playbook.yml -i hosts.txt
 ansible-playbook my-playbook.yml -i hosts.txt --limit "my-first-worker-alias"
@@ -107,19 +107,19 @@ First, you need to install k3s on all the nodes. You can do it using the followi
 Replace `<MASTER_NODE_IP>` and `<TOKEN>` by your own k3s informations.
 :::
 
-```shell
+```bash
 ansible workers -v -m shell -a "curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER_NODE_IP>:6443 K3S_TOKEN=<TOKEN> sh -"
 ```
 
 For instance, to easily config. `iptables` on all nodes, you can use the following command:
 
-```shell
+```bash
 ansible all -m shell -a "sudo iptables -A INPUT -p tcp --dport 6443 -j ACCEPT"
 ```
 
 ### Hosts file example
 
-```shell title="/etc/ansible/hosts"
+```bash title="/etc/ansible/hosts"
 # My inventory file is located in /etc/ansible/hosts on the cluster.
 
 [workers]
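As a side note on the ssh-key permission step in this diff: the effect of `chmod 600` can be sanity-checked locally. A minimal sketch, using a temporary file as a stand-in for a real key (assumes GNU `stat`, i.e. a Linux host):

```bash
# Create a stand-in for an ssh private key and tighten its permissions;
# Ansible's ssh client refuses private keys readable by group/others.
key=$(mktemp)
chmod 600 "$key"
stat -c '%a' "$key"   # prints the octal mode: 600
rm -f "$key"
```

If the mode printed anything other than 600 (e.g. 644), ssh, and therefore Ansible, would reject the key with an "UNPROTECTED PRIVATE KEY FILE" warning.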

docs/kubernetes/elastic-cloud.md

Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ Login as the **elastic** user. The password can be obtained with the following c
 Replace **quickstart** by your **Kibana** deployment name.
 :::
 
-```shell
+```bash
 kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
 ```
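The `base64 --decode` step in the command above can be sanity-checked without a cluster. A minimal sketch, where `hunter2` is a made-up stand-in for the Secret's `.data.elastic` value:

```bash
# Kubernetes stores Secret values base64-encoded; the jsonpath query returns
# the raw encoded string, which is why it is piped through `base64 --decode`.
encoded=$(printf '%s' 'hunter2' | base64)
printf '%s' "$encoded" | base64 --decode; echo   # prints: hunter2
```

The trailing `echo` only adds a newline, since the decoded secret does not end with one.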