38 commits
* `3ccfe51` Defer nightly backups, disable ASG processes during syncs and run syn… (EmlynK, Jan 20, 2022)
* `a54c09c` Added deploy.yml examples for Drupal 9 and Localgov. Updated tests (#97) (DionisioFG, Mar 10, 2022)
* `223a3b5` Adding a new SimpleSAMLphp meta role. (#100) (gregharvey, Mar 31, 2022)
* `32843d5` Allowing users to set cachetool version properly. (#102) (gregharvey, Apr 13, 2022)
* `91ea64e` Deploy ami pr 1.x (#106) (gregharvey, Apr 14, 2022)
* `371b6b9` Making the MySQL dump command for routine back-ups less aggressive. (… (gregharvey, Apr 20, 2022)
* `a9cc7ad` Fix database backups pr 1.x (#109) (gregharvey, Apr 20, 2022)
* `4c4c3ad` Fix MySQL backup deferral. (#110) (EmlynK, Apr 22, 2022)
* `029848d` Files recurse fix pr 1.x (#112) (EmlynK, Apr 26, 2022)
* `b71fa83` Improve multisite support (#115) (EmlynK, Apr 29, 2022)
* `c933dc8` Static credentials handling fix pr 1.x (#119) (EmlynK, May 23, 2022)
* `b6ea020` Making contents of deploy tar 'ownerless'. (#117) (gregharvey, May 31, 2022)
* `1c2b206` Implement file syncing (#124) (EmlynK, Jun 7, 2022)
* `1cd0ed0` Create Drupal-specific sync roles (#128) (EmlynK, Jun 8, 2022)
* `904033f` Fixing GRANT query for MySQL > 8.0. (#131) (gregharvey, Jun 10, 2022)
* `94715d8` Use IF NOT EXISTS when creating database user as that command fails i… (EmlynK, Jun 13, 2022)
* `42f12c8` Attempt to fix syncs whenever the 'dump' type is used for source or t… (EmlynK, Jun 16, 2022)
* `a9c6abf` Squashfs pr 1.x (#150) (gregharvey, Jun 20, 2022)
* `a4de771` Check deploy_code.mount_type is defined when setting facts in init ro… (EmlynK, Jun 24, 2022)
* `fe2aaf1` Make config imports during syncs optional (#157) (EmlynK, Jun 28, 2022)
* `451227d` Squashfs pr 1.x (#153) (gregharvey, Jun 29, 2022)
* `c7aae71` Add cache clears to Drupal deployments, before DB updates and stuff (… (EmlynK, Aug 16, 2022)
* `15b8daa` Avoid leaving exponentially growing sqsh files in build locations! (#… (gregharvey, Sep 5, 2022)
* `5cab4ae` Exclude sqsh file pr 1.x (#167) (gregharvey, Oct 6, 2022)
* `459faba` Removing unnecessary lines in Drupal config generation. (#169) (gregharvey, Oct 7, 2022)
* `200f590` Ensuring dump directory exists on backup step. (#172) (gregharvey, Oct 10, 2022)
* `2c74432` Allowing Drupal 7 jobs to disable cron. (#174) (gregharvey, Oct 14, 2022)
* `0385211` Suppress db revert pr 1.x (#177) (gregharvey, Oct 14, 2022)
* `86f138a` Fixing bad assumption that databases will have TCP connections. (#179) (gregharvey, Nov 18, 2022)
* `b236540` Handling the 'drush deploy' command more elegantly for Drupal 8+. (#180) (gregharvey, Dec 9, 2022)
* `58069b6` Attempt to clear the opcache during Drupal deployments. (#182) (gregharvey, Dec 9, 2022)
* `3ea0bfe` Better drush deploy support pr 1.x (#185) (gregharvey, Dec 9, 2022)
* `b835344` Cron job schedule params pr 1.x (#190) (tymofiisobchenko, Dec 30, 2022)
* `e97e852` Adding option to stop services that might interfere with a squashfs m… (gregharvey, Jan 23, 2023)
* `7bbb77f` Drush refactor pr 1.x (#197) (gregharvey, Jan 25, 2023)
* `b0f4033` Better deploy_code role docs. (gregharvey, Jan 27, 2023)
* `b67401b` Merge branch 'devel' into documentation_enhancements-PR-devel (gregharvey, Jan 27, 2023)
* `d5c9a17` Merge branch 'documentation_enhancements' into documentation_enhancem… (gregharvey, Jan 27, 2023)
80 changes: 79 additions & 1 deletion docs/roles/deploy_code.md
@@ -1,5 +1,83 @@
# Deploy
Step that deploys the codebase. On standalone machines and "static" clusters of web servers (i.e. machines whose addressing never changes) this is reasonably straightforward; the default variables should "just work". This role also supports deployment to autoscaling clusters of web servers, such as AWS autoscaling groups or containerised architectures. More details on that below.

The shell script that wraps Ansible to handle the build has a series of "stages", and the `deploy_code` role has a set of tasks for each stage. The key one for building code on the static/current cluster servers is [the `deploy.yml` file](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/deploy.yml). Here you will find the steps for checking out and building code on web servers, as well as the loading of any application-specific deploy code, [e.g. these tasks for Drupal 8](https://github.com/codeenigma/ce-deploy/tree/1.x/roles/deploy_code/deploy_code-drupal8/tasks). You choose which extra tasks to load via the `project_type` variable. Current core options are:

* `drupal7`
* `drupal8`
* `matomo`
* `mautic`
* `simplesamlphp`

Patches to support other common applications are always welcome! Also, Ansible inheritance being what it is, you can create your own custom deploy role in the same directory as your deployment playbook and Ansible will detect it and make it available to you. For example, if you create `./deploy_code/deploy_code-myapp/tasks/main.yml` relative to your playbook and set `project_type: myapp` in your project variables then `ce-deploy` will load in those tasks.
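
A minimal sketch of such a file might look like this. Everything in it is purely illustrative: the `myapp` name, the script it runs and the path are hypothetical placeholders, not part of `ce-deploy`.

```yaml
# ./deploy_code/deploy_code-myapp/tasks/main.yml
# Illustrative custom deploy tasks for a hypothetical 'myapp' project.
---
- name: Run the application's own post-deploy script.
  ansible.builtin.command:
    cmd: ./scripts/post-deploy.sh # hypothetical script shipped with the codebase
    chdir: /var/www/myapp_build # illustrative; use your build's real path variable
```

With that file in place, setting `project_type: myapp` in your project variables is all that is needed for `ce-deploy` to pick it up.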

# Autoscale deployment
For autoscaling clusters - no matter the underlying tech - the built code needs to be stored somewhere central, accessible to any potential new servers in the cluster. Because the performance of network attached storage (NAS) is often too poor or unreliable, we do not serve the code directly from NAS - although this would be the simplest approach. Instead, the technique we use is to build the code on each current server in the cluster, as though it were a static cluster or standalone machine, but *also* copy the code to the NAS so it is available to all future machines. This makes the existence of mounted NAS attached to all new servers a prerequisite for `ce-deploy` to work with autoscaling.

**Important**: autoscale deployments need to be carefully co-ordinated with [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync) so new servers/containers have the correct scripts in place to install their code after they initialise. Specifically, the `mount_sync.tarballs` or `mount_sync.squashed_fs` list variables in `ce-provision` must contain paths that match the location specified in the `deploy_code.mount_sync` variable in `ce-deploy`, so `ce-deploy` copies code to the place `ce-provision`'s `cloud-init` scripts expect to find it. (More on the use of `cloud-init` below.)
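
As a hedged illustration of that pairing, the `ce-provision` side might contain something like the following; the NAS path is hypothetical and the exact variable structure should be checked against the role's defaults.

```yaml
# ce-provision server variables (mount_sync role) - illustrative only.
mount_sync:
  squashed_fs:
    - /mnt/shared/deploy # hypothetical NAS path; must match deploy_code.mount_sync in ce-deploy
```

The matching `deploy_code.mount_sync` setting on the `ce-deploy` side is shown in the example further down.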

(As an aside, we previously supported S3-like object storage for storing the built code, but given all the applications we work with need NAS anyway for end-user file uploads and other shared cluster resources, it seems pointless to introduce a second storage mechanism when we already have one that works just fine.)

This packaging of a copy of the code happens in [the `cleanup.yml` file of the role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/cleanup.yml). It supports three options:

* No autoscale (or AWS AMI-based autoscale - see below) - leave `mount_sync` as an empty string
* `tarball` type - makes a `tar.gz` containing the code and copies it to the NAS
* `squashfs` type - packs a [`squashfs`](https://github.com/plougher/squashfs-tools) image, copies it to the NAS and mounts it on each web server

For both `tarball` and `squashfs` you need to set `mount_type` accordingly and the `mount_sync` variable to the location on your NAS where you want to store the built code.
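
For example (a minimal sketch; the NAS path is hypothetical):

```yaml
deploy_code:
  # 'tarball' or 'squashfs'
  mount_type: squashfs
  # Hypothetical NAS location where the built code will be stored.
  mount_sync: /mnt/shared/deploy
```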

## `tarball` builds
This is the simplest method of autoscale deployment: it simply packs up the code and copies it to the NAS at the end of the deployment. Everything else is just a standard "normal" build.

**Important**: this method is only appropriate if you do not have too many files to deploy. Packing and restoring take a very long time when there are many small files, so it is not appropriate for things like `composer`-built PHP applications.

### Rolling back
With this method the live code directory is also the build directory, so you can edit the code in place in an emergency, and "rolling back" if there are issues with a build is just a case of pointing the live build symlink back to the previous build. As long as the `database_backup` role is using the `rolling` method, the "roll back" database will still exist and the credentials will be correct in the application. If the backup method is `dump` then you will need to inspect [the `mysql_backup.dumps_directory` variable](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/database_backup/database_backup-mysql/defaults/main.yml#L4) to see where the backup was saved in order to restore it. By default this will be on the NAS, so it is available to all web servers.
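
Repointing the symlink can be done by hand or with a one-off Ansible task like this sketch; all paths are hypothetical and depend entirely on your project settings.

```yaml
# Hypothetical roll-back: point the live symlink at the previous build.
- name: Roll back to the previous build.
  ansible.builtin.file:
    src: /home/deploy/builds/mysite_build_41 # previous build directory (illustrative)
    dest: /var/www/mysite_live # the live symlink (illustrative)
    state: link
    force: true
```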

## `squashfs` builds
Because `tarball` is very slow, we have a second method using [`squashfs`](https://github.com/plougher/squashfs-tools). This filesystem is designed for packing and compressing files into read-only images - initially to deploy to removable media - that can simply be mounted, similar to a macOS Apple Disk Image (DMG) file. It is both faster to pack than a tarball *and* instant to deploy (it's just a `mount` command).

However, the build process is more complex. Because mounted `squashfs` images are read-only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place, and then in `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync).

Consequently, at the build stage there are two important extra variables to set:

```yaml
deploy_code:
  # List of services to manipulate to free the loop device for 'squashfs' builds, post lazy umount.
  # @see the squashfs role in ce-provision where special permissions for deploy user to manipulate services get granted.
  services: []
  # services:
  #   - php8.0-fpm
  # What action to take against the services, 'reload' or 'stop'.
  # Busy websites will require a hard stop of services to achieve the umount command.
  service_action: reload
```

`services` is a list of Linux services to stop/reload in order to ensure the mount point is not locked. Usually this will be your PHP service, e.g.

```yaml
deploy_code:
  services:
    - php8.1-fpm
```

`service_action` determines whether `ce-deploy` should reload the services in the list or stop them, unmount and remount the image, and start them again. The latter is the only "safe" way to deploy, but results in a second or two of downtime.
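
Conceptually, the `stop` path performs a sequence like this sketch. It is illustrative only - service names and paths are hypothetical and the real tasks live in the role itself.

```yaml
# Illustrative order of operations for service_action: stop.
- name: Stop services that hold the mount point open.
  ansible.builtin.service:
    name: php8.1-fpm
    state: stopped

- name: Unmount the current image.
  ansible.posix.mount:
    path: /var/www/mysite_live # hypothetical mount point
    state: unmounted

- name: Mount the freshly built image.
  ansible.posix.mount:
    path: /var/www/mysite_live
    src: /home/deploy/deploy.sqsh # hypothetical image location
    fstype: squashfs
    opts: loop
    state: mounted

- name: Start services again.
  ansible.builtin.service:
    name: php8.1-fpm
    state: started
```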

Finally, as with the `tarball` method, the packed image is copied up to the NAS to be available to all future servers and is always named `deploy.sqsh`. The previous codebase is *also* packed and copied to the NAS, named `deploy_previous.sqsh` in the same directory.

### Rolling back
Rolling back from a bad `squashfs` build means copying `deploy_previous.sqsh` down from the NAS to a sensible location in the `ce-deploy` user's home directory, unmounting the current image and mounting `deploy_previous.sqsh` in its place.

Same as with the `tarball` method, as long as the `database_backup` is using the `rolling` method then the "roll back" database will still exist and the credentials will be correct in the `deploy_previous.sqsh` image. Again, if the backup method is `dump` then you will need to inspect [the `mysql_backup.dumps_directory` variable](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/database_backup/database_backup-mysql/defaults/main.yml#L4) to see where the backup was saved in order to restore it.
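
If you do need to restore a `dump` backup, it is a manual step along these lines; the dump path and database name here are hypothetical.

```yaml
# Hypothetical restore of a 'dump' type backup.
- name: Restore the pre-deploy database dump.
  ansible.builtin.shell:
    cmd: mysql mysite < /mnt/shared/dumps/mysite.sql # illustrative dump path
```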

Emergency code changes are possible but more fiddly. You have to copy the codebase from the mount to a sensible, *writeable* location, make your changes, [use the `squashfs` command to pack a new image](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/cleanup.yml#L54), mount that image and, crucially, replace the `deploy.sqsh` image file on the NAS with your new image so future autoscale events will pick it up.
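
A hedged sketch of that emergency repack follows; every path is illustrative.

```yaml
# Hypothetical emergency repack of a squashfs image.
- name: Copy the live codebase somewhere writeable.
  ansible.builtin.copy:
    src: /var/www/mysite_live/ # hypothetical live mount
    dest: /home/deploy/hotfix/
    remote_src: true

# ...make your code changes in /home/deploy/hotfix, then:

- name: Pack a new squashfs image from the edited code.
  ansible.builtin.command:
    cmd: mksquashfs /home/deploy/hotfix /home/deploy/deploy.sqsh -noappend

- name: Replace the image on the NAS so future autoscale events pick it up.
  ansible.builtin.copy:
    src: /home/deploy/deploy.sqsh
    dest: /mnt/shared/deploy/deploy.sqsh # hypothetical NAS path
    remote_src: true
```

Mounting the new image then follows the same umount/mount sequence sketched earlier.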

# Autoscaling events
Deploying code with autoscaling clusters relies on [cloud-init](https://cloudinit.readthedocs.io/) and is managed in our stack by [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync). Whenever a new server spins up in a cluster, the `cloud-init` run-once script put in place by `ce-provision` is executed; it copies the code down from the NAS and deploys it to the correct location on the new server. At that point the server should become "healthy" and start serving the application.

# AMI-based autoscale
**This is experimental.** Our stack is heavily based on [GitLab CE](https://gitlab.com/rluna-gitlab/gitlab-ce), and one of the options we support with [our provisioning tools](https://github.com/codeenigma/ce-provision/tree/1.x) is packing an [AWS AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) with the code embedded within it, thus no longer requiring the [cloud-init](https://cloudinit.readthedocs.io/) step at all. [We call this option `repack` and the code is here.](https://github.com/codeenigma/ce-provision/blob/1.x/roles/aws/aws_ami/tasks/repack.yml) This makes provisioning of new machines in a cluster a little faster than the `squashfs` option, but requires the ability to trigger a build on our infrastructure `controller` server to execute a cluster build and pack the AMI. That is what the `api_call` dictionary below provides for. You can see the API call constructed in [the last task of `cleanup.yml`](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/cleanup.yml#L205).

<!--TOC-->
<!--ENDTOC-->
