From 72efe0875f4e0919c4ec6317fdd05d71adceada2 Mon Sep 17 00:00:00 2001 From: "google-labs-jules[bot]" <161369871+google-labs-jules[bot]@users.noreply.github.com> Date: Wed, 6 Aug 2025 19:38:16 +0000 Subject: [PATCH 1/3] Remove outdated OADP 1.1.x and 1.2.x documentation --- docs/TROUBLESHOOTING.md | 42 +- docs/config/plugins.md | 1 - docs/credentials.md | 20 - docs/examples/data_mover.md | 141 ------- .../examples/datamover_advanced_voloptions.md | 369 ------------------ docs/oadp_cheat_sheet.md | 52 --- docs/restic_troubleshooting.md | 182 --------- docs/upgrade_1-1_to_1-2.md | 56 --- docs/upgrade_1-2_to_1-3.md | 124 ------ docs/upgrade_1-3_to_1-4.md | 2 +- docs/upgrade_1-4_to_1-5.md | 2 +- 11 files changed, 4 insertions(+), 987 deletions(-) delete mode 100644 docs/examples/data_mover.md delete mode 100644 docs/examples/datamover_advanced_voloptions.md delete mode 100644 docs/upgrade_1-1_to_1-2.md delete mode 100644 docs/upgrade_1-2_to_1-3.md diff --git a/docs/TROUBLESHOOTING.md b/docs/TROUBLESHOOTING.md index be929850eb7..070f9d730e7 100644 --- a/docs/TROUBLESHOOTING.md +++ b/docs/TROUBLESHOOTING.md @@ -12,7 +12,6 @@ If you need help, first search if there is [already an issue filed](https://issu 1. [Debugging OpenShift Virtualization backup/restore](virtualization_troubleshooting.md) 1. [Debugging OADP Self Service](self-service_troubleshooting.md) 1. [Deleting Backups](#deleting-backups) -1. [Debugging Data Mover (OADP 1.2 or below)](https://github.com/migtools/volume-snapshot-mover/blob/master/docs/troubleshooting.md) 1. [OpenShift ROSA STS and OADP installation](https://github.com/rh-mobb/documentation/blob/main/content/docs/misc/oadp/rosa-sts/_index.md) 1. 
[Common Issues and Misconfigurations](#common-issues-and-misconfigurations) - [Credentials Not Properly Formatted](#credentials-secret-not-properly-formatted) @@ -36,10 +35,6 @@ If you need help, first search if there is [already an issue filed](https://issu ``` oc logs -f deploy/velero -n openshift-adp ``` - - If Data Mover (OADP 1.2 or below) is enabled, check the volume-snapshot-logs - ``` - oc logs -f deployment.apps/volume-snapshot-mover -n openshift-adp - ``` 1. Velero commands - Alias the velero command: @@ -77,10 +72,6 @@ This section includes how to debug a failed restore. For more specific issues re ``` oc logs -f deployment.apps/velero -n openshift-adp ``` - If Data Mover (OADP 1.2 or below) is enabled, check the volume-snapshot-logs - ``` - oc logs -f deployment.apps/volume-snapshot-mover -n openshift-adp - ``` 1. Velero commands - Alias the velero command: @@ -250,40 +241,11 @@ oc delete backuprepository -n openshift-adp ### Issue with Backup/Restore of DeploymentConfig with volumes or restore hooks -- (OADP 1.3+) **Error:** `DeploymentConfigs restore with spec.Replicas==0 or DC pods fail to restart if they crash if using DC with volumes or restore hooks` - - **Solution:** - - Solution is the same as in the (OADP 1.1+), except it applies to the use case if you are restoring DeploymentConfigs and have either volumes or post-restore hooks regardless of the backup method. - -- (OADP 1.1+) **Error:** `DeploymentConfigs restore with spec.Replicas==0 or DC pods fail to restart if they crash if using Restic/Kopia restores or restore hooks` - - **Solution:** - - This is expected behavior on restore if you are restoring DeploymentConfigs and are either using Restic or Kopia for volume restore or you have post-restore hooks. The pod and DC plugins make these modifications to ensure that Restic or Kopia and hooks work properly, and [dc-post-restore.sh](../docs/scripts/dc-post-restore.sh) should have been run immediately after a successful restore. 
Usage for this script is `dc-post-restore.sh ` - -- (OADP 1.0.z) **Error:** `Using Restic as backup method causes PartiallyFailed/Failed errors in the Restore or post-restore hooks fail to execute` +- **Error:** `DeploymentConfigs restore with spec.Replicas==0 or DC pods fail to restart if they crash if using DC with volumes or restore hooks` **Solution:** - The changes in the backup/restore process for mitigating this error would be a two step restore process where, in the first step we would perform a restore excluding the replicationcontroller and deploymentconfig resources, and the second step would involve a restore including these resources. The backup and restore commands are given below for more clarity. (The examples given below are a use case for backup/restore of a target namespace, for other cases a similar strategy can be followed). - - Please note that this is a temporary fix for this issue and there are ongoing discussions to solve it. - - Step 1: Initiate the backup as any normal backup for restic. - ``` - velero create backup -n openshift-adp --include-namespaces= - ``` - - Step 2: Initiate a restore excluding the replicationcontroller and deploymentconfig resources. - ``` - velero restore create --from-backup= -n openshift-adp --include-namespaces --exclude-resources replicationcontroller,deploymentconfig,templateinstances.template.openshift.io --restore-volumes=true - ``` - - Step 3: Initiate a restore including the replicationcontroller and deploymentconfig resources. - ``` - velero restore create --from-backup= -n openshift-adp --include-namespaces --include-resources replicationcontroller,deploymentconfig,templateinstances.template.openshift.io --restore-volumes=true - ``` + This is expected behavior on restore if you are restoring DeploymentConfigs and have either volumes or post-restore hooks. 
The pod and DC plugins make these modifications to ensure that Restic or Kopia and hooks work properly, and [dc-post-restore.sh](../docs/scripts/dc-post-restore.sh) should have been run immediately after a successful restore. Usage for this script is `dc-post-restore.sh ` ### New Restic Backup Partially Failing After Clearing Bucket diff --git a/docs/config/plugins.md b/docs/config/plugins.md index d0b318e6dd1..cdc195d399d 100644 --- a/docs/config/plugins.md +++ b/docs/config/plugins.md @@ -18,7 +18,6 @@ installing Velero: - `OpenShift` [OpenShift Velero Plugin](https://github.com/openshift/openshift-velero-plugin) - `CSI` [Plugins for CSI](https://github.com/vmware-tanzu/velero-plugin-for-csi) - `kubevirt` [Plugins for Kubevirt](https://github.com/kubevirt/kubevirt-velero-plugin) - - `VSM (OADP 1.2 or below)` [Plugin for Volume-Snapshot-Mover](https://github.com/migtools/velero-plugin-for-vsm) Note that only one of `AWS` and `Legacy AWS` may be installed at the same time. `Legacy AWS` is intended for use with certain S3 providers that do not support the V2 AWS SDK APIs used in the `AWS` plugin. diff --git a/docs/credentials.md b/docs/credentials.md index 00465beb4d8..f5ccdf15460 100644 --- a/docs/credentials.md +++ b/docs/credentials.md @@ -10,7 +10,6 @@ 1. [BSL and VSL share credentials for one provider](#backupstoragelocation-and-volumesnapshotlocation-share-credentials-for-one-provider) 2. [BSL and VSL use the same provider but use different credentials](#backupstoragelocation-and-volumesnapshotlocation-use-the-same-provider-but-use-different-credentials) 3. [No BSL specified but the plugin for the provider exists](#no-backupstoragelocation-specified-but-the-plugin-for-the-provider-exists) -5. [Creating a Secret: OADP with VolumeSnapshotMover](#creating-a-secret-for-volumesnapshotmover) ### Creating a Secret for OADP @@ -214,22 +213,3 @@ spec: If you don't need volumesnapshotlocation, you will not need to create a VSL credentials. 
If you need `VolumeSnapshotLocation`, regardless of the `noDefaultBackupLocation` setting, you will need to create VSL credentials. - - -### Creating a Secret for volumeSnapshotMover (OADP 1.2 or below) - -VolumeSnapshotMover requires a restic secret. It can be configured as so: - -``` -apiVersion: v1 -kind: Secret -metadata: - name: -type: Opaque -stringData: - # The repository encryption key - RESTIC_PASSWORD: my-secure-restic-password -``` - -- *Note:* `dpa.spec.features.dataMover.credentialName` must match the name of the secret. - Otherwise it will default to the name `dm-credential`. diff --git a/docs/examples/data_mover.md b/docs/examples/data_mover.md deleted file mode 100644 index 67f7ef360a9..00000000000 --- a/docs/examples/data_mover.md +++ /dev/null @@ -1,141 +0,0 @@ -

-# Stateful Application Backup/Restore - VolumeSnapshotMover (OADP 1.2 or below)

-

-## Relocate Snapshots into your Object Storage Location

- -

-## Background Information:

-
- -OADP Data Mover enables customers to back up container storage interface (CSI) volume snapshots to a remote object store. When Data Mover is enabled, you can restore stateful applications from the store if a failure, accidental deletion, or corruption of the cluster occurs. OADP Data Mover solution uses the Restic option of VolSync.

- -- The official OpenShift OADP Data Mover documentation can be found [here](https://docs.openshift.com/container-platform/4.12/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.html#oadp-using-data-mover-for-csi-snapshots_backing-up-applications) -- We maintain an up to date FAQ page [here](https://access.redhat.com/articles/5456281) -- Note: Data Mover is a tech preview feature in OADP 1.1.x. Data Mover is planned to be fully supported by Red Hat in the OADP 1.2.0 release. -- Note: We recommend customers using OADP 1.2.x Data Mover to backup and restore ODF CephFS volumes, upgrade or install OCP 4.12 for improved performance. OADP Data Mover can leverage CephFS shallow volumes in OCP 4.12+ which based on our testing improves the performance of backup times. - - [CephFS ROX details](https://issues.redhat.com/browse/RHSTOR-4287) - - [Provisioning and mounting CephFS snapshot-backed volumes](https://github.com/ceph/ceph-csi/blob/devel/docs/cephfs-snapshot-backed-volumes.md) - -

-## Prerequisites:

- -
- -- Have a stateful application running in a separate namespace. - -- Follow instructions for installing the OADP operator and creating an -appropriate `volumeSnapshotClass` and `storageClass`found [here](/docs/examples/CSI/csi_example.md). - -- Install the VolSync operator using OLM. - -Note: For OADP 1.2 you are not required to annotate the openshift-adp namespace (OADP Operator install namespace) with `volsync.backube/privileged-movers='true'`. This action -will be automatically performed by the Operator when the datamover feature is enabled. - -![Volsync_install](/docs/images/volsync_install.png) - -- We will be using VolSync's Restic option, hence configure a restic secret: - -``` -apiVersion: v1 -kind: Secret -metadata: - name: -type: Opaque -stringData: - # The repository encryption key - RESTIC_PASSWORD: my-secure-restic-password -``` - -- Create a DPA similar to below: - - Add the restic secret name from the previous step to your DPA CR in `spec.features.dataMover.credentialName`. - If this step is not completed then it will default to the secret name `dm-credential`. - - - Note the CSI and VSM as `defaultPlugins` and `dataMover.enable` flag. - - -``` -apiVersion: oadp.openshift.io/v1alpha1 -kind: DataProtectionApplication -metadata: - name: velero-sample - namespace: openshift-adp -spec: - features: - dataMover: - enable: true - credentialName: - backupLocations: - - velero: - config: - profile: default - region: us-east-1 - credential: - key: cloud - name: cloud-credentials - default: true - objectStorage: - bucket: - prefix: - provider: aws - configuration: - nodeAgent: - enable: false - uploaderType: restic - velero: - defaultPlugins: - - openshift - - aws - - csi - - vsm -``` - -
- -

-## For Backup

- -- Create a backup CR: - -``` -apiVersion: velero.io/v1 -kind: Backup -metadata: - name: - namespace: -spec: - includedNamespaces: - - - storageLocation: velero-sample-1 -``` - -- Wait several minutes and check the VolumeSnapshotBackup CR status for `completed`: - -`oc get vsb -n ` - -`oc get vsb -n -ojsonpath="{.status.phase}` - -- There should now be a snapshot in the object store that was given in the restic secret. -- You can check for this snapshot in your targeted `backupStorageLocation` with a -prefix of `/` - -

-## For Restore

- -- Make sure the application namespace is deleted, as well as the volumeSnapshotContent - that was created by the Velero CSI plugin. - -- Create a restore CR: - -``` -apiVersion: velero.io/v1 -kind: Restore -metadata: - name: - namespace: -spec: - backupName: -``` - -- Wait several minutes and check the VolumeSnapshotRestore CR status for `completed`: - -`oc get vsr -n ` - -`oc get vsr -n -ojsonpath="{.status.phase}` - -- Check that your application data has been restored: - -`oc get route -n -ojsonpath="{.spec.host}"` diff --git a/docs/examples/datamover_advanced_voloptions.md b/docs/examples/datamover_advanced_voloptions.md deleted file mode 100644 index 09d773c3907..00000000000 --- a/docs/examples/datamover_advanced_voloptions.md +++ /dev/null @@ -1,369 +0,0 @@ -#

-# OADP Data Mover 1.2 Advanced Volume Options

- - -- The official OpenShift OADP Data Mover documentation can be found [here](https://docs.openshift.com/container-platform/4.13/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.html#oadp-using-data-mover-for-csi-snapshots_backing-up-applications) -- We maintain an up to date FAQ page [here](https://access.redhat.com/articles/5456281) - -

-## Background Information:

- - -OADP Data Mover 1.2 leverages some of the recently added features of Ceph to be -performant in large scale environments, one being the -[shallow copy](https://github.com/ceph/ceph-csi/blob/devel/docs/design/proposals/cephfs-snapshot-shallow-ro-vol.md) -method, which is available > OCP 4.11. This feature requires use of the Data Mover -1.2 feature for volumeOptions so that other storageClasses and accessModes can be -used other than what is found on the source PVC. - -1. [Prerequisites](#pre-reqs) -2. [CephFS with ShallowCopy](#shallowcopy) -3. [CephFS and CephRBD Split Volumes](#fsrbd) - -

-## Prerequisites:

- -- OCP > 4.11 - -- OADP operator and a credentials secret are created. Follow - [these steps](/docs/install_olm.md) for installation instructions. - -- A CephFS and a CephRBD `StorageClass` and a `VolumeSnapshotClass` - - Installing ODF will create these in your cluster: - -### CephFS VolumeSnapshotClass and StorageClass: - -**Note:** The deletionPolicy, annotations, and labels - -```yml -apiVersion: snapshot.storage.k8s.io/v1 -deletionPolicy: Retain # <--- Note the Retain Policy -driver: openshift-storage.cephfs.csi.ceph.com -kind: VolumeSnapshotClass -metadata: - annotations: - snapshot.storage.kubernetes.io/is-default-class: 'true' # <--- Note the default - labels: - velero.io/csi-volumesnapshot-class: 'true' # <--- Note the velero label - name: ocs-storagecluster-cephfsplugin-snapclass -parameters: - clusterID: openshift-storage - csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner - csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage -``` - -**Note:** The annotations -```yml -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: ocs-storagecluster-cephfs - annotations: - description: Provides RWO and RWX Filesystem volumes - storageclass.kubernetes.io/is-default-class: 'true' # <--- Note the default -provisioner: openshift-storage.cephfs.csi.ceph.com -parameters: - clusterID: openshift-storage - csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner - csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage - csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node - csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage - csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner - csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage - fsName: ocs-storagecluster-cephfilesystem -reclaimPolicy: Delete -allowVolumeExpansion: true -volumeBindingMode: Immediate -``` - -### CephRBD VolumeSnapshotClass and StorageClass: - -**Note:** 
The deletionPolicy, and labels -```yml -apiVersion: snapshot.storage.k8s.io/v1 -deletionPolicy: Retain # <--- Note: the Retain Policy -driver: openshift-storage.rbd.csi.ceph.com -kind: VolumeSnapshotClass -metadata: - labels: - velero.io/csi-volumesnapshot-class: 'true' # <--- Note velero - name: ocs-storagecluster-rbdplugin-snapclass -parameters: - clusterID: openshift-storage - csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner - csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage -``` - -```yml -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: ocs-storagecluster-ceph-rbd - annotations: - description: 'Provides RWO Filesystem volumes, and RWO and RWX Block volumes' -provisioner: openshift-storage.rbd.csi.ceph.com -parameters: - csi.storage.k8s.io/fstype: ext4 - csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage - csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner - csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node - csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner - imageFormat: '2' - clusterID: openshift-storage - imageFeatures: layering - csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage - pool: ocs-storagecluster-cephblockpool - csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage -reclaimPolicy: Delete -allowVolumeExpansion: true -volumeBindingMode: Immediate -``` - -- Create an additional CephFS `StorageClass` to make use of the `shallowCopy` feature: - -```yml -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: ocs-storagecluster-cephfs-shallow - annotations: - description: Provides RWO and RWX Filesystem volumes - storageclass.kubernetes.io/is-default-class: 'false' -provisioner: openshift-storage.cephfs.csi.ceph.com -parameters: - csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage - csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner - 
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node - csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner - clusterID: openshift-storage - fsName: ocs-storagecluster-cephfilesystem - csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage - backingSnapshot: 'true' # <--- shallowCopy - csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage -reclaimPolicy: Delete -allowVolumeExpansion: true -volumeBindingMode: Immediate -``` - -- **Notes**: - - Make sure the default `VolumeSnapshotClass` and `StorageClass` are the same provisioner - - The `VolumeSnapshotClass` must have the `deletionPloicy` set to Retain - - The `VolumeSnapshotClasses` must have the label `velero.io/csi-volumesnapshot-class: 'true'` - -- Install the latest VolSync operator using OLM. - -![Volsync_install](/docs/images/volsync_install.png) - -- We will be using VolSync's Restic option, hence configure a restic secret: - -```yml -apiVersion: v1 -kind: Secret -metadata: - name: -type: Opaque -stringData: - # The repository encryption key - RESTIC_PASSWORD: my-secure-restic-password -``` - -

-## Backup/Restore with CephFS ShallowCopy

- -- Please ensure that a stateful application is running in a separate namespace with PVCs using - CephFS as the provisioner - -- Please ensure the default `StorageClass` and `VolumeSnapshotClass` as cephFS, as shown - in the [prerequisites](#pre-reqs) - -- **Helpful Commands**: - - Check the VolumeSnapshotClass retain policy: - ``` - oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"Retention Policy: "}{.deletionPolicy}{"\n"}{end}' - ``` - Check the VolumeSnapShotClass lables: - ``` - oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"labels: "}{.metadata.labels}{"\n"}{end}' - ``` - Check the StorageClass annotations: - ``` - oc get storageClass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"annotations: "}{.metadata.annotations}{"\n"}{end}' - ``` - -- Create a DPA similar to below: - - Add the restic secret name from the previous step to your DPA CR - in `spec.features.dataMover.credentialName`. If this step is not completed - then it will default to the secret name `dm-credential`. - - -```yml -apiVersion: oadp.openshift.io/v1alpha1 -kind: DataProtectionApplication -metadata: - name: velero-sample - namespace: openshift-adp -spec: - backupLocations: - - velero: - config: - profile: default - region: us-east-1 - credential: - key: cloud - name: cloud-credentials - default: true - objectStorage: - bucket: - prefix: velero - provider: aws - configuration: - nodeAgent: - enable: false # [true, false] - uploaderType: restic # [restic, kopia] - velero: - defaultPlugins: - - openshift - - aws - - csi - - vsm - features: - dataMover: - credentialName: - enable: true - volumeOptionsForStorageClasses: - ocs-storagecluster-cephfs: - sourceVolumeOptions: - accessMode: ReadOnlyMany - cacheAccessMode: ReadWriteMany - cacheStorageClassName: ocs-storagecluster-cephfs - storageClassName: ocs-storagecluster-cephfs-shallow -``` - -
- -

-### For Backup

- -- Create a backup CR: - -```yml -apiVersion: velero.io/v1 -kind: Backup -metadata: - name: - namespace: -spec: - includedNamespaces: - - - storageLocation: velero-sample-1 -``` - -- Monitor the datamover backup and artifacts via [a debug script](/docs/examples/debug.md) - -OR -- Check the progress of the `volumeSnapshotBackup`(s): - -``` -oc get vsb -n -oc get vsb -n -ojsonpath="{.status.phase}` -``` - -- Wait several minutes and check the VolumeSnapshotBackup CR status for `completed`: - -- There should now be a snapshot(s) in the object store that was given in the restic secret. -- You can check for this snapshot in your targeted `backupStorageLocation` with a -prefix of `/` - -

-### For Restore

- -- Make sure the application namespace is deleted, as well as any volumeSnapshotContents - that were created during backup. - -- Create a restore CR: - -```yml -apiVersion: velero.io/v1 -kind: Restore -metadata: - name: - namespace: -spec: - backupName: -``` -- Monitor the datamover backup and artifacts via [a debug script](/docs/examples/debug.md) -OR -- Check the `VolumeSnapshotRestore`(s) progress: - -``` -oc get vsr -n -oc get vsr -n -ojsonpath="{.status.phase} -``` - -- Check that your application data has been restored: - -`oc get route -n -ojsonpath="{.spec.host}"` - - -

-## Backup/Restore with Split Volumes: CephFS and CephRBD

- -- Ensure a stateful application is running in a separate namespace with PVCs provisioned - by both CephFS and CephRBD - -- This assumes cephFS is being used as the default `StorageClass` and - `VolumeSnapshotClass` - -- Create a DPA similar to below: - - Add the restic secret name from the prerequisites to your DPA CR in - `spec.features.dataMover.credentialName`. If this step is not completed then - it will default to the secret name `dm-credential` - - Note: `volumeOptionsForStorageClass` can be defined for multiple storageClasses, - thus allowing a backup to complete with volumes with different providers. - -```yml -apiVersion: oadp.openshift.io/v1alpha1 -kind: DataProtectionApplication -metadata: - name: velero-sample - namespace: openshift-adp -spec: - backupLocations: - - velero: - config: - profile: default - region: us-east-1 - credential: - key: cloud - name: cloud-credentials - default: true - objectStorage: - bucket: - prefix: velero - provider: aws - configuration: - nodeAgent: - enable: false - uploaderType: restic - velero: - defaultPlugins: - - openshift - - aws - - csi - - vsm - features: - dataMover: - credentialName: - enable: true - volumeOptionsForStorageClasses: - ocs-storagecluster-cephfs: - sourceVolumeOptions: - accessMode: ReadOnlyMany - cacheAccessMode: ReadWriteMany - cacheStorageClassName: ocs-storagecluster-cephfs - storageClassName: ocs-storagecluster-cephfs-shallow - ocs-storagecluster-ceph-rbd: - sourceVolumeOptions: - storageClassName: ocs-storagecluster-ceph-rbd - cacheStorageClassName: ocs-storagecluster-ceph-rbd - destinationVolumeOptions: - storageClassName: ocs-storagecluster-ceph-rbd - cacheStorageClassName: ocs-storagecluster-ceph-rbd -``` -Note: The CephFS ShallowCopy feature can only be used for datamover backup operation, the ShallowCopy volume options are not supported for restore. 
- -- Now follow the backup and restore steps from the previous example diff --git a/docs/oadp_cheat_sheet.md b/docs/oadp_cheat_sheet.md index c8e34879c3b..ea2671b7015 100644 --- a/docs/oadp_cheat_sheet.md +++ b/docs/oadp_cheat_sheet.md @@ -190,55 +190,3 @@ Resource List: Velero-Native Snapshots: ``` - - - -## Data Mover (OADP 1.2 or below) Specific commands - -#### Clean up datamover related objects -**WARNING** Do not run this command on production systems. This is a remove *ALL* command. -``` -oc delete vsb -A --all; oc delete vsr -A --all; oc delete vsc -A --all; oc delete vs -A --all; oc delete replicationsources.volsync.backube -A --all; oc delete replicationdestination.volsync.backube -A --all -``` -Details: -``` ---all=false: - Delete all resources, in the namespace of the specified resource types. -``` -``` --A, --all-namespaces=false: - If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even - if specified with --namespace. -``` -A safer to execute a cleanup is to limit the delete to a namespace or a specific object. 
-* namespaced objecs: VSB, VSR, VSC, VS -* protected namespace (openshift-adp): replicationsources.volsync.backube, replicationdestination.volsync.backube - -``` -oc delete vsb -n --all -``` - - - -#### Remove finalizers -``` -for i in `oc get vsc -A -o custom-columns=NAME:.metadata.name`; do echo $i; oc patch vsc $i -p '{"metadata":{"finalizers":null}}' --type=merge; done -``` - -#### Watch datamover resources while backup in progress -``` -curl -o ~/.local/bin/datamover_resources.sh https://raw.githubusercontent.com/openshift/oadp-operator/oadp-dev/docs/examples/datamover_resources.sh -``` -###### Backups -``` -watch -n 5 datamover_resources.sh -b -d -``` -###### Restore -``` -watch -n 5 datamover_resources.sh -r -d -``` - -#### Watch the VSM plugin logs -``` -oc logs -f deployment.apps/volume-snapshot-mover -n openshift-adp -``` diff --git a/docs/restic_troubleshooting.md b/docs/restic_troubleshooting.md index 1d6292897a3..35ca0690d0a 100644 --- a/docs/restic_troubleshooting.md +++ b/docs/restic_troubleshooting.md @@ -189,188 +189,6 @@ oc -n openshift-adp get podvolumerestore -l velero.io/restore-name= ``` -## Data Mover (OADP 1.2 or below) + Restic - -#### get replicationsource info -``` -oc get replicationsource -A -NAMESPACE NAME SOURCE LAST SYNC DURATION NEXT SYNC -openshift-adp vsb-7rkn6-rep-src snapcontent-993aabe2-8170-4661-984e-00a560f486cd-pvc 2023-06-20T20:16:55Z 33.274853286s -openshift-adp vsb-vpqzd-rep-src snapcontent-a751884d-b148-4a7d-9f5d-90da7a522be7-pvc 2023-06-20T20:17:51Z 24.452515994s -``` - -``` -oc get replicationsource vsb-7rkn6-rep-src -n openshift-adp -o yaml -apiVersion: volsync.backube/v1alpha1 -kind: ReplicationSource -metadata: - creationTimestamp: "2023-06-20T20:16:22Z" - generation: 1 - labels: - datamover.oadp.openshift.io/vsb: vsb-7rkn6 - name: vsb-7rkn6-rep-src - namespace: openshift-adp - resourceVersion: "28136883" - uid: 1b6b4f33-41b2-4159-a396-545208742208 -spec: - restic: - accessModes: - - ReadWriteOnce - copyMethod: 
Direct - customCA: {} - moverServiceAccount: velero - repository: vsb-7rkn6-secret - retain: {} - storageClassName: gp2-csi - volumeSnapshotClassName: csi-aws-vsc-test - sourcePVC: snapcontent-993aabe2-8170-4661-984e-00a560f486cd-pvc - trigger: - manual: vsb-7rkn6-trigger -status: - conditions: - - lastTransitionTime: "2023-06-20T20:16:55Z" - message: Waiting for manual trigger - reason: WaitingForManual - status: "False" - type: Synchronizing - lastManualSync: vsb-7rkn6-trigger - lastSyncDuration: 33.274853286s - lastSyncTime: "2023-06-20T20:16:55Z" - latestMoverStatus: - logs: |- - no parent snapshot found, will read all files - Added to the repository: 8.102 MiB (408.500 KiB stored) - processed 101 files, 102.651 MiB in 0:00 - snapshot dcec01b1 saved - Restic completed in 4s - result: Successful - restic: {} -``` - -#### get restic repo information for data mover -``` -oc get secret dpa-sample-1-volsync-restic -n openshift-adp -o yaml -apiVersion: v1 -data: - AWS_ACCESS_KEY_ID: QUtJQVZCUsnip - AWS_DEFAULT_REGION: dXMtdsnip - AWS_SECRET_ACCESS_KEY: ZGZQsnip - RESTIC_PASSWORD: cmVzdGljcGFzc3dvcmQ= - RESTIC_REPOSITORY: czM6czMuYW1hem9uYXdzLmNvbS9jdnBidWNrZXR1c3dlc3Qy - restic-prune-interval: MQ== -kind: Secret -metadata: - creationTimestamp: "2023-06-14T17:53:41Z" - labels: - openshift.io/oadp: "True" - openshift.io/oadp-bsl-name: dpa-sample-1 - openshift.io/oadp-bsl-provider: aws - name: dpa-sample-1-volsync-restic - namespace: openshift-adp - ownerReferences: - - apiVersion: oadp.openshift.io/v1alpha1 - blockOwnerDeletion: true - controller: true - kind: DataProtectionApplication - name: dpa-sample - uid: 66568a80-778a-4478-bca1-d8ff7720b129 - resourceVersion: "28139203" - uid: 192bc903-e754-4cd3-9173-2af805c2b0d0 -type: Opaque -``` - -#### decode the restic passwd -``` -cho "cmVzdGljcGFzc3dvcmQ=" | base64 -d -resticpassword -``` - -#### datamover restic path - -The path in 1.2.0 is -`$bucket/openshift-adp/$snapcontent_name` - -The snapcontent_name = sourcePVC - 
-``` -spec: - restic: - accessModes: - - ReadWriteOnce - copyMethod: Direct - customCA: {} - moverServiceAccount: velero - pruneIntervalDays: 1 - repository: vsb-zg6gg-secret - retain: {} - storageClassName: gp2-csi - volumeSnapshotClassName: csi-aws-vsc-test - sourcePVC: snapcontent-2044fb64-253d-461b-93f3-1ce8d6b67ebe-pvc -``` - -#### list snapshots for DataMover restic snapshot - -``` -restic --cache-dir /tmp/.cache -r s3://openshift-adp/snapcontent-993aabe2-8170-4661-984e-00a560f486cd-pvc snapshots -enter password for repository: -repository 85c55159 opened (version 2, compression level auto) -created new cache in /tmp/.cache -ID Time Host Tags Paths ------------------------------------------------------------- -dcec01b1 2023-06-20 20:16:46 volsync /data ------------------------------------------------------------- -1 snapshots -``` - -## Update DPA for retain policy - restic forget -``` - features: - dataMover: - credentialName: restic-secret - enable: true - pruneInterval: "1" - snapshotRetainPolicy: - hourly: "1" -``` - -## Run a new backup and check replicationsource - -``` -oc get replicationsource vsb-zg6gg-rep-src -n openshift-adp -o yaml -apiVersion: volsync.backube/v1alpha1 -kind: ReplicationSource -metadata: - creationTimestamp: "2023-06-20T21:04:28Z" - generation: 1 - labels: - datamover.oadp.openshift.io/vsb: vsb-zg6gg - name: vsb-zg6gg-rep-src - namespace: openshift-adp - resourceVersion: "28168858" - uid: 53dc160a-d0c1-416a-95fb-77f316e8e0c1 -spec: - restic: - accessModes: - - ReadWriteOnce - copyMethod: Direct - customCA: {} - moverServiceAccount: velero - pruneIntervalDays: 1 - repository: vsb-zg6gg-secret -``` - -#### get snapshots -``` -restic --cache-dir /tmp/.cache -r s3://openshift-adp/snapcontent-2044fb64-253d-461b-93f3-1ce8d6b67ebe-pvc snapshots -enter password for repository: -repository 83b7f53a opened (version 2, compression level auto) -ID Time Host Tags Paths ------------------------------------------------------------- -ab60e48b 
2023-06-20 21:04:41 volsync /data ------------------------------------------------------------- -1 snapshots -``` - ## Maintenance * Upstream Documentation: diff --git a/docs/upgrade_1-1_to_1-2.md b/docs/upgrade_1-1_to_1-2.md deleted file mode 100644 index b3cb53982d9..00000000000 --- a/docs/upgrade_1-1_to_1-2.md +++ /dev/null @@ -1,56 +0,0 @@ -# Upgrading from OADP 1.1 - -> **NOTE:** Always upgrade to next minor version, do NOT skip versions. To update to higher version, please upgrade one channel at a time. Example: to upgrade from 1.1 to 1.3, upgrade first to 1.2, then to 1.3. -## Changes from OADP 1.1 to 1.2 - -- Velero was updated from version 1.9 to 1.11 (Changes reference: https://velero.io/docs/v1.11/upgrade-to-1.11/#upgrade-from-version-lower-than-v1100) - - From this update, in the DPA's configuration `spec.configuration.velero.args` have changed: - - - The `default-volumes-to-restic` field was renamed `default-volumes-to-fs-backup`, **if you are using `spec.velero`, you need to add it back, with the new name, to your DPA after upgrading OADP** - - - The `default-restic-prune-frequency` field was renamed `default-repo-maintain-frequency`, **if you are using `spec.velero`, you need to add it back, with the new name, to your DPA after upgrading OADP** - - - The `restic-timeout` field was renamed `fs-backup-timeout`, **if you are using `spec.velero`, you need to add it back, with the new name, to your DPA after upgrading OADP** - -- The `restic` DaemonSet was renamed to `node-agent`. OADP will automatically update the name of the DaemonSet - -- The CustomResourceDefinition `resticrepositories.velero.io` was renamed to `backuprepositories.velero.io` - * The CustomResourceDefinition `resticrepositories.velero.io` can optionally be removed from the cluster - -## Upgrade steps - -### Backup the DPA configuration - -Save your current DataProtectionApplication (DPA) CustomResource config, be sure to remember the values. 
- -For example: -``` -oc get dpa -n openshift-adp -o yaml > dpa.orig.backup -``` - -### Upgrade the OADP Operator - -For general operator upgrade instructions please review the [OpenShift documentation](https://docs.openshift.com/container-platform/4.13/operators/admin/olm-upgrading-operators.html) -* Change the Subscription for the OADP Operator from `stable-1.1` to `stable-1.2` -* Allow time for the operator and containers to update and restart - -### Convert your DPA to the new version - -If you are using fields that were updated in `spec.configuration.velero.args`, you need to update there new names. Example -```diff - spec: - configuration: - velero: - args: -- default-volumes-to-restic: true -+ default-volumes-to-fs-backup: true -- default-restic-prune-frequency: 6000 -+ default-repo-maintain-frequency: 6000 -- restic-timeout: 600 -+ fs-backup-timeout: 600 -``` - -### Verify the upgrade - -Follow theses [basic install verification](../docs/install_olm.md#verify-install) to verify the installation. diff --git a/docs/upgrade_1-2_to_1-3.md b/docs/upgrade_1-2_to_1-3.md deleted file mode 100644 index 365619743ce..00000000000 --- a/docs/upgrade_1-2_to_1-3.md +++ /dev/null @@ -1,124 +0,0 @@ -# Upgrading from OADP 1.2 - -> **NOTE:** Always upgrade to next minor version, do NOT skip versions. To update to higher version, please upgrade one channel at a time. Example: to upgrade from 1.1 to 1.3, upgrade first to 1.2, then to 1.3. -## Changes from OADP 1.2 to 1.3 - -- The Velero server has been updated from version 1.11 to 1.12 (Changes reference: https://velero.io/docs/v1.12/upgrade-to-1.12/#upgrade-from-v110-or-higher) - - From this update, OADP 1.3 now uses the Velero Built-in Data Mover instead of the VSM/Volsync Data Mover. This changes the following: - - - The `spec.features.dataMover` field and the `vsm` plugin are not compatible with 1.3 and must be removed from the DPA configuration. 
- - - The Volsync operator is no longer required and can optionally be removed. - - - The CustomResourceDefinitions `volumesnapshotbackups.datamover.oadp.openshift.io` and `volumesnapshotrestores.datamover.oadp.openshift.io` are no longer required and can optionally be removed. - - - The secrets used for the OADP-1.2 Data Mover are no longer required and can optionally be removed. - -- OADP now supports Kopia, an alternative file system backup tool to Restic. - - - To employ Kopia, use the new `spec.configuration.nodeAgent` field. For example: - - ```yaml - spec: - configuration: - nodeAgent: - enable: true - uploaderType: kopia - ``` - -- The `spec.configuration.restic` field is being deprecated in OADP 1.3, and will be removed in OADP 1.4. To avoid seeing deprecating warnings about it, use the new syntax: -```diff - spec: - configuration: -- restic: -- enable: true -+ nodeAgent: -+ enable: true -+ uploaderType: restic -``` - -> **Note:** In the next version of OADP the Restic option will be deprecated and Kopia will be come the default uploaderType. - -## Upgrade steps - -### If the OADP 1.2 tech-preview Data Mover feature is in use, please read the following - -OADP 1.2 Data Mover backups can **NOT** be restored with OADP 1.3. To prevent a gap in the data protection of your applications we recommend the following to be **completed prior to the OADP upgrade**. - -* If on cluster backups are sufficient and CSI storage is available - * Backup the applications with a CSI backup - -* If off cluster backups are required - * Backup the applications with a filesystem backup using the `--default-volumes-to-fs-backup=true` or `backup.spec.defaultVolumesToFsBackup` options. - * Backup the applications with your object storage plugins e.g. velero-plugin-for-aws - -* If for any reason an OADP 1.2 Data Mover backup must be restored, OADP must be fully uninstalled and OADP 1.2 reinstalled and configured. 
- -### Backup the DPA configuration - -Save your current DataProtectionApplication (DPA) CustomResource config, be sure to remember the values. - -For example: -``` -oc get dpa -n openshift-adp -o yaml > dpa.orig.backup -``` - -### Upgrade the OADP Operator - -For general operator upgrade instructions please review the [OpenShift documentation](https://docs.openshift.com/container-platform/4.13/operators/admin/olm-upgrading-operators.html) -* Change the Subscription for the OADP Operator from `stable-1.2` to `stable-1.3` -* Allow time for the operator and containers to update and restart - -### Convert your DPA to the new version - -If relocating backups off cluster is required (Data Mover), please reconfigure the DPA with the following: - -* remove the features.dataMover key and values from DPA -* remove the VSM plugin - -Example -```diff - spec: - configuration: -- features: -- dataMover: -- enable: true -- credentialName: dm-credentials -+ nodeAgent: -+ enable: true -+ uploaderType: kopia - velero: - defaultPlugins: -- - vsm - - csi - - openshift -``` - -* Wait for the DPA to reconcile successfully. - -### Verify the upgrade - -Follow theses [basic install verification](../docs/install_olm.md#verify-install) to verify the installation. - -**NOTE**: Invoking data movement off cluster in OADP 1.3.0 is now an option per backup vs. a DPA configuration. 
- -For example: - -``` -velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true -``` -or -```yaml -apiVersion: velero.io/v1 -kind: Backup -metadata: - name: example-backup - namespace: openshift-adp -spec: - snapshotMoveData: true - includedNamespaces: - - mysql-persistent - storageLocation: dpa-sample-1 - ttl: 720h0m0s -``` \ No newline at end of file diff --git a/docs/upgrade_1-3_to_1-4.md b/docs/upgrade_1-3_to_1-4.md index a61724f3356..850f25dc853 100644 --- a/docs/upgrade_1-3_to_1-4.md +++ b/docs/upgrade_1-3_to_1-4.md @@ -1,6 +1,6 @@ # Upgrading from OADP 1.3 -> **NOTE:** Always upgrade to next minor version, do NOT skip versions. To update to higher version, please upgrade one channel at a time. Example: to upgrade from 1.1 to 1.3, upgrade first to 1.2, then to 1.3. +> **NOTE:** Always upgrade to the next minor version; do NOT skip versions. To move to a higher version, upgrade one channel at a time. Example: to upgrade from 1.3 to 1.5, upgrade first to 1.4, then to 1.5. ## Changes from OADP 1.3 to 1.4 diff --git a/docs/upgrade_1-4_to_1-5.md b/docs/upgrade_1-4_to_1-5.md index 44f3167aecc..91a8abebec0 100644 --- a/docs/upgrade_1-4_to_1-5.md +++ b/docs/upgrade_1-4_to_1-5.md @@ -1,6 +1,6 @@ # Upgrading from OADP 1.4 -> **NOTE:** Always upgrade to next minor version, do NOT skip versions. To update to higher version, please upgrade one channel at a time. Example: to upgrade from 1.1 to 1.3, upgrade first to 1.2, then to 1.3. +> **NOTE:** Always upgrade to the next minor version; do NOT skip versions. To move to a higher version, upgrade one channel at a time. Example: to upgrade from 1.3 to 1.5, upgrade first to 1.4, then to 1.5.
## Changes from OADP 1.4 to 1.5 From b0fa34cb0ce0112707fa422527661d1067d9d7ba Mon Sep 17 00:00:00 2001 From: Wesley Hayutin Date: Wed, 6 Aug 2025 14:24:30 -0600 Subject: [PATCH 2/3] restore upgrade notes from 1.1 through --- docs/upgrade_1-1_to_1-2.md | 56 +++++++++++++++++ docs/upgrade_1-2_to_1-3.md | 124 +++++++++++++++++++++++++++++++++++++ 2 files changed, 180 insertions(+) create mode 100644 docs/upgrade_1-1_to_1-2.md create mode 100644 docs/upgrade_1-2_to_1-3.md diff --git a/docs/upgrade_1-1_to_1-2.md b/docs/upgrade_1-1_to_1-2.md new file mode 100644 index 00000000000..b3cb53982d9 --- /dev/null +++ b/docs/upgrade_1-1_to_1-2.md @@ -0,0 +1,56 @@ +# Upgrading from OADP 1.1 + +> **NOTE:** Always upgrade to the next minor version; do NOT skip versions. To move to a higher version, upgrade one channel at a time. Example: to upgrade from 1.1 to 1.3, upgrade first to 1.2, then to 1.3. +## Changes from OADP 1.1 to 1.2 + +- Velero was updated from version 1.9 to 1.11 (Changes reference: https://velero.io/docs/v1.11/upgrade-to-1.11/#upgrade-from-version-lower-than-v1100) + + With this update, the following fields in the DPA's `spec.configuration.velero.args` have changed: + + - The `default-volumes-to-restic` field was renamed to `default-volumes-to-fs-backup`; **if you are using `spec.velero`, you need to add it back, with the new name, to your DPA after upgrading OADP** + + - The `default-restic-prune-frequency` field was renamed to `default-repo-maintain-frequency`; **if you are using `spec.velero`, you need to add it back, with the new name, to your DPA after upgrading OADP** + + - The `restic-timeout` field was renamed to `fs-backup-timeout`; **if you are using `spec.velero`, you need to add it back, with the new name, to your DPA after upgrading OADP** + +- The `restic` DaemonSet was renamed to `node-agent`.
 OADP will automatically update the name of the DaemonSet + +- The CustomResourceDefinition `resticrepositories.velero.io` was renamed to `backuprepositories.velero.io` + * The CustomResourceDefinition `resticrepositories.velero.io` can optionally be removed from the cluster + +## Upgrade steps + +### Backup the DPA configuration + +Save your current DataProtectionApplication (DPA) CustomResource config and be sure to remember the values. + +For example: +``` +oc get dpa -n openshift-adp -o yaml > dpa.orig.backup +``` + +### Upgrade the OADP Operator + +For general operator upgrade instructions, please review the [OpenShift documentation](https://docs.openshift.com/container-platform/4.13/operators/admin/olm-upgrading-operators.html) +* Change the Subscription for the OADP Operator from `stable-1.1` to `stable-1.2` +* Allow time for the operator and containers to update and restart + +### Convert your DPA to the new version + +If you are using fields that were updated in `spec.configuration.velero.args`, you need to update them to their new names. Example: +```diff + spec: + configuration: + velero: + args: +- default-volumes-to-restic: true ++ default-volumes-to-fs-backup: true +- default-restic-prune-frequency: 6000 ++ default-repo-maintain-frequency: 6000 +- restic-timeout: 600 ++ fs-backup-timeout: 600 +``` + +### Verify the upgrade + +Follow these [basic install verification](../docs/install_olm.md#verify-install) steps to verify the installation. diff --git a/docs/upgrade_1-2_to_1-3.md b/docs/upgrade_1-2_to_1-3.md new file mode 100644 index 00000000000..365619743ce --- /dev/null +++ b/docs/upgrade_1-2_to_1-3.md @@ -0,0 +1,124 @@ +# Upgrading from OADP 1.2 + +> **NOTE:** Always upgrade to the next minor version; do NOT skip versions. To move to a higher version, upgrade one channel at a time. Example: to upgrade from 1.1 to 1.3, upgrade first to 1.2, then to 1.3.
+## Changes from OADP 1.2 to 1.3 + +- The Velero server has been updated from version 1.11 to 1.12 (Changes reference: https://velero.io/docs/v1.12/upgrade-to-1.12/#upgrade-from-v110-or-higher) + + With this update, OADP 1.3 now uses the Velero Built-in Data Mover instead of the VSM/Volsync Data Mover. This changes the following: + + - The `spec.features.dataMover` field and the `vsm` plugin are not compatible with 1.3 and must be removed from the DPA configuration. + + - The Volsync operator is no longer required and can optionally be removed. + + - The CustomResourceDefinitions `volumesnapshotbackups.datamover.oadp.openshift.io` and `volumesnapshotrestores.datamover.oadp.openshift.io` are no longer required and can optionally be removed. + + - The secrets used for the OADP-1.2 Data Mover are no longer required and can optionally be removed. + +- OADP now supports Kopia, an alternative file system backup tool to Restic. + + - To employ Kopia, use the new `spec.configuration.nodeAgent` field. For example: + + ```yaml + spec: + configuration: + nodeAgent: + enable: true + uploaderType: kopia + ``` + +- The `spec.configuration.restic` field is being deprecated in OADP 1.3, and will be removed in OADP 1.4. To avoid deprecation warnings about it, use the new syntax: +```diff + spec: + configuration: +- restic: +- enable: true ++ nodeAgent: ++ enable: true ++ uploaderType: restic +``` + +> **Note:** In the next version of OADP the Restic option will be deprecated and Kopia will become the default uploaderType. + +## Upgrade steps + +### If the OADP 1.2 tech-preview Data Mover feature is in use, please read the following + +OADP 1.2 Data Mover backups can **NOT** be restored with OADP 1.3. To prevent a gap in the data protection of your applications, we recommend the following be **completed prior to the OADP upgrade**.
+ +* If on-cluster backups are sufficient and CSI storage is available + * Back up the applications with a CSI backup + +* If off-cluster backups are required + * Back up the applications with a filesystem backup using the `--default-volumes-to-fs-backup=true` or `backup.spec.defaultVolumesToFsBackup` options. + * Back up the applications with your object storage plugins, e.g. velero-plugin-for-aws + +* If for any reason an OADP 1.2 Data Mover backup must be restored, OADP must be fully uninstalled and OADP 1.2 reinstalled and configured. + +### Backup the DPA configuration + +Save your current DataProtectionApplication (DPA) CustomResource config and be sure to remember the values. + +For example: +``` +oc get dpa -n openshift-adp -o yaml > dpa.orig.backup +``` + +### Upgrade the OADP Operator + +For general operator upgrade instructions, please review the [OpenShift documentation](https://docs.openshift.com/container-platform/4.13/operators/admin/olm-upgrading-operators.html) +* Change the Subscription for the OADP Operator from `stable-1.2` to `stable-1.3` +* Allow time for the operator and containers to update and restart + +### Convert your DPA to the new version + +If relocating backups off cluster is required (Data Mover), please reconfigure the DPA with the following: + +* remove the features.dataMover key and values from the DPA +* remove the VSM plugin + +Example: +```diff + spec: + configuration: +- features: +- dataMover: +- enable: true +- credentialName: dm-credentials ++ nodeAgent: ++ enable: true ++ uploaderType: kopia + velero: + defaultPlugins: +- - vsm + - csi + - openshift +``` + +* Wait for the DPA to reconcile successfully. + +### Verify the upgrade + +Follow these [basic install verification](../docs/install_olm.md#verify-install) steps to verify the installation. + +**NOTE**: Invoking data movement off cluster in OADP 1.3.0 is now a per-backup option rather than a DPA-level configuration.
+ +For example: + +``` +velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true +``` +or +```yaml +apiVersion: velero.io/v1 +kind: Backup +metadata: + name: example-backup + namespace: openshift-adp +spec: + snapshotMoveData: true + includedNamespaces: + - mysql-persistent + storageLocation: dpa-sample-1 + ttl: 720h0m0s +``` \ No newline at end of file From 38fb7e082f14fc746175838e1b7af912056de817 Mon Sep 17 00:00:00 2001 From: Wesley Hayutin Date: Wed, 6 Aug 2025 14:27:24 -0600 Subject: [PATCH 3/3] keep restic troubleshooting --- docs/restic_troubleshooting.md | 182 +++++++++++++++++++++++++++++++++ 1 file changed, 182 insertions(+) diff --git a/docs/restic_troubleshooting.md b/docs/restic_troubleshooting.md index 35ca0690d0a..1d6292897a3 100644 --- a/docs/restic_troubleshooting.md +++ b/docs/restic_troubleshooting.md @@ -189,6 +189,188 @@ oc -n openshift-adp get podvolumerestore -l velero.io/restore-name= ``` +## Data Mover (OADP 1.2 or below) + Restic + +#### get replicationsource info +``` +oc get replicationsource -A +NAMESPACE NAME SOURCE LAST SYNC DURATION NEXT SYNC +openshift-adp vsb-7rkn6-rep-src snapcontent-993aabe2-8170-4661-984e-00a560f486cd-pvc 2023-06-20T20:16:55Z 33.274853286s +openshift-adp vsb-vpqzd-rep-src snapcontent-a751884d-b148-4a7d-9f5d-90da7a522be7-pvc 2023-06-20T20:17:51Z 24.452515994s +``` + +``` +oc get replicationsource vsb-7rkn6-rep-src -n openshift-adp -o yaml +apiVersion: volsync.backube/v1alpha1 +kind: ReplicationSource +metadata: + creationTimestamp: "2023-06-20T20:16:22Z" + generation: 1 + labels: + datamover.oadp.openshift.io/vsb: vsb-7rkn6 + name: vsb-7rkn6-rep-src + namespace: openshift-adp + resourceVersion: "28136883" + uid: 1b6b4f33-41b2-4159-a396-545208742208 +spec: + restic: + accessModes: + - ReadWriteOnce + copyMethod: Direct + customCA: {} + moverServiceAccount: velero + repository: vsb-7rkn6-secret + retain: {} + storageClassName: gp2-csi + volumeSnapshotClassName: 
 csi-aws-vsc-test + sourcePVC: snapcontent-993aabe2-8170-4661-984e-00a560f486cd-pvc + trigger: + manual: vsb-7rkn6-trigger +status: + conditions: + - lastTransitionTime: "2023-06-20T20:16:55Z" + message: Waiting for manual trigger + reason: WaitingForManual + status: "False" + type: Synchronizing + lastManualSync: vsb-7rkn6-trigger + lastSyncDuration: 33.274853286s + lastSyncTime: "2023-06-20T20:16:55Z" + latestMoverStatus: + logs: |- + no parent snapshot found, will read all files + Added to the repository: 8.102 MiB (408.500 KiB stored) + processed 101 files, 102.651 MiB in 0:00 + snapshot dcec01b1 saved + Restic completed in 4s + result: Successful + restic: {} +``` + +#### get restic repo information for data mover +``` +oc get secret dpa-sample-1-volsync-restic -n openshift-adp -o yaml +apiVersion: v1 +data: + AWS_ACCESS_KEY_ID: QUtJQVZCUsnip + AWS_DEFAULT_REGION: dXMtdsnip + AWS_SECRET_ACCESS_KEY: ZGZQsnip + RESTIC_PASSWORD: cmVzdGljcGFzc3dvcmQ= + RESTIC_REPOSITORY: czM6czMuYW1hem9uYXdzLmNvbS9jdnBidWNrZXR1c3dlc3Qy + restic-prune-interval: MQ== +kind: Secret +metadata: + creationTimestamp: "2023-06-14T17:53:41Z" + labels: + openshift.io/oadp: "True" + openshift.io/oadp-bsl-name: dpa-sample-1 + openshift.io/oadp-bsl-provider: aws + name: dpa-sample-1-volsync-restic + namespace: openshift-adp + ownerReferences: + - apiVersion: oadp.openshift.io/v1alpha1 + blockOwnerDeletion: true + controller: true + kind: DataProtectionApplication + name: dpa-sample + uid: 66568a80-778a-4478-bca1-d8ff7720b129 + resourceVersion: "28139203" + uid: 192bc903-e754-4cd3-9173-2af805c2b0d0 +type: Opaque +``` + +#### decode the restic passwd +``` +echo "cmVzdGljcGFzc3dvcmQ=" | base64 -d +resticpassword +``` + +#### datamover restic path + +The path in 1.2.0 is +`$bucket/openshift-adp/$snapcontent_name` + +The snapcontent_name matches the `sourcePVC` value: + +``` +spec: + restic: + accessModes: + - ReadWriteOnce + copyMethod: Direct + customCA: {} + moverServiceAccount: velero + pruneIntervalDays: 1 + 
repository: vsb-zg6gg-secret + retain: {} + storageClassName: gp2-csi + volumeSnapshotClassName: csi-aws-vsc-test + sourcePVC: snapcontent-2044fb64-253d-461b-93f3-1ce8d6b67ebe-pvc +``` + +#### list snapshots for DataMover restic snapshot + +``` +restic --cache-dir /tmp/.cache -r s3://openshift-adp/snapcontent-993aabe2-8170-4661-984e-00a560f486cd-pvc snapshots +enter password for repository: +repository 85c55159 opened (version 2, compression level auto) +created new cache in /tmp/.cache +ID Time Host Tags Paths +------------------------------------------------------------ +dcec01b1 2023-06-20 20:16:46 volsync /data +------------------------------------------------------------ +1 snapshots +``` + +## Update DPA for retain policy - restic forget +``` + features: + dataMover: + credentialName: restic-secret + enable: true + pruneInterval: "1" + snapshotRetainPolicy: + hourly: "1" +``` + +## Run a new backup and check replicationsource + +``` +oc get replicationsource vsb-zg6gg-rep-src -n openshift-adp -o yaml +apiVersion: volsync.backube/v1alpha1 +kind: ReplicationSource +metadata: + creationTimestamp: "2023-06-20T21:04:28Z" + generation: 1 + labels: + datamover.oadp.openshift.io/vsb: vsb-zg6gg + name: vsb-zg6gg-rep-src + namespace: openshift-adp + resourceVersion: "28168858" + uid: 53dc160a-d0c1-416a-95fb-77f316e8e0c1 +spec: + restic: + accessModes: + - ReadWriteOnce + copyMethod: Direct + customCA: {} + moverServiceAccount: velero + pruneIntervalDays: 1 + repository: vsb-zg6gg-secret +``` + +#### get snapshots +``` +restic --cache-dir /tmp/.cache -r s3://openshift-adp/snapcontent-2044fb64-253d-461b-93f3-1ce8d6b67ebe-pvc snapshots +enter password for repository: +repository 83b7f53a opened (version 2, compression level auto) +ID Time Host Tags Paths +------------------------------------------------------------ +ab60e48b 2023-06-20 21:04:41 volsync /data +------------------------------------------------------------ +1 snapshots +``` + ## Maintenance * Upstream 
Documentation: