VMAX & Openstack Ocata: An Inside Look Pt. 4: Volume Snapshots & Backups

Welcome back to VMAX & OpenStack Ocata: An Inside Look! Last time we looked at basic operations in OpenStack using VMAX as the block-storage backend; if you would like to read that article again, you can do so by clicking here. Topics included volume management, cloning a volume, and attaching or detaching a volume from a Nova compute instance. This time we are going to look at snapshots and backups in OpenStack, following a similar format to the previous articles: after breaking down the functionality, I will give a demonstration using CLI commands, and provide a video demonstration at the end using the Horizon dashboard.

What are Cinder Volume Snapshots?

A Cinder volume snapshot is a read-only, point-in-time copy of a block-storage volume. Snapshots can be created from existing block-storage volumes that are in the ‘available’ or ‘in-use’ state, meaning that a snapshot can be taken even while the volume is attached to a running instance in OpenStack. A snapshot in Cinder can be used as the source for a new Cinder volume, providing a quick and easy way either to restore a volume to a previous point in time, or to use the snapshot almost as an ‘image’ from which to provision new volumes.

What are Cinder Volume Backups?

A Cinder volume backup is an exact copy, a full clone, that can reside at the same site as its read/write source, or at a secondary site for disaster recovery solutions. A Cinder volume backup can be viewed as any volume copy from the source storage array that is stored externally to that array. A backup captures the state of its source at the time the backup is made, and its contents cannot change. The Cinder-Backup service supports two backends to back up to: Ceph and Swift. For the VMAX Cinder block-storage drivers, Swift is the chosen backup target.

Backups vs Snapshots

Cinder is not itself the backup service, but the means of managing persistent volume services. With Cinder snapshots, the idea is that you can continually take exact point-in-time copies of a volume’s data, and those copies reside on the same storage backend that the volume lives on. A Cinder backup, by contrast, copies a volume to a different storage node than the one the original volume resides on; as a best practice for production, you would ideally place your backup storage nodes in a different data centre altogether. Instance snapshots are different again: they capture not only the persistent volumes attached to the virtual machine, but also the flat files that make up the virtual machine itself. When you take a snapshot of a VM, you get an exact point-in-time copy of the OS and all of its files, which makes a VM snapshot great practice before a major upgrade of software or the operating system.

Creating & Managing Volume Snapshots

Snapshot operations in OpenStack are performed using the cinder snapshot-[command] format, where command is replaced by operations such as ‘create’, ‘delete’, or ‘list’. A full list of Cinder Snapshot CLI commands can be found in the Cinder CLI reference guide here. To create a snapshot of a volume in OpenStack you will need a volume, so I will create a 1GB bootable volume using the CirrOS image again.

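As a sketch of that first step, the bootable volume can be created from the CLI; this assumes the CirrOS image is already registered in Glance, and the image name and volume type shown here (‘cirros’, ‘VMAX_ISCSI_DIAMOND’) are examples from my environment, so substitute your own:

```shell
# Create a 1GB bootable volume from the CirrOS image; the image ID is
# looked up by name ('cirros' is assumed to exist in Glance).
cinder create --name cirros_vol \
              --image-id $(openstack image show cirros -f value -c id) \
              --volume-type VMAX_ISCSI_DIAMOND 1

# Confirm the volume reaches the 'available' state and is marked bootable.
cinder list
```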

Using the available bootable volume ‘cirros_vol’, we can create a point-in-time snapshot of the volume.

Command Format:

# cinder snapshot-create --name [snapshot_name] [volume_name/volume_id]

Command Example:

# cinder snapshot-create --name cirros_snap_1 cirros_vol


To list your available OpenStack snapshots, use the cinder snapshot-list CLI command. To view specific details about a particular snapshot, use the cinder snapshot-show command.

Command Format:

# cinder snapshot-show [snapshot_name/snapshot_id]

Command Example:

# cinder snapshot-show bb27b18b-c135-481c-939f-f3945c5cd070


Another possibility with snapshot management is the ability to rename the snapshot or add a description to it.

Command Format:

# cinder snapshot-rename [snapshot_name/snapshot_id] [new_snapshot_name]

Command Example:

# cinder snapshot-rename cirros_snap_1 cirros_snap_june


To take the rename command even further, if you want to add a description to the snapshot, you can either repeat the command exactly as above and add the --description [user_description] parameter and value, or leave out the new snapshot name entirely (example in screenshot below).

Command Format:

# cinder snapshot-rename [snapshot_name/snapshot_id] [new_snapshot_name] --description [user_description]

Command Example:

# cinder snapshot-rename cirros_snap_1 cirros_snap_june --description "June 2nd 2017"

# cinder snapshot-rename cirros_snap_june --description "June 2nd 2017"


Deleting a snapshot in OpenStack using the CLI only requires a simple cinder snapshot-delete command and the snapshot name or ID. You can additionally add the --force parameter to the command if the snapshot is in a state other than ‘available’ or ‘error’.

Command Format:

# cinder snapshot-delete [--force] [snapshot_name/snapshot_id]

Command Example:

# cinder snapshot-delete cirros_snap_june


Creating a Volume from a Snapshot

Creating a volume from a snapshot is almost identical to creating a blank volume with no source; the only difference is the inclusion of the --snapshot-id parameter.

Command Format:

# cinder create --snapshot-id [snapshot_id] [size]

Command Example:

# cinder create --snapshot-id 136d7787-7a4d-46ea-9337-a5ce6cbeddb5 --name vmax_vol1 --volume-type VMAX_ISCSI_DIAMOND 1


Creating & Managing Cinder Volume & Snapshot Backups

The Cinder CLI provides the tools for creating a volume backup. You can restore a volume from a backup as long as the backup’s associated database information (or backup metadata) is intact in the Block Storage database. Before using Cinder’s backup service, you will need to make sure that you have it correctly configured. The configuration depends on the type of service which will interact with Cinder-Backup to store the backups (in this example we will be using the Swift object storage service). For more information on configuring your environment for Cinder backup, please follow the detailed instructions at the official OpenStack website for ‘Install and configure the backup service’. At this point we will assume that you have your backup service installed and configured, and that you have a running instance in OpenStack with a Cinder-backed bootable volume attached.

To create a backup of a volume use the following command:

Command Format:

# cinder backup-create [--name <name>] [--incremental] [--force] <volume>

Command Example:

# cinder backup-create --name cirros_backup --force cirros_boot_vol


Where volume is the name or ID of the volume, incremental is a flag that indicates whether an incremental backup should be performed, and force is a flag that allows or disallows backup of a volume when the volume is attached to an instance.

Without the incremental flag, a full backup is created by default. With the incremental flag, an incremental backup is created. The incremental backup is based on a parent backup which is an existing backup with the latest timestamp. The parent backup can be a full backup or an incremental backup depending on the timestamp.
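As a sketch, a typical full-plus-incremental backup cycle might look like the following (the volume and backup names are illustrative):

```shell
# The first backup of a volume must be a full backup.
cinder backup-create --name nightly_full cirros_boot_vol

# Subsequent backups can be incremental; each is based on the existing
# backup with the latest timestamp, whether full or incremental.
cinder backup-create --name nightly_incr_1 --incremental cirros_boot_vol
cinder backup-create --name nightly_incr_2 --incremental cirros_boot_vol
```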

Without the force flag, the volume will be backed up only if its status is available. With the force flag, the volume will be backed up whether its status is available or in-use. A volume is in-use when it is attached to an instance. The backup of an in-use volume means your data is crash consistent. The force flag is False by default.

Note: The first backup of a volume has to be a full backup. Attempting to do an incremental backup without any existing backups will fail. There is an is_incremental flag that indicates whether a backup is incremental when showing details on the backup. Another flag, has_dependent_backups, returned when showing backup details, will indicate whether the backup has dependent backups. If it is true, attempting to delete this backup will fail.

To list all of the backups currently available in your environment, use the cinder backup-list command. To show specific details about any given backup, use the command cinder backup-show <backup>, where backup is the backup name or ID.
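For example (the backup name is from the earlier example):

```shell
# List all backups in the environment.
cinder backup-list

# Inspect a single backup; the output includes the is_incremental and
# has_dependent_backups fields described above.
cinder backup-show cirros_backup
```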


To restore a volume backup, there are a number of options to take into consideration depending on the type of restoration you require. The CLI command is structured as follows:

Command Format:

# cinder backup-restore [--volume <volume>] [--name <name>] <backup>

Command Example:

# cinder backup-restore --name cirros_restored cirros_backup


Both the volume and name parameters in the backup-restore command are optional: specifying only the backup to restore will create a new volume with a randomly generated name. Supplying the name parameter gives the new volume a user-specified name, while including the volume parameter tells OpenStack which existing volume is to be restored from the backup.

Note: When restoring from a full backup, a full restore will be carried out. Also, for a restore to proceed successfully on a pre-existing volume, the target volume must have an ‘available’ status.
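As a sketch, restoring the earlier backup into a pre-existing volume (which, per the note above, must be in the ‘available’ state) might look like this; both names are illustrative:

```shell
# Restore into an existing volume by name or ID; the volume's current
# contents are overwritten with the contents of the backup.
cinder backup-restore --volume cirros_vol cirros_backup
```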

Deleting a Cinder volume backup is handled similarly to all other delete operations in OpenStack: you need only supply the name or ID of the resource you want to delete. Multiple backups can be specified to delete more than one backup in the same CLI command.

Command Format:

# cinder backup-delete [--force] <backup> [<backup> ...]

Command Example:

# cinder backup-delete cirros_backup


The --force flag is used in the delete command to remove backups which have a status other than ‘available’ or ‘error’.

Users can take snapshots of volumes as a way to protect their data, but these snapshots reside on the storage backend itself. Backing up snapshots directly allows users to protect those snapshots on a backup device, separately from the storage backend, providing another layer of data protection. To create a backup of a snapshot, use the same command format as creating a normal Cinder volume backup, but this time include the --snapshot-id parameter:

Command Format:

# cinder backup-create [--name <name>] --snapshot-id [snapshot_id] [--force] <volume>

Command Example:

# cinder backup-create --name cirros_snap_backup --snapshot-id df942bd8-f509-4535-9e37-90a4958e0ea0 --force cirros_boot_vol


When backing up a snapshot, it is not enough to specify just the snapshot ID; you must also specify the ID of the source volume. In addition, if the source volume is ‘in-use’, the --force parameter must be included. Note that only the snapshot ID will suffice here; it is not possible to use the snapshot name.

Restoring a backup of a snapshot uses exactly the same process as restoring a standard backup: you simply specify the snapshot backup ID instead of a volume backup ID.


Important Note on Cinder Backup Service

Cinder volume backups are dependent on the Block Storage database, meaning that if the database becomes unavailable or corrupted, your backups won’t be of any use to you. To protect against unusable Cinder volume backups, you must also back up your Block Storage database regularly to ensure data recovery. To learn more about backing up OpenStack databases, please read the official OpenStack documentation ‘Backup and Recovery’.

Alternatively, you can export and save the metadata of selected volume backups. Doing so precludes the need to back up the entire Block Storage database. This is useful if you need only a small subset of volumes to survive a catastrophic database failure. For more information about how to export and import volume backup metadata, see the official OpenStack documentation ‘Export and import backup metadata’.
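As a sketch of that workflow (the backup name is illustrative, and the angle-bracket values come from the export output, not from me):

```shell
# Export the backup's metadata; the output contains a backup_service
# string and an opaque backup_url record. Save both somewhere safe.
cinder backup-export cirros_backup

# After a database loss, re-import the saved record to make the backup
# usable again, then restore from it as normal.
cinder backup-import <backup_service> <backup_url>
```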


Re: Pre-seeding data in Avamar

I don’t know how much benefit you’ll get by pre-seeding if you’re using Data Domain storage. It might not be much.

As long as a chunk of file content data has the same hash, we don’t care what client it came from. If the atomic chunk (Avamar) or L0 segment (Data Domain) is already present on the storage, the client won’t send it again.

The reason you might not get as big a benefit from pre-seeding if you’re using DD is that DD intentionally salts higher level hashes (L1 segments, etc.) which forces the client to run fingerprint lookups for all the L0 segments. This is done to improve the spatial locality of the data which is important to DD for performance reasons.

Still, pre-seeding can’t really hurt so you don’t really have anything to lose.



So I have the port (12228) from the Isilon to the CEE server, but then the CEE server sends the data on to Varonis (in our case).

From Varonis I got this info:

“As you confirmed, port 135 is used for the initial call to the CEE/CEPA server.

From there, any one of the ports within the standard dynamic port range is utilized. I may have mentioned before that I can’t speak to why any particular port is selected from that range, but it will always be between 49152 and 65535. In this case we see that the port used is just a bit higher than the low-end of the range.”


Re: Re: Issues using screen command with OneFS 7.2

Hi community,

Normally I use the screen command during important cluster actions, e.g. firmware updates, OneFS upgrades or patching, to log and resume interrupted SSH sessions. But I have some clusters where the screen command does not work. The last one tested is running OneFS

The error message is:

No more PTYs.
Sorry, could not find a PTY.
[screen is terminating]

The actual TTYs/PTYs at that cluster are as follows:

CLUSTER-XXX# ls -l /dev/ | grep 'pty|tty'
crw-rw-rw-  1 root  wheel  0,  19 Jun  7 02:18 ctty
crw-rw-rw-  1 root  wheel  0, 171 Jun  7 10:38 ptyp0
crw-------  1 root  wheel  1,  79 Jun  7 02:18 ttyU0
crw-------  1 root  wheel  1,  80 Jun  7 02:18 ttyU0.init
crw-------  1 root  wheel  1,  81 Jun  7 02:18 ttyU0.lock
crw-------  1 root  wheel  0,  40 Jun  7 02:20 ttyd0
crw-------  1 root  wheel  0,  41 Jun  7 02:18 ttyd0.init
crw-------  1 root  wheel  0,  42 Jun  7 02:18 ttyd0.lock
crw-------  1 root  wheel  0,  46 Jun  7 02:18 ttyd1
crw-------  1 root  wheel  0,  47 Jun  7 02:18 ttyd1.init
crw-------  1 root  wheel  0,  48 Jun  7 02:18 ttyd1.lock
crw--w----  1 root  tty    1,  86 Jun  7 10:38 ttyp0
crw-------  1 root  wheel  0,  52 Jun  7 02:18 ttyv0
crw-------  1 root  wheel  0,  53 Jun  7 02:18 ttyv1
crw-------  1 root  wheel  0,  54 Jun  7 02:18 ttyv2
crw-------  1 root  wheel  0,  55 Jun  7 02:18 ttyv3
crw-------  1 root  wheel  0,  56 Jun  7 02:18 ttyv4
crw-------  1 root  wheel  0,  57 Jun  7 02:18 ttyv5
crw-------  1 root  wheel  0,  58 Jun  7 02:18 ttyv6
crw-------  1 root  wheel  0,  59 Jun  7 02:18 ttyv7
crw-------  1 root  wheel  0,  60 Jun  7 02:18 ttyv8
crw-------  1 root  wheel  0,  61 Jun  7 02:18 ttyv9
crw-------  1 root  wheel  0,  62 Jun  7 02:18 ttyva
crw-------  1 root  wheel  0,  63 Jun  7 02:18 ttyvb
crw-------  1 root  wheel  0,  64 Jun  7 02:18 ttyvc
crw-------  1 root  wheel  0,  65 Jun  7 02:18 ttyvd
crw-------  1 root  wheel  0,  66 Jun  7 02:18 ttyve
crw-------  1 root  wheel  0,  67 Jun  7 02:18 ttyvf

There are some hints and useful commands in various Unix forums, but I don’t want to test a remount or a permissions change on a productive cluster that is otherwise running fine. Unfortunately, the screen command works as expected on all my virtual clusters.

In their documentation, Dell EMC recommends using the screen command, with the exception of certain OneFS versions; on those OneFS versions the command doesn’t work, as they state.

Any ideas?