VMAX & OpenStack Ocata: An Inside Look Pt. 2: Over-Subscription/QoS/Compression

Welcome back to VMAX & OpenStack Ocata: An Inside Look! Although we are on to part 2 of our multi-part series, this piece can be seen as more of an extension of what we covered in part 1, where we went through the basic setup of your VMAX & OpenStack environment. This time we are going to take your environment setup that bit further and talk about the areas of over-subscription, quality of service (QoS), and compression.

Again, and as always, if you have any feedback or comments, spot any inconsistencies, want something covered, or just have a question answered, please feel free to contact me directly or leave a comment in the comments section below!

1. Over-Subscription

OpenStack Cinder enables you to choose a volume back end based on virtual capacities for thin provisioning using the over-subscription ratio. To support over-subscription in thin provisioning, the flag max_over_subscription_ratio was introduced into cinder.conf, and it is used together with the existing flag reserved_percentage. Both flags are optional and do not need to be included if over-subscription is not required for the back end.

The max_over_subscription_ratio flag is a float representation of the over-subscription ratio when thin provisioning is involved. The table below illustrates the relationship between the float value and the over-subscribed provisioned capacity:

Float representation    Over-subscription multiple (of total physical capacity)
20.0 (default)          20x
10.5                    10.5x
1.0                     No over-subscription
0.9 or lower            Ignored

Note: max_over_subscription_ratio can be configured for each back end when multiple-storage back ends are enabled. For a driver that supports multiple pools per back end, it can report this ratio for each pool.



The existing reserved_percentage flag is used to prevent over-provisioning. It represents the percentage of back-end capacity that is held in reserve, acting as a high-water mark on the remaining physical space. For example, if there is only 4% of physical space left and the reserved percentage is 5, the reported free space equates to zero. This is a safety mechanism to prevent a scenario where a provisioning request fails due to insufficient raw space.

Note: There is a change on how reserved_percentage is used. It was measured against the free capacity in the past. Now it is measured against the total capacity.
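To make the interaction between the two flags concrete, here is a minimal Python sketch of how virtual free capacity could be derived from the physical numbers. This is my own simplification for illustration, not the actual Cinder scheduler code:

```python
def virtual_free_capacity(total_gb, provisioned_gb, free_gb,
                          max_over_subscription_ratio=20.0,
                          reserved_percentage=0):
    """Simplified model of thin-provisioned free capacity.

    reserved_percentage is measured against total capacity (the current
    behaviour noted above), and ratios below 1.0 are treated as
    no over-subscription.
    """
    reserved_gb = total_gb * reserved_percentage / 100.0
    # High-water mark: once physical free space falls to (or below) the
    # reserve, the back end reports zero free space.
    if free_gb <= reserved_gb:
        return 0.0
    ratio = max(max_over_subscription_ratio, 1.0)
    # Virtual capacity is the physical total multiplied by the ratio; what
    # remains provisionable is that minus what is already provisioned.
    return max(total_gb * ratio - provisioned_gb, 0.0)

# The example from the text: 4% of space left against a 5% reserve
# reports zero free space.
print(virtual_free_capacity(100, 180, 4, 2.0, 5))    # 0.0
print(virtual_free_capacity(100, 150, 30, 2.0, 10))  # 50.0
```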



Example VMAX Configuration Group

The code snippet below demonstrates the settings configured in a VMAX backend configuration group within cinder.conf:

[CONF_GROUP_ISCSI]
cinder_emc_config_file = /etc/cinder/cinder_emc_config_VMAX_ISCSI_SILVER.xml
volume_driver = cinder.volume.drivers.dell_emc.vmax.iscsi.VMAXISCSIDriver
volume_backend_name = VMAX_ISCSI_SILVER
max_over_subscription_ratio = 2.0
reserved_percentage = 10
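Since a typo in cinder.conf can silently change your over-subscription behaviour, a quick sanity check with Python's configparser can confirm the values actually read from the group. The snippet below inlines a hypothetical fragment of the file rather than reading /etc/cinder/cinder.conf:

```python
import configparser

# Hypothetical fragment mirroring the configuration group above.
SAMPLE = """
[CONF_GROUP_ISCSI]
volume_backend_name = VMAX_ISCSI_SILVER
max_over_subscription_ratio = 2.0
reserved_percentage = 10
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
group = cfg["CONF_GROUP_ISCSI"]

# Fall back to the defaults noted earlier when a flag is omitted.
ratio = group.getfloat("max_over_subscription_ratio", fallback=20.0)
reserved = group.getint("reserved_percentage", fallback=0)
print(ratio, reserved)  # 2.0 10
```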

Over-Subscription with EMCMaxSubscriptionPercent

For the second example of over-subscription, we are going to take into account the EMCMaxSubscriptionPercent property on the pool. This value is the maximum percentage to which a pool can be over-subscribed. Setting the EMCMaxSubscriptionPercent property is done via SYMCLI:

# symconfigure -sid 0123 -cmd "set pool MyThinPool, type=thin, max_subs_percent=150;" commit

Viewing the pool details can be performed via the command:

# symcfg -sid 0123 list -thin -detail -pool -gb

When setting EMCMaxSubscriptionPercent via SYMCLI, it is important to remember that the max_over_subscription_ratio defined in cinder.conf cannot exceed what is set at the pool level in the EMCMaxSubscriptionPercent property. For example, if EMCMaxSubscriptionPercent is set to 500 and the user-defined max_over_subscription_ratio is set to 6 (i.e. 600%), the latter is ignored and over-subscription is capped at 500%.



EMCMaxSubscriptionPercent      max_over_subscription_ratio    Over-subscription %
200                            2.5                            200
200                            1.5                            150
0 (no upper limit on pool)     1.5                            150
0 (no upper limit on pool)     0                              150 (default)
200 (pool1) / 300 (pool2)      2.5                            200 (pool1) / 250 (pool2)
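The precedence rule in the table can be captured in a few lines of Python. This is a sketch of the behaviour described above, not driver code; note that the "150 (default)" row implies a fallback ratio of 1.5, which is passed in explicitly here as an assumption since it differs from the 20.0 cinder.conf default:

```python
def effective_oversubscription_pct(emc_max_subs_percent,
                                   max_over_subscription_ratio,
                                   default_ratio=1.5):
    """Return the over-subscription % that wins for a pool.

    The pool-level EMCMaxSubscriptionPercent caps whatever the cinder.conf
    ratio requests; a value of 0 means the pool imposes no upper limit.
    """
    # An unset (0) cinder ratio falls back to the table's assumed default.
    requested_pct = (max_over_subscription_ratio or default_ratio) * 100
    if emc_max_subs_percent == 0:
        return requested_pct
    return min(requested_pct, emc_max_subs_percent)

# Rows from the table above:
print(effective_oversubscription_pct(200, 2.5))  # 200
print(effective_oversubscription_pct(200, 1.5))  # 150.0
print(effective_oversubscription_pct(0, 1.5))    # 150.0
print(effective_oversubscription_pct(0, 0))      # 150.0
print(effective_oversubscription_pct(300, 2.5))  # 250.0
```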



Note: If FAST is set and multiple pools are associated with a FAST policy, then the same rules apply. The difference is, the TotalManagedSpace and EMCSubscribedCapacity for each pool associated with the FAST policy are aggregated.



2. Quality of Service (QoS)

Quality of Service (QoS) is the measurement of the overall performance of a service, particularly the performance seen by the users of a given network. Several related aspects of a network service can be measured quantitatively, but for QoS in VMAX & OpenStack environments we are going to focus on three:

  • I/O limit per second (IOPS) – The number of read/write operations per second. In the context of QoS, this value specifies the maximum allowed IOPS; valid values range from 100 to 100,000 IOPS (in increments of 100).
  • Throughput per second (MB/s) – The amount of bandwidth in MB per second. As with IOPS, setting this designates the value as the maximum allowed MB/s; valid values range from 1 MB/s to 100,000 MB/s.
  • Dynamic Distribution – The automatic balancing of the configured I/O limit across configured ports. There are two types of dynamic distribution, Always & OnFailure:
    • Always – Enables full dynamic distribution mode. When enabled, the configured host I/O limits will be dynamically distributed across the configured ports, thereby allowing the limits on each individual port to adjust to fluctuating demand.
    • OnFailure – Enables port failure capability. When enabled, the fraction of the configured host I/O limits available to a configured port will adjust based on the number of ports currently online.
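As a rough illustration of the OnFailure behaviour, here is a sketch of the per-port share of a host I/O limit as ports go offline. This is my own equal-share simplification; the real distribution is performed by HYPERMAX OS, not by anything you would write yourself:

```python
def per_port_share(total_limit, configured_ports, online_ports):
    """Equal-share model of an OnFailure host I/O limit.

    The fraction of the limit available to each port adjusts to the
    number of ports currently online, preserving the aggregate limit.
    """
    if not 0 < online_ports <= configured_ports:
        raise ValueError("online ports must be between 1 and configured")
    return total_limit / online_ports

# A 4000 IOPS limit spread over four configured ports:
print(per_port_share(4000, 4, 4))  # 1000.0
# After two ports fail, the survivors pick up the slack:
print(per_port_share(4000, 4, 2))  # 2000.0
```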

For more information on setting host I/O limits for VMAX please refer to the 'Unisphere for VMAX Online Guide' section called 'Setting Host I/O Limits'.

Configuring QoS in OpenStack for VMAX

In OpenStack, we create QoS settings for volume types so that all volumes created with a given volume type have the respective QoS settings applied. There are two steps involved in creating the QoS settings in OpenStack:

  • Creating the QoS settings
  • Associating the QoS settings with a volume type

When specifying the QoS settings, they are added in key/value pairs. The (case-sensitive) keys for each of the settings are:

  • maxIOPS
  • maxMBPS
  • DistributionType
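Because the keys are case-sensitive and the valid ranges are easy to mistype, a small validation helper can check a spec dictionary before you feed it to cinder qos-create. This is my own convenience sketch, not part of the VMAX driver:

```python
def validate_vmax_qos(specs):
    """Return a list of problems with a VMAX QoS spec dict (empty = OK)."""
    errors = []
    if "maxIOPS" in specs:
        v = int(specs["maxIOPS"])
        # Valid range: 100 to 100,000 IOPS, in increments of 100.
        if not 100 <= v <= 100_000 or v % 100:
            errors.append("maxIOPS must be 100-100000 in increments of 100")
    if "maxMBPS" in specs:
        v = int(specs["maxMBPS"])
        # Valid range: 1 to 100,000 MB/s.
        if not 1 <= v <= 100_000:
            errors.append("maxMBPS must be 1-100000")
    if "DistributionType" in specs:
        if specs["DistributionType"] not in ("Always", "OnFailure"):
            errors.append("DistributionType must be Always or OnFailure")
    return errors

print(validate_vmax_qos({"maxIOPS": "4000", "maxMBPS": "4000",
                         "DistributionType": "Always"}))  # []
print(validate_vmax_qos({"maxIOPS": "150"}))  # increment error
```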



As with most things in OpenStack, there is more than one way to do this, and QoS is no different: you have the choice of configuring QoS via the CLI or using the Horizon web dashboard. The CLI is the quicker of the two, but if you are not comfortable with CLI commands, or with QoS itself, I would recommend sticking with the web dashboard method. You can find the CLI example below, but if you would like the UI step-by-step guide with screenshots, you can read the DECN hosted document created for this article, 'QoS for VMAX on OpenStack – A step-by-step guide'.

Setting QoS Spec

1. Create the QoS spec. It is important to note that the QoS key/value pairs are optional; you need only include the pairs you want to set. {QoS_spec_name} is the name you want to assign to this QoS spec:

Command Structure:

# cinder qos-create {QoS_spec_name} maxIOPS={value} maxMBPS={value} DistributionType={Always/OnFailure}

Command Example:

# cinder qos-create FC_NONE_QOS maxIOPS=4000 maxMBPS=4000 DistributionType=Always

2. Associate the QoS spec from step 1 with a pre-existing VMAX volume type:

Command Structure:

# cinder qos-associate {QoS_spec_id} {volume_type_id}

Command Example:

# cinder qos-associate 0b473981-8586-46d5-9028-bf64832ef8a3 7366274f-c3d3-4020-8c1d-c0c533ac8578



QoS Use-Case Scenarios

When using QoS to set specs for your volumes, it is important to know how the specs behave when set at the OpenStack level, the Unisphere level, or both. The following use-cases aim to clarify the expected behaviour, leaving you in complete control of your environment!

Use-Case 1 – Default Values

Settings:

SG QoS specs in Unisphere (before change)     QoS specs set in OpenStack
Host I/O Limit (MB/Sec) = No Limit            maxIOPS = 4000
Host I/O Limit (IO/Sec) = No Limit            maxMBPS = 4000
Set Dynamic Distribution = N/A                DistributionType = Always

Outcome:

SG QoS specs in Unisphere (after change)      Outcome – Block Storage (Cinder)
Host I/O Limit (MB/Sec) = 4000                Volume is created against the volume type and
Host I/O Limit (IO/Sec) = 4000                QoS is enforced with the parameters specified
Set Dynamic Distribution = Always             in the OpenStack QoS spec.

Use-Case 2 – Preset Limits

Settings:

SG QoS specs in Unisphere (before change)     QoS specs set in OpenStack
Host I/O Limit (MB/Sec) = 2000                maxIOPS = 4000
Host I/O Limit (IO/Sec) = 2000                maxMBPS = 4000
Set Dynamic Distribution = Never              DistributionType = Always

Outcome:

SG QoS specs in Unisphere (after change)      Outcome – Block Storage (Cinder)
Host I/O Limit (MB/Sec) = 4000                Volume is created against the volume type and
Host I/O Limit (IO/Sec) = 4000                QoS is enforced with the parameters specified
Set Dynamic Distribution = Always             in the OpenStack QoS spec.

Use-Case 3 – DistributionType Only

Settings:

SG QoS specs in Unisphere (before change)     QoS specs set in OpenStack
Host I/O Limit (MB/Sec) = No limit            DistributionType = Always
Host I/O Limit (IO/Sec) = No limit
Set Dynamic Distribution = N/A

Outcome:

SG QoS specs in Unisphere (after change)      Outcome – Block Storage (Cinder)
Host I/O Limit (MB/Sec) = No limit            Volume is created against the volume type and
Host I/O Limit (IO/Sec) = No limit            there is no change to the volume or storage group.
Set Dynamic Distribution = N/A
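The pattern across the three use-cases can be summed up in one rule: an OpenStack QoS spec containing maxIOPS or maxMBPS overwrites whatever limits the storage group already had, while a spec carrying only DistributionType leaves the group untouched. Here is a sketch of that rule (the dictionary keys are hypothetical, chosen for illustration, not driver code):

```python
def resulting_sg_limits(unisphere_limits, openstack_specs):
    """Model the outcomes of the three use-cases above."""
    # Use-case 3: DistributionType alone causes no volume/SG change.
    if not ({"maxIOPS", "maxMBPS"} & set(openstack_specs)):
        return dict(unisphere_limits)
    new = dict(unisphere_limits)
    if "maxIOPS" in openstack_specs:
        new["io_per_sec"] = openstack_specs["maxIOPS"]
    if "maxMBPS" in openstack_specs:
        new["mb_per_sec"] = openstack_specs["maxMBPS"]
    if "DistributionType" in openstack_specs:
        new["dynamic_distribution"] = openstack_specs["DistributionType"]
    return new

# Use-case 2: preset limits are overwritten by the OpenStack spec.
before = {"mb_per_sec": 2000, "io_per_sec": 2000,
          "dynamic_distribution": "Never"}
spec = {"maxIOPS": 4000, "maxMBPS": 4000, "DistributionType": "Always"}
print(resulting_sg_limits(before, spec))

# Use-case 3: only DistributionType is set, so nothing changes.
print(resulting_sg_limits(before, {"DistributionType": "Always"}))
```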

3. Compression

If you are using a VMAX All-Flash (250F, 450F, 850F, 950F) in your environment, you can avail of inline compression in your OpenStack environment. By default compression is enabled, so if you want it right now you don’t even have to do a thing!

VMAX All Flash delivers a net 4:1 overall storage efficiency benefit for typical transactional workloads when inline compression is combined with snapshots and other HYPERMAX OS space saving capabilities. VMAX inline compression minimizes footprint while intelligently optimizing system resources to ensure the system is always delivering the right balance of performance and efficiency. VMAX All Flash inline compression is:

  • Granular: VMAX All Flash compression operates at the storage group (application) level so customers can target those workloads that provide the most benefit.
  • Performance optimized: VMAX All Flash is smart enough to make sure very active data is not compressed until it becomes less active. This allows the system to deliver maximum throughput leveraging cache and SSD technology, and ensures that system resources are always available when required.
  • Flexible: VMAX All Flash inline compression works with all data services, including SnapVX & SRDF.

Compression, VMAX & OpenStack

As mentioned previously, on an All Flash array the creation of any storage group has a compressed attribute by default and compression is enabled by default also. Setting compression on a volume type does not mean that all the devices associated with that type will be immediately compressed. It means that for all incoming writes compression will be considered. Setting compression off on a volume type does not mean that all the devices will be uncompressed. It means all the writes to compressed tracks will make these tracks uncompressed.

Controlling compression for VMAX volume types is handled through the extra specs of the volume type itself. Until now, the only extra spec we have set for a volume type is volume_backend_name; compression requires an additional extra spec to be applied to the volume type, storagetype:disablecompression=[True/False].

Note: If the extra spec storagetype:disablecompression is set on a VMAX3 Hybrid array, it is ignored, because compression is not a feature of the VMAX3 Hybrid.

Using Compression for VMAX

Compression is enabled by default on all All-Flash arrays, so you do not have to do anything to enable it for storage groups created by OpenStack. However, there are occasions where you may want to disable compression, or retype a volume from an uncompressed to a compressed volume type (don't worry, retype will be discussed in detail later in this series!). Before each of the use-cases outlined below, complete the following steps:

  1. Create a new volume type called VMAX_COMPRESSION_DISABLED
  2. Set an extra spec volume_backend_name
  3. Set a new extra spec storagetype:disablecompression=True
  4. Create a new volume with the VMAX_COMPRESSION_DISABLED volume type

Use-Case 1: Compression disabled – create, attach, detach, and delete volume

  1. Check in Unisphere or SYMCLI to see if the volume exists in storage group OS-<srp>-<servicelevel>-<workload>-CD-SG, and compression is disabled on that storage group
  2. Attach the volume to an instance. Check in Unisphere or SYMCLI to see if the volume exists in storage group OS-<shorthostname>-<srp>-<servicelevel>-<workload>-CD-SG, and compression is disabled on that storage group
  3. Detach the volume from the instance. Check in Unisphere or SYMCLI to see if the volume exists in storage group OS-<srp>-<servicelevel>-<workload>-CD-SG, and compression is disabled on that storage group.
  4. Delete the volume. If this was the last volume in the OS-<srp>-<servicelevel>-<workload>-CD-SG storage group, it should also be deleted.
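The storage group names in these steps follow a fixed pattern, so a small helper makes it easier to predict what to look for in Unisphere. This is my own illustration of the naming convention described above, not driver code, and the SRP/service-level/workload values in the examples are hypothetical:

```python
def storage_group_name(srp, service_level, workload,
                       compression_disabled=False, shorthostname=None):
    """Build the OS-...-SG name used in the use-cases above.

    Attached volumes gain the short host name, and compression-disabled
    groups gain the 'CD' flag.
    """
    parts = ["OS"]
    if shorthostname:
        parts.append(shorthostname)
    parts += [srp, service_level, workload]
    if compression_disabled:
        parts.append("CD")
    return "-".join(parts) + "-SG"

# Detached volume with compression disabled:
print(storage_group_name("SRP_1", "Diamond", "OLTP", True))
# → OS-SRP_1-Diamond-OLTP-CD-SG

# The same volume attached to an instance on host 'compute1':
print(storage_group_name("SRP_1", "Diamond", "OLTP", True, "compute1"))
# → OS-compute1-SRP_1-Diamond-OLTP-CD-SG
```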

Use-Case 2: Compression disabled – create, delete snapshot and delete volume

  1. Check in Unisphere or SYMCLI to see if the volume exists in storage group OS-<srp>-<servicelevel>-<workload>-CD-SG, and compression is disabled on that storage group
  2. Create a snapshot. The volume should now exist in OS-<srp>-<servicelevel>-<workload>-CD-SG
  3. Delete the snapshot. The volume should be removed from OS-<srp>-<servicelevel>-<workload>-CD-SG
  4. Delete the volume. If this volume is the last volume in OS-<srp>-<servicelevel>-<workload>-CD-SG, it should also be deleted.

Use-Case 3: Retype from compression disabled to compression enabled

  1. Create a new volume type, for example, VMAX_COMPRESSION_ENABLED
  2. Set the extra spec volume_backend_name as before
  3. Set the new extra spec storagetype:disablecompression = False, or simply do not set this extra spec at all (compression is the default)
  4. Retype from volume type VMAX_COMPRESSION_DISABLED to VMAX_COMPRESSION_ENABLED
  5. Check in Unisphere or SYMCLI to see if the volume exists in storage group OS-<srp>-<servicelevel>-<workload>-SG, and compression is enabled on that storage group

What's coming up in part 3 of 'VMAX & OpenStack Ocata: An Inside Look'…

With the setup out of the way and extra functionality taken into consideration, we can now begin to get into the fun stuff, block storage functionality! Next time we will be starting at the start in terms of functionality, going through all of the basic operations that the VMAX driver supports in OpenStack.
