How to Resolve the Error “Operation results in exceeding quota limits” while creating MCS catalog for the VDA hosted on Azure

Going by the error message, this appears to be a quota-limit issue on the Azure subscription. The solution is to have additional quota provisioned for the subscription.

Running the classic Azure CLI command "azure vm list-usage" will further confirm this.
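If you are using the newer cross-platform Azure CLI, the equivalent check is shown below (the region name is only an example):

az vm list-usage --location eastus --output table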

On the Azure side, you may see the following error: {"code":"OperationNotAllowed","message":"Operation results in exceeding quota limits of Core"}.

When provisioning fails due to quota limits, it is usually a Cores, NICs, or Storage Accounts quota issue, so make sure you have sufficient cores, NICs, and storage accounts available to provision the VDAs.

Contact Microsoft support to increase the quota for your subscription. Once Microsoft has provisioned the requested quota, retry the MCS catalog creation; it should then succeed.



Configuring Policy for Volume quota and Exception


Hi,

I have configured a VPM policy to limit data usage to 1 GB per user per day. When the policy is hit, the default "volume quota exceeded" page is shown. I want to serve a user-defined exception page (with my company logo, message, and end-user info) instead. Any suggestions on how this can be done?



Volume Quota issues


I have enabled volume quotas and am seeing some odd behavior. The quota itself works, in that I received the error message saying I had exceeded my quota for the hour. However, the quota view shows 0 when I check, and even though the quota reset claims it set the quota to 0, it did not, and I was still blocked.

Running 6.6.5.13 on 400-20s.

Petrus



How Big is the 4.x User Layer Disk, and How Can You Change That?

By default, the User Layer is 10GB.

If you have set user quotas on your file share, then the User Layer disk is sized to match your user quota. We assume the share is dedicated to layering, and that the only thing a user will write to it is their User Layer disk, so any user-specific quota is taken as the desired User Layer disk size. This supersedes the default. Setting a file share quota is the standard, preferred method for setting the User Layer size.
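For example, on a Windows file server using FSRM, a per-user folder quota could be set with PowerShell roughly as follows (the path is hypothetical and 20 GB is just an example size):

# Apply a 20 GB hard quota to one user's layer folder via FSRM.
New-FsrmQuota -Path 'D:\Shares\UserLayers\jdoe' -Size 20GB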

There are three registry values in your image which govern this behavior. If you want to modify them, you can do it with a GPO or a layer. The best place to put them is probably the Platform Layer, but you could put them in the OS or App Layers too.

[HKEY_LOCAL_MACHINE\Software\Unidesk\Ulayer]

"UseQuotaIfAvailable" (String value)

Values: "True" (default), "False". True enables discovery and use of quotas; False disables it.

"DefaultUserLayerSizeInGb" (DWORD value)

The size of the user layer in GB when no quota is found (e.g. 5, 10, 23). When not specified, the default is 10.

"QuotaQuerySleepMS" (DWORD value)

The number of milliseconds to wait after creating the user layer directory before checking whether it has a quota. This gives quota systems time to apply the quota to the new directory (FSRM requires this). When not specified, the default is 1000.

You’ll probably never use the last one, but try it if you are sure you have set a quota and it seems not to be working.
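If you would rather script these settings into a layer instead of using a GPO, a minimal PowerShell sketch follows (the values are examples, not recommendations):

# Create the key if needed, then enable quota discovery and set a 20 GB default size.
$key = 'HKLM:\Software\Unidesk\Ulayer'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name 'UseQuotaIfAvailable' -PropertyType String -Value 'True' -Force | Out-Null
New-ItemProperty -Path $key -Name 'DefaultUserLayerSizeInGb' -PropertyType DWord -Value 20 -Force | Out-Null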

Expanding the User Layer

If you already have a User Layer disk and you want to expand it, you just need to expand the VHD itself and then expand the filesystem on it. PowerShell on Hyper-V servers, for instance, has a cmdlet named "Resize-VHD" which takes a local filename and a total number of bytes and resizes your disk. Resize-VHD is only available where the Hyper-V role is installed, but you can enable that role on your server to gain access to the cmdlet. Note that you can only do this while the user is not logged in and not using the disk. Alternatively, any third-party VHD-resizing tool will work.
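For example, a quick sketch (the path is hypothetical; run it while the user is logged off):

# Grow the user's VHD to a total size of 20 GB.
Resize-VHD -Path 'E:\UserLayers\jdoe.vhd' -SizeBytes 20GB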

Once you resize the VHD, expand the filesystem on it in Disk Management to fill the extra space, and that space will be reflected when the user logs in. You can use the Attach VHD function in Disk Management to expand the filesystem yourself after resizing, or you can let the user log back in, run Disk Management, and expand the filesystem themselves; it will be the only disk with free space at the end. The space becomes available immediately.



OneFS Nodepool-level Reporting

I’ve been asked several times recently how to get the usage and capacity of individual node pools across a heterogeneous cluster, so it seemed like a useful topic to explore in an article.

Depending on the configuration and licensed services, OneFS provides three main approaches for reporting utilization and capacity at a per-pool granularity. The following table shows the OneFS feature dependencies for each of the three reporting methods:

[Image: table of OneFS feature dependencies for each reporting method]

Let’s look at each of these options:

1. If a cluster has tiering configured, or has any other SmartPools policy in place, the easiest method to get accurate per-pool usage, percentage, and overall capacity data from OneFS is via the following CLI command:

# isi stat --all-nodepools

For example:



[Image: sample isi stat --all-nodepools output]

The isi stat --all-nodepools command only works on OneFS 8.0 and later releases. For clusters still running OneFS 7.x releases, the following syntax can be used instead:

# isi stat -d



Alternatively, the following CLI command will also yield a similar output:



# isi storagepool list -v --format=table



The same information can be found in the WebUI by navigating to Storage Pools > SmartPools > Tiers & Node Pools and selecting ‘View node pool details’ for the desired pool:



[Image: WebUI node pool details view]

As an alternative, the OneFS RESTful platform API can also be used to query nodepool statistics. Use the following syntax, substituting your cluster’s IPv4 address for ‘cluster_ip_address’:

https://<cluster_ip_address>:8080/platform/1/storagepool/nodepools
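For example, with curl (the username and the -k flag for a self-signed certificate are illustrative):

# curl -k -u admin "https://<cluster_ip_address>:8080/platform/1/storagepool/nodepools"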

For each nodepool or tier, the following metrics will be reported:

[Image: table of per-nodepool metrics returned by the API]

With the exception of the percentage (pct) data, all the values are in bytes, so simple math will be required in order to convert to more human readable values.
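For example, a raw value of 4398046511104 bytes converts as 4398046511104 / 1024^4 = 4 TiB.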

2. For the next option: if everything on the cluster is written to a single pool that maps to a particular directory tree under /ifs, another approach for nodepool capacity and usage reporting is to configure a container quota on that directory. Note that this method also requires an active SmartQuotas license on the cluster.

Quota containers help compartmentalize /ifs, so that a directory with a container will appear as its own separate file system slice. As such, setting the container flag directly influences the metrics returned by the CLI command ‘df’. For instance, if you run ‘df’ on a container directory, it will report the free space that the quota container has been allocated.

To configure a directory quota with a 4TB container on /ifs/data/container1, you could use the following CLI command:

# isi quota quotas create /ifs/data/container1 directory --hard-threshold 4T --container true

Under the covers, the ‘container’ argument interacts with the ‘statfs’ system call, which then reports capacity statistics based on the hard limit of the container quota rather than on the entire cluster. So, for the example above, the 4TB size of the container is reported:

# df -h /ifs/data/container1

Filesystem Size Used Avail Capacity Mounted on

OneFS 4.0T 3.0G 4.0T 0% /ifs



Be aware that OneFS only currently supports containers for directory quotas, not for user or group quotas.
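To verify the quota after creating it, the standard quota listing command can be used:

# isi quota quotas list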



3. Finally, if the FSAnalyze job is running on a cluster and you have an instance of InsightIQ version 4.0 or later monitoring the cluster, this can be used to provide per-pool reporting. From the InsightIQ WebUI, select the ‘Cluster Capacity’ report with ‘Breakout Total Usage by Nodepool’. For example:

[Image: InsightIQ Cluster Capacity report broken out by nodepool]

As an added benefit, InsightIQ reporting also offers historical nodepool usage data, which allows for simple trend analysis as well as explicit point-in-time statistics.


Re: User quota is not showing from linux client on particular FS

Hello,

I have this strange issue. I have user quotas set on three different filesystems on my VNX, and from the Linux client users can only see their quota usage on the first two filesystems, even though my quota settings appear to be the same.

Thank you for any help/ideas.

Example for user id 4057

Here is the report from VNX

[nasadmin@VNXxxx ~]$ /nas/bin/nas_quotas -report -user -fs Users | grep 4057

|#4057 | 314811| 4718592| 5242880| | 1469| 0| 0| |

[nasadmin@VNXxxx ~]$ /nas/bin/nas_quotas -report -user -fs Projects | grep 4057

|#4057 | 141702862| 209715200| 209715200| | 2570275| 0| 0| |

[nasadmin@VNXxxx ~]$ /nas/bin/nas_quotas -report -user -fs tmp | grep 4057

|#4057 | 1176950056| 1572864000| 1572864000| | 33778940| 0| 0| |

root@lnxxx /mnt # df -k

Filesystem 1K-blocks Used Available Use% Mounted on

nasnfsxx:/Projects 722803200 676903936 45899264 94% /remote/projects

nasnfsxx:/Users 3820531200 1337473280 2483057920 36% /remote/users

nasnfsxx:/tmp 2994405888 1150564224 1843841664 39% /mnt/tmp

root@lnxxx /mnt # quota -u 4057

Disk quotas for user (uid 4057):

Filesystem blocks quota limit grace files quota limit grace

nasnfs01:/Projects

141704294 209715200 209715200 2570275 0 0

nasnfs01:/Users

314811 4718592 5242880 1469 0 0

As you can see, the quota is not showing for the tmp filesystem.



Re: I need to list Shares and amount of disk space each uses

Shares don’t consume space themselves, because Isilon is of course just one big filesystem. If you have a SmartQuotas license, you could enable accounting (directory) quotas on the share paths and track usage that way. That’s perhaps the most straightforward approach; however, once a directory is a quota protection domain, make sure you understand the other behaviors that come along with that.
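For example, an accounting-only directory quota (no thresholds specified, so nothing is enforced) on a hypothetical share path:

# isi quota quotas create /ifs/data/shares/engineering directory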

InsightIQ doesn’t actually store the filesystem analytics data you see in the GUI on its VM; that data is stored on the cluster itself, as the result of an FSAnalyze job. It’s stored as a sqlite database export, so if you have some reasonable SQL skills you could query the latest result set to get this information as well. It lives in /ifs/.ifsvar/modules/fsa/resultsets/ (there is a symbolic link to the latest one; never modify anything in /ifs/.ifsvar/).
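If you go that route, a sensible first step is simply listing the tables in the export before writing queries (the result-set directory and database file name vary, so the path below is a placeholder):

# sqlite3 /ifs/.ifsvar/modules/fsa/resultsets/<latest>/<database-file> '.tables'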

I’m sure others can link you to the proper steps for this if you want to go this manual, scripted route.

~Chris
