About ProxySG license

I need a solution

Hello,

We are planning to use the ProxySG S200-40, deployed as two appliances in a master/failover pair. If we use the Blue Coat WebFilter subscription on the ProxySG S200-40, will we need two licenses for each appliance or one license for each appliance? Also, can we use a redundant ProxySG server, and how would that be licensed?

Have a nice day.

Thank you.


Threat Detected on a drive that doesn’t exist?

I need a solution

Hello-

We are receiving the following threat detections on a particular PC:

Resolved Threats:
No risks have been resolved

Unresolved Threats:
Trojan.Gen.MBT
 Type: Anomaly
 Risk: High (High Stealth, High Removal, High Performance, High Privacy)
 Categories: Virus
 Status: Remove Failed
 ———–
 1 Infected File
D:\DHL_Label_Scan _  June 19 2019 at 2.21_06455210_PDF.exe – Failed
 1 Browser Cache

Heur.AdvML.C
 Type: Anomaly
 Risk: High (High Stealth, High Removal, High Performance, High Privacy)
 Categories: Heuristic Virus
 Status: Remove Failed
 ———–
 1 Infected File
D:\DHL_Label_Scan _  June 19 2019 at 2.21_06455210_PDF.exe – Failed
 1 Browser Cache

The problem is that there is no disc in the optical drive and no drive D: exists on the machine — see attachment. I do recognize the filename: it was an attachment included in a spam email that was never opened and has since been deleted.

Any ideas on how to clear these alerts?


Multiple Pac Files

I need a solution

Hello,

We tried to serve two different PAC files via exceptions to our users.
We configured the ASG as described in https://www.symantec.com/docs/TECH241646 .

After configuring the browser (Firefox) with the auto proxy configuration URL
http://proxy.web.mycompany.org/browserconfig.pac

the browser asks for the config and gets an answer:

GET /browserconfig.pac HTTP/1.1
Host: proxy.web.mycompany.org
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.91 Safari/537.36
Accept-Encoding: gzip, deflate
Accept-Language: de-DE

HTTP/1.1 200 OK
Content-Type: application/x-ns-proxy-autoconfig
Cache-Control: no-cache
Pragma: no-cache
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Content-Length: 1504

function FindProxyForURL(url, host)
{
    if (
      shExpMatch(url, "https://int.login.mycompany.org/*") ||
      shExpMatch(url, "https://login.mycompany.org/*") ||
      shExpMatch(url, "https://entw.mycompany.org/*")
    )
    return "PROXY proxy.web.mycompany.org:8080; DIRECT";
    //
    if (shExpMatch(host, "highsecurehost.com"))
    return "PROXY highsecure.services.mycompany.org:1234";
    if (
    shExpMatch(url, "*.companyfriends.org*")  ||
    shExpMatch(url, "proxy.mycompany.org*")  ||
    shExpMatch(url, "proxy.web.mycompany.org*")  ||
    shExpMatch(url, "*.otherfriends.org/*")  ||
    shExpMatch(url, "*localhost*")  ||
    isInNet(host, "10.0.0.0", "255.0.0.0") ||
    isInNet(host, "127.0.0.0", "255.0.0.0") ||
    isInNet(host, "192.168.1.0", "255.255.255.0")
    )
    return "DIRECT";
    if ( shExpMatch(url, "*.specialservice.org/*"))
    return "PROXY proxy.mycompany.org:7070; DIRECT";
    if ( dnsResolve("ntp.mycompany.org") == "10.1.99.99" ||
         dnsResolve("ntp.mycompany.org") == "10.100.199.199")
    return "PROXY proxy.web.mycompany.org:8080";
    else
    return "PROXY proxy.web.mycompany.org:8080";
}

But these settings are ignored by the browser, and the configuration does not work.

Regards
Thorsten


When using Custom Portal Theme, AAA or Gateway VServer gives “Cannot Complete Your Request” error.

A fix for this issue is in progress and should be included in upcoming builds.

Workaround:

AAA VServer:

sh cache object | grep plugins.xml

flush cache object -locator <locator for plugins.xml>

enable ns feature IC (If not already enabled)

add cache policy nocache_pol -rule "HTTP.REQ.URL.CONTAINS(\"plugins.xml\")" -action NOCACHE

bind cache global nocache_pol -priority 1 -type REQ_OVERRIDE

Gateway VServer:

sh cache object | grep plugins.xml

flush cache object -locator <locator for plugins.xml>

enable ns feature IC (If not already enabled)

add cache policy nocache_pol -rule "HTTP.REQ.URL.CONTAINS(\"plugins.xml\")" -action NOCACHE

bind vpn vserver <vpnvsname> -policy nocache_pol -priority 1 -type REQUEST

If ‘Integrated Caching’ feature is not licensed or enabled:

set aaa parameter -enableStaticPageCaching NO


How to Configure the Integrated Caching Feature of a NetScaler Appliance for various Scenarios

You can configure the Integrated Caching feature of a NetScaler appliance for the following scenarios, as required:

Note: The memory limit of the NetScaler appliance is identified when the appliance starts. Therefore, any change to the memory limit requires a restart of the appliance for the change to take effect across the packet engines.

The Feature is Enabled and Cache Memory Limit is Set to Non-Zero

In this scenario, when you start the appliance, the Integrated Caching feature is enabled and the global memory limit is set to a positive number. Therefore, the memory you had set earlier is allocated to the Integrated Caching feature during the boot process. However, you might want to change the memory limit to another value depending on the available memory on the appliance.

To configure the Integrated Caching feature in this scenario, complete the following procedure:

  1. Run the following command to verify the value for the memory limit:

    NS> show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 500 MBytes

    Memory usage limit (active value): 500 MBytes


    Maximum value for Memory usage limit: 843 MBytes

    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

  2. Run the following command to set a non-zero memory limit for the Integrated Caching feature:

    set cache parameter -memLimit 600

    The preceding command displays the following warning message:

    Warning: To use new Integrated Cache memory limit, save the configuration and restart the NetScaler.

  3. Run the following command to save the configuration:

    save config

  4. From the shell prompt, run the following command to verify the memory limit in the configuration file:

    root@ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 600 -via NS-CACHE-9.3: 18 -verifyUsing HOSTNAME_AND_IP -maxPostLen 0 -enableBypass YES -undefAction NOCACHE

  5. Run the following command to restart the appliance:

    root@ns# reboot

  6. Run the following command to verify the new value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 600 Mbytes

    Memory usage limit (active value): 600 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    After all packet engines start successfully, the Integrated Caching feature negotiates the memory you configured. If the appliance cannot provide the configured amount, memory is allocated accordingly: when the available memory is less than the amount you allocated, the appliance recommends a lower value, and the Integrated Caching feature uses that as the active value.
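In other words, the active value is simply the configured limit capped at what the packet engines can actually provide. As a rough illustration of that negotiation (plain Python, not NetScaler code):

```python
def negotiate_cache_limit(configured_mb: int, available_mb: int) -> int:
    """Illustration of the negotiation described above: the active value
    is the configured limit, capped at the memory actually available."""
    return min(configured_mb, available_mb)

# The appliance can satisfy the configured 600 MB limit in full:
print(negotiate_cache_limit(600, 843))  # 600
# Only 400 MB available: the recommended lower value becomes the active value:
print(negotiate_cache_limit(600, 400))  # 400
```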

The Feature is Disabled and Cache Memory Limit is Set to Non-Zero

In this scenario, when you start the appliance, the Integrated Caching feature is disabled and the global memory limit is set to a positive number. Therefore, no memory is allocated to the Integrated Caching feature during the boot process.

To configure the Integrated Caching feature to a new memory limit, complete the following procedure:

  1. Run the following command to verify the current value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 600 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

  2. Run the following command to set a new memory limit for the Integrated Caching feature, such as 500 MB:

    set cache parameter -memLimit 500

    The preceding command displays the following warning message:

    Warning: Feature(s) not enabled [IC]

  3. Run the following command to save the configuration:

    save config

  4. From the shell prompt, run the following command to verify the memory limit in the configuration file:

    ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 500 -via NS-CACHE-9.3: 18 -verifyUsing HOSTNAME_AND_IP -maxPostLen 0 -enableBypass YES -undefAction NOCACHE

  5. Run the following command to verify the new value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 500 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

  6. Run the following command to enable the Integrated Caching feature:

    enable ns feature ic

    After running the preceding command, the appliance negotiates memory for the Integrated Caching feature and the available memory is assigned to the feature. This results in the appliance caching objects without restarting the appliance.

  7. Run the following command to verify the value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 500 Mbytes

    Memory usage limit (active value): 500 Mbytes


    Maximum value for Memory usage limit: 843 Mbytes

    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    Notice that 500 MB of memory is allocated to the Integrated Caching feature.

  8. Save the configuration to ensure that memory is automatically allocated to the feature when the appliance is restarted.

The Feature is Enabled and Cache Memory Limit is Set to Zero

In this scenario, when you start the appliance, the Integrated Caching feature is enabled and the global memory limit is set to zero. Therefore, no memory is allocated to the Integrated Caching feature during the boot process.

To configure a cache memory limit in this scenario, complete the following procedure:

  1. Switch to the shell prompt and run the following command to verify the memory limits set in the ns.conf file:

    ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 0 -via "NS-CACHE-9.3: 18" -verifyUsing HOSTNAME -maxPostLen 4096 -enableBypass YES -undefAction NOCACHE

  2. Run the following command to verify the value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 0 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3:

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    Notice that the memory limit is set to 0 MB and no memory is allocated to the Integrated Caching feature.

  3. To ensure that the Integrated Caching feature caches objects, run the following command to set the memory limits:

    set cache parameter -memLimit 600

    After running the preceding command, the appliance negotiates memory for the Integrated Caching feature and the available memory is assigned to the feature. This results in the appliance caching objects without restarting the appliance.

  4. Run the following command to verify the value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 600 Mbytes

    Memory usage limit (active value): 600 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3:

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    Notice that 600 MB of memory is allocated to the Integrated Caching feature.

  5. Save the configuration to ensure that memory is automatically allocated to the feature when the appliance is restarted.

  6. Switch to the shell prompt and run the following command to verify the memory limits set in the ns.conf file:

    ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 600 -via NS-CACHE-9.3: -verifyUsing HOSTNAME_AND_IP -maxPostLen 0 -enableBypass YES -undefAction NOCACHE

The Feature is Disabled and Cache Memory Limit is Set to Zero

In this scenario, when you start the appliance, the Integrated Caching feature is disabled and the global memory limit is set to zero. Therefore, no memory is allocated to the Integrated Caching feature during the boot process.

To configure the Integrated Caching feature in this scenario, complete the following procedure:

  1. Run the following command to verify the memory limits set in the ns.conf file:

    ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 0 -via "NS-CACHE-9.3: 18" -verifyUsing HOSTNAME_AND_IP -maxPostLen 0 -enableBypass YES -undefAction NOCACHE

  2. Run the following command to verify the value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 0 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    Notice that the memory limit is set to 0 MB and no memory is allocated to the Integrated Caching feature. Additionally, when you run any cache configuration command, the following warning message is displayed:

    Warning: Feature(s) not enabled [IC]

  3. Run the following command to enable the Integrated Caching feature:

    enable ns feature ic

    At this stage, when you enable the Integrated Caching feature, the appliance does not allocate memory to the feature. As a result, no object is cached to the memory. Additionally, when you run any cache configuration command, the following warning message is displayed:

    No memory is configured for IC. Use set cache parameter command to set the memory limit.

  4. To ensure that the Integrated Caching feature caches objects, run the following command to set the memory limits:

    set cache parameter -memLimit 500

    After running the preceding command, the appliance negotiates memory for the Integrated Caching feature and the available memory is assigned to the feature. This results in the appliance caching objects without restarting the appliance.

    Note: The order in which you enable the feature and set the memory limit is important. If you set the memory limit before enabling the feature, the following warning message is displayed:

    Warning: Feature(s) not enabled [IC]

  5. Run the following command to verify the value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 500 Mbytes

    Memory usage limit (active value): 500 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    Notice that 500 MB of memory is allocated to the Integrated Caching feature.

  6. Run the following command to save the configuration:

    save config

  7. Run the following command to verify the memory limits set in the ns.conf file:

    ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 500 -via NS-CACHE-9.3: 18 -verifyUsing HOSTNAME_AND_IP -maxPostLen 0 -enableBypass YES -undefAction NOCACHE


VMAX3 & VMAX All Flash: Cache slot lock cleanup for EDS emulation.

Article Number: 504592 Article Version: 5 Article Type: Break Fix



VMAX3 Series, VMAX All Flash

Host encounters performance problem after redundant Director becomes non-operational.

When a Director fails, each code emulation within HYPERMAX OS performs a cache cleanup routine to unlock cache slots in Global Memory that were in the middle of tasks owned by the failing Director and its emulations. This unlock routine (known as Dead Director Cache Cleanup) allows the emulations to either re-purpose each cache slot or re-drive its task to completion on an emulation CPU of a live, running Director. An issue has been found where cache slots with certain conditions and tasks that were locked by an Enterprise Delivery Services (EDS) emulation CPU on a failed Director are not unlocked as designed and intended.

Failed redundant Director

Dell EMC engineering is currently investigating this problem. A permanent fix is still in progress. Contact the Dell EMC VMAX Support Center or your service representative for assistance and quote this Knowledgebase ID.


OneFS and Synchronous Writes

The last article, on multi-threaded I/O, generated several questions about synchronous writes in OneFS. So I thought this would make a useful topic to kick off the New Year and explore in a bit more detail.

OneFS natively provides a caching mechanism for synchronous writes – or writes that require a stable write acknowledgement to be returned to a client. This functionality is known as the Endurant Cache, or EC.

The EC operates in conjunction with the OneFS write cache, or coalescer, to ingest, protect and aggregate small, synchronous NFS writes. The incoming write blocks are staged to NVRAM, ensuring the integrity of the write, even during the unlikely event of a node’s power loss. Furthermore, EC also creates multiple mirrored copies of the data, further guaranteeing protection from single node and, if desired, multiple node failures.

EC improves the latency associated with synchronous writes by reducing the time to acknowledgement back to the client. This process removes the Read-Modify-Write (R-M-W) operations from the acknowledgement latency path, while also leveraging the coalescer to optimize writes to disk. EC is also tightly coupled with OneFS’ multi-threaded I/O (Multi-writer) process, to support concurrent writes from multiple client writer threads to the same file. Plus, the design of EC ensures that the cached writes do not impact snapshot performance.

The endurant cache uses write logging to combine and protect small writes at random offsets into 8KB linear writes. To achieve this, the writes go to special mirrored files, or ‘Logstores’. The response to a stable write request can be sent once the data is committed to the logstore. Logstores can be written to by several threads from the same node, and are highly optimized to enable low-latency concurrent writes.

Note that if a write uses the EC, the coalescer must also be used. If the coalescer is disabled on a file, but EC is enabled, the coalescer will still be active with all data backed by the EC.

So what exactly does an endurant cache write sequence look like?

Say an NFS client wishes to write a file to an Isilon cluster over NFS with the O_SYNC flag set, requiring a confirmed or synchronous write acknowledgement. Here is the sequence of events that occur to facilitate a stable write.

1) A client, connected to node 3, begins the write process sending protocol level blocks.



ec_1.png



4KB is the optimal block size for the endurant cache.

2) The NFS client’s writes are temporarily stored in the write coalescer portion of node 3’s RAM. The write coalescer aggregates uncommitted blocks so that OneFS can, ideally, write out full protection groups where possible, reducing latency over protocols that allow “unstable” writes. Writing to RAM has far less latency than writing directly to disk.

3) Once in the write coalescer, the endurant cache log-writer process writes mirrored copies of the data blocks in parallel to the EC Log Files.



ec_2.png

The protection level of the mirrored EC log files is the same as that of the data being written by the NFS client.

4) When the data copies are received into the EC Log Files, a stable write exists and a write acknowledgement (ACK) is returned to the NFS client confirming the stable write has occurred.



ec_3.png



The client assumes the write is completed and can close the write session.

5) The write coalescer then processes the file just like a non-EC write at this point. The write coalescer fills and is routinely flushed as required, as an asynchronous write, via the block allocation manager (BAM) and the BAM safe write (BSW) path processes.

6) The file is split into 128K data stripe units (DSUs), parity protection (FEC) is calculated and FEC stripe units (FSUs) are created.



ec_4.png

7) The layout and write plan is then determined, and the stripe units are written to their corresponding nodes’ L2 Cache and NVRAM. The EC logfiles are cleared from NVRAM at this point. OneFS uses a Fast Invalid Path process to de-allocate the EC Log Files from NVRAM.



ec_5.png

8) Stripe Units are then flushed to physical disk.

9) Once written to physical disk, the data stripe Unit (DSU) and FEC stripe unit (FSU) copies created during the write are cleared from NVRAM but remain in L2 cache until flushed to make room for more recently accessed data.



ec_6.png

As far as protection goes, the number of logfile mirrors created by EC is always one more than the on-disk protection level of the file. For example:

File Protection Level    Number of EC Mirrored Copies
+1n                      2
2x                       3
+2n                      3
+2d:1n                   3
+3n                      4
+3d:1n                   4
+4n                      5
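The relationship in the table can be sketched as a quick helper; `ec_mirror_count` below is a hypothetical illustration, not a OneFS API:

```python
def ec_mirror_count(protection_level: str) -> int:
    """Hypothetical helper: the number of EC logfile mirrors is always one
    more than the on-disk protection count. Accepts OneFS-style protection
    strings such as "+2d:1n" (FEC protected) or "2x" (N-way mirrored)."""
    if protection_level.endswith("x"):
        # "2x" = 2 mirrored copies on disk, so 3 EC log mirrors
        return int(protection_level[:-1]) + 1
    # "+Nn" or "+Nd:Mn" forms: the leading digit N is the protection count
    return int(protection_level.lstrip("+")[0]) + 1

for level in ("+1n", "2x", "+2d:1n", "+4n"):
    print(level, "->", ec_mirror_count(level))
```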

The EC mirrors are only used if the initiator node is lost. In the unlikely event that this occurs, the participant nodes replay their EC journals and complete the writes.

If the write is an EC candidate, the data remains in the coalescer, an EC write is constructed, and the appropriate coalescer region is marked as EC. The EC write is a write into a logstore (hidden mirrored file) and the data is placed into the journal.

Assuming the journal is sufficiently empty, the write is held there (cached) and only flushed to disk when the journal is full, thereby saving additional disk activity.

An optimal workload for EC involves small-block synchronous, sequential writes – something like an audit or redo log, for example. In that case, the coalescer will accumulate a full protection group’s worth of data and be able to perform an efficient FEC write.

The happy medium is a synchronous small block type load, particularly where the I/O rate is low and the client is latency-sensitive. In this case, the latency will be reduced and, if the I/O rate is low enough, it won’t create serious pressure.

The undesirable scenario is when the cluster is already spindle-bound and the workload is such that it generates a lot of journal pressure. In this case, EC is just going to aggravate things.

So how exactly do you configure the endurant cache?

Although on by default, setting the efs.bam.ec.mode sysctl to value ‘1’ will enable the Endurant Cache:

# isi_for_array -s isi_sysctl_cluster efs.bam.ec.mode=1

EC can also be enabled & disabled per directory:

# isi set -c [on|off|endurant_all|coal_only] <directory_name>

To enable the coalescer but switch off EC, run:

# isi set -c coal_only <directory_name>

And to disable the endurant cache completely:

# isi_for_array -s isi_sysctl_cluster efs.bam.ec.mode=0

A return value of zero on each node from the following command will verify that EC is disabled across the cluster:

# isi_for_array -s sysctl efs.bam.ec.stats.write_blocks
efs.bam.ec.stats.write_blocks: 0

If the output to this command is incrementing, EC is delivering stable writes.

As mentioned previously, EC applies to stable writes. Namely:

  • Writes with O_SYNC and/or O_DIRECT flags set
  • Files on synchronous NFS mounts

When it comes to analyzing any performance issues involving EC workloads, consider the following:

  • What changed with the workload?
  • If upgrading OneFS, did the prior version also have EC enabled?
  • Has the workload moved to new cluster hardware?
  • Does the performance issue occur during periods of high CPU utilization?
  • Which part of the workload is creating a deluge of stable writes?
  • Was there a large change in spindle or node count?
  • Has the OneFS protection level changed?
  • Is the SSD strategy the same?

Disabling EC is typically done cluster-wide and this can adversely impact certain workflow elements. If the EC load is localized to a subset of the files being written, an alternative way to reduce the EC heat might be to disable the coalescer buffers for some particular target directories, which would be a more targeted adjustment. This can be configured via the isi set -c off command.

One of the more likely causes of performance degradation is from applications aggressively flushing over-writes and, as a result, generating a flurry of ‘commit’ operations. This can generate heavy read/modify/write (r-m-w) cycles, inflating the average disk queue depth, and resulting in significantly slower random reads. The isi statistics protocol CLI command output will indicate whether the ‘commit’ rate is high.

It’s worth noting that synchronous writes do not require using the NFS ‘sync’ mount option. Any programmer who is concerned with write persistence can simply specify an O_FSYNC or O_DIRECT flag on the open() operation to force synchronous write semantics for that file handle. With Linux, writes using O_DIRECT will be separately accounted for in the Linux ‘mountstats’ output. Although it’s almost exclusively associated with NFS, the EC code is actually protocol-agnostic. If writes are synchronous (write-through) and are either misaligned or smaller than 8KB, they have the potential to trigger EC, regardless of the protocol.
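As a minimal illustration of forcing synchronous semantics from application code, here is a generic POSIX sketch in Python; the path is hypothetical and nothing here is OneFS-specific:

```python
import os

# O_SYNC makes each write() return only after the data has reached stable
# storage: on an NFS mount to a cluster, these are exactly the stable
# writes that the endurant cache accelerates.
path = "/tmp/ec_demo.log"  # illustrative; in practice an NFS-mounted file
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_SYNC, 0o644)
try:
    os.write(fd, b"audit record 1\n")  # acknowledged as a stable write
finally:
    os.close(fd)
```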

The endurant cache can provide a significant latency benefit for small (eg. 4K), random synchronous writes – albeit at a cost of some additional work for the system.

However, it’s worth bearing the following caveats in mind:

  • EC is not intended for more general purpose I/O.
  • There is a finite amount of EC available. As load increases, EC can potentially ‘fall behind’ and end up being a bottleneck.
  • Endurant Cache does not improve read performance, since it’s strictly part of the write process.
  • EC will not increase performance of asynchronous writes – only synchronous writes.


Re: unity and storops and metrics

Hi,

Apologies for the question, but I am not sure who to ask.

I see at https://pypi.org/project/storops/ that the following metrics are exposed:

  • system
    • read/write/total IOPS
    • read/write/total bandwidth
  • disk
    • read/write/total IOPS
    • read/write/total bandwidth
    • utilization
    • response time
    • queue length
  • lun
    • read/write/total IOPS
    • read/write/total bandwidth
    • utilization
    • response time
    • queue length
  • filesystem
    • read/write IOPS
    • read/write bandwidth
  • storage processor
    • net in/out bandwidth
    • block read/write/total IOPS
    • block read/write/total bandwidth
    • CIFS read/write IOPS
    • CIFS read/write bandwidth
    • NFS read/write IOPS
    • NFS read/write bandwidth
    • utilization
    • block cache read/write hit ratio
    • block cache dirty size
    • fast cache read/write hits
    • fast cache read/write hit rate
  • fc port
    • read/write/total IOPS
    • read/write/total bandwidth
  • iscsi node
    • read/write/total IOPS
    • read/write/total bandwidth


Looking at the methods visible, I see:

{"UnitySystem": {"avg_power": 310, "current_power": 313, "existed": true, "hash": 1480349, "health": {"UnityHealth": {"hash": 2129561}}, "id": "0", "internal_model": "DPE OB BDW 25DRV 256GB 14C", "is_auto_failback_enabled": false, "is_eula_accepted": true, "is_upgrade_complete": false, "mac_address": "08:00:1B:FF:B2:AD", "model": "Unity 550F", "name": "dubnas308-spa-mgmt", "platform": "Oberon_DualSP", "serial_number": "CKM00184801131"}}

['__class__', '__delattr__', '__dict__', '__doc__', '__eq__', '__format__', '__getattr__', '__getattribute__', '__getstate__', '__hash__', '__init__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_auto_balance_sp', '_cli', '_default_rsc_list_with_perf_stats', '_get_first_not_none_prop', '_get_name', '_get_parser', '_get_properties', '_get_property_from_raw', '_get_raw_resource', '_get_unity_rsc', '_get_value_by_key', '_id', '_is_updated', '_ntp_server', '_parse_raw', '_parsed_resource', '_preloaded_properties', '_self_cache_', '_self_cache_lock_', '_system_time', 'action', 'add_dns_server', 'add_metric_record', 'add_ntp_server', 'build_nested_properties_obj', 'clear_dns_server', 'clear_ntp_server', 'clz_name', 'create_cg', 'create_host', 'create_io_limit_policy', 'create_iscsi_portal', 'create_nas_server', 'create_pool', 'create_tenant', 'delete', 'disable_perf_stats', 'disable_persist_perf_stats', 'dns_server', 'doc', 'enable_perf_stats', 'enable_persist_perf_stats', 'existed', 'get', 'get_battery', 'get_capability_profile', 'get_cg', 'get_cifs_server', 'get_cifs_share', 'get_dae', 'get_dict_repr', 'get_disk', 'get_disk_group', 'get_dns_server', 'get_doc', 'get_dpe', 'get_ethernet_port', 'get_fan', 'get_fc_port', 'get_feature', 'get_file_interface', 'get_file_port', 'get_filesystem', 'get_host', 'get_id', 'get_index', 'get_initiator', 'get_io_limit_policy', 'get_io_module', 'get_ip_port', 'get_iscsi_node', 'get_iscsi_portal', 'get_lcc', 'get_license', 'get_link_aggregation', 'get_lun', 'get_memory_module', 'get_metric_query_result', 'get_metric_timestamp', 'get_metric_value', 'get_mgmt_interface', 'get_nas_server', 'get_nested_properties', 'get_nfs_server', 'get_nfs_share', 'get_pool', 'get_power_supply', 'get_preloaded_prop_keys', 'get_property_label', 'get_resource_class', 'get_sas_port', 'get_snap', 'get_sp', 'get_ssc', 'get_ssd', 'get_system_capacity',
'get_tenant', 'get_tenant_use_vlan', 'info', 'is_perf_stats_enabled', 'is_perf_stats_persisted', 'json', 'metric_names', 'modify', 'ntp_server', 'parse', 'parse_all', 'parsed_resource', 'property_names', 'remove_dns_server', 'remove_ntp_server', 'resource_class', 'set_cli', 'set_preloaded_properties', 'set_system_time', 'shadow_copy', 'singleton_id', 'system_time', 'system_version', 'update', 'update_name_if_exists', 'upload_license', 'verify']

Where do I get for example

  • storage processor

utilization

as a call to ‘get_sp’ give me the static data;

{“UnityStorageProcessorList”: [{“UnityStorageProcessor”: {“bios_firmware_revision”: “53.51”, “emc_part_number”: “110-297-014C-04”, “emc_serial_number”: “CE8HH184100349”, “existed”: true, “hash”: 1814369, “health”: {“UnityHealth”: {“hash”: 2129621}}, “id”: “spa”, “is_rescue_mode”: false, “manufacturer”: “”, “memory_size”: 131072, “model”: “ASSY OB SP BDW 14C 2.0G 128G STM”, “name”: “SP A”, “needs_replacement”: false, “parent_dpe”: {“UnityDpe”: {“hash”: 2129513, “id”: “dpe”}}, “post_firmware_revision”: “31.10”, “sas_expander_version”: “2.26.1”, “slot_number”: 0, “vendor_part_number”: “”, “vendor_serial_number”: “”}}, {“UnityStorageProcessor”: {“bios_firmware_revision”: “53.51”, “emc_part_number”: “110-297-014C-04”, “emc_serial_number”: “CE8HH184100370”, “existed”: true, “hash”: 1480429, “health”: {“UnityHealth”: {“hash”: 2283289}}, “id”: “spb”, “is_rescue_mode”: false, “manufacturer”: “”, “memory_size”: 0, “model”: “ASSY OB SP BDW 14C 2.0G 128G STM”, “name”: “SP B”, “needs_replacement”: false, “parent_dpe”: {“UnityDpe”: {“hash”: 2283305, “id”: “dpe”}}, “post_firmware_revision”: “31.10”, “sas_expander_version”: “2.26.1”, “slot_number”: 1, “vendor_part_number”: “”, “vendor_serial_number”: “”}}]}

but I assume its some variation of ‘get_metric_query_result’ to get performance metrics, but I an unsure of syntax of call

Any help appreciated;

Related:

  • No Related Posts

unity and storops and metrics

Hi,

Apologies for the question, but I'm not sure who else to ask.

I see at https://pypi.org/project/storops/ that the following

  • system
    • read/write/total IOPS
    • read/write/total bandwidth
  • disk
    • read/write/total IOPS
    • read/write/total bandwidth
    • utilization
    • response time
    • queue length
  • lun
    • read/write/total IOPS
    • read/write/total bandwidth
    • utilization
    • response time
    • queue length
  • filesystem
    • read/write IOPS
    • read/write bandwidth
  • storage processor
    • net in/out bandwidth
    • block read/write/total IOPS
    • block read/write/total bandwidth
    • CIFS read/write IOPS
    • CIFS read/write bandwidth
    • NFS read/write IOPS
    • NFS read/write bandwidth
    • utilization
    • block cache read/write hit ratio
    • block cache dirty size
    • fast cache read/write hits
    • fast cache read/write hit rate
  • fc port
    • read/write/total IOPS
    • read/write/total bandwidth
  • iscsi node
    • read/write/total IOPS
    • read/write/total bandwidth

metrics are exposed.

Looking at the methods visible on the UnitySystem object, I see:

{“UnitySystem”: {“avg_power”: 310, “current_power”: 313, “existed”: true, “hash”: 1480349, “health”: {“UnityHealth”: {“hash”: 2129561}}, “id”: “0”, “internal_model”: “DPE OB BDW 25DRV 256GB 14C”, “is_auto_failback_enabled”: false, “is_eula_accepted”: true, “is_upgrade_complete”: false, “mac_address”: “08:00:1B:FF:B2:AD”, “model”: “Unity 550F”, “name”: “dubnas308-spa-mgmt”, “platform”: “Oberon_DualSP”, “serial_number”: “CKM00184801131”}}

[‘__class__’, ‘__delattr__’, ‘__dict__’, ‘__doc__’, ‘__eq__’, ‘__format__’, ‘__getattr__’, ‘__getattribute__’, ‘__getstate__’, ‘__hash__’, ‘__init__’, ‘__module__’, ‘__ne__’, ‘__new__’, ‘__reduce__’, ‘__reduce_ex__’, ‘__repr__’, ‘__setattr__’, ‘__sizeof__’, ‘__str__’, ‘__subclasshook__’, ‘__weakref__’, ‘_auto_balance_sp’, ‘_cli’, ‘_default_rsc_list_with_perf_stats’, ‘_get_first_not_none_prop’, ‘_get_name’, ‘_get_parser’, ‘_get_properties’, ‘_get_property_from_raw’, ‘_get_raw_resource’, ‘_get_unity_rsc’, ‘_get_value_by_key’, ‘_id’, ‘_is_updated’, ‘_ntp_server’, ‘_parse_raw’, ‘_parsed_resource’, ‘_preloaded_properties’, ‘_self_cache_’, ‘_self_cache_lock_’, ‘_system_time’, ‘action’, ‘add_dns_server’, ‘add_metric_record’, ‘add_ntp_server’, ‘build_nested_properties_obj’, ‘clear_dns_server’, ‘clear_ntp_server’, ‘clz_name’, ‘create_cg’, ‘create_host’, ‘create_io_limit_policy’, ‘create_iscsi_portal’, ‘create_nas_server’, ‘create_pool’, ‘create_tenant’, ‘delete’, ‘disable_perf_stats’, ‘disable_persist_perf_stats’, ‘dns_server’, ‘doc’, ‘enable_perf_stats’, ‘enable_persist_perf_stats’, ‘existed’, ‘get’, ‘get_battery’, ‘get_capability_profile’, ‘get_cg’, ‘get_cifs_server’, ‘get_cifs_share’, ‘get_dae’, ‘get_dict_repr’, ‘get_disk’, ‘get_disk_group’, ‘get_dns_server’, ‘get_doc’, ‘get_dpe’, ‘get_ethernet_port’, ‘get_fan’, ‘get_fc_port’, ‘get_feature’, ‘get_file_interface’, ‘get_file_port’, ‘get_filesystem’, ‘get_host’, ‘get_id’, ‘get_index’, ‘get_initiator’, ‘get_io_limit_policy’, ‘get_io_module’, ‘get_ip_port’, ‘get_iscsi_node’, ‘get_iscsi_portal’, ‘get_lcc’, ‘get_license’, ‘get_link_aggregation’, ‘get_lun’, ‘get_memory_module’, ‘get_metric_query_result’, ‘get_metric_timestamp’, ‘get_metric_value’, ‘get_mgmt_interface’, ‘get_nas_server’, ‘get_nested_properties’, ‘get_nfs_server’, ‘get_nfs_share’, ‘get_pool’, ‘get_power_supply’, ‘get_preloaded_prop_keys’, ‘get_property_label’, ‘get_resource_class’, ‘get_sas_port’, ‘get_snap’, ‘get_sp’, ‘get_ssc’, ‘get_ssd’, ‘get_system_capacity’, 
‘get_tenant’, ‘get_tenant_use_vlan’, ‘info’, ‘is_perf_stats_enabled’, ‘is_perf_stats_persisted’, ‘json’, ‘metric_names’, ‘modify’, ‘ntp_server’, ‘parse’, ‘parse_all’, ‘parsed_resource’, ‘property_names’, ‘remove_dns_server’, ‘remove_ntp_server’, ‘resource_class’, ‘set_cli’, ‘set_preloaded_properties’, ‘set_system_time’, ‘shadow_copy’, ‘singleton_id’, ‘system_time’, ‘system_version’, ‘update’, ‘update_name_if_exists’, ‘upload_license’, ‘verify’]

Where do I get, for example, storage processor utilization?

A call to ‘get_sp’ only gives me static data:

{“UnityStorageProcessorList”: [{“UnityStorageProcessor”: {“bios_firmware_revision”: “53.51”, “emc_part_number”: “110-297-014C-04”, “emc_serial_number”: “CE8HH184100349”, “existed”: true, “hash”: 1814369, “health”: {“UnityHealth”: {“hash”: 2129621}}, “id”: “spa”, “is_rescue_mode”: false, “manufacturer”: “”, “memory_size”: 131072, “model”: “ASSY OB SP BDW 14C 2.0G 128G STM”, “name”: “SP A”, “needs_replacement”: false, “parent_dpe”: {“UnityDpe”: {“hash”: 2129513, “id”: “dpe”}}, “post_firmware_revision”: “31.10”, “sas_expander_version”: “2.26.1”, “slot_number”: 0, “vendor_part_number”: “”, “vendor_serial_number”: “”}}, {“UnityStorageProcessor”: {“bios_firmware_revision”: “53.51”, “emc_part_number”: “110-297-014C-04”, “emc_serial_number”: “CE8HH184100370”, “existed”: true, “hash”: 1480429, “health”: {“UnityHealth”: {“hash”: 2283289}}, “id”: “spb”, “is_rescue_mode”: false, “manufacturer”: “”, “memory_size”: 0, “model”: “ASSY OB SP BDW 14C 2.0G 128G STM”, “name”: “SP B”, “needs_replacement”: false, “parent_dpe”: {“UnityDpe”: {“hash”: 2283305, “id”: “dpe”}}, “post_firmware_revision”: “31.10”, “sas_expander_version”: “2.26.1”, “slot_number”: 1, “vendor_part_number”: “”, “vendor_serial_number”: “”}}]}

I assume some variation of ‘get_metric_query_result’ is needed to get performance metrics, but I am unsure of the syntax of the call.
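For what it's worth, a sketch of the perf-stats flow suggested by the ‘enable_perf_stats’ and ‘get_sp’ names in the dir() listing above (an assumption about storops' behaviour, not a verified recipe — the ‘interval’ argument and the two-cycle warm-up in particular are guesses):

```python
# Assumed storops flow: enable_perf_stats() starts periodic metric sampling,
# after which calculated properties such as sp.utilization become available
# once enough samples exist to compute a delta from.
import time

def read_sp_utilization(unity, interval=60, settle_cycles=2):
    """Return {SP name: utilization} after metric collection has warmed up."""
    unity.enable_perf_stats(interval)     # start background sampling (interval assumed)
    time.sleep(interval * settle_cycles)  # wait for enough samples to accumulate
    return {sp.name: sp.utilization for sp in unity.get_sp()}
```

Against a real array this would be driven by something like `unity = UnitySystem('<mgmt-ip>', 'admin', '<password>')` from storops; whether ‘utilization’ appears directly as a property on the SP resource, or only via ‘get_metric_query_result’, is exactly the open question here.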

Any help appreciated.
