Can ICT scan and classify files in an NFS share?

I need a solution

Hello, 

There is a NetApp NFS file share that needs to be scanned and all of its files classified (adding metadata, visual tags, etc.) based on content. Can ICT connect to and scan NFS shares hosted on NetApp? Would it need DLP Network Discover to perform the scanning? I know Network Discover is supposed to be able to connect to and scan NFS shares, but I have never done it.

Thanks,


Related:

Fueling Growth with Machine Learning and Predictive Analytics

Drawing on the power of machine learning, predictive analytics and the Apache Hadoop platform, Epsilon helps some of the world’s top brands get the right message to the right person at the right time. Although you probably don’t realize it when you are interacting with some of the world’s most admired brands, their communications are often coming to you through a company called Epsilon. As a Dell EMC case study explains, Epsilon is the digital powerhouse behind the marketing and loyalty programs of many Fortune 500 companies,  including American Express, FedEx and Walgreens, to name just … READ MORE

Related:

Accelerating the Dell EMC Partnership with the ‘New’ Cloudera

The platform design paradigm from the early days of Hadoop has been to co-locate compute and storage on the same server, which requires expanding both in tandem as your Hadoop cluster grows. This is an acceptable approach for workloads that need to expand compute and storage simultaneously. But, as Hadoop gained mainstream adoption, enterprises started finding workloads where the need for storage outpaced the need for compute. And now, after a decade of big data, enterprises are finding that historical data sets, though accessed less frequently, still need to be easily accessible. This has brought forth … READ MORE

Related:

Powering New Insights with a High Performing Data Lake



To discover new insights with analytics, organizations look for correlations across different, combined data sets. For those insights to be discovered, they need to provide concurrent access to the data for multiple workgroups and stakeholders.

Most organizations use a data lake to store their data in its raw, native format. However, building data lakes and managing data storage can be challenging. The areas I most often see organizations struggle with across all types of environments running Hadoop are:

Set Up and Configuration

  • Hadoop services failing due to lack of proper configuration
  • Maintenance of multiple Hadoop environments is challenging and requires more resources

Security and Compliance

  • Lack of consistency and strong security controls in securing Hadoop and the data lake
  • Inability to integrate the data lake with LDAP or Active Directory

Storage and Compute

  • Low cluster utilization efficiency with varied workloads
  • Difficulty in scaling when the increase in data volumes is faster than anticipated
  • Server sprawl and challenges migrating multi-petabyte namespaces from direct attached storage (DAS) to network attached storage (NAS)
  • Lack of Hadoop tiered storage, so hot and cold data sit together, causing performance issues

Multi-tenancy

  • Difficulty supporting the differing requirements of Hadoop Distributed File System (HDFS) workflows within a single Hadoop cluster
  • Challenges in moving data between environments (e.g., dev to prod and prod to dev) for data scientists to use production data in a secure environment

Hadoop on Isilon

Dell EMC’s Isilon Scale Out Network Attached Storage (NAS) makes the process of building data lakes much easier and offers many features that help organizations reduce maintenance and storage costs by keeping all of their data, including structured, semi-structured and unstructured data, in one place and file system.

Organizations can then extend the data lake to the cloud and to enterprise edge locations to consolidate and manage data more effectively, easily gain cloud-scale storage capacity, reduce overall storage costs, increase data protection and security, and simplify management of unstructured data.

Data Engineering Makes the Magic Happen

Hadoop is a consumer of Isilon, the data lake where all the data resides. To fully enable the capabilities of Isilon using Hadoop, and integrate the clusters securely and consistently, you need knowledgeable data engineers to set up and configure the environment. To illustrate the point, let’s look back at the common challenge areas and how you can mitigate them with proper data engineering and Hadoop on Isilon.

For more information on Multi-tenancy, refer to this whitepaper.

Implementation Process

To reap the benefits of Hadoop on Isilon, data engineers need to secure and protect your critical enterprise data and simplify your storage capacity and performance management prior to implementation.

From there, the process of installing a Hadoop distribution and integrating it with an Isilon cluster varies by distribution, requirements, objectives, network topology, security policies, and many other factors. There is also a specific process to follow. For example, a supported distribution of Hadoop is installed and configured with Isilon before Hadoop is configured for authentication, and then both Hadoop and Isilon are authenticated with Kerberos.
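As a quick, illustrative sanity check (not an official procedure), once the HDFS root and SmartConnect zone name have been configured on the Isilon cluster, a Hadoop client should be able to list the HDFS root through that zone name. The hostname below is hypothetical, and port 8020 is assumed to be the cluster's HDFS service port:

# Hypothetical SmartConnect zone name; substitute your own.
# Port 8020 is assumed to be the cluster's HDFS listener.
hdfs dfs -ls hdfs://hdfs.isilon.example.com:8020/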

For more information on setting up and managing Hadoop on Isilon, refer to this white paper.

Engaging a Trusted Partner

The good news is you and your teams don’t need to be experts in data engineering or navigate the implementation and configuration process on your own. Dell EMC Consulting Services can help you optimize your data lake and storage and maximize your investment, whether you’re just getting started with Hadoop on Isilon or have an existing environment that isn’t performing optimally. Our services are delivered by a global team of deeply skilled data engineers and include implementations, migrations, third party software integrations, ETL offloads, health checks and Hadoop performance optimizations as outlined in the graphic below.

Hadoop on Isilon, supported by data engineering services, offers a compelling business proposition for organizations looking to better manage their data to drive new insights and support advanced analytics techniques, such as artificial intelligence. If you are interested in learning more about our Hadoop on Isilon services or other Big Data and Analytics Consulting services, please contact your Dell EMC representative.

The post Powering New Insights with a High Performing Data Lake appeared first on InFocus Blog | Dell EMC Services.



Related:

OneFS Shadow Stores

The recent series of articles on SmartDedupe has generated several questions from the field around shadow stores. So this seemed like an ideal topic to explore in a bit more depth over the course of the next couple of articles.

A shadow store is a class of system file that contains blocks which can be referenced by different files, thereby providing a mechanism that allows multiple files to share common data. Shadow stores were first introduced in OneFS 7.0, initially supporting Isilon file clones, and indeed there are many overlaps between cloning and deduplicating files. As we will see, a variant of the shadow store is also used as a container for file packing in OneFS SFSE (Small File Storage Efficiency), often used in archive workflows such as healthcare PACS.

Architecturally, each shadow store can contain up to 256 blocks, with each block able to be referenced by 32,000 files. If this reference limit is exceeded, a new shadow store is created. Additionally, shadow stores do not reference other shadow stores. All blocks within a shadow store must be either sparse or point at an actual data block. And snapshots of shadow stores are not allowed, since shadow stores have no hard links.

Shadow stores contain the physical addresses and protection for data blocks, just like normal file data. However, a fundamental difference between a shadow store and a regular file is that the former doesn't contain all the metadata typically associated with traditional file inodes. In particular, time-based attributes (creation time, modification time, etc.) are explicitly not maintained.

Consider the shadow store information for a regular, undeduped file (file.orig):

# isi get -DDD file.orig | grep -i shadow

* Shadow refs: 0

zero=36 shadow=0 ditto=0 prealloc=0 block=28

A second copy of this file (file.dup) is then created and then deduplicated:

# isi get -DDD file.* | grep -i shadow

* Shadow refs: 28

zero=36 shadow=28 ditto=0 prealloc=0 block=0

* Shadow refs: 28

zero=36 shadow=28 ditto=0 prealloc=0 block=0

As we can see, the block count of the original file has now become zero and the shadow count for both the original file and its copy is incremented to '28'. Additionally, if another file copy is added and deduplicated, the same shadow store info and count is reported for all three files. It's worth noting that even if the duplicate file(s) are removed, the original file will still retain the shadow store layout.

Each shadow store has a unique identifier called a shadow inode number (SIN). But, before we get into more detail, here’s a table of useful terms and their descriptions:

  • Inode – Data structure that keeps track of all data and metadata (attributes, metatree blocks, etc.) for files and directories in OneFS.
  • LIN – Logical Inode Number; uniquely identifies each regular file in the filesystem.
  • LBN – Logical Block Number; identifies the block offset for each block in a file.
  • IFM Tree (Metatree) – Encapsulates the on-disk and in-memory format of the inode. File data blocks are indexed by LBN in the IFM B-tree, or file metatree. This B-tree stores protection group (PG) records keyed by the first LBN. To retrieve the record for a particular LBN, the first key before the requested LBN is read. The retrieved record may or may not contain actual data block pointers.
  • IDI – Isi Data Integrity checksum. IDI checkcodes help avoid data integrity issues which can occur when hardware provides the wrong data, for example. IDI is focused on the path to and from the drive, and checkcodes are implemented per OneFS block.
  • Protection Group (PG) – Encompasses the data and redundancy associated with a particular region of file data. The file data space is broken up into sections of 16 x 8KB blocks called stripe units. These correspond to the N in N+M notation; there are N+M stripe units in a protection group.
  • Protection Group Record – Record containing block addresses for a data stripe. There are five types of PG records: sparse, ditto, classic, shadow, and mixed. The IFM B-tree uses the B-tree flag bits, the record size, and an inline field to identify the five types of records.
  • BSIN – Base shadow store, containing cloned or deduped data.
  • CSIN – Container shadow store, containing packed data (container of files).
  • SIN – Shadow Inode Number; a LIN for a shadow store, containing blocks that are referenced by different files.
  • Shadow Extent – Contains a Shadow Inode Number (SIN), an offset, and a count. Shadow extents are not included in the FEC calculation, since protection is provided by the shadow store.

Blocks in a shadow store are identified with a SIN and LBN (logical block number).

# isi get -DD /ifs/data/file.dup | fgrep -A 4 -i "protection group"

PROTECTION GROUPS

lbn 0: 4+2/2

4000:0001:0067:0009@0#64

0,0,0:8192#32

A SIN is essentially a LIN that is dedicated to a shadow store file, and SINs are allocated from a subset of the LIN range. Just as every standard file is uniquely identified by a LIN, every shadow store is uniquely identified by a SIN. It is easy to tell if you are dealing with a shadow store because the SIN will begin with 4000. For example, in the output above:

4000:0001:0067:0009

Correspondingly, in the protection group (PG) they are represented as:

  • SIN
  • Block size
  • LBN
  • Run

The referencing protection group will not contain valid IDI data (this remains with the file itself). FEC parity, if required, will be computed assuming a zero block.

When a file references data in a shadow store, it contains meta-tree records that point to the shadow store. This meta-tree record contains a shadow reference, which comprises a SIN and LBN pair that uniquely identifies a block in a shadow store.

A set of extension blocks within the shadow store holds the reference count for each shadow store data block. The reference count for a block is adjusted each time a reference is created or deleted from any other file to that block. If a shadow store block's reference count drops to zero, it is marked as deleted, and the ShadowStoreDelete job, which runs periodically, deallocates the block.

Be aware that shadow stores are not directly exposed in the filesystem namespace. However, shadow stores and relevant statistics can be viewed using the ‘isi dedupe stats’, ‘isi_sstore list’ and ‘isi_sstore stats’ command line utilities.
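For example, a quick survey of the shadow store landscape on a cluster might look like the following; output is omitted here and will vary by OneFS release:

# Summarize deduplication savings.
isi dedupe stats

# Enumerate the shadow stores on the cluster.
isi_sstore list

# Display aggregate shadow store statistics.
isi_sstore stats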

Cloning

In OneFS, files can easily be cloned using the 'cp -c' command line utility. Shadow store(s) are created during the file cloning process, where the ownership of the data blocks is transferred from the source to the shadow store.
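As a minimal sketch, assuming illustrative file names under /ifs/data, a file can be cloned and the clone then checked to confirm it references a shadow store rather than owning its own data blocks:

# Clone the file (file names are illustrative).
cp -c /ifs/data/file.orig /ifs/data/file.clone

# The clone should report shadow references rather than its own data blocks.
isi get -DDD /ifs/data/file.clone | grep -i shadow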




In some instances, data may be copied directly from the source to the newly created shadow store. Cloning uses logical references to shadow stores: the source file's protection group(s) are moved to a shadow store, and the PG is then referenced by both the source file and the destination clone file. After cloning, both the source and the destination data blocks refer to an offset in a shadow store.

Dedupe

As we have seen in the recent blog articles, shadow stores are also used for SmartDedupe. The principal difference with dedupe, as compared to cloning, is the process by which duplicate blocks are detected.


The deduplication job also has to spend more effort to ensure that contiguous file blocks are generally stored in adjacent blocks in the shadow store. If not, both read and degraded read performance may be impacted.

Small File Storage Efficiency

A class of specialized shadow stores is also used as containers for storage efficiency, allowing small files to be packed into larger structures that can be FEC protected.


These shadow stores differ from regular shadow stores in that they are deployed as single-reference stores. Additionally, container shadow stores are also optimized to isolate fragmentation, support tiering, and live in a separate subset of ID space from regular shadow stores.

SIN Cache

OneFS provides a SIN cache, which helps facilitate shadow store allocations. It provides a mechanism to create a shadow store on demand when required, and then cache that shadow store in memory on the local node so that it can be shared with subsequent allocators. The SIN cache segregates stores by disk pool, protection policy and whether or not the store is a container.

Related:

OneFS and Synchronous Writes

The last article on multi-threaded I/O generated several questions on synchronous writes in OneFS. So I thought this would make a useful topic to kick off the New Year and explore in a bit more detail.

OneFS natively provides a caching mechanism for synchronous writes – or writes that require a stable write acknowledgement to be returned to a client. This functionality is known as the Endurant Cache, or EC.

The EC operates in conjunction with the OneFS write cache, or coalescer, to ingest, protect and aggregate small, synchronous NFS writes. The incoming write blocks are staged to NVRAM, ensuring the integrity of the write, even during the unlikely event of a node’s power loss. Furthermore, EC also creates multiple mirrored copies of the data, further guaranteeing protection from single node and, if desired, multiple node failures.

EC improves the latency associated with synchronous writes by reducing the time to acknowledgement back to the client. This process removes the Read-Modify-Write (R-M-W) operations from the acknowledgement latency path, while also leveraging the coalescer to optimize writes to disk. EC is also tightly coupled with OneFS’ multi-threaded I/O (Multi-writer) process, to support concurrent writes from multiple client writer threads to the same file. Plus, the design of EC ensures that the cached writes do not impact snapshot performance.

The endurant cache uses write logging to combine and protect small writes at random offsets into 8KB linear writes. To achieve this, the writes go to special mirrored files, or ‘Logstores’. The response to a stable write request can be sent once the data is committed to the logstore. Logstores can be written to by several threads from the same node, and are highly optimized to enable low-latency concurrent writes.

Note that if a write uses the EC, the coalescer must also be used. If the coalescer is disabled on a file, but EC is enabled, the coalescer will still be active with all data backed by the EC.

So what exactly does an endurant cache write sequence look like?

Say an NFS client wishes to write a file to an Isilon cluster over NFS with the O_SYNC flag set, requiring a confirmed or synchronous write acknowledgement. Here is the sequence of events that occur to facilitate a stable write.

1) A client, connected to node 3, begins the write process sending protocol level blocks.






4KB is the optimal block size for the endurant cache.

2) The NFS client’s writes are temporarily stored in the write coalescer portion of node 3’s RAM. The write coalescer aggregates uncommitted blocks so that OneFS can, ideally, write out full protection groups where possible, reducing latency over protocols that allow “unstable” writes. Writing to RAM has far less latency than writing directly to disk.

3) Once in the write coalescer, the endurant cache log-writer process writes mirrored copies of the data blocks in parallel to the EC Log Files.




The protection level of the mirrored EC log files is the same as that of the data being written by the NFS client.

4) When the data copies are received into the EC Log Files, a stable write exists and a write acknowledgement (ACK) is returned to the NFS client confirming the stable write has occurred.






The client assumes the write is completed and can close the write session.

5) The write coalescer then processes the file just like a non-EC write at this point. The write coalescer fills and is routinely flushed as required as an asynchronous write via the block allocation manager (BAM) and the BAM safe write (BSW) path processes.

6) The file is split into 128K data stripe units (DSUs), parity protection (FEC) is calculated and FEC stripe units (FSUs) are created.




7) The layout and write plan is then determined, and the stripe units are written to their corresponding nodes’ L2 Cache and NVRAM. The EC logfiles are cleared from NVRAM at this point. OneFS uses a Fast Invalid Path process to de-allocate the EC Log Files from NVRAM.




8) Stripe Units are then flushed to physical disk.

9) Once written to physical disk, the data stripe unit (DSU) and FEC stripe unit (FSU) copies created during the write are cleared from NVRAM but remain in L2 cache until flushed to make room for more recently accessed data.




As far as protection goes, the number of logfile mirrors created by EC is always one more than the on-disk protection level of the file. For example:

File Protection Level – Number of EC Mirrored Copies:

  • +1n: 2
  • 2x: 3
  • +2n: 3
  • +2d:1n: 3
  • +3n: 4
  • +3d:1n: 4
  • +4n: 5

The EC mirrors are only used if the initiator node is lost. In the unlikely event that this occurs, the participant nodes replay their EC journals and complete the writes.

If the write is an EC candidate, the data remains in the coalescer, an EC write is constructed, and the appropriate coalescer region is marked as EC. The EC write is a write into a logstore (hidden mirrored file) and the data is placed into the journal.

Assuming the journal is sufficiently empty, the write is held there (cached) and only flushed to disk when the journal is full, thereby saving additional disk activity.

An optimal workload for EC involves small-block synchronous, sequential writes – something like an audit or redo log, for example. In that case, the coalescer will accumulate a full protection group’s worth of data and be able to perform an efficient FEC write.
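As a rough illustration, the following emulates that kind of small-block, sequential, synchronous stream from a Linux NFS client; the mount point and sizes are arbitrary:

# Each 4KB write must be stably acknowledged before the next is issued,
# which is the pattern EC is designed to absorb (path and sizes are illustrative).
dd if=/dev/zero of=/mnt/isilon/ec_test.log bs=4k count=10000 oflag=sync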

The happy medium is a synchronous small block type load, particularly where the I/O rate is low and the client is latency-sensitive. In this case, the latency will be reduced and, if the I/O rate is low enough, it won’t create serious pressure.

The undesirable scenario is when the cluster is already spindle-bound and the workload is such that it generates a lot of journal pressure. In this case, EC is just going to aggravate things.

So how exactly do you configure the endurant cache?

Although on by default, setting the efs.bam.ec.mode sysctl to value ‘1’ will enable the Endurant Cache:

# isi_for_array -s isi_sysctl_cluster efs.bam.ec.mode=1

EC can also be enabled & disabled per directory:

# isi set -c [on|off|endurant_all|coal_only] <directory_name>

To enable the coalescer but switch off EC, run:

# isi set -c coal_only

And to disable the endurant cache completely:

# isi_for_array -s isi_sysctl_cluster efs.bam.ec.mode=0

A return value of zero on each node from the following command will verify that EC is disabled across the cluster:

# isi_for_array -s sysctl efs.bam.ec.stats.write_blocks

efs.bam.ec.stats.write_blocks: 0

If the output to this command is incrementing, EC is delivering stable writes.
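One simple way to watch for this is to sample the counter periodically; the interval below is arbitrary:

# Print the per-node EC write-block counter every 10 seconds;
# a rising value indicates EC is servicing stable writes.
while true; do
    isi_for_array -s sysctl efs.bam.ec.stats.write_blocks
    sleep 10
done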

As mentioned previously, EC applies to stable writes. Namely:

  • Writes with O_SYNC and/or O_DIRECT flags set
  • Files on synchronous NFS mounts

When it comes to analyzing any performance issues involving EC workloads, consider the following:

  • What changed with the workload?
  • If upgrading OneFS, did the prior version also have EC enabled?
  • Has the workload moved to new cluster hardware?
  • Does the performance issue occur during periods of high CPU utilization?
  • Which part of the workload is creating a deluge of stable writes?
  • Was there a large change in spindle or node count?
  • Has the OneFS protection level changed?
  • Is the SSD strategy the same?

Disabling EC is typically done cluster-wide and this can adversely impact certain workflow elements. If the EC load is localized to a subset of the files being written, an alternative way to reduce the EC heat might be to disable the coalescer buffers for some particular target directories, which would be a more targeted adjustment. This can be configured via the isi set -c off command.

One of the more likely causes of performance degradation is from applications aggressively flushing over-writes and, as a result, generating a flurry of ‘commit’ operations. This can generate heavy read/modify/write (r-m-w) cycles, inflating the average disk queue depth, and resulting in significantly slower random reads. The isi statistics protocol CLI command output will indicate whether the ‘commit’ rate is high.
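For instance, a quick way to eyeball the commit rate is to filter the protocol statistics; the exact column layout varies by OneFS release:

# Look for NFS commit operations in the protocol statistics output.
isi statistics protocol | grep -i commit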

It’s worth noting that synchronous writes do not require using the NFS ‘sync’ mount option. Any programmer who is concerned with write persistence can simply specify an O_FSYNC or O_DIRECT flag on the open() operation to force synchronous write semantics for that file handle. With Linux, writes using O_DIRECT will be separately accounted for in the Linux ‘mountstats’ output. Although it’s almost exclusively associated with NFS, the EC code is actually protocol-agnostic. If writes are synchronous (write-through) and are either misaligned or smaller than 8KB, they have the potential to trigger EC, regardless of the protocol.
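On a Linux client, a minimal way to check this is the per-mount 'bytes:' counters in /proc/self/mountstats, which report normal and O_DIRECT read/write byte totals separately:

# Show each NFS mount and its byte counters; the 'bytes:' line lists normal
# and direct (O_DIRECT) read/write totals for the mount.
grep -E "device |bytes:" /proc/self/mountstats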

The endurant cache can provide a significant latency benefit for small (e.g. 4K), random synchronous writes – albeit at a cost of some additional work for the system.

However, it’s worth bearing the following caveats in mind:

  • EC is not intended for more general purpose I/O.
  • There is a finite amount of EC available. As load increases, EC can potentially ‘fall behind’ and end up being a bottleneck.
  • Endurant Cache does not improve read performance, since it’s strictly part of the write process.
  • EC will not increase performance of asynchronous writes – only synchronous writes.

Related:

Active Directory import, then moving objects causes duplicate objects

I need a solution

Hi,

We want to start using the Active Directory import function to make sure all domain joined servers will have Symantec installed but are running into a problem.

The AD import function is working OK; we get a new group in the console containing all computer objects from the corresponding OU. But since we have multiple roles of servers in that OU that need different and sometimes overlapping exception policies, we want to move the computer objects out of the created client group and into a client group that has the specific exception policy in place. But when we do that, it seems like a copy of the moved object is created in the correct client group, while the object imported from AD stays in the client group corresponding to the OU.

For example: in AD we have an OU named Servers 2012 R2. In that OU we have multiple SQL servers, and those SQL servers have different configurations, so they need different exception policies in Symantec. So we move one of the SQL servers from the Servers 2012 R2 client group to the client group named SQL Servers 1 (for example). When we do that, a computer object in SQL Servers 1 is created, the object shows it is online, and everything works OK. But when we look at the Servers 2012 R2 client group, the originally imported object is still there and the info says that it is offline.

This situation is causing confusion and is undesired.

Is this normal behaviour for Symantec?
Is there a way to import objects from AD and move those around to different client groups after initial import and not have double entries in the console?

Or are we doing things wrong? Is it possible to apply multiple exception policies to one client group in Symantec, with each policy handling specific computers in that client group but not the others, and vice versa?

Kind regards,
Michiel


Related:

Nutanix AFS (Nutanix Files) might not function properly with the ELM

This information is very preliminary and has not been rigorously tested.

AFS appears to use DFS namespace redirection to point you to the individual node in the AFS cluster where your data is actually held. The ELM does not support DFS redirection, so when STATUS_PATH_NOT_COVERED comes back from the initial node we reached, the attempt fails instead of following the referral to the requested server. If you happen to connect to the node where your data is, there is no redirection and no error.

Unfortunately, there does not appear to be a workaround except to point the ELM to a specific node in the AFS cluster instead of the main cluster address. This node probably has to be the AFS “leader” node.

Related:

Isilon: NFS export creation fails in WebUI in OneFS 8.0.0.4

Article Number: 496256 Article Version: 5 Article Type: Break Fix



Isilon, Isilon OneFS, Isilon OneFS 8.0

In OneFS 8.0.0.4, when using the WebUI to create new NFS exports, the following error can be encountered:

NFS Export not created.


The export did not create due to the following errors:

map failure : Field: map_failure has error: Incorrect type. Found: string, schema accepts: object.
map non_root : Field: map_non_root has error: Incorrect type. Found: string, schema accepts: object.
map root : Field: map_root has error: Incorrect type. Found: string, schema accepts: object.
security flavors : Field: security_flavors has error: Incorrect type. Found: string, schema accepts: array.
Input validation failed.
Editing existing exports through the WebUI will still function normally.

A product defect exists in OneFS 8.0.0.4 which prevents NFS export creation via WebUI.

Workaround:

When creating the export via the WebUI, select “Use Custom” for the following options in the exports creation screen:

  • Root User Mapping
  • Non-Root User Mapping
  • Failed User Mapping
  • Security Flavors

You do not need to change anything for each of those options, but they should not be set to “Use Default.”

Alternatively, the command line interface (CLI) can be used to create NFS exports in OneFS 8.0.0.4.
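For example, a minimal export might be created from the CLI along these lines; the path and client list are placeholders, and the option names should be confirmed against 'isi nfs exports create --help' for your release:

# Illustrative only: path and clients are placeholders; verify option names
# with 'isi nfs exports create --help' on your cluster.
isi nfs exports create /ifs/data/projects --clients="10.10.10.0/24"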

Permanent solution:

Apply Patch-191603 which can be downloaded from:

https://support.emc.com/downloads/15209_Isilon-OneFS

The fix will be included in future OneFS releases as well.

Related:

Designing for Large Datasets

Received a couple of recent inquiries around how to best accommodate big, unstructured datasets and varied workloads, so it seemed like an interesting topic for a blog article. Essentially, when it comes to designing and scaling large Isilon clusters for large quantities and growth rates of data, there are some key tenets to bear in mind. These include:



  • Strive for simplicity
  • Plan ahead
  • Just because you can doesn’t necessarily mean you should



Distributed systems tend to be complex by definition, and this is amplified at scale. OneFS does a good job of simplifying cluster administration and management, but a solid architectural design and growth plan is crucial. Because of its single, massive volume and namespace, Isilon is viewed by many as a sort of ‘storage Swiss army knife’. Left unchecked, this methodology can result in unnecessary complexities as a cluster scales. As such, decision making that favors simplicity is key.



Despite OneFS’ extensibility, allowing an Isilon system to simply grow organically into a large cluster often results in various levels of technical debt. In the worst case, some issues may have grown so large that it becomes impossible to correct the underlying cause. This is particularly true in instances where a small cluster is initially purchased for an archive or low-performance workload, with a bias towards cost-optimized storage. As the administrators realize how simple and versatile their clustered storage environment is, more applications and workflows are migrated to Isilon. This kind of haphazard growth, such as morphing from a low-powered, near-line platform into something larger and more performant, can lead to all manner of scaling challenges. These compromises, workarounds, and avoidable fixes can usually be prevented by starting out with a scalable architecture, workflow, and expansion plan.



Beginning the process with a defined architecture, sizing and expansion plan is key. What do you anticipate the cluster, workloads, and client access levels will look like in six months, one year, three years, or five years? How will you accommodate the following as the cluster scales?



  • Contiguous rack space for expansion
  • Sufficient power and cooling
  • Network infrastructure
  • Backend switch capacity
  • Availability SLAs
  • Serviceability and spares plan
  • Backup and DR plans
  • Mixed protocols
  • Security, access control, authentication services, and audit
  • Regulatory compliance and security mandates
  • Multi-tenancy and separation
  • Bandwidth segregation – client I/O, replication, etc.
  • Application and workflow expansion



There are really two distinct paths to pursue when initially designing an Isilon clustered storage architecture for a large and/or rapidly growing environment – particularly one that includes a performance workload element to it. These are:

  • Single Large Cluster
  • Storage Pod Architecture



A single large, or extra-large, cluster is often deployed to support a wide variety of workloads and their requisite protocols and performance profiles – from primary to archive – within a single, scalable volume and namespace. This approach, referred to as a ‘data lake architecture’, usually involves more than one style of node.

Isilon can support up to fifty separate tenants in a single cluster, each with their own subnet, routing, DNS, and security infrastructure. OneFS provides the ability to separate data layout with SmartPools, export and share level segregation, granular authentication and access control with Access Zones, and network partitioning with SmartConnect, subnets, and VLANs.

Furthermore, analytics workloads can easily be run against the datasets in a single location and without the need for additional storage and data replication and migration.






For the right combination of workloads, the data lake architecture has many favorable efficiencies of scale and centralized administration.

Another use case for large clusters is in a single workflow deployment, for example as the content repository for the asset management layer of a content delivery workflow. This is a considerably more predictable, and hence simpler to architect, environment than the data lake.

Often, as in the case of a MAM for streaming playout for example, a single node type is deployed. The I/O profile is typically heavily biased towards streaming reads and metadata reads, with a smaller portion of writes for ingest.

There are trade-offs to be aware of as cluster size increases into the extra-large cluster scale. The larger the node count, the more components are involved, which increases the likelihood of a hardware failure. When the infrastructure becomes large and complex enough, there’s more often than not a drive failing or a node in an otherwise degraded state. At this point, the cluster can be in a state of flux such that composition, or group, changes and drive rebuilds/data re-protects will occur frequently enough that they can start to significantly impact the workflow.

Higher levels of protection are required for large clusters, which has a direct impact on capacity utilization. Also, cluster maintenance becomes harder to schedule since many workflows, often with varying availability SLAs, need to be accommodated.

Additional administrative shortcomings also need to be considered when planning an extra-large cluster: InsightIQ only supports monitoring clusters of up to eighty nodes, and the OneFS Cluster Event Log (CELOG) and some of the cluster WebUI and CLI tools can prove challenging at extra-large cluster scale.

That said, there can be wisdom in architecting a clustered NAS environment into smaller buckets, thereby managing risk for the business versus putting all the eggs in one basket. When contemplating the merits of an extra-large cluster, also consider:

  • Performance management
  • Risk management
  • Accurate workflow sizing
  • Complexity management



A more practical approach for more demanding, HPC, and high-IOPS workloads often lies with the Storage Pod architecture. Here, design considerations for new clusters revolve around multiple (typically up to 40 node) homogenous clusters, with each cluster itself acting as a fault domain – in contrast to the monolithic extra-large cluster described above.



Pod clusters can easily be tailored to the individual demands of workloads as necessary. Optimizations per storage pod can include size of SSDs, drive protection levels, data services, availability SLAs, etc. In addition, smaller clusters greatly reduce the frequency and impact of drive failures and their subsequent rebuild operations. This, coupled with the ability to more easily schedule maintenance, manage smaller datasets, simplify DR processes, etc, can all help alleviate the administrative overhead for a cluster.



A Pod infrastructure can be architected per application, workload, similar I/O type (i.e. streaming reads), project, tenant (i.e. business unit), availability SLA, etc. This pod approach has been successfully adopted by a number of large Isilon customers in industries such as semiconductor, automotive, life sciences, and others with demanding performance workloads.



This Pod architecture model can also fit well for global organizations, where a cluster is deployed per region or availability zone. An extra-large cluster architecture can be usefully deployed in conjunction with Pod clusters to act as a centralized disaster recovery target, utilizing a hub and spoke replication topology. Since the centralized DR cluster will be handling only predictable levels of replication traffic, it can be architected using capacity-biased nodes.






Before embarking upon either a data lake or Pod architectural design, it is important to undertake a thorough analysis of the workloads and applications that the cluster(s) will be supporting.

Despite the flexibility offered by the data lake concept, not all unstructured data workloads or applications are suitable for a large Isilon cluster. Each application or workload that is under consideration for deployment or migration to a cluster should be evaluated carefully. Workload analysis involves reviewing the ecosystem of an application for its suitability. This requires an understanding of the configuration and limitations of the infrastructure, how clients see it, where data lives within it, and the application or use cases in order to determine:

  • How does the application work?
  • How do users interact with the application?
  • What is the network topology?
  • What are the workload-specific metrics for networking protocols, drive I/O, and CPU and memory usage? (See the sketch below.)
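As a starting point on the cluster side, the 'isi statistics' family of commands can supply much of this data; the sub-commands below exist in recent OneFS releases, though output formats vary:

# Cluster-wide CPU, network and disk throughput.
isi statistics system

# Per-protocol operation rates and latencies.
isi statistics protocol

# Per-client activity.
isi statistics client

# Per-drive I/O activity.
isi statistics drive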

More information on how to perform workload analysis can be found in this recent blog article.

Related: