Enabling 5G Use Cases at the Edge

The industry really can’t stop talking about the edge. Rather than starting a classic “edge” conversation with a definition, let’s focus specifically on its role as a new business model for 5G. From a telco perspective, 5G enables a whole range of new applications and services that will open the doors to new revenue streams and provide a path to differentiation. Aligning business outcomes to vertical industry requirements, operating models, and technology enablers has become the foundation for how Dell Technologies discusses the edge with our telco customers and partners … READ MORE



Smart Dubai Extends Its Blockchain-focused Products with New BPaaS

Emirates Integrated Telecommunications Company, commercially rebranded as du in 2007, has announced that its Blockchain Platform as a Service (BPaaS) has received an endorsement from Smart Dubai, according to the official statement.

Smart Dubai is the government office charged with facilitating Dubai’s citywide smart transformation, to empower, deliver and promote an efficient, seamless and safe city experience for residents and visitors.


How to Free Space From /var Directory of NetScaler Appliance When Unable to Log on to NetScaler GUI

  • Run the following commands to view the contents of the /var directory:

    cd /var
    ls -l

    The directories that are usually of interest are as follows:

    /var/nstrace – This directory contains trace files. This is the most common reason for the HDD filling up on the NetScaler, usually because an nstrace has been left running for an indefinite amount of time. All traces that are not of interest can and should be deleted. To stop an nstrace, go back to the CLI and issue the stop nstrace command.

    /var/log – This directory contains system specific log files.

    /var/nslog – This directory contains NetScaler log files.

    /var/tmp/support – This directory contains technical support files, also known as support bundles. All files not of interest should be deleted.

    /var/core – Core dumps are stored in this directory. It contains numbered subdirectories, starting with 1, and the files within them can be quite large. Clear all files unless the core dumps are recent and investigation is required.

    /var/crash – Crash files, such as process crashes are stored in this directory. Clear all files unless the crashes are recent and investigation is required.

    /var/nsinstall – Firmware is placed in this directory when upgrading. Clear all files, except the firmware that is currently being used.
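    The checks above can be sketched as a quick triage one-liner from the NetScaler shell (a sketch; the paths are the directories listed above, and any that do not exist are skipped silently):

    ```shell
    # Summarize the usual space consumers under /var, largest first.
    du -sh /var/nstrace /var/log /var/nslog /var/tmp/support \
           /var/core /var/crash /var/nsinstall 2>/dev/null | sort -rh
    ```

    Once the largest directory is identified, delete only the files you no longer need (for example, old traces after issuing stop nstrace from the CLI).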

  • Related:

    SyncIQ and Snapshot Restore

    Got an interesting snapshot restore question from the field that I thought would make a useful blog article:

    “I need to restore a large amount of data. Unfortunately the snapshots are taken at the root level and I cannot restore every subdirectory under the snapshot. Although the files are not large the subdirectories contain anywhere from thousands to tens of millions of files. Restores are taking a very long time when copying the directories manually. Is there a faster way to do this?”

    There are two main issues here:

    • Since the snapshot is taken at a lower level in the directory tree and the entire snapshot cannot be restored in place, the SnapRevert job is not an option here.
    • The sheer quantity of files involved means that a manual, serial restore of the data will be incredibly time-consuming.

    Fortunately, there is a solution using replication. SyncIQ allows for snapshot subdirectories to be included or excluded, plus also provides the performance benefit of parallel job processing.

    SyncIQ contains an option, available only via the command line (CLI), which allows replication out of a snapshot.

    The procedure is as follows:

    1) Create a snapshot of a root directory:

    # isi snapshot snapshots create --name snaptest3 /ifs/data

    2) List the available snapshots and select the desired instance.

    For example:

    # isi snapshot list
    ID Name                                  Path
    ---------------------------------------------------
    6  FSAnalyze-Snapshot-Current-1529557209 /ifs
    8  snaptest3                             /ifs/data
    ---------------------------------------------------
    Total: 2

    Or from the WebUI:

    Note: There are a couple of caveats:

    • The subdirectory to be restored must still exist in the HEAD filesystem (i.e., it must not have been deleted since the snapshot was taken).
    • You cannot replicate data from a SyncIQ generated snapshot.

    3) Create a local SyncIQ replication policy with the snapshot source as the original location and a new directory location on ‘localhost’ as the destination. The ‘--source-include-directories’ argument lists the desired subdirectories to restore.

    For example, via the CLI:

    # isi sync policies create snapshot_sync3 sync /ifs/data localhost /ifs/file_sync3 --source-include-directories /ifs/data/local_qa

    Or via the WebUI:

    Note: You cannot configure the snapshot into the policy, or set source=snapshot.

    4) Next, run the sync job to replicate a subset of a snapshot. This step is CLI only (not WebUI), since the SyncIQ policy needs to be executed with the ‘--source-snapshot’ argument specified.

    For example:

    # isi sync job start snapshot_sync3 --source-snapshot=snaptest3

    Note: This command essentially performs a change of root for this single run of the SyncIQ job.

    5) Finally, rename the original directory to something else with mv, and then rename the restore location to the original name.

    For example:

    # mv /ifs/data/local_qa /ifs/data/local_qa_old

    # mv /ifs/file_sync3/local_qa /ifs/data/local_qa

    Note: If you do not have a current replication license on your cluster, you can enable the OneFS SyncIQ trial license from the WebUI by browsing to Cluster Management > Licensing. Failing that, contact your local account team and they can provide you a demo license.

    Using SyncIQ in this manner is a very efficient way to recover large amounts of data from within snapshots. However, this scenario also illustrates one of the drawbacks of taking snapshots at the root directory level. Consider whether it’s more advantageous to configure snapshot schedules to capture at the subdirectory level instead.


    7.3.0 patch 2 mlocate issue

    Since I upgraded, I have been getting disk usage messages about /var being at 94%. After some research I found that /var/lib/mlocate/mlocate.db is being updated and its temp file is taking up 20% of the partition. When I run updatedb -v, I see that it is indexing everything in /store/ariel/. Is that expected behavior? I suspect this is why the mlocate process is taking up so much space.
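    If indexing /store/ariel/ is not wanted, one common mlocate-side workaround is to add the path to the PRUNEPATHS list in updatedb’s configuration (a sketch; verify against your product’s support guidance before changing config, since indexing behavior after a patch may be intentional):

    ```shell
    # /etc/updatedb.conf (mlocate) — PRUNEPATHS is a space-separated list of
    # directories that updatedb skips; the first three entries are typical
    # defaults, with /store/ariel appended.
    PRUNEPATHS="/tmp /var/spool /media /store/ariel"
    ```

    After editing, rerun updatedb; the temp file and mlocate.db should shrink accordingly.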


    InsightIQ directory chart maximum depth settings

    I was helping an Isilon SE understand how InsightIQ utilizes the results from the File System Analyze job. It occurred to me that there may be some confusion between the “directory filter maximum depth” and the “directory chart maximum depth” settings in InsightIQ. Specifically, SEs and customers are wondering how they are able to drill deep into the directory structure using the directory browser widget when their FSA maximum depth is set to only 5 deep.

    These two settings are located in the InsightIQ application, under SETTINGS -> configure (for the selected cluster) -> FSA Configuration tab:


    Directory chart maximum depth

    Under FILE SYSTEM REPORTING -> File System Analytics, both the Data Usage and Data Properties reports offer the “directory chart” graph module. This module is also commonly referred to as the “directory browser module” or “directory browser widget”. It looks like this:


    Many users find this widget particularly useful because you can drill into various directories and see, and sort by, subdirectory count, file count, logical size, and physical size. The maximum depth to which you can drill into the /ifs directory structure is limited by the directory chart maximum depth setting. The default setting is -1, which means unlimited.

    In the screenshot above, I am able to drill down 12 levels deep from the /ifs root, while the directory filter maximum depth is only set to 5. Herein lies the potential confusion.

    Directory filter (path_squash) maximum depth

    The directory filter maximum depth setting limits the depth of the directories that are included in the filtering options, which also affects the directory filter breakouts. The default depth is 5. The deeper this setting, the larger the FSA result set stored on-cluster.

    Please see the blog in the related resources section for some common uses of the directory filter.

    Related Resources



    Re: Re: WebUI login issues for Admin user

    In addition to D_Tracy’s suggestions, we can try to rule out the case that files which have already been deleted, but are still held open by some process, continue to consume disk space.

    du -sxh /var

    summarizes the disk space taken by all visible files in /var (excluding /var/crash). If this value is much lower than the number reported by df, then you have such ‘dark matter’, and if the related process(es) cannot be identified, the node would need to be rebooted.

    — Peter
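    The check above can be sketched as follows; the lsof step is an addition (assuming lsof is installed on the node) that lists the deleted-but-still-open files directly:

    ```shell
    # Visible usage vs. filesystem-reported usage for /var:
    du -sxh /var
    df -h /var

    # If the two disagree, list deleted files still held open (link count 0).
    # Restarting the owning process releases the space.
    lsof +L1 2>/dev/null | grep /var || true
    ```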


    The Windows Directory Service database has %1 MB of free space out of %2 MB of allocated space.

    Product: Windows Operating System
    Event ID: 1646
    Source: Active Directory
    Version: 5.0
    Symbolic Name: DIRLOG_DB_FREE_SPACE
    Message: The Windows Directory Service database has %1 MB of free space out of %2 MB of allocated space.

    This is an Active Directory internal event. Internal events appear in Event Viewer only when the default logging level is changed. Most internal events are for informational purposes only. This event is logged when the Active Directory database has the specified amount of free hard disk space remaining.

    User Action

    No user action is required.