Unable to publish image, out of disk space, ELM console reports “No free mft record for $MFT: No space left on device”

First, try just expanding the image template. It’s entirely possible that the sheer size of the layers you have assigned is larger than the allotted space you have set in the Image Template.

However, if you are sure the Image Template is big enough, the cause may be an issue we sometimes see in our handling of the NTFS Master File Table ($MFT), which can cause the $MFT itself to run out of space. The $MFT is where NTFS stores file information, including the cluster maps, file attributes, and other metadata. If a file is small enough, its entire contents can be stored in the $MFT instead of allocating cluster space for it. The $MFT should always resize dynamically, but sometimes it does not, leading to a spurious out-of-space error.

In these cases, we have found that regenerating the $MFT can help. Normally, when publishing, the OS Layer is much smaller than the published image. That allows us to clone the NTFS filesystem of the OS layer, expand it, and lay in the remaining layers. Cloning the filesystem copies the $MFT from the OS layer, including any odd issues that might be lurking there.

But we cannot shrink a filesystem, so if the OS layer is actually larger than the published image, we cannot clone the NTFS filesystem and shrink it. Instead, we have to create a new, smaller disk and copy the OS layer in the same way we do all other layers. This allows the $MFT to be completely reconstructed. Reconstructing the $MFT during publishing in this way appears to work around the $MFT problems we sometimes see.

To use this workaround, you need to make sure your OS layer is larger than the Image Template size. By default, OS layers are 60GB virtual disks, and templates are 100GB. If the number and size of the layers in your template is small, you could simply reduce the Image Template size to below 60GB. Otherwise, you can Add Version to your OS layer and, in the Add Version wizard, set the new OS version to be larger than the Image Template. After the Packaging Machine boots, select Shutdown for Finalize, finalize the version, and assign that new, larger version to your Image Template.

Since layer disks are thin provisioned inside the ELM (because they are VHD files), a 120GB disk that contains 20GB of files is exactly the same size as a 60GB disk that contains 20GB of files. So you can expand the OS layer to be whatever you need it to be, even as you reduce the Image Template as well. However, note that the larger OS layer disks may consume additional space in the Connector Cache.
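The thin-provisioning point above can be sketched in a few lines. This is a hypothetical model, not the actual VHD on-disk format: a dynamically expanding VHD allocates space in 2MB blocks only as data is written, so on-disk size tracks the data, not the provisioned capacity.

```python
VHD_BLOCK_MB = 2  # default dynamic VHD block size

def dynamic_vhd_size_mb(provisioned_gb, data_written_mb):
    """Approximate on-disk size of a dynamic VHD: only touched blocks
    are allocated, regardless of the provisioned capacity."""
    blocks = -(-data_written_mb // VHD_BLOCK_MB)  # ceiling division
    return blocks * VHD_BLOCK_MB

# A 120GB disk and a 60GB disk holding the same 20GB of files
# occupy (approximately) the same space inside the ELM:
print(dynamic_vhd_size_mb(120, 20 * 1024))  # 20480
print(dynamic_vhd_size_mb(60, 20 * 1024))   # 20480
```

The provisioned size only matters to the guest filesystem; the ELM's storage cost is driven by the data actually written.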


SOLUTION NEEDED: Encryption 10.3 not 11

I need a solution

I developed a weird conflict error that would give me a blue screen and reboot my system. This was, I believe, not PGP related. I started a defrag but stopped when I remembered reading that defrag moving the PGP program was not good. I could still boot in just fine with the passphrase.

I started to decrypt the drive and everything was going OK. I was 10% away from completing the decryption when that error came up with the blue screen of death, and the system rebooted while the decrypt was still running. I now cannot get in, and a boot screen appears.

I took a short video which can be found here: https://www.dropbox.com/s/3y9k56e1hxvnnpo/J%20C%20…

The video is slightly over 250 MB, hence the link.

No boot disk. The laptop runs Windows 8.1.





Re: z/OS – Defragmentation in VP Environment

Hi David,

There is still a valid requirement for defrag from a host perspective, as you have pointed out. There is no array requirement to perform host defrag.

Depending on the array configuration, defrag may impact array performance as follows:

– In a FAST (multi-tier) environment, tracks / extents are being moved around, so FAST may have to demote / promote based on where the hot / cold data has moved to. This can cause a lack of consistency in performance for a short period. Just something to be aware of as an explanation for variations in run times.

– Datasets that are highly fragmented can benefit, performance-wise, from defrag if they are read sequentially, since sequential detect is track based.

– If the volume being defragged is the source of a SNAP (copy on write), then the target can experience a higher change rate. Most Mainframe environments don’t over-provision so generally it is not a capacity issue, but the performance overheads of copy-on-write should be considered as part of the scheduling of the defrag. If it is a CLONE (copy) then again, scheduling the defrag outside the period that the background copy is taking place might be a consideration.
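The copy-on-write overhead in the SNAP point above is easy to sketch. The following is a hypothetical model, not the array's implementation: the first write to each source track since the snap copies the original track to the snapshot pool, so a defrag that relocates many tracks triggers a copy for each one even though the data itself is unchanged.

```python
class CowSnapshot:
    """Toy copy-on-write snapshot of a volume (track number -> data)."""

    def __init__(self, source):
        self.source = source
        self.preserved = {}  # original track data saved on first write

    def write(self, track, data):
        # First write to a track since the snap: preserve the original.
        if track not in self.preserved:
            self.preserved[track] = self.source.get(track)
        self.source[track] = data

    def read_snapshot(self, track):
        # Snapshot view: the preserved original if the track has changed.
        if track in self.preserved:
            return self.preserved[track]
        return self.source.get(track)

vol = {0: "a", 1: "b", 2: "c"}
snap = CowSnapshot(vol)
# A "defrag" that rewrites every track forces a copy of each one:
for t in list(vol):
    snap.write(t, vol[t])  # data unchanged, but still copied
print(len(snap.preserved))  # 3 tracks copied to the pool
```

This is why scheduling the defrag outside snapshot windows avoids both the extra space consumed by the pool and the per-write copy penalty.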


IT Management Suite 8.1 RU4 is now available

I do not need a solution (just sharing information)

Highlights and enhancements of this release are as follows:

  • Full support for Mac OS 10.13 High Sierra.
  • Enhancements in the Software Management Solution and Software Portal:
    • Ability to select a category for the published software and search by vendor or category.
    • Ability to prevent end users from requesting unlisted software in the Software Portal.
    • Ability to target devices for software publishing.
    • Ability to add or edit a custom icon for a software resource.
  • Ability to configure if the update notification icon is displayed in the Symantec Management Console.
  • Enhancements of package delivery:
    • Downloading files block by block.
    • Block chain hash validation.
  • Ability to configure how often a peer notifies other peers about the package download progress.
  • A Targeted Agent Settings policy with initial settings.
  • New report for checking the state of the NS web site certificate replacement.
  • (Windows only) Ability to apply a Cloud-enabled Management offline package to multiple organizational groups.
  • New default schedule for NS.SQL defragmentation schedule.{cdcd50e9-1c42-402b-921c-8ad6c9ff0d34} task.
  • 154 defects resolved, including 53 customer reported issues.
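The release notes mention block-by-block download with "block chain hash validation" but do not document the algorithm; the following is only a generic sketch of the idea: each block's hash is chained with the previous one, so a single final digest verifies both the content and the order of the blocks.

```python
import hashlib

def chained_digest(data, block_size=4):
    """Hash the payload block by block, feeding each block's hash into
    the next, so reordering or corrupting any block changes the result."""
    prev = b""
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        prev = hashlib.sha256(prev + block).digest()
    return prev.hex()

good = chained_digest(b"package-contents-0123456789")
tampered = chained_digest(b"package-contents-9876543210")
print(good != tampered)  # True: any changed block breaks the chain
```

A downloader using such a scheme can verify each block as it arrives and re-request only the block that fails, rather than the whole package.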

The release notes are located at the following URL:




“The SEE client was unable to apply your changes during the last Windows session prior to standby or hibernation. Your previous settings will be used”

I need a solution

I have seen a number of threads in this forum discussing this error. One of our users is now seeing this error before the OS starts. The error comes up right after the Pre-Windows environment begins to load. If you choose the option to restart the computer, it simply returns to the same place. I would like to know what the root cause of the error is. I have seen it stated that this can happen if a user attempts to defragment the encrypted drive. In our case, that is not what happened, since our users do not have sufficient permissions to run defrag. At this point, our field services people are saying their only option is to reimage the machine. I would just like to know what actually causes this error so we can educate our users on how to prevent it in the future. Thank you.



The AT schedule file could not be updated because the disk is full.

Product: Windows Operating System
Event ID: 3810
Source: System
Version: 5.0
Symbolic Name: APE_AT_DISKFULL
Message: The AT schedule file could not be updated because the disk is full.

This message should occur only on a workstation. Any action to correct the problem should be performed on that computer. You cannot update the schedule file because the disk is full.

User Action

Make room on the disk by deleting unnecessary files.
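The user action above can be approached systematically. Here is a small sketch (a hypothetical helper, not part of Windows) that lists the largest files under a directory so the unnecessary ones can be reviewed and deleted to free disk space.

```python
import os

def largest_files(root, top=5):
    """Return (size, path) pairs for the largest files under root."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # file vanished or is inaccessible; skip it
    return sorted(sizes, reverse=True)[:top]

for size, path in largest_files("."):
    print(f"{size:>12}  {path}")
```

Reviewing the largest candidates first recovers the most space with the fewest deletions.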


The Microsoft Exchange Replication Service encountered an error while replaying the logs on the passive node for %1. The database requires %2. The last log successfully replayed is %3.

Product: Exchange
Event ID: 2071
Source: MSExchangeRepl
Version: 8.0
Symbolic Name: ReplayCheckError
Message: The Microsoft Exchange Replication Service encountered an error while replaying the logs on the passive node for %1. The database requires %2. The last log successfully replayed is %3.

This Error event indicates that the Microsoft® Exchange Replication (MSExchangeRepl) service encountered an inconsistency when the service tried to replay a log file into the database copy on the passive node.

The error occurs if any of the following conditions is true:

  • You swapped file destinations between the source and target copies of the database.

  • You moved files.

  • You performed offline defragmentation of the database on the active node. This is the source database.

User Action

To resolve the error, follow one or more of these steps:

If you are not already doing so, consider running the tools that Microsoft Exchange offers to help administrators analyze and troubleshoot their Exchange environment. These tools can help you make sure that your configuration is in line with Microsoft best practices. They can also help you identify and resolve performance issues, improve mail flow, and better manage disaster recovery scenarios. Go to the Toolbox node of the Exchange Management Console to run these tools now. For more information about these tools, see Toolbox in the Exchange Server 2007 Help.