Unable to publish image, out of disk space, ELM console reports “No free mft record for $MFT: No space left on device”

First, try simply expanding the Image Template. It's entirely possible that the combined size of the layers you have assigned is larger than the space you allotted in the Image Template.

However, if you are sure the Image Template is big enough, this may be a problem we occasionally have handling the NTFS Master File Table ($MFT), which can cause the $MFT to run out of space. The $MFT is where NTFS stores file information, including the cluster maps, file attributes, and other metadata. If a file is small enough, its entire contents can be stored in the $MFT instead of allocating cluster space for it. The $MFT should always resize dynamically, but sometimes it does not, leading to a spurious out-of-space error.
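
To see how large the $MFT has grown on a given volume, you can query NTFS statistics with the built-in fsutil tool. A minimal Python sketch (run from an elevated prompt on the affected Windows machine; note that the exact fsutil output format varies between Windows versions):

    # Minimal sketch: show $MFT statistics on an NTFS volume via the
    # built-in fsutil tool. Requires Windows and an elevated prompt; the
    # exact output format of fsutil varies between Windows versions.
    import subprocess

    def mft_info(volume="C:"):
        out = subprocess.run(
            ["fsutil", "fsinfo", "ntfsinfo", volume],
            capture_output=True, text=True, check=True,
        ).stdout
        # Keep only the $MFT-related lines, e.g. "Mft Valid Data Length : ..."
        return [line.strip() for line in out.splitlines() if "Mft" in line]

    if __name__ == "__main__":
        for line in mft_info("C:"):
            print(line)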

We have found that regenerating the $MFT can help. Normally, when publishing, the OS Layer is much smaller than the published image. That allows us to clone the NTFS filesystem of the OS layer, expand it, and play in the remaining layers. Cloning the filesystem copies the $MFT from the OS layer, including any odd issues that might be lurking there.

But we cannot shrink a filesystem, so if the OS layer is actually larger than the published image, we cannot clone the NTFS filesystem and shrink it. Instead, we have to create a new, smaller disk and copy the OS layer in the same way as we do all other layers. This allows the $MFT to be completely reconstructed. Reconstructing the $MFT during publishing in this way appears to work around the $MFT problems we sometimes see.
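
For clarity, here is the publish-time decision sketched in Python (the names are illustrative only; this is not the actual ELM code):

    # Simplified sketch of the publish-time decision described above.
    # All names are illustrative; this is not the actual ELM code.
    def publish_path(os_layer_gb: float, template_gb: float) -> str:
        if os_layer_gb <= template_gb:
            # Normal path: clone the OS layer's NTFS filesystem, expand it
            # to the template size, and play in the other layers. The clone
            # carries the OS layer's $MFT along, quirks and all.
            return "clone + expand (existing $MFT reused)"
        # The filesystem is not shrunk here; instead a fresh, smaller disk
        # is created and the OS layer is copied in file-by-file like any
        # other layer, which rebuilds the $MFT from scratch.
        return "new disk + file copy ($MFT rebuilt)"

    print(publish_path(os_layer_gb=120, template_gb=100))  # workaround path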

To use this workaround, you need to make sure your OS layer is larger than the Image Template size. By default, OS layers are 60GB virtual disks and templates are 100GB. If the number and size of the layers in your template are small, you could simply reduce the Image Template size to below 60GB. Otherwise, you can Add Version to your OS layer, and in the Add Version wizard set the new OS layer version to be larger than the Image Template. After the Packaging Machine boots, Shutdown for Finalize, Finalize, and assign that new, larger version to your Image Template.

Since layer disks are thin provisioned inside the ELM (because they are VHD files), a 120GB disk that contains 20GB of files takes up exactly the same space as a 60GB disk that contains 20GB of files. So you can expand the OS layer to whatever size you need, even as you reduce the Image Template. However, note that the larger OS layer disks may consume additional space in the Connector Cache.
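
Because the VHDs are sparse, you can verify this by comparing a layer disk's nominal size to the space actually allocated for it. A Python sketch for a Linux host such as the ELM appliance (the repository path below is hypothetical):

    # Sketch: compare a sparse file's nominal size to its allocated size.
    # Runs on Linux (e.g. the ELM appliance); the path is hypothetical.
    import os

    def sparse_usage(path):
        st = os.stat(path)
        apparent = st.st_size            # nominal capacity of the file
        allocated = st.st_blocks * 512   # bytes actually allocated on disk
        return apparent, allocated

    apparent, allocated = sparse_usage("/mnt/repository/layers/os_layer.vhd")
    print(f"apparent {apparent / 1024**3:.1f} GiB, "
          f"allocated {allocated / 1024**3:.1f} GiB")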

Related:

SOLUTION NEEDED: Encryption 10.3 not 11

I need a solution

I developed this weird conflict error that would give me a blue screen and reboot my system. This was, I believe, not PGP-related. I started a defrag but stopped, remembering that I had read something about defrag moving the PGP program and that this was not good. I continued to boot in just fine with the passphrase.

I started to decrypt the drive and everything was going okay; I was 10% away from completing the decrypt when that error came up with the blue screen of death, and the system rebooted while the decrypt was still running. I now cannot get in, and a boot screen appears.

I took a short video which can be found here: https://www.dropbox.com/s/3y9k56e1hxvnnpo/J%20C%20…

The video is slightly over 250 MB, hence the link.

No boot disk. Laptop running Windows 8.1.

Help!?

John

Related:

Re: z/OS - Defragmentation in VP Environment

Hi David,

There is still a valid requirement for defrag from a host perspective, as you have pointed out. There is no array requirement to perform host defrag.

Depending on the array configuration, defrag may impact array performance as follows:

– In a FAST (multi-tier) environment, since tracks / extents are being moved around, FAST may have to demote / promote based on the new locations of the hot / cold data. This can cause inconsistent performance for a short period; it is just something to be aware of as an explanation for variations in run times.

– Datasets that are highly fragmented can benefit, performance-wise, from defrag if they are read sequentially, since sequential detect is track-based.

– If the volume being defragged is the source of a SNAP (copy on write), then the target can experience a higher change rate (see the sketch after this list). Most mainframe environments don't over-provision, so generally it is not a capacity issue, but the performance overhead of copy-on-write should be considered when scheduling the defrag. If it is a CLONE (copy), then again, scheduling the defrag outside the period when the background copy is taking place might be a consideration.
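
To make the copy-on-write point concrete, here is a toy model in Python (my own illustration, not based on any actual array code): every track the defrag relocates counts as a first write since the SNAP was taken, so the array must copy the original track to the snapshot pool before the move completes.

    # Toy model of copy-on-write overhead when defragging a SNAP source.
    # Purely illustrative; not based on any actual array implementation.
    def cow_copies(tracks_moved, already_copied=frozenset()):
        copied = set(already_copied)
        new_copies = 0
        for track in tracks_moved:
            if track not in copied:
                copied.add(track)   # original preserved for the snapshot
                new_copies += 1
        return new_copies

    # A defrag relocating 10,000 so-far-untouched tracks triggers 10,000
    # extra copy operations on top of the moves themselves.
    print(cow_copies(range(10_000)))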

Related:

IT Management Suite 8.1 RU4 is now available

I do not need a solution (just sharing information)

Highlights and enhancements of this release are as follows:

  • Full support for Mac OS 10.13 High Sierra.
  • Enhancements in the Software Management Solution and Software Portal:
    • Ability to select a category for the published software and search by vendor or category.
    • Ability to prevent end users from requesting unlisted software in the Software Portal.
    • Ability to target devices for software publishing.
    • Ability to add or edit a custom icon for a software resource.
  • Ability to configure whether the update notification icon is displayed in the Symantec Management Console.
  • Enhancements of package delivery:
    • Downloading files block by block.
    • Block chain hash validation (a generic sketch of this idea follows the list).
  • Ability to configure how often a peer notifies other peers about the package download progress.
  • A Targeted Agent Settings policy with initial settings.
  • New report for checking the state of the NS web site certificate replacement.
  • (Windows only) Ability to apply a Cloud-enabled Management offline package to multiple organizational groups.
  • New default schedule for NS.SQL defragmentation schedule.{cdcd50e9-1c42-402b-921c-8ad6c9ff0d34} task.
  • 154 defects resolved, including 53 customer reported issues.
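
As an aside on the package-delivery enhancements above: "block chain hash validation" presumably means each downloaded block is verified against a hash chained to the previous block's hash, so a corrupted or out-of-order block is caught as soon as it arrives. A generic Python sketch of that idea (my reading of the feature name, not Symantec's actual implementation):

    # Generic sketch of block-by-block download validation with chained
    # hashes. Illustrates the general technique only; NOT the actual
    # Symantec Management Agent implementation.
    import hashlib

    BLOCK_SIZE = 64 * 1024

    def chained_block_hashes(data: bytes):
        """Hash each block together with the previous hash, so a corrupted
        or reordered block invalidates every hash after it."""
        prev = b""
        hashes = []
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            prev = hashlib.sha256(prev + block).digest()
            hashes.append(prev)
        return hashes

    def verify_block(index, block, hashes):
        prev = hashes[index - 1] if index > 0 else b""
        return hashlib.sha256(prev + block).digest() == hashes[index]

    payload = b"x" * (3 * BLOCK_SIZE)
    expected = chained_block_hashes(payload)
    print(verify_block(1, payload[BLOCK_SIZE:2 * BLOCK_SIZE], expected))  # True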

The release notes are located at the following URL:

https://support.symantec.com/en_US/article.DOC10690.html

Related:

Locking Down a Win7 Pro so only a few programs can run

I need a solution

SEP Forum,

Got hit with a very specific question that I am not exactly sure this product can resolve. Wondering if anyone out there has some insights or suggestions.

Management wants a group of computers locked down so they can only run 2-3 installed programs. All internet, chat, FTP, etc. blocked. No mapping of drives. No USB sticks. No customization of the PC whatsoever. You run these 2-3 programs only. The rest of the PC is basically a doorstop.

In concept, I think I can tell SEP to block everything and then add exceptions, but I suspect it will be harder than that, because applications may need components like .NET, and I don't want to stop internal processes like Windows Update and Defrag.
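
For what it is worth, here is a toy sketch of why this gets complicated (all names below are hypothetical, and this is not SEP policy syntax): the allowlist has to cover not just the 2-3 programs but also their runtime dependencies and the internal Windows processes that must keep running.

    # Toy illustration of the allowlist problem; hypothetical names only,
    # not SEP policy syntax.
    ALLOWED_APPS = {"app1.exe", "app2.exe", "app3.exe"}

    # Dependencies are usually discovered only by testing in a log-only mode.
    DEPENDENCIES = {
        "app1.exe": {"dotnet_runtime", "msiexec.exe"},
        "app2.exe": {"dotnet_runtime"},
    }

    # Internal processes (Windows Update, defrag, ...) that must stay allowed.
    SYSTEM_EXCEPTIONS = {"wuauclt.exe", "defrag.exe", "svchost.exe"}

    def effective_allowlist():
        allowed = set(ALLOWED_APPS) | SYSTEM_EXCEPTIONS
        for app in ALLOWED_APPS:
            allowed |= DEPENDENCIES.get(app, set())
        return allowed

    print(sorted(effective_allowlist()))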

I think it is an interesting concept/challenge.  I don’t want to say no right off the bat without at least exploring the possibility.  Any productive thoughts would be appreciated.

Douglas

Related:

“The SEE client was unable to apply your changes during the last Windows session prior to standby or hibernation. Your previous settings will be used”

I need a solution

I have seen a number of threads in this forum discussing this error. One of our users is now seeing this error before the OS starts. The error comes up right after the Pre-Windows environment begins to load. If you choose the option to restart the computer, it simply returns to the same place. I would like to know what the root cause of the error is.

I have seen it stated that if a user attempts to defragment the encrypted drive, this can happen. In our case, that is not what happened, since our users do not have sufficient permissions to run defrag. At this point, our field services people are saying their only option is to reimage the machine. I would just like to know what actually causes this error so we can educate our users on how to prevent it in the future. Thank you.
