SEP 14 & VMware Workstation 15 Player

I need a solution

I am using SEP 14.2 build 1031 on Windows 10 Pro 1809 build 17763.292 and VMware Workstation 15 Player.

I have found that if I install the Network Intrusion Prevention / Firewall components of SEP, they block incoming network access to any virtual machines. I could understand this if I used a NAT configuration for the VM, but I am using Bridged, which means that as far as the network and the host machine are concerned, the VM should behave as its own entity.

I have Googled the problem, and the closest solution I have found is essentially to put an allow-all rule in the Symantec firewall, which seems to defeat the purpose entirely. How do I configure SEP to allow traffic into and out of a VM without having to effectively turn off the firewall on the host machine?



Re: Sharing a vm proxy node between different Avamar servers

Hi all,

Some background info:

We are planning to have different Avamar servers pointing to their own respective DD Boost account on the same DataDomain hardware.

The idea is that each Avamar server is dedicated to a single business group.

We can’t make use of Avamar subdomains because Avamar forces all subdomains on the same Avamar server to point to the same DD boost account.

But we need to use different DD Boost accounts because we are going for the “secure multi-tenancy” architecture.

We also need DataDomain’s per-account storage hard-quota feature that Avamar subdomains cannot provide.

I asked about this topic in a previous thread.

Question for this discussion:

Since we will be using multiple Avamar servers, I would like to know whether vm proxy nodes can be shared between them.

For example, if we deploy a single Avamar vm proxy node inside an ESXi cluster (let’s say the cluster has 2 ESXi hosts and one shared datastore), can all the Avamar servers use this same vm proxy node to back up their own VMs from the ESXi cluster?

If different Avamar servers cannot share vm proxy nodes, then each Avamar server must deploy its own vm proxy node in the same ESXi cluster, which would be an inefficient use of resources.

If that is indeed the case, based on our overall architecture, what would be the best workaround to mitigate this issue?

Thanks all!

RLeon


App Layering: Machine Time on a Published Image is Wrong at First Boot

You can always manually set the time once the machine starts, but that might be a pain to remember to do every time you publish a new image.

The initial clock time in Windows on a physical machine comes from a battery-powered clock on the motherboard (called TOD, for Time Of Day), which is set to the local time in Windows’ current timezone. In a virtual machine, the virtualized TOD clock is set by the hypervisor at bootup. Since hypervisors normally know the time in GMT rather than a local timezone, your hypervisor has to know what “local time” is for the Windows instance in your virtual machine before it powers on. If the hypervisor doesn’t know the conversion factor for the VM’s local timezone, the initial time can be off by hours. Hypervisors learn a machine’s local time zone pretty quickly, but that means the clock on the first boot of any new VM is usually wrong.

In a published App Layering image, unless your template is derived from a VM that was originally a full Windows machine set to the correct timezone, the first boot usually has bad clock time. However, if your Platform Layer was added to the domain, your published VM should also have the correct information for how to sync its clock with the Domain Controller.

So make sure your Platform Layer was joined to the domain, so it can immediately correct the clock discrepancy.

Otherwise, consider setting this registry key so that Windows will treat the motherboard clock as being in UTC rather than the local timezone:

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\TimeZoneInformation]

“RealTimeIsUniversal”=dword:00000001
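
If you would rather set this from an elevated PowerShell prompt than import a .reg file, a minimal sketch looks like the following (assuming only the standard RealTimeIsUniversal value shown above; the change takes effect at the next boot):

    # Tell Windows to treat the virtual TOD/RTC clock as UTC rather than local time
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\TimeZoneInformation' `
        -Name 'RealTimeIsUniversal' -PropertyType DWord -Value 1 -Force | Out-Null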

Some hypervisors store the local timezone offset for a VM as a virtual motherboard resource. When Windows is running, every time it updates the clock time, it sets the motherboard resource to be the correct time. This is how your hypervisor finds out what the timezone offset for this VM is: because Windows is always writing local time to the motherboard, all your hypervisor has to do is compare the motherboard resource for the TOD clock to the hypervisor’s own clock. That timezone offset is an attribute of the VM itself, not part of Windows and not part of the virtual disk.

Note that Nutanix does not currently notice and record the time zone offset of a VM. You would need to set it manually. See this thread, for instance:

https://next.nutanix.com/installation-configuration-23/windows-vm-time-issues-22562

It may be worthwhile to generate a new template for your Connector, by having (or building) a Windows VM that has booted in the correct time zone. If you have a template you want to continue using, for instance, convert it to a VM, attach a bootable Windows disk (or boot from PVS or something like that – it’s just important that Windows run on this machine), power the machine on, and set the clock correctly. When you adjust the clock, Windows writes it to the motherboard, and your hypervisor records the offset in the virtual machine parameters. Then you can shut the machine down, remove any extra disks, and convert it back to a template.

You can also just take a working Windows machine with a correct local time, shut it down, clone it, remove any extra disks, and convert that to a VM template. This is one good reason to make a template out of a clone of your original Gold VM that you imported your OS Layer from: it already has all the virtual hardware parameters you want, including the local clock offset. Now that your template includes the current timezone offset, your hypervisor will be able to set the initial motherboard TOD clock correctly, meaning Windows has the correct time immediately and doesn’t need to wait for a jump when AD comes in to set the clock.
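
If your hypervisor is vSphere, a minimal PowerCLI sketch of that clone-and-convert workflow might look like the following; the vCenter address, VM name, and datastore name are hypothetical placeholders:

    # Connect to vCenter (hypothetical address)
    Connect-VIServer -Server vcenter.example.local

    # Clone the Gold VM that already carries the correct timezone offset
    $gold  = Get-VM -Name 'Gold-Win10'
    $clone = New-VM -Name 'AppLayering-Template' -VM $gold `
                    -VMHost (Get-VMHost | Select-Object -First 1) `
                    -Datastore 'SharedDatastore01'

    # Keep only the boot disk; remove any extra disks from the clone
    Get-HardDisk -VM $clone | Select-Object -Skip 1 | Remove-HardDisk -Confirm:$false

    # Convert the clone to a template for the Connector to use
    Set-VM -VM $clone -ToTemplate -Confirm:$false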

Configure your Connector to use this template so that newly published images will be correct. If you are using PVS, you should also use this template to build your Target Machines so that the virtual hardware of your Target Machines matches the hardware your layers were built from, including the local timezone offset.

Note that it’s also possible to have your hypervisor’s internal clock wrong. Also, your PVS server will try to set the machine’s clock based on the PVS server’s local clock. If any of these are wrong, you will need to get them synched as well.


Help: Need upgrade solution or guide on how to upgrade from SEPM 12.1.5 RU5 (Windows Server 2003) to SEPM 14.x

I need a solution

Hi everyone,

I’m new to upgrading SEPM from version 12.1.5 RU5 (Server OS: Win2003).

I’m planning to create a new Windows Server 2016 Standard VM and use it as our new SEPM server. I would like to ask for advice or a best-practice approach.

Since my existing clients and policies are still intact on the old server, is it possible to bring all the clients, settings, and policies over to the new SEPM 14 server?

Once the new SEPM 14 server is up and running and all my existing clients have migrated, I plan to retire the old server since its hardware is old and obsolete.

Any helpful guides or information would be greatly appreciated.

Thank you and
Kind Regards,

Jay Arc De Rama



App Layering does not support downgrading the ELM

App Layering does not support downgrading the ELM. There is no way to downgrade the software of your ELM, not even an internal-only one. The database schema changes are one-way.

There are two ways you can hope to achieve the same basic end, however.

Restore Snapshot: Before upgrading, you should always make a VM snapshot. After upgrading, you should test your environment by publishing a new image and verifying that your software works before deleting that snapshot. If there is a problem, you can restore the snapshot you made, reverting you to the unupgraded state with the previous ELM version. Similarly, if your SAN has a snapshot mechanism, you can revert the ELM VM to a previous SAN snapshot copy. The ELM VM is nothing but a single VM with a large disk, so any mechanism you have for restoring previous backups of that VM will work.
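
If the ELM appliance runs on vSphere, a minimal PowerCLI sketch of that snapshot-before-upgrade safety net might look like this; the VM name is a hypothetical placeholder:

    # Snapshot the ELM appliance before upgrading
    $elm = Get-VM -Name 'ELM-Appliance'
    New-Snapshot -VM $elm -Name 'pre-upgrade' -Description 'Before ELM upgrade'

    # If the upgraded ELM misbehaves, revert to the pre-upgrade snapshot
    Set-VM -VM $elm -Snapshot (Get-Snapshot -VM $elm -Name 'pre-upgrade') -Confirm:$false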

Make sure you never have both an old and a new copy of the same ELM powered on at once. Also, make sure the replacement ELM is using the same IP address; otherwise, your App Layering Agents (on your Hyper-V and PVS servers) might not be able to communicate with the ELM. You might need to reregister your agents with the ELM (see https://docs.citrix.com/en-us/citrix-app-layering/4/install-agent.html).

Export/Import Layers: If you do not have a previous backup or snapshot of the ELM, you cannot take your existing ELM and revert its state. You can, however, deploy a completely new ELM and import the layers from the one you upgraded. Layer export and import became available in version 4.3 (it was a Labs feature in 4.3, so you might need to enable it before you can import). On the existing, upgraded ELM, go to the Layers tab and select Export. Select everything, and export it all to your configured Network File Share. The layers will be exported into the Unidesk\Exported Layers folder. Then, on the newly deployed, older ELM, go to the Layers tab, select Import, and point it to the Network File Share location. Do not point it directly to the Exported Layers folder; that directory structure must be present under the folder you select.

This will import all of your layers into the down-rev ELM. This does not transfer your Image Templates or your Connector configurations, so you will need to recreate them, as well as any additional configuration like your Active Directory junction. You will need to manually register any App Layering Agent installations to the new ELM as well. The old and new ELM can absolutely coexist, because they do not both believe they are the same system. They will not conflict.


Re: Recoverpoint v3.5 – ‘Failover options’ greyed out

Hello All,

I am trying to test my failover capabilities through RPA. After enabling ‘Image access’ (physical) and mounting the replica LUN to the DR ESXi 5.5 hosts, I am not able to use ‘Failover Actions – Failover to remote replica’ (as shown in the picture below). This option is grayed out. I don’t have SRM, and my policy settings for ‘Stretch cluster/VMware SRM support’, etc., are set to NONE. Without the failover actions, I am not able to replicate my changes back to the Prod_Test site. I need suggestions. Am I missing a policy or some configuration?

Also, as seen in the picture below, the storage status on the DR_Test (remote) side is showing ‘Enabling logged access’. Does this mean that the enabling is still in progress, and that is why the option is grayed out? I am, however, able to mount the replica LUN, add the replicated VM to the inventory, boot it up, and browse the replicated VM at my DR site.

Thanks in advance!

RPA_Failover_Options.jpg

Regards,

Vilas


Data Domain Instant Access and Restore



The most recent Data Domain release, DD OS 6.1.2, enhanced both the efficiency and cloud readiness of the Data Domain platform. This blog will focus on the efficiency improvements of this release, specifically Data Domain’s Instant Access and Restore. So what is Instant Access? Instant Access is the ability to boot a VM directly from the Data Domain appliance. You simply find the VM you would like to access, power it on, and connect to it. By doing this, you can decrease downtime. If you need to perform a full restore, … READ MORE




Error: “Preparation of the Master VM Image failed” when Creating MCS Catalog in XenApp or XenDesktop

This article contains information about an issue where creating a Machine Creation Services (MCS) catalog in XenApp or XenDesktop fails with the error “Preparation of the Master VM Image failed”.

Background

As part of the process of creating a machine catalog using MCS, the contents of the shared base disk are updated and manipulated in a process referred to as Image Preparation. Under some conditions, this Preparation step can fail completely without generating any information that can be used to derive the exact cause. Possible reasons for this are:

  1. The Machine Image does not have the correct version of the Citrix VDA software installed.

  2. The virtual machine used to perform the preparation never starts, or takes so long to start that the operation is cancelled. Symptoms include:

  • Blue screen on boot.

  • No operating system found.

  • Slow start/execution of the virtual machine. This particularly affects Cloud based deployments where machine templates might have to be copied between different storage locations.

Triaging Issues

The issue detailed under Point 1 in the Background section should be easy to diagnose using the following procedure:

  1. Start the original master Virtual Machine (VM).

  2. Ensure that the correct version of the Citrix VDA software has been installed using XenDesktopVdaSetup.exe.

Notes

  • It is not sufficient to install individual MSI installers, as the complete VDA functionality is provided by several distinct components which XenDesktopVdaSetup will install as required.

  • If a VDA from XenDesktop 5.x is installed (for example, to support Windows XP or Vista), it does not support image preparation. In that case, select the option in Studio to tell the system that an older VDA is in use; this will instruct the services to skip the image preparation operation.

For the issues detailed in Point 2, more diagnosis is required as explained in this section:

Caution! Refer to the Disclaimer at the end of this article before using Registry Editor.

Note: The preparation step will be performed by a VM named Preparation – <catalog name> – <unique id>. The Hypervisor management console can be used to check that this VM manages to start and progress past the point of starting the guest operating system. If this does not occur then check that the snapshot used to create the catalog was made of a fully functioning VM.

MCS only supports a single virtual disk, so ensure that your master image uses only a single disk.

If the VM manages to start Windows, then information will be required from inside the running VM. Because the machine will be forcibly stopped if the preparation step does not complete in the expected time, it is necessary to prevent that from happening. To do so, run a PowerShell console from the Studio management console and issue the following command:

Set-ProvServiceConfigurationData -Name ImageManagementPrep_NoAutoShutdown -Value $True -AdminAddress <your Controller Address>

This is a global configuration option and will affect all catalog creation operations, so ensure that no other creation operations are being run whilst you are triaging your issues. It also requires the issuing user to be a Full Administrator of the XenDesktop site.

After this property is set, the preparation VM will no longer shut down automatically or be forced off when the allowed time expires.

  1. To obtain additional information from the preparation operation, it is necessary to enable diagnostic logging on the Machine Creation components in the master image. To do this, set the following DWORD registry value to 1 in the master image and create a new snapshot of the master image (a command sketch for this toggle follows this list).

    HKLM\Software\Citrix\MachineIdentityServiceAgent\LOGGING

    After this is set, the image preparation operation will create log files in C:\ on the preparation VM. The log files are “image-prep.log” and “PvsVmAgentLog.txt”.

  2. When the preparation VM is running, use the Hypervisor management console to log into the VM.

  3. Check that image-prep.log exists in C:\ and open it.

  4. Check for errors in the log file. These might be sufficient for the problem to be resolved directly, otherwise the details must be used in reporting an issue to Citrix Support.

  5. When the preparation VM is created, its network adapters are disconnected in order to isolate it from the rest of the Active Directory domain, so it is not possible to copy the log file to an external destination. Screenshots of the VM console are the best way to capture the results. Ensure that all parts of the log file are captured.
  6. When finished with the investigations, remove the global setting as follows:

    Remove-ProvServiceConfigurationData -Name ImageManagementPrep_NoAutoShutdown -AdminAddress <your Controller Address>

  7. Set the registry value on the master image back to 0 and create a new snapshot to provision.
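
For reference, here is a minimal PowerShell sketch of the registry toggle from steps 1 and 7, assuming the path above splits into the key HKLM\Software\Citrix\MachineIdentityServiceAgent and a DWORD value named LOGGING; run it inside the master image before taking each new snapshot:

    # Enable image preparation logging in the master image (step 1)
    New-ItemProperty -Path 'HKLM:\Software\Citrix\MachineIdentityServiceAgent' `
        -Name 'LOGGING' -PropertyType DWord -Value 1 -Force | Out-Null

    # ...take a new snapshot, create the catalog, and collect the logs...

    # Turn logging back off when the investigation is finished (step 7)
    Set-ItemProperty -Path 'HKLM:\Software\Citrix\MachineIdentityServiceAgent' -Name 'LOGGING' -Value 0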


Pop-up on install with November 2018 APSB18-40 Adobe Acrobat Reader

I do not need a solution (just sharing information)

Just checking whether anyone else is having issues with AcroRdrDCUpd1900820081_MUI.msp generating an installation pop-up. I tested on a clean Windows 10 VM with Adobe Reader DC and no other products open. It appears the /quiet switch wasn’t added to the command line by the internal tools. I haven’t done enough testing to determine whether it’s an issue with the patch itself.
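
For comparison, a silent install of that patch from PowerShell would normally just use the standard Windows Installer switches; the folder path below is a hypothetical example, not anything taken from the internal tools:

    # Apply the Reader DC .msp silently (patch path is a hypothetical example)
    Start-Process msiexec.exe -Wait -ArgumentList '/p "C:\Patches\AcroRdrDCUpd1900820081_MUI.msp" /quiet /norestart'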

The S.



Alibaba cloud with SEP.

I need a solution

Dear All,

Since there is no specific guidance for Alibaba Cloud (Alicloud) in the SEP documentation, I was wondering: can I create a new instance (VM) on Alicloud and designate it as the Symantec Endpoint Protection Manager (SEPM), with the rest of the servers residing in Alicloud each running a SEP agent that points back to that SEPM server on Alicloud?

Is this possible?

Thank you.

