How to Obtain a XenCenter View of a XenServer Status Report

To complete this procedure, you require a XenCenter version that is compatible with the XenServer version of the host or pool from which the server status report was collected.

The XenCenter version must be the same as, or higher than, the XenServer version.

To view the status report in XenCenter, complete the following procedure:

  1. Extract the server status report and save it to a local directory. Extracting this file generates a new set of directories, one directory per server.

  2. Open the pool master directory.

    Note: You can open any host directory, but it is preferable to open the pool master’s information.

  3. Open the subfolder, <bug-report-DATE>.

  4. Copy the complete path of the directory. The following is a sample path for reference:

    c:\server reports\2011-06-28-14-51-22-1-bugtool-XEN01\bug-report-20110628145134

  5. Open XenCenter and click Add New Server.

  6. On the Add New Server dialog box, paste the previously copied directory path into the Server field and append “xapi-db.xml” to the path. The following is a sample of the Server field entry:

    c:\server reports\2011-06-28-14-51-22-1-bugtool-XEN01\bug-report-20110628145134\xapi-db.xml

  7. Leave the username and password fields blank and click OK.

    The status report opens and is displayed in XenCenter, like a normally connected and accessible pool, although with limited functionality.
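If you prefer to script steps 1 through 4, the following Python sketch extracts a downloaded status report archive and prints the xapi-db.xml path to paste into the Server field in step 6. It is a minimal sketch: the archive name and extraction directory are placeholders, and it assumes the report was saved as a .zip file.

import os
import zipfile

# Placeholder paths; substitute your own archive name and extraction directory.
archive = r"c:\server reports\status-report.zip"
dest = r"c:\server reports\extracted"

# Step 1: extract the server status report to a local directory.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(dest)

# Steps 2-4: locate each bug-report-<DATE> subfolder and print the full
# path (including xapi-db.xml) to paste into the XenCenter Server field.
for root, dirs, _files in os.walk(dest):
    for d in dirs:
        if d.startswith("bug-report-"):
            print(os.path.join(root, d, "xapi-db.xml"))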


How to Configure LDAP Authentication on NetScaler or NetScaler Gateway

This article describes how to configure LDAP authentication on NetScaler or NetScaler Gateway.

Background

In this article, an LDAP authentication policy is created at the global level on the NetScaler, so it applies to all users who authenticate. You can also create an LDAP authentication policy only for the users authenticating to the SSL VPN, under the NetScaler Gateway node.

For a NetScaler to authenticate users through LDAP, create an LDAP policy. Then, bind the LDAP policy to the target virtual server. A NetScaler appliance defaults to the standard LDAP TCP port 389, or to the secure LDAP TCP port 636 if a secure Security Type (SSL/TLS) is selected during configuration.

Note: If using a Microsoft Active Directory Global Catalog server, the standard port is 3268 and the secure port is 3269.
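To confirm which of these ports is actually reachable before configuring the policy, the following Python sketch performs a simple TCP connection test. It is illustrative only: the host name is a placeholder, and the test should be run from a machine with the same network reachability as the NetScaler NSIP or SNIP.

import socket

# Placeholder LDAP / Global Catalog server; substitute your own.
LDAP_HOST = "dc01.angsupport.com"

# Port selection follows the defaults described above.
PORTS = {
    389: "plain-text LDAP",
    636: "secure LDAP (SSL/TLS)",
    3268: "plain-text Global Catalog",
    3269: "secure Global Catalog",
}

for port, label in PORTS.items():
    try:
        with socket.create_connection((LDAP_HOST, port), timeout=3):
            print(f"{LDAP_HOST}:{port} ({label}) reachable")
    except OSError as exc:
        print(f"{LDAP_HOST}:{port} ({label}) NOT reachable: {exc}")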


Prerequisites

  1. An Active Directory account that meets the following requirements:

At a minimum, the Bind DN account must have:

  • Read access to the user objects in the LDAP directory in order to search for user accounts.
  • Read access to the Base DN (for example, DC=angsupport, DC=com) with the correct attribute that is used as the LDAP Login Name (for example, sAMAccountName).
In order to perform Group Extraction, which is the process of determining a user’s group membership and returning those values to NetScaler Gateway, the Bind DN account must have:
  • Read access to the group attributes in the LDAP directory.

In order to support password expiration during authentication, the Bind DN account must have read access to the following attributes in the LDAP directory:

  • PwdLastSet
  • UserAccountControl
  • msDS-User-Account-Control-Computed

In order to use an alternative Single Sign-On attribute (SSO Name Attribute), such as UPN format, the Bind DN account must have:

  • Read access to the particular SSO Name Attribute of interest in the LDAP directory.
Note: You can use an account that is part of the default read-only domain controllers group in Active Directory. Check with your Active Directory administrator for confirmation.
  2. The NetScaler IP address must be able to communicate with the LDAP server on the port on which the LDAP server is listening:
  • 389 for plain text LDAP
  • 636 for SSL LDAP
  • 3268 for plain text Global Catalog Server
  • 3269 for SSL Global Catalog Server

Note: If the NetScaler IP address cannot communicate with the LDAP servers, you can configure a load balancing VIP for LDAP; the NetScaler then sends the request from a MIP/SNIP, which must be able to reach the LDAP servers.

  3. If password change is a requirement, Microsoft requires the connection to the LDAP server to be SSL/TLS for password change to work. This requires that the LDAP server be set up to accept SSL/TLS connections. By default, Global Catalog servers are read-only and usually cannot be used for password change. Consult your Active Directory administrator to confirm whether the Global Catalog servers can be used for password change and whether the domain controllers are ready to accept SSL/TLS connections. The NetScaler appliance allows password change for naturally expired passwords. New user accounts may not work until the user has logged in to the Active Directory domain and built a profile. A quick way to validate the connectivity and Bind DN permission prerequisites above is sketched after this list.
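The following Python sketch is a minimal way to verify the Bind DN prerequisites above: it binds as the Bind DN and reads the attributes used for group extraction, password expiration, and an alternative SSO name. It assumes the third-party ldap3 package is installed; the host, Bind DN, password, and test user are placeholders, and the Base DN reuses the DC=angsupport,DC=com example from above.

from ldap3 import ALL, Connection, Server

# Placeholder values for illustration only.
LDAP_HOST = "dc01.angsupport.com"
BIND_DN = "CN=svc-netscaler,OU=Service Accounts,DC=angsupport,DC=com"
BIND_PW = "changeme"
BASE_DN = "DC=angsupport,DC=com"
TEST_USER = "jdoe"  # sAMAccountName of any ordinary user

# Port 636 with use_ssl=True mirrors a secure Security Type; use 389 for plain text.
server = Server(LDAP_HOST, port=636, use_ssl=True, get_info=ALL)
conn = Connection(server, user=BIND_DN, password=BIND_PW, auto_bind=True)

# The Bind DN must be able to find the user under the Base DN and read the
# attributes needed for group extraction, password expiration, and SSO.
conn.search(
    BASE_DN,
    f"(sAMAccountName={TEST_USER})",
    attributes=["memberOf", "pwdLastSet", "userAccountControl",
                "msDS-User-Account-Control-Computed", "userPrincipalName"],
)
for entry in conn.entries:
    print(entry)
conn.unbind()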


7771764: Which AppManager folders and files should be excluded from virus scanning? (NETIQKB71764)

The following folders and their contents should be excluded from virus scanning:

  • %NetIQ\AppManager\db
  • %NetIQ\AppManager\dat
  • %NetIQ\Temp
  • %NetIQ\AppManager\bin

These four directories contain the AppManager log files, PIOC files, binary files and the Local Repository of the agent and are constantly being updated. If these folders are not excluded from Anti-Virus scanning, you may encounter performance issues such as high CPU utilization on the machine.

NOTE: If your Anti-Virus Software performs Active Scanning or Script Scanning, it may interpret nearly everything the NetIQ AppManager Agent for Windows does as a potential threat, as the Agent runs jobs that are purely script-based.

In such a case, the Agent’s services may experience long delays in starting or loading and/or severe resource usage (CPU/memory) as a result of the constant scanning by the Anti-Virus software. Additionally, jobs may take significantly longer than normally necessary to complete a single iteration, as the work being done is being scrutinized by the Anti-Virus software.

If you are seeing issues with the NetIQ AppManager Agent for Windows similar to those described above, and excluding the four directories mentioned in the first part of this Fix does not help, you should either disable Script Scanning or exclude the entire ..\NetIQ directory from Active Scanning or Script Scanning to free the NetIQ AppManager Agent for Windows from being constantly scanned.


Unable to login to SEPM 14.2 MP1 with domain users

I need a solution

Hi all,

After upgrading SEPM to the newest version, 14.2 MP1, I’m unable to log in with a domain user. I can log in only with a local SEPM user. In the server logs I get the message “Symantec endpoint protection manager could not connect to the target directory server. Check the directory server configuration and try again”. I have edited the server properties and checked the directory server settings. Everything is set correctly. As a test, I made a successful telnet connection to the directory server on port 389.

  Please help me to resolve this issue.    

  Thanks in advance.

  Dejan



Best Practices for Virtualizing Active Directory Domain Controllers (AD DC), Part II

Virtualized Active Directory is ready for Primetime, Part II!

In the first of this two-part blog series, I discussed how virtualization-first is the new normal and fully supported; and elaborated on best practices for Active Directory availability, achieving integrity in virtual environments, and making AD confidential and tamper-proof.

In this second installment, I’ll discuss the element of time in Active Directory; touch on replication, latency, and convergence; cover preventing and remediating lingering objects and cloning; and, of much relevance, preparedness for disaster recovery.

Proper Time with Virtualized Active Directory Domain Controllers (AD DC)

Time in virtual machines can easily drift if they are not receiving constant and consistent time cycles. Windows operating systems keep time based on interrupt timers set by CPU clock cycles. In a VMware ESXi host with multiple virtual machines, CPU cycles are not allocated to idle virtual machines.

To plan for an Active Directory implementation, you must carefully consider the most effective way of providing accurate time to domain controllers and understand the relationship between the time source used by clients, member servers, and domain controllers.

The Domain Controller with the PDC Emulator role for the forest root domain ultimately becomes the “master” timeserver for the forest – the root time server for synchronizing the clocks of all Windows computers in the forest. You can configure the PDC to use an external source to set its time. By modifying the defaults of this domain controller’s role to synchronize with an alternative external stratum 1 time source, you can ensure that all other DCs and workstations within the domain are accurate.

Why Time Synchronization Is Important in Active Directory

Every domain-joined device is affected by time!

Ideally, all computer clocks in an AD DS domain are synchronized with the time of an authoritative computer. Many factors can affect time synchronization on a network. The following factors often affect the accuracy of synchronization in AD DS:

  • Network conditions
  • The accuracy of the computer’s hardware clock
  • The amount of CPU and network resources available to the Windows Time service

Prior to Windows Server 2016, the W32Time service was not designed to meet time-sensitive application needs. Updates to Windows Server 2016 allow you to implement a solution for 1ms accuracy in your domain.

Figure 1: How Time Synchronization Works in Virtualized Environments

See Microsoft’s How the Windows Time Service Works for more information.

How Synchronization Works in Virtualized Environments

An AD DS forest has a predetermined time synchronization hierarchy. The Windows Time service synchronizes time between computers within the hierarchy, with the most accurate reference clocks at the top. If more than one time source is configured on a computer, Windows Time uses NTP algorithms to select the best time source from the configured sources based on the computer’s ability to synchronize with that time source. The Windows Time service does not support network synchronization from broadcast or multicast peers.

Replication, Latency and Convergence

Eventually, changes must converge in a multi-master replication model…

The Active Directory database is replicated between domain controllers. The data replicated between controllers is also called a ‘naming context.’ Once a domain controller has been established, only the changes are replicated. Active Directory uses a multi-master model: changes can be made on any controller, and the changes are sent to all other controllers. The replication path in Active Directory forms a ring, which adds reliability to the replication.

Latency is the required time for all updates to be completed throughout all domain controllers on the network domain or forest.

Convergence is the state at which all domain controllers have the same replica contents of the Active Directory database.

Figure 2: How Active Directory Replication Works

For more information on Replication, Latency and Convergence, see Microsoft’s Detecting and Avoiding Replication Latency.

Preventing and Remediating Lingering Objects

Don’t revert to snapshot or restore backups beyond the TSL.

Lingering objects are objects in Active Directory that have been created, replicated, deleted, and then garbage collected on at least the Domain Controller that originated the deletion but still exist as live objects on one or more DCs in the same forest. Lingering object removal has traditionally required lengthy cleanup sessions using various tools, such as the Lingering Objects Liquidator (LoL).

Dominant Causes of Lingering Objects

  1. Long-term replication failures

While knowledge of creates and modifies is persisted in Active Directory forever, replication partners must inbound-replicate knowledge of deleted objects within a rolling Tombstone Lifetime (TSL) # of days (default 60 or 180 days, depending on which OS version created your AD forest). For this reason, it’s important to keep your DCs online and replicating all partitions between all partners within a rolling TSL # of days. Tools like REPADMIN /SHOWREPL * /CSV, REPADMIN /REPLSUM and AD Replication Status should be used to continually identify and resolve replication errors in your AD forest.

  2. Time jumps

System time jumps of more than TSL # of days in the past or future can cause deleted objects to be prematurely garbage collected before all DCs have inbound-replicated knowledge of all deletes. The protection against this is to ensure that:

  • The forest root PDC is continually configured with a reference time source (including following FSMO transfers).
  • All other DCs in the forest are configured to use NT5DS hierarchy.
  • Time rollback and roll-forward protection has been enabled via the maxnegphasecorrection and maxposphasecorrection registry settings or their policy-based equivalents (see the audit sketch below).
  • The importance of configuring safeguards can’t be stressed enough.
  3. USN rollbacks

USN rollbacks are caused when the contents of an Active Directory database move back in time via an unsupported restore. Root causes for USN Rollbacks include:

  • Manually copying a previous version of the database into place when the DC is offline.
  • P2V conversions in multi-domain forests.
  • Snapshot restores of physical and especially virtual DCs. For virtual environments, both the virtual host environment AND the underlying guest DCs should be compatible with VM Generation ID. Windows Server 2012 or later, and vSphere 5.0 Update 2 or later, support this feature.
  • Events, errors and symptoms that indicate you have lingering objects.

Figure 3: USN Rollbacks – How Snapshots Can Wreak Havoc on Active Directory
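As a complement to the time-jump safeguards listed under cause 2, here is a minimal, read-only Python sketch (Windows standard-library winreg, run on a domain controller) that audits the W32Time values involved. It is an illustrative sketch, not part of the original guidance; the 48-hour bound is a commonly cited recommendation, and your organization’s value may differ.

import winreg

# W32Time registry locations for the settings discussed above.
CONFIG = r"SYSTEM\CurrentControlSet\Services\W32Time\Config"
PARAMS = r"SYSTEM\CurrentControlSet\Services\W32Time\Parameters"

def read_value(path, name):
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
        value, _type = winreg.QueryValueEx(key, name)
        return value

# NT5DS means this DC follows the domain hierarchy; the forest-root PDC
# emulator should instead use NTP with an external reference time source.
print("Sync type:", read_value(PARAMS, "Type"))

# 0xFFFFFFFF disables rollback/roll-forward protection; a bounded value
# (for example 172800 seconds, i.e. 48 hours) limits how far the clock
# can be corrected in a single step.
for name in ("MaxNegPhaseCorrection", "MaxPosPhaseCorrection"):
    print(name, "=", read_value(CONFIG, name))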

Cloning

You should always use a test environment before deploying the clones to your organization’s network.

DC cloning enables fast, safer domain controller provisioning through a clone operation.

When you create the first domain controller in your organization, you are also creating the first domain, the first forest, and the first site. It is the domain controller, through group policy, that manages the collection of resources, computers, and user accounts in your organization.

Active Directory Disaster Recovery Plan: It’s a Must

Build, test, and maintain an Active Directory Disaster Recovery Plan!

AD is indisputably one of an organization’s most critical pieces of software plumbing and in the event of a catastrophe – the loss of a domain or forest – its recovery is a monumental task. You can use Site Recovery to create a disaster recovery plan for Active Directory.

The Microsoft Active Directory Disaster Recovery Plan is an extensive document: a set of high-level procedures and guidelines that must be extensively customized for your environment. It serves as a vital point of reference when determining root cause and deciding how to proceed with recovery with Microsoft Support.

Summary

There are several excellent reasons for virtualizing Windows Active Directory. The release of Windows Server 2012 and its virtualization-safe features and support for rapid domain controller deployment alleviates many of the legitimate concerns that administrators have about virtualizing AD DS. VMware® vSphere® and our recommended best practices also help achieve 100 percent virtualization of AD DS.

Please reach out to your Dell EMC representative or check out Dell EMC Consulting Services to learn how we can help you with virtualizing AD DS, or leave me a comment below and I’ll be happy to respond to you.

Sources

Virtualizing a Windows Active Directory Domain Infrastructure

Related Blog

Best Practices for Virtualizing Active Directory Domain Controllers (AD DC), Part I




Custom Monitors Configured on NetScaler Missing After an Upgrade

Important! Starting with release 10.1 build 122.17, the script files for user monitors are in a new location.

If you upgrade an MPX appliance or a VPX virtual appliance to release 10.1 build 122.17 or later, the changes are as follows:

  • A new directory named conflicts is created in /nsconfig/monitors/ and all the built-in scripts of the previous builds are moved to this directory.
  • All new built-in scripts are available in the /netscaler/monitors/ directory. All custom scripts are available in the /nsconfig/monitors/ directory.
  • You must save a new custom script in the /nsconfig/monitors/ directory.
  • After the upgrade is completed, if a custom script is created and saved in the /nsconfig/monitors/ directory with the same name as that of a built-in script, the script in the /netscaler/monitors/ directory takes priority. That is, the custom script does not run.

If you provision a virtual appliance with release 10.1 build 122.17 or later, the changes are as follows:

  • All built-in scripts are available in the /netscaler/monitors/ directory.
  • The /nsconfig/monitors/ directory is empty.
  • If you create a new custom script, you must save it in the /nsconfig/monitors/ directory.

For the scripts to function correctly, the name of the script file must not exceed 63 characters, and the maximum number of script arguments is 512. To debug a script, run it by using the nsumon-debug.pl script from the NetScaler command line, passing the script name, IP address, port, time-out, and the script arguments as arguments to nsumon-debug.pl.
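Because a custom script that shares a name with a built-in script is silently ignored after the upgrade, a quick way to find such collisions is sketched below. This is an illustrative Python sketch, assuming a Python interpreter is available (on the appliance shell, or on a machine to which the two directories have been copied); only the two directory paths come from this article.

import os

# Locations described above: built-in scripts vs. custom scripts after upgrade.
BUILTIN_DIR = "/netscaler/monitors"
CUSTOM_DIR = "/nsconfig/monitors"

builtin = set(os.listdir(BUILTIN_DIR))
custom = set(os.listdir(CUSTOM_DIR)) - {"conflicts"}

# A custom script with the same name as a built-in one does not run,
# because the copy in /netscaler/monitors/ takes priority.
for name in sorted(custom & builtin):
    print(f"shadowed custom script (will not run): {CUSTOM_DIR}/{name}")

# Script file names longer than 63 characters also prevent correct operation.
for name in sorted(custom):
    if len(name) > 63:
        print(f"name exceeds 63 characters: {name}")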


7023319: Syncing Web Console customization to all Web Servers with DRA

This document (7023319) is provided subject to the disclaimer at the end of this document.

Environment

Directory & Resource Administrator 9.0.x
Directory & Resource Administrator 9.1.x
Directory & Resource Administrator 9.2.x

Situation

A new web page was added on Web01, but it cannot be seen on the other web servers. Only the previous customization is seen; how can this be updated? Is it an issue with the server? Is there a way to copy the numerous customizations to another server?

Resolution

Open File Explorer and navigate to the following path:
“C:\inetpub\wwwroot\DRAClient\components\lib\ui-templates”
This folder contains the previously backed up Web Console customizations, as well as the currently implemented customizations within the Web Console.
The customizations can be synced manually by copying this folder to the same location on the other server, as shown in the sketch below.
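A minimal Python sketch of that manual copy follows. The destination server name (WEB02) and the use of its administrative C$ share are assumptions for illustration; adjust both for your environment and run the script with an account that has access to the share.

import shutil

# Source path from the resolution above; destination is a hypothetical
# second web server reached via its administrative share.
SRC = r"C:\inetpub\wwwroot\DRAClient\components\lib\ui-templates"
DST = r"\\WEB02\C$\inetpub\wwwroot\DRAClient\components\lib\ui-templates"

# Copy the customization templates; dirs_exist_ok merges into an existing
# folder (Python 3.8+). Back up the destination first if it has local changes.
shutil.copytree(SRC, DST, dirs_exist_ok=True)
print("Customizations copied to", DST)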

Cause

The customization may not have synced with another server within the Multi Master Set. Nothing new is being seen within the Web Console because the customizations have not appeared on the server in question. If customizations have gone missing, they may still be found under the “ui-template” folder as the customizations are backed up when DRA is upgraded.

Additional Information

Although the Web Console customizations are backed up automatically when upgrading, Tech Support would still recommend manually backing the folder up as well.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.


7014954: SSPR config manager not available

This document (7014954) is provided subject to the disclaimer at the end of this document.

Environment

Self Service Password Reset
SSPR 3.x
SSPR 4.x

Situation

Link to SSPR Configuration Editor is not shown on SSPR page
SSPR configuration has been locked
How to unlock SSPR Configuration Manager

Resolution

For appliance installations of SSPR, do the following:
  1. Go to the appliance administrative console at https://myserver.whatever.something.com:9443
  2. Select “Administrative Commands”
  3. Click “unlock configuration”

When you no longer need the configuration to be unlocked, be sure to select the option to lock the configuration again.

For Linux or Windows (i.e. non-appliance) installations of SSPR, do the following:
Edit the SSPRConfiguration.xml file. On Linux, this is found by default in the Tomcat directory under webapps/SSPR/WEB-INF. On Windows, this is found by default in C:\Program Files\NetIQ Self Service Password Reset\config

Edit SSPRConfiguration.xml as follows:
Set "configIsEditable" to true. It should look like this:
<property key="configIsEditable">true</property>
Also, if the configuration password has been forgotten or no longer works, delete the "configPasswordHash" property:
<property key="configPasswordHash">
Then restart the SSPR service.
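For repeated use, the edit can also be scripted. The following Python sketch uses only the standard library; the file path is illustrative (the Linux default location mentioned above, under a hypothetical Tomcat directory), and it assumes the file uses the <property key="..."> layout shown.

import xml.etree.ElementTree as ET

# Illustrative path: <tomcat>/webapps/SSPR/WEB-INF/SSPRConfiguration.xml on
# Linux; use the config folder under the install directory on Windows.
CONFIG = "/opt/tomcat/webapps/SSPR/WEB-INF/SSPRConfiguration.xml"

tree = ET.parse(CONFIG)
root = tree.getroot()

# Re-enable the Configuration Editor link.
for prop in root.iter("property"):
    if prop.get("key") == "configIsEditable":
        prop.text = "true"

# Remove a forgotten configuration password hash so a new one can be set.
for parent in list(root.iter()):
    for prop in list(parent):
        if prop.tag == "property" and prop.get("key") == "configPasswordHash":
            parent.remove(prop)

# Write back as UTF-8, then restart the SSPR service.
tree.write(CONFIG, encoding="UTF-8", xml_declaration=True)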

Additional Information

Note: SSPRConfiguration.xml can be edited with any text editor that can handle UTF-8 character encoding. Be sure to use a text editor that can save UTF-8. Notepad.exe cannot save UTF-8; Wordpad.exe can.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.
