Is Symantec Enterprise Security Manager supported on the RHEL 7.x ppc64le platform?

I need a solution

I wanted to know if Enterprise Security Manager is supported on Red Hat Enterprise Linux 7.x systems that run on the ppc64le architecture.

As per the following link: http://eval.symantec.com/mktginfo/enterprise/fact_sheets/ent-factsheet_enterprise_security_manager_6.5_06-2005.en-us.pdf

RHEL 7.x on ppc64le is not in the supported platform list. Is this correct? Is there a plan to support this platform in the future?

Thanks

Rajesh


Re: Webtop installation on JBOSS : Windows

While installing Webtop on JBoss, I encountered the error below. Kindly suggest how to troubleshoot it. The complete log file is also attached.

18:53:22,798 INFO [FileSystemContext] VFS forced fallback to vfsjar is enabled.

18:53:23,188 INFO [AbstractVirtualFileHandler] VFS force copy (NestedJarHandler) is enabled.

18:53:25,294 INFO [ServerInfo] Java version: 1.6.0_30,Sun Microsystems Inc.

18:53:25,294 INFO [ServerInfo] Java Runtime: Java(TM) SE Runtime Environment (build 1.6.0_30-b12)

18:53:25,294 INFO [ServerInfo] Java VM: Java HotSpot(TM) Server VM 20.5-b03,Sun Microsystems Inc.

18:53:25,294 INFO [ServerInfo] OS-System: Windows 7 6.1,x86

18:53:25,294 INFO [ServerInfo] VM arguments: -Dprogram.name=run.bat -Xms512M -Xmx512M -XX:MaxPermSize=512M -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dorg.jboss.resolver.warning=true -Dsun.lang.ClassLoader.allowArraySyntax=true -Djboss.vfs.forceVfsJar=true -Djboss.vfs.forceCaseSensitive=true -Djava.endorsed.dirs=C:\Installs\jboss-5.1.0.GA-jdk6\jboss-5.1.0.GA\lib\endorsed

18:53:25,356 INFO [JMXKernel] Legacy JMX core initialized

18:53:28,289 INFO [ProfileServiceBootstrap] Loading profile: ProfileKey@bd5df[domain=default, server=default, name=default]

18:53:31,659 INFO [WebService] Using RMI server codebase: http://127.0.0.1:8083/

18:53:41,238 INFO [NativeServerConfig] JBoss Web Services - Stack Native Core

18:53:41,238 INFO [NativeServerConfig] 3.1.2.GA

18:53:41,800 INFO [AttributeCallbackItem] Owner callback not implemented.

18:53:42,892 INFO [LogNotificationListener] Adding notification listener for logging mbean "jboss.system:service=Logging,type=Log4jService" to server org.jboss.mx.server.MBeanServerImpl@41a12f[ defaultDomain='jboss' ]

18:55:05,467 INFO [CopyMechanism] VFS temp dir: C:\Installs\jboss-5.1.0.GA-jdk6\jboss-5.1.0.GA\server\default\tmp

18:55:41,161 ERROR [ProfileDeployAction] Failed to add deployment: krbutil.jar

org.jboss.deployers.spi.DeploymentException: Unable to find class path entry ClassPathEntryImpl{path=webtop/WEB-INF/lib/commons-codec-1.3.jar} from krbutil.jar

at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49)

at org.jboss.deployers.vfs.plugins.structure.VFSStructureBuilder.applyContextInfo(VFSStructureBuilder.java:188)

at org.jboss.deployers.structure.spi.helpers.AbstractStructureBuilder.populateContext(AbstractStructureBuilder.java:82)

at org.jboss.deployers.structure.spi.helpers.AbstractStructuralDeployers.determineStructure(AbstractStructuralDeployers.java:89)

at org.jboss.deployers.plugins.main.MainDeployerImpl.determineStructure(MainDeployerImpl.java:1004)

at org.jboss.deployers.plugins.main.MainDeployerImpl.determineDeploymentContext(MainDeployerImpl.java:440)

at org.jboss.deployers.plugins.main.MainDeployerImpl.addDeployment(MainDeployerImpl.java:390)

at org.jboss.deployers.plugins.main.MainDeployerImpl.addDeployment(MainDeployerImpl.java:300)

at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.addDeployment(MainDeployerAdapter.java:86)

at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:61)

at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53)

at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361)

at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)

at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)

at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)

at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)

at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)

at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)

at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)

at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306)

at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271)

at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461)

at org.jboss.Main.boot(Main.java:221)

at org.jboss.Main$1.run(Main.java:556)

at java.lang.Thread.run(Thread.java:662)

Caused by: java.io.IOException: Child not found webtop/WEB-INF/lib/commons-codec-1.3.jar for JarHandler@26746731[path=webtop/WEB-INF/lib/krbutil.jar context=file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/ real=file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/webtop/WEB-INF/lib/krbutil.jar], available children: [JarEntryHandler@3583147[path=webtop/WEB-INF/lib/krbutil.jar/META-INF context=file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/ real=jar:file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/webtop/WEB-INF/lib/krbutil.jar!/META-INF/], JarEntryHandler@25309840[path=webtop/WEB-INF/lib/krbutil.jar/com context=file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/ real=jar:file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/webtop/WEB-INF/lib/krbutil.jar!/com/]]

at org.jboss.virtual.VirtualFile.findChild(VirtualFile.java:461)

at org.jboss.deployers.vfs.plugins.structure.VFSStructureBuilder.applyContextInfo(VFSStructureBuilder.java:184)

… 23 more

18:55:46,138 ERROR [ProfileDeployAction] Failed to add deployment: bpm_infra.jar

org.jboss.deployers.spi.DeploymentException: Unable to find class path entry ClassPathEntryImpl{path=webtop/WEB-INF/lib/castor-1.1-xml.jar} from bpm_infra.jar

at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49)

at org.jboss.deployers.vfs.plugins.structure.VFSStructureBuilder.applyContextInfo(VFSStructureBuilder.java:188)

at org.jboss.deployers.structure.spi.helpers.AbstractStructureBuilder.populateContext(AbstractStructureBuilder.java:82)

at org.jboss.deployers.structure.spi.helpers.AbstractStructuralDeployers.determineStructure(AbstractStructuralDeployers.java:89)

at org.jboss.deployers.plugins.main.MainDeployerImpl.determineStructure(MainDeployerImpl.java:1004)

at org.jboss.deployers.plugins.main.MainDeployerImpl.determineDeploymentContext(MainDeployerImpl.java:440)

at org.jboss.deployers.plugins.main.MainDeployerImpl.addDeployment(MainDeployerImpl.java:390)

at org.jboss.deployers.plugins.main.MainDeployerImpl.addDeployment(MainDeployerImpl.java:300)

at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.addDeployment(MainDeployerAdapter.java:86)

at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:61)

at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53)

at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361)

at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)

at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)

at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)

at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)

at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)

at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)

at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)

at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306)

at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271)

at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461)

at org.jboss.Main.boot(Main.java:221)

at org.jboss.Main$1.run(Main.java:556)

at java.lang.Thread.run(Thread.java:662)

Caused by: java.io.IOException: Child not found webtop/WEB-INF/lib/castor-1.1-xml.jar for JarHandler@12116679[path=webtop/WEB-INF/lib/bpm_infra.jar context=file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/ real=file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/webtop/WEB-INF/lib/bpm_infra.jar], available children: [JarEntryHandler@3185168[path=webtop/WEB-INF/lib/bpm_infra.jar/META-INF context=file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/ real=jar:file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/webtop/WEB-INF/lib/bpm_infra.jar!/META-INF/], JarEntryHandler@13808752[path=webtop/WEB-INF/lib/bpm_infra.jar/com context=file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/ real=jar:file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/webtop/WEB-INF/lib/bpm_infra.jar!/com/], JarEntryHandler@21247461[path=webtop/WEB-INF/lib/bpm_infra.jar/org context=file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/ real=jar:file:/C:/Installs/jboss-5.1.0.GA-jdk6/jboss-5.1.0.GA/server/default/deploy/webtop/WEB-INF/lib/bpm_infra.jar!/org/]]

at org.jboss.virtual.VirtualFile.findChild(VirtualFile.java:461)

at org.jboss.deployers.vfs.plugins.structure.VFSStructureBuilder.applyContextInfo(VFSStructureBuilder.java:184)

… 23 more

18:55:55,452 ERROR [ProfileDeployAction] Failed to add deployment: jaxb-impl.jar


Unable to update definitions in RHEL 6.10, sep::lux::Cseplux: Failed to run session, error code: 0x80010830, Error 2: No such file or directory.

I need a solution

Unable to update definitions in RHEL 6.10 (kernel 2.6.32-754.2.1). I am installing the latest client, 14.2.1031.0100.

Hence the component malfunction is not fixed.

Once the SEP client is installed, LiveUpdate fails with the error below. We are using a reverse proxy for definition updates, and it works for all other servers.

sep::lux::Cseplux: Failed to run session, error code: 0x80010830

Error 2: No such file or directory.

Live update session failed. Please enable debug logging for more information.


XenServer 7.1 Cumulative Update 2

Who Should Install This Cumulative Update?

XenServer 7.1 Cumulative Update 2 (XS71ECU2) must be installed by customers running XenServer 7.1 LTSR CU1. It includes all previously released XenServer 7.1 CU1 hotfixes. Installation of XS71ECU2 is required for all future functional hotfixes for XenServer 7.1 LTSR.

XenServer 7.1 Cumulative Update 2 and its subsequent hotfixes are available only to customers on the Customer Success Services program.

Citrix will continue to provide security updates to the base XenServer 7.1 CU1 product for a period of three months from the release date of XenServer 7.1 Cumulative Update 2 (until March 12, 2019). After this three-month period elapses, any new hotfixes released will only support XenServer 7.1 with CU2 applied.

For more information about XenServer 7.1 CU2, see the Citrix XenServer 7.1 Cumulative Update 2 Release Notes.

Information About this Cumulative Update

Component Details
Prerequisite XenServer 7.1 CU1; licensed customer with Customer Success Services
Post-update tasks Restart Host
Content live patchable* No
Revision History Published on December 12, 2018
* Available to Enterprise Customers.

Included in this Cumulative Update

This cumulative update includes all previously released XenServer 7.1 CU1 hotfixes.

This cumulative update also includes additional fixes. For a list of the issues resolved by XenServer 7.1 CU2, see the Citrix XenServer 7.1 Cumulative Update 2 Release Notes.

Installing the Cumulative Update

Customers should use either XenCenter or the XenServer Command Line Interface (CLI) to apply this cumulative update. When the installation is complete, see the Post-update tasks in the table Information About this Cumulative Update for information about any post-update tasks you should perform for the update to take effect. As with any software update, back up your data before applying this update. Citrix recommends updating all hosts within a pool sequentially. Upgrading of hosts should be scheduled to minimize the amount of time the pool runs in a "mixed state" where some hosts are upgraded and some are not. Running a mixed pool of updated and non-updated hosts for general operation is not supported.

Note: The attachment to this article is a zip file. It contains the cumulative update package only. Click the following link to download the source code for any modified open source components: XS71ECU2-sources.iso. The source code is not necessary for cumulative update installation: it is provided to fulfill licensing obligations.

Installing the Cumulative Update by using XenCenter

Before installing this cumulative update, it is recommended that you update your version of XenCenter to the latest available version for XenServer 7.1 CU2.

Choose an Installation Mechanism

There are three mechanisms to install a cumulative update:

  1. Automated Updates
  2. Download update from Citrix
  3. Select update or Supplemental pack from disk

The Automated Updates feature is available for XenServer Enterprise Edition customers, or to those who have access to XenServer through their XenApp/XenDesktop entitlement. For information about installing a cumulative update using the Automated Updates feature, see Apply Automated Updates.

For information about installing a cumulative update using the Download update from Citrix option, see Apply an update to a pool.

The following section contains instructions for option (3), installing a cumulative update that you have downloaded to disk:

  1. Download the cumulative update to a known location on a computer that has XenCenter installed.
  2. Unzip the cumulative update zip file and extract the .iso file.
  3. In XenCenter, on the Tools menu, select Install Update. This displays the Install Update wizard.
  4. Read the information displayed on the Before You Start page and click Next to start the wizard.
  5. Click Browse to locate the iso file, select XS71ECU2.iso and then click Open.
  6. Click Next.
  7. Select the pool or hosts you wish to apply the cumulative update to, and then click Next.
  8. The Install Update wizard performs a number of update prechecks, including the space available on the hosts, to ensure that the pool is in a valid configuration state. The wizard also checks whether the hosts need to be rebooted after the update is applied and displays the result.
  9. In addition, the Install Update wizard checks whether a live patch (this is an Enterprise Edition feature) is available for the cumulative update and if the live patch can be successfully applied to the hosts.

    Follow the on-screen recommendations to resolve any update prechecks that have failed. If you want XenCenter to automatically resolve all failed prechecks, click Resolve All. When the prechecks have been resolved, click Next.

  10. Choose the Update Mode. Review the information displayed on the screen and select an appropriate mode. If the update contains a live patch that can be successfully applied to the hosts, it displays No action required on the Tasks to be performed screen.

    Note: If you click Cancel at this stage, the Install Update wizard reverts the changes and removes the update file from the host.

  11. Click Install update to proceed with the installation. The Install Update wizard shows the progress of the update, displaying the major operations that XenCenter performs while updating each host in the pool.
  12. When the update is applied, click Finish to close the wizard.
  13. If you chose to carry out the post-update tasks, do so now.

Installing the Cumulative Update by using the xe Command Line Interface

  1. Download the cumulative update file to a known location.
  2. Extract the .iso file from the zip.
  3. Upload the .iso file to the Pool Master by entering the following commands:

    (Where -s is the Pool Master's IP address or DNS name.)

    xe -s <server> -u <username> -pw <password> update-upload file-name=<filename>XS71ECU2.iso

    XenServer assigns the update file a UUID, which this command prints. Note the UUID.

    1dbfd41c-a53d-41e7-aa64-a8f39e0baac8

  4. Apply the update to all hosts in the pool, specifying the UUID of the update:

    xe update-pool-apply uuid=<UUID_of_file>

    Alternatively, if you want to update and restart hosts in a rolling manner, you can apply the update file to an individual host by running the following:

    xe update-apply host-uuid=<UUID_of_host> uuid=<UUID_of_file>

  5. Verify that the update was applied by using the update-list command.

    xe update-list -s <server> -u root -pw <password> name-label=XS71ECU2

    If the update is successful, the hosts field contains the UUIDs of the hosts to which this cumulative update was successfully applied. This should be a complete list of all hosts in the pool.

  6. If the cumulative update is applied successfully, carry out any specified post-update task on each host, starting with the master. A scripted sketch of the full command sequence follows below.
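For convenience, steps 3 to 5 can be combined into a single script. The following is a minimal sketch, not an official Citrix procedure; <server> and <password> are placeholders, and it applies the update pool-wide rather than host by host:

#!/bin/sh
# Upload the update ISO to the pool master; update-upload prints the UUID it assigns.
UPDATE_UUID=$(xe -s <server> -u root -pw <password> update-upload file-name=XS71ECU2.iso)
echo "Uploaded update with UUID: $UPDATE_UUID"
# Apply the update to every host in the pool.
xe -s <server> -u root -pw <password> update-pool-apply uuid="$UPDATE_UUID"
# Verify: the hosts field should list the UUID of every host in the pool.
xe -s <server> -u root -pw <password> update-list name-label=XS71ECU2 params=hosts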

Files

Hotfix File

Component Details
Hotfix Filename XS71ECU2.iso
Hotfix File sha256 64b9d387cfc79fb97245a2425babaa4f46ccf9edf4730cde18fea7fe954a0f42
Hotfix Source Filename XS71ECU2-sources.iso
Hotfix Source File sha256 e46f7d3fb31b8e4bafcd5a49c06ce7b0d55f486489efd618514452aafb6c4305
Hotfix Zip Filename XS71ECU2.zip
Hotfix Zip File sha256 7cb0130b2ac64e4c5c7ccbe57ea232ff1224888b0a4848d20d8ca0674614ba08
Size of the Zip file 277.16 MB
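Before installing, you can confirm that the downloaded files are intact by comparing their SHA-256 digests against the values in the table above, for example with the standard sha256sum utility:

sha256sum XS71ECU2.zip XS71ECU2.iso

The expected output, using the checksums from the table above, is:

7cb0130b2ac64e4c5c7ccbe57ea232ff1224888b0a4848d20d8ca0674614ba08  XS71ECU2.zip
64b9d387cfc79fb97245a2425babaa4f46ccf9edf4730cde18fea7fe954a0f42  XS71ECU2.iso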

Files Updated

biosdevname-0.3.10-3.x86_64.rpm
blktap-3.5.0-xs.1+1.0_71.1.2.x86_64.rpm
broadcom-bnxt-en-1.5.5.p1-1.x86_64.rpm
busybox-1.22.1-2.x86_64.rpm
ca-certificates-2015.2.6-73.el7.noarch.rpm
citrix-crypto-module-1.0.2n-11.x86_64.rpm
dracut-033-360.el7.centos.xs13.x86_64.rpm
dracut-network-033-360.el7.centos.xs13.x86_64.rpm
ethtool-4.5-3.el7.x86_64.rpm
forkexecd-1.1.2-2.el7.centos.x86_64.rpm
glibc-2.17-106.el7_2.4.x86_64.rpm
glibc-common-2.17-106.el7_2.4.x86_64.rpm
gpumon-0.4.0-3.el7.centos.x86_64.rpm
guest-templates-json-1.1.4-1.noarch.rpm
guest-templates-json-data-linux-1.1.4-1.noarch.rpm
guest-templates-json-data-other-1.1.4-1.noarch.rpm
guest-templates-json-data-windows-1.1.4-1.noarch.rpm
guest-templates-json-data-xenapp-1.1.4-1.noarch.rpm
host-upgrade-plugin-1.1.3.1-1.x86_64.rpm
intel-e1000e-3.4.0.2-1.x86_64.rpm
kernel-4.4.27-600.1.12.x86_64.rpm
kexec-tools-2.0.4-32.x86_64.rpm
kpatch-0.3.2-4.x86_64.rpm
kpatch-4.4.0+2-modules-0.3.2-4.x86_64.rpm
kpatch-modules-0.3.2-4.x86_64.rpm
linux-guest-loader-2.0.2-1.noarch.rpm
linux-firmware-20170622-3.2.noarch.rpm
linux-guest-loader-data-2.0.2-1.noarch.rpm
message-switch-1.4.1-2.el7.centos.x86_64.rpm
microcode_ctl-2.1-26.xs1.x86_64.rpm
ocaml-xenops-tools-1.0.1-5.el7.centos.x86_64.rpm
openssl-1.0.2k-8.el7.x86_64.rpm
openssl-libs-1.0.2k-8.el7.x86_64.rpm
openssl-perl-1.0.2k-8.el7.x86_64.rpm
openvswitch-2.3.2-24.1.3.1.x86_64.rpm
pbis-open-8.2.2-1.7.2.x86_64.rpm
pbis-open-upgrade-8.2.2-4.x86_64.rpm
qlogic-fastlinq-8.10.11.0.p1-1.x86_64.rpm
rrd2csv-1.0.2-3.el7.centos.x86_64.rpm
rrdd-plugins-1.0.4-4.el7.centos.x86_64.rpm
security-tools-1.0.1-1.x86_64.rpm
sm-1.17.0-xs.2+1.0_71.1.6.x86_64.rpm
sm-cli-0.9.8-2.el7.centos.x86_64.rpm
sm-rawhba-1.17.0-xs.2+1.0_71.1.6.x86_64.rpm
squeezed-0.13.2-2.el7.centos.x86_64.rpm
stunnel_xs-4.56-6.el7.centos.xs4.x86_64.rpm
tzdata-2018c-1.el7.noarch.rpm
v6d-citrix-10.0.9-1.el7.centos.x86_64.rpm
vendor-drivers-1.0.0-1.x86_64.rpm
vendor-update-keys-1.3.6-2.noarch.rpm
vgpu-7.1.0.1-1.x86_64.rpm
vhd-tool-0.11.4-2.el7.centos.x86_64.rpm
xapi-storage-0.9.0-3.el7.centos.x86_64.rpm
xapi-core-1.14.45-2.x86_64.rpm
xapi-storage-script-0.13.0-2.el7.centos.x86_64.rpm
xapi-tests-1.14.45-2.x86_64.rpm
xapi-xe-1.14.45-2.x86_64.rpm
xcp-networkd-0.13.6-2.el7.centos.x86_64.rpm
xcp-python-libs-2.0.5-2.noarch.rpm
xcp-rrdd-1.2.0-4.el7.centos.x86_64.rpm
xencenter-7.1.3.6923-1.noarch.rpm
xenopsd-0.17.12-2.el7.centos.x86_64.rpm
xenopsd-xc-0.17.12-2.el7.centos.x86_64.rpm
xenopsd-xenlight-0.17.12-2.el7.centos.x86_64.rpm
xenops-cli-1.0.2-3.el7.centos.x86_64.rpm
xenserver-docs-7.1.0-17.noarch.rpm
xenserver-firstboot-1.0.1.3-1.noarch.rpm
xenserver-pv-tools-7.2.101-1.noarch.rpm
xenserver-release-7.1.2-2.x86_64.rpm
xenserver-release-config-7.1.2-2.x86_64.rpm
xen-dom0-libs-4.7.6-1.26.x86_64.rpm
xen-device-model-0.10.2.xs-1.3.x86_64.rpm
xen-dom0-tools-4.7.6-1.26.x86_64.rpm
xen-hypervisor-4.7.6-1.26.x86_64.rpm
xen-libs-4.7.6-1.26.x86_64.rpm
xen-tools-4.7.6-1.26.x86_64.rpm
xsconsole-10.0.1-1.x86_64.rpm
xs-obsolete-packages-1-1.noarch.rpm

More Information

For more information, see the XenServer 7.1 Documentation.

If you experience any difficulties, contact Citrix Technical Support.


7022520: Troubleshooting boot issues (multipath with lvm).

1 – Insert SLES media that corresponds to the version currently running on the system. Then set the BIOS to boot primarily from the inserted CD/ISO. On the menu, select "Rescue system" (or "more" and then "Rescue system").

2 – By default, the rescue system will activate the LVM volume groups right from boot, which is not optimal when rescuing a system with multipath configured.

In order to deactivate the volume groups, use the following command:

# vgchange -an

Now, load the multipath module and start the multipath service:

# modprobe dm_multipath

For SLES 12:

# systemctl restart multipathd

For SLES 11:

# service multipathd restart

Confirm that the paths are visible with the command:

# multipath -ll

3 – Make sure that the filter in /etc/lvm/lvm.conf is set to reach the devices over multipath by changing the "filter =" line as described below:

filter = [ "a|/dev/disk/by-id/dm-name-.*|", "r/.*/" ]

After that, restart lvmetad with the command:

# systemctl restart lvm2-lvmetad (SLES 12 only)

# pvscan ; vgscan

If set correctly, the "pvscan" command shouldn't output "Found duplicate PV" messages.

4 – Mount the root LV using its /dev/mapper path and bind-mount /proc, /dev, /sys and /run into it:

# mount /dev/mapper/<rootvg>-<rootlv> /mnt

For SLES 12:

# for i in dev proc sys run ; do mount -o bind /$i /mnt/$i ; done

For SLES 11:

# for i in dev proc sys ; do mount -o bind /$i /mnt/$i ; done

Afterwards, change root to /mnt:

# chroot /mnt

5 – Once in the chroot environment, make sure that multipath is enabled again and that the paths are visible:

For SLES 12:

# systemctl enable multipathd

# systemctl start multipathd

For SLES 11:

# chkconfig multipathd on

# service multipathd start

# multipath -ll

6 – Make sure that local and multipathed devices are correctly listed in /etc/fstab.

For multipath devices, use "UUID=" or "/dev/disk/by-id/dm-name-*" instead of "/dev/sd*" or "/dev/disk/by-*"; the latter forms should be used only for local disks.
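For illustration, a hypothetical /etc/fstab mixing both device types might look like the two lines below (the dm name, the UUID and the mount points are placeholders, not values from a real system):

/dev/disk/by-id/dm-name-mpatha-part1 /data ext3 defaults 0 2

UUID=2f6ed30c-1d31-4afd-9e27-01a4c0e84a3a /boot ext3 defaults 1 2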

7 – As we are still in the chroot environment, change /etc/lvm/lvm.conf and reload lvmetad if necessary, as described in step 3. Also make sure that there are no "Found duplicate PV" entries after running the "pvscan" command.

8 – Mount all volumes with the "mount -a" command.
NOTE: In case /usr is a separate filesystem, exit the chroot here, mount it manually to /mnt/usr, and re-enter the chroot:
# mount /dev/mapper/<rootvg>-<usrlv> /mnt/usr ; chroot /mnt ; mount -a

Make sure that all volumes were correctly mounted and are listed by the "mount" or "findmnt" command.

It is also a good idea to verify that the output of "cat /proc/mounts" matches "cat /etc/mtab". If there are any differences between the two, please do as below:

# cat /proc/mounts > /etc/mtab

Note: this is not necessary on SLES 12!
9 – Make sure that the file /etc/dracut.conf.d/10-mp.conf has the line force_drivers+="dm_multipath dm_service_time" in it.

This can be added using the following command:

# echo 'force_drivers+="dm_multipath dm_service_time"' >> /etc/dracut.conf.d/10-mp.conf

10 – Next, make a backup of the old initrd and create a new one using the following commands:

# cd /boot

# mkdir brokeninitrd

# cp initrd-<version> brokeninitrd

For SLES 12:

# dracut --kver <kernelnumber>-default -f -a multipath

For SLES 11:

# mkinitrd -f multipath -k vmlinuz-<kernelnumber>-default -i initrd-<kernelnumber>-default

And also create a new grub config:

# yast bootloader

Once in the YaST interface, under the "Bootloader Options" tab, change the value (by increasing or decreasing it) of "Timeout in Seconds".

This action forces the system to see the changes made and re-create the grub configuration. It is also a good idea to mark "Boot from Master Boot Record" on the "Boot Code Options" tab.

After the changes, leave the YaST interface by selecting "OK".

Alternatively, it is possible to make the changes manually as below:

# grub2-mkconfig -o /boot/grub2/grub.cfg

# grub2-install /dev/mapper/dm-name-...   # choose the multipath device here!

11 – Ensure that the dm-multipath and lvm2 modules are included in the initrd image created previously:

# lsinitrd /boot/initrd-<version> | less
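Instead of paging through the whole listing, you can filter for the relevant entries directly, for example:

# lsinitrd /boot/initrd-<version> | grep -E 'multipath|lvm'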

12 – Verify that the local devices are blacklisted in the /etc/multipath.conf file. If this file doesn't exist, create one and insert a "blacklist" section.

Example below:

blacklist {

wwid "3600508b1001030343841423043300400"

}
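To determine the WWID of a local disk that should be blacklisted, you can query it via udev. A minimal sketch, assuming the disk is /dev/sda (on SLES 11 the tool is located at /lib/udev/scsi_id instead):

# /usr/lib/udev/scsi_id -g -u /dev/sda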

13 – Reboot the system.


XenServer 7.6 Clustering Common Issues and Troubleshooting guide (GFS2 SR / Thin Provisioning)

Problem scenario 1: All hosts can ping each other, but creating a cluster is not possible.

  1. The clustering mechanism uses the following specific ports. Please check whether any firewalls or network configurations between the hosts in the pool are blocking these ports, and ensure that they are open:
  • TCP: 8892, 21064
  • UDP: 5404, 5405 (not multicast)
  2. If you have configured HA in the pool, please disable HA before enabling clustering.

Problem scenario 2: Cannot add a new host to the clustered pool.

  1. Please make sure the new host has the following ports open:
  • TCP: 8892, 21064
  • UDP: 5404, 5405 (not multicast)
  2. Please make sure the new host can ping all hosts in the clustered pool.
  3. Please ensure no host is offline when a new host is trying to join the clustered pool.
  4. Ensure that the host has an IP address allocated on the NIC that will be joining the cluster network of the pool.

Problem scenario 3: A host in the clustered pool is offline and cannot be recovered. How can the host be removed from the cluster?

You can forcefully mark a host as dead with the command below:

xe host-declare-dead uuid=<host uuid>

If you need to mark multiple hosts as dead, you must include all of their <host uuid> values in one single CLI invocation.

The above command removes the host from the cluster permanently and decreases the number of live hosts required for quorum.

Please note, once a host is marked as dead, it cannot be added back into the cluster. To add this host back into the cluster, you must do a fresh installation of XenServer on the host.


Problem scenario 4: Some members of the clustered pool are not joining the cluster automatically.

  1. You can use the following command to resync the members of the clustered pool and fix the issue:

xe cluster-pool-resync cluster-uuid=<cluster_uuid>

  2. You can run xcli diagnostic dbg on the problematic hosts and on other hosts to confirm whether the cluster information is consistent across those hosts.

Items to be checked from the command output:

  • id: node ID
  • addr: IP address used to communicate with other hosts
  • Cluster_token_timeout_ms: cluster token timeout
  • Config_valid: whether the configuration is valid
Command output example:

xcli diagnostic dbg

{
  is_running: true
  is_quorate: true
  num_times_booted: 1
  token: (Value filtered)
  node_id: 1
  all_members: [
    { id: 3 addr: [IPv4 192.168.180.222] }
    { id: 2 addr: [IPv4 192.168.180.221] }
    { id: 1 addr: [IPv4 192.168.180.220] }
  ]
  is_enabled: true
  saved_cluster_config: {
    cluster_token_coefficient_ms: 1000
    cluster_token_timeout_ms: 20000
    config_version: 1
    authkey: (Value filtered)
    ...etc...
  config_valid: true
  3. In case the above actions do not help, you may try re-attaching the GFS2 SR following the steps below (these steps can also be used to recover from a situation where you end up with an invalid cluster configuration):
  a) Detach the GFS2 SR from XenCenter, or via the xe CLI: xe pbd-unplug uuid=<UUID of PBD> on each host.
  b) Disable the clustered pool from XenCenter, or via the xe CLI: xe cluster-pool-destroy cluster-uuid=<cluster_uuid>. Alternatively, forcefully disable the clustered pool with xe cluster-host-force-destroy uuid=<cluster_host> on each host.
  c) Re-enable the clustered pool from XenCenter, or via the xe CLI:
xe cluster-pool-create network-uuid=<network_uuid> [cluster-stack=cluster_stack] [token-timeout=token_timeout] [token-timeout-coefficient=token_timeout_coefficient]
  d) Re-attach the GFS2 SR from XenCenter, or via the xe CLI: xe pbd-plug uuid=<UUID of PBD> on each host.

Problem scenario 5: A host in the clustered pool enters a self-fencing loop.

In this case, you can start the host by adding the "nocluster" boot option. To do this, connect to the host's physical or serial console and edit the boot arguments in grub.

Example:

/boot/grub/grub.cfg

menuentry 'XenServer' {

search --label --set root root-oyftuj

multiboot2 /boot/xen.gz dom0_mem=4096M,max:4096M watchdog ucode=scan dom0_max_vcpus=1-16 crashkernel=192M,below=4G console=vga vga=mode-0x0311

module2 /boot/vmlinuz-4.4-xen root=LABEL=root-oyftuj ro nolvm hpet=disable xencons=hvc console=hvc0 console=tty0 quiet vga=785 splash plymouth.ignore-serial-consoles nocluster

module2 /boot/initrd-4.4-xen.img

}

menuentry 'XenServer (Serial)' {

search --label --set root root-oyftuj

multiboot2 /boot/xen.gz com1=115200,8n1 console=com1,vga dom0_mem=4096M,max:4096M watchdog ucode=scan dom0_max_vcpus=1-16 crashkernel=192M,below=4G

module2 /boot/vmlinuz-4.4-xen root=LABEL=root-oyftuj ro nolvm hpet=disable console=tty0 xencons=hvc console=hvc0 nocluster

module2 /boot/initrd-4.4-xen.img

}

Problem scenario 6: The pool master gets restarted in a clustered pool.

For a cluster to have quorum, at least 50% of the hosts (rounding up) must be able to reach each other. For example, in a 5-host pool, 3 hosts are required for quorum; in a 4-host pool, 2 hosts are required.

In a situation where the pool is evenly split (for example, 2 groups of 2 hosts that can reach each other), the segment with the lowest node ID stays up while the other half fences. You can find the node IDs of hosts by using the command xcli diagnostic dbg. Note that the pool master may not have the lowest node ID.


Problem scenario 7: After a host is shut down forcibly in the clustered pool, the whole pool has vanished.

If a host is shut down non-forcefully, it is temporarily removed from quorum calculations until it is turned back on. However, if you force-shutdown a host or it loses power, it still counts towards quorum calculations. For example, if you had a pool of 3 hosts and forcefully shut down 2 of them, the remaining host would fence as it would no longer have quorum.


Problem scenario 8: All of the hosts within the clustered pool get restarted at the same time.

If the number of contactable hosts in the pool is less than:

  • n/2, for an even total number of hosts n
  • (n+1)/2, for an odd total number of hosts n

all hosts are considered as not having quorum, hence all hosts self-fence, and you would see all hosts restarted. For example, in a pool of 6 hosts at least 3 must be contactable, and in a pool of 7 hosts at least 4.
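Both cases reduce to the same integer arithmetic, as the small shell sketch below illustrates (n is a placeholder for the total number of hosts in the pool):

n=7
echo $(( (n + 1) / 2 ))   # minimum number of contactable hosts for quorum; prints 4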

You may check the following to get more information.

  • /var/opt/xapi-clusterd/boot-times, to see if any boot occurred at an unexpected time.
  • Crit.log, to check whether any self-fencing message was output.
  • The XenCenter notifications around the timestamp of the issue, to see if self-fencing occurred.
  • The dlm_tool status command output; check the fence x at x x entries.
Example of dlm_tool status output in a working case:

dlm_tool status

cluster nodeid 1 quorate 1 ring seq 4 4

daemon now 436 fence_pid 0

node 1 M add 23 rem 0 fail 0 fence 0 at 0 0

When collecting logs for debugging, please collect diagnostic information from all hosts in the cluster. Where a single host has self-fenced, the other hosts in the cluster are more likely to have useful information.

If the host is connected to XenCenter, from the menu select Tools > Server Status Report. Choose all hosts to collect diagnostics from and click Next. Choose to collect all available diagnostics and click Next. After the diagnostics have been collected, you can save them to your local system.

Or you can connect to the host console and use the xen-bugtool command to collect diagnostics.

Problem scenario 9: Error when changing the cluster settings

You might receive the following error message about an invalid token ("[["InternalError","Invalid token"]]") when updating the configuration of your cluster.

Resolve this issue by completing the following steps:

  1. (Optional) Back up the current cluster configuration by collecting a Server Status Report (SSR) with the xapi-clusterd and system log boxes ticked
  2. Use XenCenter to detach the GFS2 SR from the clustered pool
  3. From the CLI of any host in the cluster, force destroy the cluster: xe cluster-pool-force-destroy cluster-uuid=<uuid>
  4. Use XenCenter to re-enable clustering on your pool
  5. Use XenCenter to reattach the GFS2 SR to the pool


7018732: Data distribution not equal across OSDs

Note that by default the "reweight-by-utilization" command uses the following defaults:

oload 120

max_change 0.05

max_change_osds 5

When running the command it is possible to change the default values, for example:

# ceph osd reweight-by-utilization 110 0.05 8

The above will target OSDs whose utilization is more than 110% of the cluster average, allow a max_change of 0.05, and adjust a total of eight (8) OSDs in the run. To first verify the changes that would occur, without actually applying anything, use:

# ceph osd test-reweight-by-utilization 110 0.05 8

OSD utilization can be affected by various factors, for example:

– Cluster health

– Number of configured pools

– Configured placement groups (PGs) per pool

– CRUSH map configuration and configured rule sets

Before making any changes to a production system, it should be verified that any output, in this case OSD utilization, is understood, and that the cluster is at least reported as being in a healthy state. This can be checked using, for example, "ceph health" and "ceph -s".

Below is some example output for the above commands showing a healthy cluster:

:~ # ceph health detail

HEALTH_OK

:~ # ceph -s

cluster 70e9c50b-e375-37c7-a35b-1af02442b751

health HEALTH_OK

monmap e1: 3 mons at {ses-srv-1=192.168.178.71:6789/0,ses-srv-2=192.168.178.72:6789/0,ses-srv-3=192.168.178.73:6789/0}

election epoch 50, quorum 0,1,2 ses-srv-1,ses-srv-2,ses-srv-3

fsmap e44: 1/1/1 up {0=ses-srv-4=up:active}

osdmap e109: 9 osds: 9 up, 9 in

flags sortbitwise,require_jewel_osds

pgmap v81521: 732 pgs, 11 pools, 1186 MB data, 515 objects

3936 MB used, 4085 GB / 4088 GB avail

732 active+clean

The following article may also be helpful / of interest: Predicting which Ceph OSD will fill up first


ViPR SRM: missing LVM Physical Volumes in reports for Linux hosts

Article Number: 525094 Article Version: 3 Article Type: Break Fix



ViPR SRM, ViPR SRM 3.7, ViPR SRM 4.0, ViPR SRM 4.1, ViPR SRM 4.2

The RSH collector doesn't correctly collect LVM information on some Linux hosts. As a result, there are inconsistencies in LVM reports regarding LV and PV numbers and names.

The "/sbin/lvs -o lv_name,vg_name,devices" command displays multiple PVs for some LVs if the LVM is configured that way, e.g.:

LV VG Devices

root rpool /dev/sda2(0)

root rpool /dev/sda2(1920)

swap rpool /dev/sda2(768)



data_LV host_VG /dev/mapper/VPLEX.09fc(0),/dev/mapper/VPLEX.09fd(0)

Notice that LV data_LV has two devices/PVs listed. Currently, the LunMappingDetection.pl script used for host discovery cannot handle multiple PVs per LV.
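For illustration only, the shell sketch below shows one way such output can be handled by splitting the devices column on commas; this is a hypothetical example, not the actual LunMappingDetection.pl logic:

# Emit one LV/VG/PV triple per physical volume, even when an LV spans several PVs.
/sbin/lvs --noheadings -o lv_name,vg_name,devices --separator ';' | sed 's/^ *//' | \
while IFS=';' read -r lv vg devices; do
    for dev in $(echo "$devices" | tr ',' ' '); do
        echo "LV=$lv VG=$vg PV=${dev%%(*}"   # strip the trailing (extent) suffix
    done
done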

This problem is fixed in SRM v4.3.

Jira SRS-38372
