Re: Best Practice for VTL Migration Networker

“By the “replicate”, I said that the data of the previous VTL were copied to the new DXi.”

Here, I am assuming that this is an *exact* replica or duplicate. Not a volume that was cloned by NetWorker.

If the volumes (i.e. virtual tapes) were replicated and “loaded” in the new VTL, then the volumes would be identical to the original copy, and NetWorker should therefore be able to load the tape, read the tape label, and use the tape. I equate this to removing a physical tape from one physical autochanger and loading it into another autochanger.

“For the “Label Design”, I wanted to know if NetWorker re-write labels.”

If the virtual tape volumes are identical to the original, then NetWorker would not see any difference and would not know that this is a replicated volume. So NetWorker should be able to load and mount this replicated volume.

Note: The NetWorker volume label is different from the volume bar code.

The bar code is an optical, machine-readable representation of data. This code is usually printed on a label and placed on the outside of the tape cartridge. The autochanger’s bar code reader is used to read this bar code label. The bar code can be changed by replacing the bar code label. For a virtual tape… you’ll need to look at the VTL manual.



The NetWorker volume label is information that is written in the first 64 KB of the tape itself. This information is used by NetWorker to identify what this tape is, and it contains information such as: volume name, volume id, and pool. This information is read when the volume is mounted and then verified with the NetWorker media database. If the label matches what is in the media database, then the mount is complete, and it is ready for use. This label can only be changed by relabeling the volume.
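If you want to double-check a replicated volume against the media database before relying on it, a couple of read-only NetWorker commands can help (a minimal sketch; substitute your own volume name):

mminfo -avq "volume=<volume_name>"   # show what the media database records for that volume (pool, volume id, etc.)
nsrjb -C                             # list the contents of the (virtual) jukebox, including bar codes and loaded volumes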


Organizational Groups missing from Cubes


Does anyone know which cubes require processing to show the Organizational Groups I’ve created within the Computers Cube? After my upgrade from 7.6, when I use either Organizational Group – Full Path or Organizational Group – Name, the only entries are the ones scraped from AD. The other view I have, which contains the additional groups I want to filter on, is no longer there post-upgrade. What do I need to do to get these additional Groups showing up in my cubes again for filtering?

My former saved views execute, since the Computers cube has been processed; however, the results are blank with the message “The current configuration has no query results”.



App Layering: How to troubleshoot the layer repository disk

The Enterprise Layer Manager (ELM) is a CentOS Linux system. Initially, it contains a 30GB boot disk and a 300GB Layer Repository disk. Both use XFS filesystems.

The boot disk is a standard MBR-partitioned disk with 3 partitions: /dev/sda1 is the boot/initrd partition, /dev/sda2 is the 8GB swap space partition, and /dev/sda3 is the root filesystem. Standard Linux disk troubleshooting techniques will work fine.

/dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,noquota)

/dev/sda3 on / type xfs (rw,relatime,attr2,inode64,noquota)

The Layer Repository disk is more complicated. It is a logical volume in a Linux LVM volume group. We put it in LVM to allow us to expand it more conveniently. You can expand the repository disk, or add additional blank disks to the ELM, and expand the LVM and the filesystem on it live. The actual filesystem is mounted here:

/dev/mapper/unidesk_vg-xfs_lv on /mnt/repository type xfs (rw,relatime,attr2,inode64,noquota)
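For example, if you attach a new blank disk to the ELM and it shows up as /dev/sdc (the device name here is only an assumption for illustration), a minimal sketch of expanding the repository live would look like this:

[root@localhost ~]# pvcreate /dev/sdc                              # initialize the new disk as an LVM physical volume
[root@localhost ~]# vgextend unidesk_vg /dev/sdc                   # add it to the existing volume group
[root@localhost ~]# lvextend -l +100%FREE /dev/unidesk_vg/xfs_lv   # grow the logical volume over the new free space
[root@localhost ~]# xfs_growfs /mnt/repository                     # grow the XFS filesystem while it is mounted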

To get a list of all disks currently involved in the LVM, run the command “pvdisplay”. PV in this context stands for “physical volume”. It will report data like this for each physical volume currently used in the LVM. Note that /dev/sda, the boot disk, is not listed because it is not an LVM disk.

[root@localhost ~]# pvdisplay

--- Physical volume ---

PV Name /dev/sdb

VG Name unidesk_vg

PV Size 300.00 GiB / not usable 3.00 MiB

Allocatable yes (but full)

PE Size 4.00 MiB

Total PE 76799

Free PE 0

Allocated PE 76799

PV UUID 1LVMFX-od7x-eegR-RWrc-QoCa-7ZYh-UcaKk3

Each physical volume is assigned to a single Volume Group. You can get a list of volume groups with the command “vgdisplay”, like this:

[root@localhost ~]# vgdisplay

--- Volume group ---

VG Name unidesk_vg

System ID

Format lvm2

Metadata Areas 1

Metadata Sequence No 7

VG Access read/write

VG Status resizable

MAX LV 0

Cur LV 1

Open LV 1

Max PV 0

Cur PV 1

Act PV 1

VG Size 300.00 GiB

PE Size 4.00 MiB

Total PE 76799

Alloc PE / Size 76799 / 300.00 GiB

Free PE / Size 0 / 0

VG UUID J5ALWe-AVK0-izRN-RvlJ-we7s-1vwf-trPECf

Each Volume Group can be divided into separate Logical Volumes. A logical volume is the equivalent of an attached virtual disk. You can partition it or just put a filesystem on the raw disk. You can see a list of logical volumes defined across all VGs with the “lvdisplay” command.

[root@localhost ~]# lvdisplay

--- Logical volume ---

LV Path /dev/unidesk_vg/xfs_lv

LV Name xfs_lv

VG Name unidesk_vg

LV UUID hLn1sC-nrXj-Cdxa-opQp-IhNF-yu4O-DFfK34

LV Write Access read/write

LV Creation host, time localhost.localdomain, 2016-03-16 13:56:24 -0400

LV Status available

# open 1

LV Size 300.00 GiB

Current LE 76799

Segments 1

Allocation inherit

Read ahead sectors auto

- currently set to 256

Block device 253:0

Normally in App Layering, there is exactly one VG, called “unidesk_vg”, and exactly one LV, called “xfs_lv”, which is available at /dev/unidesk_vg/xfs_lv. This is in fact just a symlink to /dev/dm-0 (device-mapper drive 0), so any reference you see to /dev/dm-0 is actually referring to your LVM. There is a single XFS filesystem written to the raw disk of the Logical Volume, and it is mounted at /mnt/repository.
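On a healthy system, a couple of quick read-only checks can confirm this mapping:

[root@localhost ~]# ls -l /dev/unidesk_vg/xfs_lv    # should show a symlink pointing at ../dm-0
[root@localhost ~]# df -h /mnt/repository           # should show the filesystem backed by /dev/mapper/unidesk_vg-xfs_lv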

Troubleshooting guidance for LVM is well beyond the scope of this article; however, all normal LVM functions are completely supported in our Linux image. All normal LVM limitations and problems can apply here as well. If /mnt/repository does not mount correctly, consider investigating it as an LVM problem and search the web for troubleshooting guides.

The Layer Repository itself is an XFS filesystem, and the standard XFS tools are installed. Most likely, the only one you will ever need is xfs_repair. XFS is a journalled filesystem with remarkable resilience, but when you do have an error, you will need to run “xfs_repair /dev/dm-0”. This is complicated by the fact that the Unidesk services will always maintain an open file-handle within that mounted filesystem, and xfs_repair cannot work on a mounted filesystem.

[root@localhost ~]# xfs_repair /dev/dm-0

xfs_repair: /dev/dm-0 contains a mounted filesystem

xfs_repair: /dev/dm-0 contains a mounted and writable filesystem

What you might have to do is switch to single-user mode by running “init S” while logged in as root, so that the App Layering services are not running, then unmount the volume and run the repair on it. Thus:

[root@localhost ~]# init S



Enter the root password for access



[root@localhost ~]# umount /mnt/repository

[root@localhost ~]# xfs_repair /dev/dm-0



[root@localhost ~]# reboot
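After the reboot, it is worth confirming that the repository came back cleanly (a minimal check):

[root@localhost ~]# mount | grep repository    # should again show /dev/mapper/unidesk_vg-xfs_lv on /mnt/repository
[root@localhost ~]# df -h /mnt/repository      # confirm the expected size and free space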


Error: The Citrix Desktop Service was refused a connection to the delivery controller ” (IP Address ‘xxx.xxx.xxx.xxx’)

Try to determine which files are taking up disk space on the Identity disk.

To access the junction linked to the Identity Disk volume at C:\Program Files\Citrix\PvsVm\Service\PersistedData, you will need to run the command prompt under the context of the Local System account via the PsExec tool.

The PsExec tool is available for download at this location

http://docs.microsoft.com/en-us/sysinternals/downloads/psexec

Follow these steps to access the Identity disk volume on the VDA:

1. Open an elevated command prompt (Run as administrator).

2. Execute the following command under the context of the Local System account via PsExec:

PSEXEC -i -s cmd.exe

This is to gain access to the junction linked to the Identity Disk volume.

3. Navigate to the root of the junction “PersistedData”, and execute the following command:

DIR /O:S /S > C:\{location}\Out.txt

4. Open Out.txt using Notepad or another text editor.

5. Check which files are taking up the most disk space.

6. Move the unwanted files to an alternate location, or delete them.

Note: You may see .gpf files, which should not be deleted. BrokerAgent.exe writes changed farm policies to %ProgramData%\Citrix\PvsAgent\LocallyPersistedData\BrokerAgentInfo\<GUID>.gpf. BrokerAgent.exe then triggers a policy evaluation via CitrixCseClient.dll.
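Put together, a session might look like the following sketch (C:\Tools and C:\Temp are assumed locations; substitute your own paths):

C:\Tools\PsExec.exe -i -s cmd.exe
cd "C:\Program Files\Citrix\PvsVm\Service\PersistedData"
DIR /O:S /S > C:\Temp\Out.txt
notepad C:\Temp\Out.txt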



7022590: Cinder volume already mounted but not visible.

This document (7022590) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Openstack Cloud 7

SUSE Linux Enterprise Server SP3

Situation

In a SUSE OpenStack Cloud 7 environment with the pacemaker barclamp, and SUSE Linux Enterprise Server SP3 with an NFS server, a cinder volume is reported as mounted even though the mount command does not show the cinder volume as currently mounted.

The following function reports the cinder volume as already mounted, while a mount command does not show it:

/usr/lib/python2.7/site-packages/os_brick/remotefs/remotefs.py

In /var/log/cinder/cinder-volume.log the following is observed:

2018-01-15 13:21:37.967 23904 INFO os_brick.remotefs.remotefs [req-89e01f56-f90b-47cf-aee8-f9638ce5ef83 – – – – -] Already mounted: /var/lib/cinder/mnt/

Note:

- The cinder-volume role is deployed on the cluster, using an NFS share as the back end.

- A mount command does not show the cinder resource mounted:

root@d0c-c4-7a-d2-88-ea:~ # mount | grep nfs

10.0.0.5:/srv/nfs/glance on /glance type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.0.5,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.0.0.5)

root@d0c-c4-7a-d2-88-ea:~ #

- An attempt to create a cinder volume will fail:

root@d0c-c4-7a-d2-88-ea:~ # cinder create --name testvolume 1

root@d0c-c4-7a-d2-88-ea:~ # cinder show testvolume

+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2018-01-19T14:18:39.370185           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 3bfc347f-35d3-4a1a-ba20-afdcc6321b96 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | testvolume                           |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 30d0a619bb76450fb398435af81fbab6     |
| replication_status             | disabled                             |
| size                           | 1                                    |
| status                         | error                                |
| updated_at                     | 2018-01-19T14:18:40.141022           |
| user_id                        | e1c8eee635864e2f9e87100f74e4bc60     |
| volume_type                    | None                                 |
+--------------------------------+--------------------------------------+

root@d0c-c4-7a-d2-88-ea:~ #

Resolution

The permissions were not properly set on the NFS server, and a ‘chown’ command using the cinder user’s uid and gid will correct the permission problem.

To verify the current uid and gid:

ssh controller01

Last login: Thu Jan 25 01:55:52 2018 from 192.168.124.10

root@d52-54-00-63-a1-01:~ # id cinder

uid=193(cinder) gid=480(cinder) groups=480(cinder)

root@d52-54-00-63-a1-01:~ #

Run the following ‘chown’ command on the NFS server:

nfsserv:~ # chown 193:480 /export/cinder

And finally, restart the resource:

systemctl restart openstack-cinder-volume.service
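To confirm the change took effect, you can re-check the export’s ownership on the NFS server and then retry the volume creation (a minimal check; ‘testvolume2’ is just an example name):

nfsserv:~ # ls -ldn /export/cinder                       # numeric uid/gid should now show 193 480
root@d52-54-00-63-a1-01:~ # cinder create --name testvolume2 1
root@d52-54-00-63-a1-01:~ # cinder show testvolume2      # status should no longer be 'error'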

Cause

The NFS mount is not visible here because the cinder-volume service runs in its own mount (MNT) namespace. From the cinder-volume namespace’s perspective, the NFS mount is indeed visible, but it is not visible to the ‘root’ user, since ‘root’ runs in a different namespace.

The ‘nsenter’ command can show this:

nsenter -t `pgrep cinder-volume | head -1` -m df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda3 38G 2.6G 33G 8% /

devtmpfs 2.7G 0 2.7G 0% /dev

tmpfs 2.7G 54M 2.7G 2% /dev/shm

tmpfs 2.7G 0 2.7G 0% /sys/fs/cgroup

tmpfs 2.7G 18M 2.7G 1% /run

tmpfs 547M 0 547M 0% /run/user/196

/dev/sdb 1014M 36M 979M 4% /var/lib/rabbitmq

192.168.124.62:/export/cinder 15G 3.1G 12G 22% /var/lib/cinder/mnt/cfecea6f4744f85a0a802c9d4ed2b8d1

tmpfs 547M 0 547M 0% /run/user/0
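If you need to look around inside that namespace interactively, the same ‘nsenter’ technique can start a shell there (a sketch; type ‘exit’ to leave the namespace shell when done):

nsenter -t `pgrep cinder-volume | head -1` -m /bin/bash
mount | grep cinder      # run inside the namespace shell; the NFS-backed cinder mount should now be listed
exit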

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.


Re: The configuration of this client does not support browsing on Red Hat clients

That’s what I thought.

So when you choose the folders in Backup and Restore, you get an error, correct?

I am the same way: NONE of my RH servers qualify for FLR (see below for the limitations).

So I have to back up my Linux VMs as a VM once a week (so I can rebuild if needed),

and daily I do backups as a physical with the client installed on the server.

File-level restore limitations

The following limitations apply to file-level restore as described in “Restoring specific folders or files” on page 86:

The following virtual disk configurations are not supported:

• Unformatted disks
• Dynamic disks
• GUID Partition Table (GPT) disks
• Ext4 filesystems
• FAT16 filesystems
• FAT32 filesystems
• Extended partitions (that is, any virtual disk with more than one partition, or when two or more virtual disks are mapped to a single partition)
• Encrypted partitions
• Compressed partitions

Symbolic links cannot be restored or browsed.

You cannot restore more than 5,000 folders or files in the same restore operation.

The following limitations apply to logical volumes managed by Logical Volume Manager (LVM):

• One Physical Volume (.vmdk) must be mapped to exactly one logical volume
• Only ext2 and ext3 formatting is supported
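If you want to check quickly whether a given Linux VM falls under these limitations, a few standard guest-side commands are enough (a minimal sketch, run inside the VM):

df -T                 # shows the filesystem type of each mounted filesystem (ext3 vs ext4, FAT, etc.)
lsblk -f              # shows partitions, filesystem types and the LVM layout
lvs -o +devices       # if LVM is in use, lists logical volumes with their underlying devices, to check the one-PV-per-LV mapping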


7020233: How To Manually Test dbcopy

This document (7020233) is provided subject to the disclaimer at the end of this document.

Environment

Micro Focus Disaster Recovery (Reload) 2 - 5

SUSE Linux Enterprise Server 9 – 12

Situation

You are seeing dbcopy errors in the log and would like to test whether dbcopy is working. This will determine whether Reload or dbcopy is the problem.

Resolution

To test dbcopy when a dbcopy error appears in the Reload log, run through the following steps:

1. Create a temporary directory in the /mnt directory. In this example we will use ‘temp’ (/mnt/temp). This is where you will map (mount) the remote file system locally.

  • To create a directory in terminal, type:
mkdir /mnt/temp 



2. Now a connection to the remote file system will need to be made. This is done using the ‘mount’ command.

  • This example is for an NFS exported directory (commonly used in Reload for live GroupWise systems on the Linux platform). Make sure that the live PO is exported prior to attempting this mount; this will already be done if a Reload profile was configured using the Linux to Linux profile wizard:

mount -v -t nfs <hostname or server IP>:/<exportdir> /mnt/temp 

  • This example is for an NSS type filesystem (used with NetWare-based GroupWise systems):

ncpmount -A 192.168.1.111 -S NW65_SERVER1 -U admin.context -V gw7vol /mnt/temp

* Note, gw7vol is the name of the NetWare volume that your live GroupWise Post Office is on.

* Notice the space between the first and second directories. In Linux mount commands, after the flags, you need to specify the remote source, and the local mount path to map to.

* In keeping with the NetWare example, the contents in the remote location of 192.168.1.111 on volume GW7VOL will appear locally on the Linux box at /mnt/temp.

3. Next, a second temporary directory needs to be created. This is where we will instruct dbcopy to copy your GroupWise Post Office to. We will call it ‘potemp’ and create it in the /mnt directory as well (/mnt/potemp).

4. Finally, we will call the dbcopy process to begin the PO migration.

  • Browse to the /opt/novell/groupwise/agents/bin directory on your Reload box and type ‘ls’ to verify that the dbcopy file is in there. It should have been placed there automatically when Reload was installed.

  • Now type the following at the command prompt:

./dbcopy -o -k -m /mnt/temp /mnt/potemp 

This should begin a migration from the remote PO to the local machine.

* Note: it may be necessary to kill the process ID in order to stop the migration. Once dbcopy starts, it doesn’t like to stop. Type “ps aux | grep dbcopy” at the command line to find the process ID, then type “kill -9 [id number]” (the ID is usually 4 or 5 digits near the beginning of the line).
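Putting the whole NFS test together, a session might look like this sketch (the IP address and export path are examples only; substitute your own):

mkdir /mnt/temp
mount -v -t nfs 192.168.1.111:/groupwise/po1 /mnt/temp
mkdir /mnt/potemp
cd /opt/novell/groupwise/agents/bin
./dbcopy -o -k -m /mnt/temp /mnt/potemp

# if the migration needs to be stopped:
ps aux | grep dbcopy
kill -9 <process id>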

Additional Information

This article was originally published in the GWAVA knowledgebase as article ID 215.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.


Re: vplex mapping file for netapp

Hello Najman,

In case Ankur or someone else has not already given you the answer, here it is:

To create a mapping file for the VPLEX Claiming Wizard:

Type the following command to change to the storage-volumes context:

cd /clusters/cluster-ID/storage-elements/storage-volumes/

From the storage-volumes context, type the ll command to list all storage volumes.

Cut and paste the output on the screen and save it to a file.

Create a .txt file and type the heading “Generic storage-volumes” at the beginning of the file, as shown in the following example:

Generic storage-volumes

VPD83T3:60060e801004f2b0052fabdb00000006 ARRAY_NAME_1

VPD83T3:60060e801004f2b0052fabdb00000007 ARRAY_NAME_2

VPD83T3:60060e801004f2b0052fabdb00000008 ARRAY_NAME_3

VPD83T3:60060e801004f2b0052fabdb00000009 ARRAY_NAME_4

Use this file as a name mapping file for the VPLEX Claiming Wizard by using either the VPlexcli or the GUI. If using VPlexcli, the name mapping file should reside on the SMS. If using the VPLEX GUI, the name mapping file should be on the same system as the GUI.

Hope it is going to help you

Best Regards


7019470: How To Manually mount a Netware Server from Reload

The default protocol for mounting a NetWare server is NCPFS. Go through the following steps to create a mount:

1) Create a temporary directory under the /mnt directory. In this example we will use ‘temp’ (/mnt/temp). This is where a mount will be created for the remote file system.

– mkdir /mnt/temp

2) Create a mount to the remote file system. This is done using the “ncpmount” command.

– ncpmount -A <ip of remote server> -S <netware server name> -U <typeless distinguished name> -V <netware volume name and path> /<path to local directory>

example: ncpmount -A 192.168.1.111 -S gwava1 -U admin.gwava -V vol/grpwise/po1 /mnt/temp

3) Change directory to the newly mounted directory; this should look like the root of the post office directory.

example: cd /mnt/temp
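When you are finished, the mount can be removed again (a minimal example; ncpumount is provided by the same ncpfs package as ncpmount):

example: ncpumount /mnt/temp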
