Event ID 1023 Error: The Citrix Desktop Service was refused a connection to the delivery controller ‘’ (IP Address ‘xxx.xxx.xxx.xxx’). VDAs in a Delivery Group lose registration with the Delivery Controllers

First, try to determine which files are taking up space on the Identity disk.

To access the junction linked to the Identity Disk volume at C:\Program Files\Citrix\PvsVm\Service\PersistedData, you will need to run a command prompt under the context of the Local System account using the PsExec tool.

The PsExec tool is available for download at this location:

http://docs.microsoft.com/en-us/sysinternals/downloads/psexec

Follow these steps to access the Identity disk volume on the VDA:

1. Open an elevated command prompt (Run as administrator).

2. Execute the following command to launch a command prompt under the context of the Local System account via PsExec:

PSEXEC -i -s cmd.exe

This is required to access the junction linked to the Identity Disk volume.

3. Navigate to the root of the junction “PersistedData”, and execute the following command:

DIR /O:S /S > C:\{location}\Out.txt

4. Open Out.txt using Notepad or another text editor.

5. Check the files taking up the disk space.

6. Move the unwanted files to an alternate location or delete them

Note: You may see .gpf files, which should not be deleted. BrokerAgent.exe writes changed farm policies to %ProgramData%\Citrix\PvsAgent\LocallyPersistedData\BrokerAgentInfo\<GUID>.gpf. BrokerAgent.exe then triggers a policy evaluation via CitrixCseClient.dll.
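As a minimal sketch of the whole sequence (assuming the default junction path above and that a writable C:\Temp folder exists for the output file), the commands from an elevated command prompt are:

PSEXEC -i -s cmd.exe

REM In the new command prompt running as SYSTEM:
cd /d "C:\Program Files\Citrix\PvsVm\Service\PersistedData"
DIR /O:S /S > C:\Temp\Out.txt
notepad C:\Temp\Out.txt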

Re: The configuration of this client does not support browsing on Red Hat clients

That's what I thought.

So when you choose the folders in Backup and Restore, you get an error, correct?

I am the same way; NONE of my RH servers qualify for FLR (see the limitations below).

So I have to back up my Linux VMs as a VM once a week (so I can rebuild if needed),

and daily I do backups as a physical machine, with the client installed on the server.

File-level restore limitations

The following limitations apply to file-level restore as described in “Restoring specific folders or files” on page 86:

The following virtual disk configurations are not supported:

  • Unformatted disks
  • Dynamic disks
  • GUID Partition Table (GPT) disks
  • Ext4 filesystems
  • FAT16 filesystems
  • FAT32 filesystems
  • Extended partitions (that is, any virtual disk with more than one partition, or when two or more virtual disks are mapped to a single partition)
  • Encrypted partitions
  • Compressed partitions

Symbolic links cannot be restored or browsed.

You cannot restore more than 5,000 folders or files in the same restore operation.

The following limitations apply to logical volumes managed by Logical Volume Manager (LVM):

  • One Physical Volume (.vmdk) must be mapped to exactly one logical volume
  • Only ext2 and ext3 formatting is supported

7020233: How To Manually Test dbcopy

This document (7020233) is provided subject to the disclaimer at the end of this document.

Environment

Micro Focus Disaster Recovery (Reload) 2 – 5

SUSE Linux Enterprise Server 9 – 12

Situation

You are seeing dbcopy errors in the log and would like to test whether dbcopy is working. This will determine whether Reload or dbcopy itself is the problem.

Resolution

To test dbcopy when a dbcopy error appears in the Reload log, follow these steps:

1. Create a temporary directory in the /mnt directory. In this example we will use ‘temp’ (/mnt/temp). This is where you will map (mount) the remote file system locally.

  • To create a directory in terminal, type:
mkdir /mnt/temp 



2. Now a connection to the remote file system will need to be made. This is done using the ‘mount’ command.

  • This example is for an NFS-exported directory (commonly used in Reload for live GroupWise systems on the Linux platform). Make sure that the live PO is exported prior to attempting this mount. This will already have been done if a Reload profile was configured using the Linux-to-Linux profile wizard:

mount -v -t nfs <hostname or server IP>:/<exportdir> /mnt/temp 

  • This example is for an NSS-type filesystem (used with NetWare-based GroupWise systems):

ncpmount -A 192.168.1.111 -S NW65_SERVER1 -U admin.context -V gw7vol /mnt/temp

* Note, gw7vol is the name of the NetWare volume that your live GroupWise Post Office is on.

* Notice the space between the first and second directories. In Linux mount commands, after the flags, you need to specify the remote source, and the local mount path to map to.

* In keeping with the NetWare example, the contents of the remote location 192.168.1.111, on volume GW7VOL, will appear locally on the Linux box at /mnt/temp.

3. Next, a second temporary directory needs to be created. This is where we will instruct dbcopy to copy your GroupWise Post Office to. We will call it ‘potemp’ and create it in the /mnt directory as well (/mnt/potemp).

4. Finally, call the dbcopy process to begin the PO migration.

  • Browse to the /opt/novell/groupwise/agents/bin directory on your Reload box and type ‘ls’ to verify that the dbcopy file is in there. It should have been placed there automatically when Reload was installed.

  • Now type the following at the command prompt:

./dbcopy -o -k -m /mnt/temp /mnt/potemp 

This should begin a migration from the remote PO to the local machine.

*Note, it may be necessary to kill the process ID in order to stop the migration. Once dbcopy starts, it doesn’t like to stop. Type “ps aux | grep dbcopy” at the command line to find the process ID, and type “kill -9 [id number]” (the PID is usually a 4- or 5-digit number near the start of the line).
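As a minimal end-to-end sketch of the test (using the NFS example above; the server IP, export path, and mount points are placeholders for your own values):

mkdir /mnt/temp /mnt/potemp
mount -v -t nfs 192.168.1.111:/grpwise/po1 /mnt/temp
cd /opt/novell/groupwise/agents/bin
./dbcopy -o -k -m /mnt/temp /mnt/potemp

# If you need to stop the migration, find and kill the dbcopy process:
ps aux | grep dbcopy
kill -9 <process id>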

Additional Information

This article was originally published in the GWAVA knowledgebase as article ID 215.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Re: vplex mapping file for netapp

Hello Najman,

In case Ankur or someone else hasn't already given you the answer, here it is:

To create a mapping file for the VPLEX Claiming Wizard:

Type the following command to change to the storage-volumes context:

cd /clusters/cluster-ID/storage-elements/storage-volumes/

From the storage-volumes context, type the ll command to list all storage volumes.

Cut and paste the output on the screen and save it to a file.

Create a txt file and type the heading Generic storage-volumes at the beginning of the file as shown in the following example:

Generic storage-volumes

VPD83T3:60060e801004f2b0052fabdb00000006 ARRAY_NAME_1

VPD83T3:60060e801004f2b0052fabdb00000007 ARRAY_NAME_2

VPD83T3:60060e801004f2b0052fabdb00000008 ARRAY_NAME_3

VPD83T3:60060e801004f2b0052fabdb00000009 ARRAY_NAME_4

Use this file as a name mapping file for the VPLEX Claiming Wizard by using either the VPlexcli or the GUI. If using VPlexcli, the name mapping file should reside on the SMS. If using the VPLEX GUI, the name mapping file should be on the same system as the GUI.
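If you saved the ll output to a text file, a rough sketch of building the skeleton of the mapping file follows (the file names here are placeholders, and the exact ll column layout varies by release, so always review the result); you then append the desired array name after each identifier by hand:

echo "Generic storage-volumes" > /tmp/mapping.txt
grep -o 'VPD83T3:[0-9a-fA-F]*' /tmp/ll_output.txt >> /tmp/mapping.txt
# Edit /tmp/mapping.txt and add the desired name after each identifier.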

Hope this helps you.

Best Regards

7019470: How To Manually Mount a NetWare Server from Reload

The default protocol for mounting a NetWare server is NCPFS. Go through the following steps to create a mount:

1) Create a temporary directory under the /mnt directory. In this example we will use ‘temp’ (/mnt/temp). This is where a mount will be created for the remote file system.

– mkdir /mnt/temp

2) Create a mount to the remote file system. This is done using the “mount” command.

– ncpmount -A <ip of remote server> -S <netware server name> -U <typeless distinguished name> -V <netware volume name and path> /<path to local directory>

example: ncpmount -A 192.168.1.111 -S gwava1 -U admin.gwava -V vol/grpwise/po1 /mnt/temp

3) Change directory to the newly mounted directory; it should look like the root of the post office directory.

example: cd /mnt/temp
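When you are finished, disconnect the mount again (a small sketch; ncpumount ships with the ncpfs tools, and a plain umount also works on most distributions):

ncpumount /mnt/temp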

7020920: Mounting A NetWare Volume On A Linux Box (Temporary)

***This is NOT a permanent mount. The connection will dissolve if either server is reset, or disconnected, and must be re-established for functionality.

REMEMBER, if the NetWare box becomes disconnected, or the cluster fails over, this would disconnect the mount point. You would need to remount the volume.***

To mount a NetWare volume on a Linux box, you may use the following ncpmount command. A CIFS-type mount also works, but is not provided as an example here. You may add more options to this mount command (type ncpmount -h to get a list of all options); this is the simplified version. You will be asked for a password before being allowed to access and mount the volume.

ncpmount -A 10.1.1.1 -S NW65_SERVER1 -U admin.context -V gw7vol /gwvolmnt

-A IP Address

-S Server Name

-U User name + context

-V NetWare volume that is being mounted (gw7vol)

/ Mount directory on the Linux server (/gwvolmnt)

-P (optional) Password for the server you are mounting to. If not included, you will be prompted.

***NOTE: This mount command is not a permanent mount. Once the Linux server is reset, this mount will not exist, and the volume will need to be re-mounted. Volume mounts can be made permanent; refer to your Linux man pages, or search the Internet, for help setting up a permanent mount point.
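One simple sketch of a persistent approach (an assumption, not the only way; a proper /etc/fstab entry is cleaner, see the ncpmount man page) is to re-issue the mount from a boot script such as rc.local or SUSE's /etc/init.d/after.local, supplying the password with -P so the command does not prompt:

ncpmount -A 10.1.1.1 -S NW65_SERVER1 -U admin.context -P <password> -V gw7vol /gwvolmnt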

VMAX & OpenStack Ocata: An Inside Look Pt. 3: Basic Operations

In my last post I covered over-subscription, quality of service (QoS), and compression. If you would like to see that blog article again, click here. This time around I will be going over the basic operations that you can carry out using the VMAX drivers for OpenStack Cinder. The operations I will be looking at are:

  • Create, list, delete, attach, and detach volumes
  • Copying images to/from volumes
  • Clone a volume
  • Extend a volume

When going through each operation, I will include examples using the OpenStack Cinder CLI, and at the end of the article there will be a video covering all of the functionality using the Horizon dashboard. The hope is that by the end you will be fully familiar with each operation and confident to carry them out on your own! With the formalities out of the way, let's begin…

Note: At this point it is assumed that you have fully configured your OpenStack environment for your VMAX and created a VMAX back-end with one or more volume types to provision volumes with.

1. Create, list, delete, attach, and detach volumes

Provisioning and managing volumes in OpenStack couldn't be easier using the Cinder CLI or dashboard; it is intuitive, and at no point are you left guessing what to do next. The first operation we will look at is creating a volume.



Creating & deleting volumes

When creating a volume there are only a few things you need to know in advance:

  • The size of the volume (in GB)
  • The name of the volume
  • The volume type

If you want to list the volume types you have created in OpenStack use the command cinder type-list. To view your volume types using the dashboard, navigate to Admin>Volume>Volume Types.

To then create a volume, use the following CLI command:

Command Format:

# cinder create --name <volume_name> --volume-type <vmax_vol_type> <size>

Command Example:

# cinder create --name vmax_vol1 --volume-type VMAX_ISCSI_DIAMOND 10

To view the details of your newly created volume, use the command:

Command Format:

# cinder show <volume_id>/<volume_name>

Command Example:

# cinder show vmax_vol1

To view all of the volumes you have created, use the command cinder list. If you want to delete any of the volumes you have created, the CLI command is as follows:

Command Format:

# cinder delete <volume_id>/<volume_name> [<volume_id>/<volume_name>…]

Command Example:

# cinder delete vmax_vol1

You will notice that the cinder delete command has the option to add further volume names or IDs; if you want to delete more than one volume at a time, you only need to list the volumes in the one command, separating each with a space.

Command Example:

# cinder delete vmax_vol1 vmax_vol2 vmax_vol3



Attaching & detaching volumes

When you have created a volume for use in OpenStack, it is likely that you will then want to attach it to a compute instance. It is assumed here that you already have a running Nova compute instance; it is that instance we will attach our VMAX volume to. In advance of this operation you only need to know the ID of the instance you would like to attach the volume to, and the ID of the volume to be attached. If no device mount-point is specified, or the value is set to auto, OpenStack will automatically pick the next available mount-point to attach the volume to.



The required IDs can be obtained using the Nova and Cinder commands # nova list and # cinder list.

Taking these IDs, you can attach a volume to the instance using the below Nova command:

Command Format:

# nova volume-attach <instance_id> <volume_id> [<device>/auto]

Command Example:

# nova volume-attach ee356911-3855-4197-a1ef-aba3437f6571 28021528-e641-4af6-8984-a6ea0815df7f /dev/vdb

To detach a volume, you need only replace ‘attach’ in the command above with ‘detach’; there is no need to specify the device mount-point, as it is not required for detach operations:

Command Format:

# nova volume-detach <instance_id> <volume_name/volume_id>

Command Example:

# nova volume-detach aaaa-bbbb-cccc-dddd vmax_vol1

2. Copying images to/from volumes

Images in OpenStack are pre-configured virtual machine images which can be used to provision volumes. This is especially useful in production environments, where you can create a master image of a volume and use that image to provision multiple identical volumes, saving you countless hours setting each one up.

For this operation it is assumed that you already have an image in OpenStack Glance (the image service); any image will do, but for this example we are going to use a lightweight CirrOS image. The process is almost identical to provisioning a blank volume; the only difference is the addition of the --image parameter, where we specify the name of the image we would like to use.

Command Format:

# cinder create --name <volume_name> --volume-type <vmax_vol_type> --image <image_name>/<image_id> <size>

Command Example:

# cinder create --name vmax_cirros --volume-type VMAX_ISCSI_DIAMOND --image cirros-0.3.5-x86_64-disk 10

The outcome of the above command will be a new 10GB VMAX volume called ‘vmax_cirros’, volume type VMAX_ISCSI_DIAMOND, with the CirrOS system image copied to it. When this volume is attached to a Nova compute instance, it can be powered on to run a CirrOS virtual machine.
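As a side note (a sketch only; the flavor name m1.small and the instance name are assumptions for illustration), a bootable volume like this can also be used to launch an instance directly, rather than attaching it to an existing one:

# nova boot --boot-volume <vmax_cirros_volume_id> --flavor m1.small vmax_cirros_vm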



After setting up this virtual machine with all the necessary packages and environment configuration, it is in a state which can be used for whatever purposes the user has. To avoid having to go through the process of copying the system image to the volume each time, and configuring it as necessary, it is possible to take the configured virtual-machine and turn its present state into an image to be used to provision new virtual machines. To copy a volume to a Glance image, use the command:

Command Format:

# cinder upload-to-image <volume> <image-name>

Command Example:

# cinder upload-to-image vmax_cirros cirros_full_config

3. Cloning a volume

Cloning a volume does exactly what it says on the tin: it takes a volume and copies it to make a new volume. The new volume is an exact replica of the source volume at that point in time and can be used straight away, for example to attach to a compute instance. Just like creating a volume from an image, creating a cloned volume is very similar to the create volume command. The command is the same, but this time the additional parameter is --source-volid, which lets us specify the source volume that we would like to clone from. To create a cloned volume, the command is as follows:

Command Format:

# cinder create --name <volume_name> --volume-type <vmax_vol_type> --source-volid <volume_id> <size>

Command Example:

# cinder create --name vmax_cirros_clone --volume-type VMAX_ISCSI_DIAMOND --source-volid 54249fab-3629-4017-becf-1134c7e56cf0 20

The example above takes the CirrOS volume from section 2 of this guide and creates a bigger volume (20GB) called ‘vmax_cirros_clone’ with the same volume type as before.

4. Extending a volume

Extending a volume takes a pre-existing volume and increases its size to a size of your choosing. It is only possible to increase the size of a volume; it is not possible to make a volume smaller than its current size. Extending a volume is really straightforward using the CLI: you only have to specify the volume ID and the size you want to increase it to.

Command Format:

# cinder extend <volume_id> <new_size>

Command Example:

# cinder extend 54249fab-3629-4017-becf-1134c7e56cf0 30

Video Demonstration

To show all of the above operations using the Horizon dashboard, I have created a short demo video. Full-screen viewing of the embedded video is not possible from the DECN website; to view the video full-screen, click here or click the YouTube logo on the video controls to go to the video on the YouTube website.

What's coming up in part 4 of ‘VMAX & OpenStack Ocata: An Inside Look’…

Now that you can carry out the basic operations, we can start to look at some of the advanced functionality; next on the list is volume snapshots. We will take a look at what snapshots are, the advantages of SnapVX-backed snapshots, and how to work with them using the same methods covered in this article.

Event ID 5133 — Cluster Shared Volume Functionality

Updated: November 25, 2009

Applies To: Windows Server 2008 R2

In a failover cluster, virtual machines can use Cluster Shared Volumes that are on the same LUN (disk), while still being able to fail over (or move from node to node) independently of one another. Virtual machines can use a Cluster Shared Volume only when communication between the cluster nodes and the volume is functioning correctly, including network connectivity, access, drivers, and other factors.

Event Details

Product: Windows Operating System
ID: 5133
Source: Microsoft-Windows-FailoverClustering
Version: 6.1
Symbolic Name: DCM_CANNOT_RESTORE_DRIVE_LETTERS
Message: Cluster Disk ‘%1’ has been removed and placed back in the ‘Available Storage’ cluster group. During this process an attempt to restore the original drive letter(s) has taken longer than expected, possibly due to those drive letters being already in use.

Resolve
CSV – Review drive letter assignments

When a disk is removed from Cluster Shared Volumes (CSV), the system attempts to re-assign the disk the same drive letter that it previously had (based on information that was stored when the disk was added to CSV). If that drive letter is now assigned to another disk in use on any node of the cluster, it is no longer available. You can choose one of the following responses:

  • Find the disk that is currently using the drive letter and manually change it to another drive letter (for example with diskpart, as sketched after this list). Then assign the original drive letter back to the disk that was removed from CSV.
  • Assign a new drive letter to the disk that was removed from CSV.
  • Use a GUID instead of a drive letter for the disk that was removed from CSV.
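A minimal diskpart sketch for reassigning a drive letter (run from an elevated command prompt on the node that owns the disk; the volume number and letter here are examples only):

diskpart
list volume
select volume 3
assign letter=E
exit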

If you do not currently have Event Viewer open, see “To open Event Viewer and view events related to failover clustering.”

To perform the following procedure, you must be a member of the local Administrators group on each clustered server, and the account you use must be a domain account, or you must have been delegated the equivalent authority.

To open Event Viewer and view events related to failover clustering:

  1. If Server Manager is not already open, click Start, click Administrative Tools, and then click Server Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
  2. In the console tree, expand Diagnostics, expand Event Viewer, expand Windows Logs, and then click System.
  3. To filter the events so that only events with a Source of FailoverClustering are shown, in the Actions pane, click Filter Current Log. On the Filter tab, in the Event sources box, select FailoverClustering. Select other options as appropriate, and then click OK.
  4. To sort the displayed events by date and time, in the center pane, click the Date and Time column heading.

Verify

Confirm that the Cluster Shared Volume can come online. If there have been recent problems with writing to the volume, it can be appropriate to monitor event logs and monitor the function of the corresponding clustered virtual machine, to confirm that the problems have been resolved.

To perform the following procedures, you must be a member of the local Administrators group on each clustered server, and the account you use must be a domain account, or you must have been delegated the equivalent authority.

Confirming that a Cluster Shared Volume can come online

To confirm that a Cluster Shared Volume can come online:

  1. To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
  2. In the Failover Cluster Manager snap-in, if the cluster you want to manage is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
  3. If the console tree is collapsed, expand the tree under the cluster you want to manage, and then click Cluster Shared Volumes.
  4. In the center pane, expand the listing for the volume that you are verifying. View the status of the volume.
  5. If a volume is offline, to bring it online, right-click the volume and then click Bring this resource online.

Using a Windows PowerShell command to check the status of a resource in a failover cluster

To use a Windows PowerShell command to check the status of a resource in a failover cluster:

  1. On a node in the cluster, click Start, point to Administrative Tools, and then click Windows PowerShell Modules. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
  2. Type:

    Get-ClusterSharedVolume

    If you run the preceding command without specifying a resource name, status is displayed for all Cluster Shared Volumes in the cluster.
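    For example, a brief sketch (the volume name "Cluster Disk 2" is an example only): you can check a specific volume and, if necessary, bring it online from the same PowerShell session:

    Import-Module FailoverClusters
    Get-ClusterSharedVolume -Name "Cluster Disk 2"
    Start-ClusterResource -Name "Cluster Disk 2"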

Related Management Information

Cluster Shared Volume Functionality

Failover Clustering
