7022884: Determine host settings converted when migrating legacy Host session Files to Reflection Desktop 16

Reflection automatically migrates legacy host session files, and a conversion report can be generated that shows which settings have and have not been converted. If a legacy display session setting has no equivalent in Reflection, it is not converted.

To convert legacy IBM display session files in Reflection, follow these steps:

1. Locate and launch the display session file in Reflection.

2. Examine the connection and terminal properties (the converted settings) in Reflection.

3. Save the settings in Reflection in the new session file format.

When closing the host session in Reflection, a prompt appears that says “Do you want to save changes to <sessionName>?”

Answer Yes and enter the appropriate new file name with the extension *.RD3X or *.RD5X.

Follow the steps below to create a conversion report:

1. Start Reflection from the command line using the -legacyreport option.

For example:

"C:\Program Files (x86)\Micro Focus\Reflection\Attachmate.Emulation.Frame.exe" -f "<sessionName>" -legacyreport

Alternatively, open Attachmate.Emulation.Frame.exe with the -legacyreport command-line option; then, each time a legacy host session file is opened from the Reflection Workspace via the File > Open menu, the conversion report is generated.

2. Open the conversion report file, located in the My Documents folder on the workstation, and examine the converted session settings.

See Additional Information for the conversion report file name, which is different for each product session file converted.

The report is in a comma-delimited format, so is best viewed in a spreadsheet program, such as Microsoft Excel.

Each time a session file is converted the new output file will overwrite the existing file. Thus to retain the conversion information for each session, rename the output report file in the My Documents folder between each conversion.
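Because each conversion overwrites the report, the rename step above can be scripted. The following is a sketch in Python; the report file name used here is a placeholder (the actual name differs per product, as noted above):

```python
import csv
import shutil
from pathlib import Path

def preserve_report(report: Path, session_name: str) -> Path:
    """Copy a conversion report to a session-specific name so the next
    conversion does not overwrite it. Returns the new file's path."""
    dest = report.with_name(f"{report.stem}-{session_name}{report.suffix}")
    shutil.copyfile(report, dest)
    return dest

def summarize_report(report: Path) -> list:
    """Load the comma-delimited report rows as dictionaries for review."""
    with open(report, newline="") as f:
        return list(csv.DictReader(f))

# Example (hypothetical report name and location):
# preserve_report(Path.home() / "Documents" / "ConversionReport.csv", "MySession")
```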



7021495: Supported Legacy Reflection, EXTRA!, and KEA! Files in Reflection 2014 and 2011

Enabling Support for Legacy Files

Enabling support for legacy files depends on your product and previous installations.

Reflection 2014

All legacy Reflection and EXTRA! compatibility features are enabled by default in Reflection 2014.

Support for using legacy KEA! files is available starting with Reflection 2014. This feature is available on the Attachmate installation program “Feature Selection” tab, under UNIX and OpenVMS > Compatibility.

Figure 1: Reflection 2014 feature selection.

Reflection 2011

Legacy Reflection Compatibility features are enabled by default in Reflection 2011.

If you are installing Reflection 2011 on a machine that has EXTRA! or was previously upgraded from EXTRA! to Reflection, the Legacy EXTRA! Compatibility features are enabled by default.

If you are installing Reflection 2011 on a machine that does not have EXTRA! and you want to use EXTRA! files, you must select “Legacy EXTRA!” when you install Reflection. This feature is available on the Attachmate installation program “Feature Selection” tab, under 3270/5250 > Compatibility or UNIX and OpenVMS > Compatibility.


Figure 2: Reflection 2011 feature selection.

Supported Reflection Legacy Files

The following Reflection legacy files are supported in Reflection 2014 and Reflection 2011:

  • Reflection for UNIX and OpenVMS Settings File
  • Reflection for Secure IT Settings File (R3)
  • Reflection for ReGIS Graphics Settings File
  • Reflection for IBM Settings
  • VBA Macro File
  • FTP Settings File
  • Mainframe Transfer Request File
  • AS/400 Transfer Request File
  • Reflection Command Language Scripting File
  • Reflection Basic Scripting Files
  • Reflection Macro File

Supported EXTRA! Legacy Files

The following EXTRA! legacy files are supported in Reflection 2014 and Reflection 2011:

  • EXTRA! QuickPads (read only)
  • EXTRA! Toolbars (read only)
  • Display Session
  • EXTRA! Keyboard Map (.ekm files can now be browsed for in the keyboard map file picker)
  • EXTRA! File Transfer Schemes (R1 SP1)
  • EXTRA! Layout Files *
  • EXTRA! Basic Macros
  • EXTRA! Basic Header

* EXTRA! layout files are one type of EXTRA! macro. To run EXTRA! layout files in Reflection you must create a file association with the .elf file extension. Follow these steps:

  1. Copy your .edp and .elf files to Documents\Attachmate\Reflection.
  2. Double-click an .elf file in the Documents\Attachmate\Reflection folder.
  3. Windows will prompt you for a file association.
    1. Browse to the Reflection install directory and select ebrun.exe.
    2. Select the check box to always use this program to open this type of file.

Supported KEA! Legacy Files

The following KEA! file type is supported.

  • KEA! configuration file (supported starting with Reflection 2014)

Supported Third Party Product Files

You can run a majority of macros created by the following products. The macros run directly without conversion.

  • OpenText/Hummingbird HostExplorer: Hummingbird Basic macro (R3)
  • Brandon Systems/Jolly Giant QWS 3270 macro (R3)
  • Micro Focus Rumba macro
  • IBM Personal Communications macro
  • IBM Personal Communications VBScript

For more information about running macros, see http://docs.attachmate.com/reflection/2014/r1/tshelp/en/user-html/macros_run_pr.htm.

All trademarks, trade names, and company names referenced herein are used for identification only and are the property of their respective owners.

Troubleshooting Legacy Files

If you have problems with your legacy files, review the following issues.

Trusted Location

If your EXTRA! .edp files are stored in Program Files\Attachmate\EXTRA! (and not in My Documents) and you open the .edp files in Reflection, an error displays: “Unable to open because it is not in a trusted location.” To resolve this error, add the .edp file location (path) to Reflection’s Trusted Locations (specified in Reflection Workspace Settings).

Custom Configuration File Location

If you open an .edp file that refers to other custom configuration files (for example, a custom keyboard map file), the custom configuration file must be in either:

  • The “schemes” subfolder in the “Default legacy EXTRA! directory” (configured in Reflection Workspace Settings), typically My Documents\Attachmate\EXTRA!\schemes, OR
  • The same location as the .edp file

If no custom configuration file is found, Reflection will use a default file, and you may think that the EXTRA! custom configuration settings have been lost.

In Reflection 2014, when an .edp session file is opened and a custom keyboard map is found, a new .xkb file is created. To determine if Reflection 2014 or 2011 found the custom keyboard map, open the Manage Keyboard Maps dialog and look at the filename being used (.ekm or .xkb). If Reflection cannot find the keyboard map file, defaults are used.



7021692: Settings File Format in Reflection for IBM 14.x

Version Information

For information about using settings update files in Reflection 12.0 through 14.x, see Technical Note 1566.


Starting with version 12.0, Reflection for IBM settings files (*.rsf) are saved in binary file format rather than as simple text files. The new binary file format means that macros can be included in your settings files, making it easier to distribute macros to other users.

Partial settings files, which include Key/Mouse (*.map), Toolbar (*.btp), Colors (*.clr), Hotspot/Hotlist (*.hsp), and Menus (*.mnu) continue to use text format. Settings update files (*.rsu) also continue to use text format.

Prior Version Settings Files are Updated Automatically

When you open a settings file created with Reflection 11.0 or earlier, Reflection 12.0 through 14.x automatically updates its settings to the newer binary format. However, the updated format isn’t retained until you save the settings file.

Note the following:

  • Your saved settings file is changed to the new binary format the first time you use Reflection’s File > Save command. If you close Reflection without using the Save command, the file will retain its original (text-based) format.
  • Prior to version 12.0, macros were saved in separate files that had the same base name as your settings file and used an *.rvx extension. Macros are now incorporated directly into your settings files. The first time you save a prior version settings file using Reflection 12.0 through 14.x, Reflection moves all macro information to the saved settings file (*.rsf) and deletes the *.rvx file.

Sharing Settings Files with Earlier Versions of Reflection

Earlier versions of Reflection cannot open the binary settings files that were created using version 12.0 through 14.x. If you create and save a settings file using version 12.0 through 14.x and then try to open that file using an earlier version of Reflection, you will see the following error message:

<file name> is not a valid Reflection settings file

Reflection users running version 12.0 through 14.x can share settings with users running older versions of Reflection by saving partial settings files or creating settings update files. These files can be opened in earlier versions of Reflection.

To share macros created using version 12.0 through 14.x, you can open the Visual Basic editor and export your Visual Basic project files. Use the Visual Basic Editor again in the earlier Reflection application to import the project files.

“Not a valid file” Errors seen when Launching Reflection from a Web Page

If you use a web page to launch your Reflection sessions and see an error message saying that the settings file is not a valid Reflection settings file, contact your system administrator. Administrators who have installed and configured Reflection for the Web can use the Reflection for the Web Administrative WebStation to distribute Reflection sessions to end users. If the administrator is using version 12.0 through 14.x on the workstation that creates the Reflection for IBM settings files uploaded to the web server, end users running Reflection 11.x or earlier will not be able to launch those sessions.

Administrators who use the Administrative WebStation to distribute Reflection sessions should use one of the following strategies to ensure that all end users can run Reflection sessions.

  • Upgrade all users to the latest version of Reflection.
  • Run version 11.x (or earlier) on the administrator’s workstation. This will ensure that all settings files uploaded to the web server use the earlier file format.

Tips for Viewing Settings

Because the new settings files are saved in binary format, you can no longer view and edit these files directly. You can still use either of the following methods to view information about your settings.

  • To see a quick list of non-default settings, open the View Settings dialog box and set Display settings to Changed. (Note: Some Reflection configuration information is not included in the View Settings dialog box list.)
  • To see a comprehensive list of all current configuration information, click File > Save As, and then in the Save as type drop-down list, select XML Settings (*.xml). Open the resulting file in your web browser. If your session is configured to use the default value for Transform Settings to HTML, the XML file you created will display an HTML document summarizing all of your current settings.
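If you need to inspect the exported XML settings file outside a browser, a generic tree walk can flatten it for searching or diffing. This sketch assumes nothing about Reflection's actual XML schema beyond well-formedness:

```python
import xml.etree.ElementTree as ET

def list_settings(xml_text: str) -> dict:
    """Flatten an XML settings export into {element-path: text} pairs.
    Schema-agnostic: it simply records every element that carries text."""
    root = ET.fromstring(xml_text)
    flat = {}

    def walk(node, prefix):
        path = f"{prefix}/{node.tag}" if prefix else node.tag
        text = (node.text or "").strip()
        if text:
            flat[path] = text
        for child in node:
            walk(child, path)

    walk(root, "")
    return flat
```

Two exports flattened this way can be compared with a plain dict comparison to spot settings that differ between sessions.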



Path-based File Pool Policies

Got the following question from the field recently:

I have a cluster with a primary X410 pool and an archive NL410 pool. There is a nightly job that moves inactive files from the primary to the archive pool. However, can I set it up so that when I copy files to a folder they go directly to the NL pool, without waiting for the nightly job to run?

The answer to the above is yes, with a couple of caveats.

Since the filepool policy applies to the directory, any new files written to it will automatically inherit the settings from the parent directory. Typically, there is not much variance between the directory and the new file. So, assuming the settings are correct, the file is written straight to the desired pool or tier, with the appropriate protection, etc. This applies to access protocols like NFS and SMB, as well as copy commands like ‘cp’ issued directly from the OneFS command line interface (CLI). However, if the file settings differ from the parent directory, the SmartPools job will correct them and restripe the file. This will happen when the job next runs, rather than at the time of file creation.

However, if a file is simply moved into the directory (via UNIX CLI commands such as mv), its re-tiering will not occur until a SmartPools, SetProtectPlus, MultiScan, or AutoBalance job runs to completion. Since these jobs can each perform a re-layout of data, this is when the files will be reassigned to the desired NL pool. The file movement can be verified by running the following command from the OneFS CLI:

# isi get -dD <dir>

So the key is whether you’re doing a copy (that is, a new write) or not. As long as you’re doing writes and the parent directory of the destination has the appropriate file pool policy applied, you should get the behavior you want.

One thing to note: if the desired operation is really a move rather than a copy, it may be faster to change the file pool policy and then run a recursive “isi filepool apply --recurse” on the affected files.

There’s negligible difference between using an NFS or SMB client versus performing the copy on-cluster via the OneFS CLI. As mentioned above, using isi filepool apply will be slightly quicker than a straight copy and delete, since the copy is parallelized above the filesystem layer.

Let’s take a quick file pools refresher…

File pools is the SmartPools logic layer, where user configurable policies govern where data is placed, protected, accessed, and how it moves among the Node Pools and Tiers. This is conceptually similar to storage ILM (information lifecycle management), but does not involve file stubbing or other file system modifications. File Pools allow data to be automatically moved from one type of storage to another within a single cluster to meet performance, space, cost or other requirements, while retaining its data protection settings.

For the scenario above, a file pool policy may be crafted which dictates that anything written to the path /ifs/path1 is automatically moved directly to the Archive tier.


To simplify management, there are defaults in place for Node Pool and File Pool settings which handle basic data placement, movement, protection and performance. All of these can also be configured via the simple and intuitive UI, delivering deep granularity of control. Also provided are customizable template policies which are optimized for archiving, extra protection, performance and VMware files.

When a SmartPools job runs, the data may be moved, undergo a protection or layout change, etc. There are no stubs. The file system itself is doing the work so no transparency or data access risks apply.

Data movement is parallelized with the resources of multiple nodes being leveraged for speedy job completion. While a job is in progress all data is completely available to users and applications.

The performance of different nodes can also be augmented with the addition of system cache or Solid State Drives (SSDs). Within a File Pool, SSD ‘Strategies’ can be configured to place a copy of that pool’s metadata, or even some of its data, on SSDs in that pool.

Overall system performance impact can be configured to suit the peaks and lulls of an environment’s workload. Change the time or frequency of any SmartPools job and the amount of resources allocated to SmartPools. For extremely high-utilization environments, a sample File Pool policy can be used to match SmartPools run times to non-peak computing hours. While resources required to execute SmartPools jobs are low and the defaults work for the vast majority of environments, that extra control can be beneficial when system resources are heavily utilized.

SmartPools file pool policies can be used to broadly control the three principal attributes of a file:

1. Where a file resides.

    • Tier
    • Node Pool

2. The file performance profile (I/O optimization setting).

    • Sequential
    • Concurrent
    • Random
    • SmartCache write caching

3. The protection level of a file.

    • Parity protected (+1n to +4n, +2d:1n, etc)
    • Mirrored (2x – 8x)


A file pool policy is built on a file attribute that the policy can match on. The attributes a file pool policy can use are: File Name, Path, File Type, File Size, Modified Time, Create Time, Metadata Change Time, Access Time, or User Attributes.

Once the file attribute is set to select the appropriate files, the action to be taken on those files can be added – for example: if the attribute is File Size, additional settings are available to dictate thresholds (all files bigger than… smaller than…). Next, actions are applied: move to Node Pool x, set to y protection level and lay out for z access setting.
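Conceptually, this match-then-act structure can be sketched as follows. This is illustrative Python, not OneFS code; the pool and protection names are examples only:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class FileInfo:
    """Minimal stand-in for the file attributes a policy can match on."""
    name: str
    path: str
    size: int  # bytes

@dataclass
class FilePoolPolicy:
    """A policy pairs a filter (the match criteria) with actions."""
    predicate: Callable  # returns True if a FileInfo matches the filter
    actions: dict        # e.g. target tier, protection level

    def apply(self, f: FileInfo) -> Optional[dict]:
        """Return the actions to take if the file matches, else None."""
        return self.actions if self.predicate(f) else None

# Example: files under /ifs/path1 OR larger than 1 GiB go to the Archive tier.
policy = FilePoolPolicy(
    predicate=lambda f: f.path.startswith("/ifs/path1") or f.size > 2**30,
    actions={"tier": "Archive", "protection": "+2d:1n"},
)
```

The ‘Or’ in the example predicate mirrors the ‘And’/‘Or’ operators described below for combining criteria within a single policy.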

The available file attributes and their matching criteria are:

  • File Name: specifies file criteria based on the file name
  • Path: specifies file criteria based on where the file is stored
  • File Type: specifies file criteria based on the file-system object type
  • File Size: specifies file criteria based on the file size
  • Modified Time: specifies file criteria based on when the file was last modified
  • Create Time: specifies file criteria based on when the file was created
  • Metadata Change Time: specifies file criteria based on when the file metadata was last modified
  • Access Time: specifies file criteria based on when the file was last accessed
  • User Attributes: specifies file criteria based on custom attributes

‘And’ and ‘Or’ operators allow for the combination of criteria within a single policy for flexible, granular data manipulation.

As we saw earlier, for file pool policies that dictate placement of data based on its path, data typically lands on the correct node pool or tier without a SmartPools job running. File pool policies that dictate placement of data based on attributes other than path cause files to be written to the disk pool with the highest available capacity, and then moved, if necessary, to match a file pool policy when the next SmartPools job runs. This ensures that write performance is not sacrificed for initial data placement.

Any data not covered by a File Pool policy is moved to a tier that can be selected as a default for exactly this purpose. If no Disk Pool has been selected for this purpose, SmartPools will default to the Node Pool with the most available capacity.


How do you tell if an old SPSS file is corrupted?

I have this old SPSS file from 1989 that won’t open. At first, I opened it in UltraEdit to double-check that it was in fact an SPSS file; the file read “PCSPSS SYSTEM FILE. IBM PC DOS, SPSS/PC+”. I then tried to open it in the PSPP application, but nothing happens and the program sends me to the syntax page. From there, I tried to convert it using a .por converter, since I thought the file was originally saved as a portable SPSS file, but that did not work either. I don’t think the file is coded incorrectly, seeing that I was able to open and convert a file very similar to this one in terms of content, date, and size, so I am not sure what exactly the issue is. Even when I try to attach a .txt version of the file, I get an error stating that the file type is invalid. I am thinking that it may be a corrupted file, and I was wondering if there is any way to tell, and possibly any way to repair it? Or, if you have any other suggestions for opening this file, I would be glad to try other options.
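One quick check is whether the file’s leading bytes match any known SPSS container. Below is a best-effort sniffer sketch; the signatures and offsets are assumptions drawn from the PSPP file-format documentation, so verify them against your own files:

```python
def sniff_spss(data: bytes) -> str:
    """Best-effort guess at the SPSS file flavor from magic bytes.
    Signatures/offsets assumed from PSPP's file-format documentation."""
    if data[:4] == b"$FL2":
        # Modern SPSS system file (.sav) starts with the "$FL2" record tag.
        return "SPSS system file (.sav)"
    if len(data) > 208 and data[200:208] == b"SPSSPORT":
        # Portable files carry an "SPSSPORT" tag after a 200-byte splash header.
        return "SPSS portable file (.por)"
    if b"PCSPSS SYSTEM FILE" in data[:512]:
        # Old DOS-era SPSS/PC+ system file, a different format from .sav.
        return "SPSS/PC+ system file (old DOS format)"
    return "unknown (possibly corrupted or not SPSS)"
```

If the sniffer reports SPSS/PC+, the file is an old DOS-format system file rather than a modern .sav or a portable .por, which would explain why a .por converter fails; recent PSPP releases reportedly include SPSS/PC+ reading support (e.g. via pspp-convert), which may be worth trying before concluding the file is corrupted.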