7020987: Error: “Secure Connection Failed” when accessing FILR administration page

This document (7020987) is provided subject to the disclaimer at the end of this document.

Environment

Micro Focus Filr 3.2

Situation

When attempting to access the Filr port 9443 Appliance Console, the error “Secure Connection Failed” appears.
Attempting to access the Filr port 8443 Administration Console also fails (may simply time out).
PING and TELNET to the Filr server IP address are successful.

Resolution

1. From the Filr server’s console:
Verify that the Filr server has not run out of disk space.
Check storage using the command: df -m
Check inodes using the command: df -i

If either of these commands indicates a problem with available resources, correct the problem and retest.
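For example, a quick check of the appliance’s main mount points (a sketch; the mount points shown assume a standard Filr appliance layout):

df -m / /vastorage /var
df -i / /vastorage /var

Any filesystem showing 100% in the Use% or IUse% column has run out of space or inodes and should be cleaned up before retesting.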

2. From the Filr server’s console:
Restart services
rcnovell-datamodel-service restart
rcnovell-jetty restart
After executing these commands, retest.
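To confirm that the consoles are listening again after the restart (a sketch; the port numbers are those described in the Situation above):

netstat -tlnp | grep -E ':9443|:8443'

If neither port is listed, continue with the log review below.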

If the above steps do not resolve the problem:
Review the Filr logs for possible errors:
/var/opt/novell/tomcat-filr/logs/appserver.log
/var/opt/novell/tomcat-filr/logs/catalina.out
In the catalina.out log file, look for an error such as:

java.lang.Exception: Unable to load certificate key /vastorage/conf/certs/vaserver.key (error:02001002:system library:fopen:No such file or directory)
If this error appears, the certificate is either missing or corrupt. In this case, contact Micro Focus Customer Care for assistance in re-creating the certificate.
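Before contacting support, a quick check of whether the key file referenced in the error is present:

ls -l /vastorage/conf/certs/vaserver.key

If the file is missing or empty, the certificate will need to be re-created as described above.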

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

7020971: HP 3PAR array has moved to support ALUA

This document (7020971) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 11 Service Pack 4 (SLES 11 SP4)

SUSE Linux Enterprise Server 11 Service Pack 3 (SLES 11 SP3)

Situation

HP 3PAR arrays have moved to support ALUA. Starting with SLES 12 GA, the built-in multipath defaults for “3PARdata” devices were updated accordingly; however, those changes were not backported to SLES 11 due to the potential for conflicts.

Since the built-in defaults in SLES 11 will not be changed for 3PARdata devices, it is recommended to follow the vendor-supplied configuration for the device, as shown under Resolution.

The change is to override the defaults built into the multipath hardware table with the correct settings.

Resolution

Updated device settings in multipath.conf should be:

device {
        vendor                "3PARdata"
        product               "VV"
        path_grouping_policy  "group_by_prio"
        path_checker          "tur"
        hardware_handler      "1 alua"
        prio                  "alua"
        rr_weight             "uniform"
        no_path_retry         "18"
        failback              "immediate"
}
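After adding the entry (it belongs inside the devices { } section of /etc/multipath.conf), the multipath configuration needs to be reloaded before the new settings take effect. A sketch, to be run in a suitable maintenance window (service and command names assume a standard SLES 11 install):

rcmultipathd restart
multipath -ll | grep -A 3 3PARdata

The multipath -ll output should then show the alua hardware handler and priority for the 3PARdata devices.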

Additional Information

Bug #922105 explains the situation and the change for SLES 12; the change for SLES 11 needs to be made by hand.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

7020973: SUSE products and a new security bug class referred to as “Stack Smashing”.

A new class of vulnerabilities has been identified under the umbrella name “Stack Smashing”.

This bug class exploits a weakness in the address space model of operating systems like Linux.

How does it work…

Programs in operating systems use a so-called stack for storing variables and return addresses used in functions. The stack grows depending on the amount of variables used and the depth of the called function tree. The growth direction is also special: on most platforms it grows downwards.

As the stack shares the same address space with the regular program, heap, libraries and other program memory regions, care needs to be taken that the automatically growing stack does not collide with other memory regions.

For this, some years ago a “stack guard gap” page of 4KB was introduced, which is also used for automatically growing the stack if a stack memory access goes into the guard page.

The security research company Qualys has identified that, in some libraries and programs under specific conditions, the stack pointer can “jump over” this 4KB stack guard page and proceed below it, or even overwrite memory areas positioned there.

This can happen, for example, with large arrays on the stack over 4KB which are accessed only in some places, or with programs using the alloca() function to get stack memory that is likewise not accessed fully.

This grown stack could then be made to “smash” into other memory areas containing code, data, function pointers or similar, which in turn could be used to execute code.

Note that these problems are not bugs in the programs, libraries or the kernel themselves, but are caused by the vague interpretation of the stack-growth ABI between the compiler and the kernel.

To mitigate this class of attacks we will be doing the following:

– Linux kernels are being released immediately.

The kernel updates will increase the stack gap size to be much larger (1 MB / 16 MB), which should mitigate most of the cases found during research.

This mitigation is tracked under CVE-2017-1000364.

– glibc packages are being released immediately.

glibc itself contains several cases of being able to effect these stack jumps, happening even before a binary is loaded in the dynamic loader.

When used with setuid root binaries, these could be used to escalate privilege from user to root using stack smashing.

This security fix is tracked under CVE-2017-1000366.

– gcc (GNU Compiler Collection) updates will be released in the near future.

These updates will feature a flag that enables touching all stack memory pages when dynamic large stack allocations are done, to avoid having large jumps.

Note that as the stack code is directly built into the libraries and binaries, recompiling packages is necessary to make it effective.

– Various applications might be updated in the near future.

We will identify and release updates for various applications that have such stack usage patterns and rebuild them with the new gcc compiler flag.
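To check whether the updated packages carrying these fixes are installed on a SUSE system, the package changelogs can be queried (a sketch; kernel-default is assumed as the standard SLES kernel package name):

rpm -q --changelog kernel-default | grep -i CVE-2017-1000364
rpm -q --changelog glibc | grep -i CVE-2017-1000366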

Related:

7020984: IDM Designer appears too small on high resolution screens.

Create the file C:\netiq\idm\apps\Designer\jre\bin\javaw.exe.manifest

with the following contents:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
  <description>eclipse</description>
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel xmlns:ms_asmv3="urn:schemas-microsoft-com:asm.v3"
          level="asInvoker"
          ms_asmv3:uiAccess="false">
        </requestedExecutionLevel>
      </requestedPrivileges>
    </security>
  </trustInfo>
  <asmv3:application>
    <asmv3:windowsSettings xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
      <ms_windowsSettings:dpiAware xmlns:ms_windowsSettings="http://schemas.microsoft.com/SMI/2005/WindowsSettings">false</ms_windowsSettings:dpiAware>
    </asmv3:windowsSettings>
  </asmv3:application>
</assembly>

Run regedit and navigate to:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide

Add a DWORD (32-bit) value named “PreferExternalManifest” and set its value to 1.
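Equivalently, the registry value can be added from an elevated command prompt instead of regedit (a sketch; run as Administrator):

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide" /v PreferExternalManifest /t REG_DWORD /d 1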

Related:

7020982: DHCP Leases File Does Not Get Updated

This document (7020982) is provided subject to the disclaimer at the end of this document.

Environment

Novell Open Enterprise Server 2015 (OES 2015) Linux Support Pack 1
Novell Open Enterprise Server 2015 (OES 2015) Linux
Novell Open Enterprise Server 11 (OES 11) Linux Support Pack 3
Novell Open Enterprise Server 11 (OES 11) Linux Support Pack 2

Situation

DHCP leases file does not get updated.

Error: “dhcpd: Can’t backup lease database /var/lib/dhcp/db/dhcpd.leases to /var/lib/dhcp/db/dhcpd.leases~: Operation not permitted”

dhcpd creates the dhcpd.leases file with ownership of root:root

Resolution

Option 1 – Update the dhcpd AppArmor profile to include the chown capability
  1. Modify /etc/apparmor.d/usr.sbin.dhcpd and add a new line at the top of the capability section with “ capability chown,”
  2. Restart AppArmor with “rcapparmor restart”
Option 2 – Disable AppArmor
  1. Stop AppArmor with “rcapparmor stop”
  2. Prevent AppArmor from loading on boot with “chkconfig boot.apparmor off”
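After applying either option, a quick verification (a sketch; the rcdhcpd service name assumes a standard SLES/OES install):

rcdhcpd restart
ls -l /var/lib/dhcp/db/dhcpd.leases*

The leases file and its ~ backup should now be updated without the “Operation not permitted” error being logged.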

Cause

The AppArmor profile for dhcpd does not allow the chown capability by default.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

7020979: Building and executing Pro*COBOL applications on UNIX

Problem:

  • How do I build Pro*COBOL applications on UNIX?
  • Why, when executing my application, do I receive an RTS173 against SQLADR or SQLBEX?
  • Why, when executing my application, do I receive an RTS114 against SQLBEX?
  • Why, when executing my application, do I receive an RTS114 at program termination?
  • Why, on HP-UX platforms, do I receive an RTS118 at execution time?
  • Why, when executing my application with rtsora or rtsora32, do I get an invalid magic number error against libwtc9.so?
  • How do I call a Pro*COBOL application from Java?
  • Why, when compiling my application, do I get a procob not found error?
  • Why, when compiling my application, do I get a procob18 not found error?
  • Why, when relinking rtsora, do I get a “library could not be found” error?
  • Why, when compiling my Oracle application, do I get an RTS114 against token?

Resolution:

Summary

When using Pro*COBOL — or the Cobsql preprocessor — to precompile Pro*COBOL applications on UNIX, you need to be aware of the following information. Detailed reference material can be found later in this article.

Error scenarios, their likely causes, and where to look:

  • RTS173 errors: typically occur when the COBOL run-time cannot resolve calls to Oracle APIs planted by Pro*COBOL. See the Background and Resolution sections of this document.
  • RTS118 errors: typically occur on HP-UX platforms (PA-RISC and Itanium), due to unresolved symbols within Oracle client-side libraries. See the Resolution section of this document.
  • RTS114 and “bad magic number” errors: typically due to a mismatch of 32-bit and 64-bit between your COBOL working mode and your Oracle client-side software; both need to be wholly 32-bit or wholly 64-bit. See the Oracle environment section of this document. These errors can also occur due to byte-ordering issues; if working on Intel-based platforms, either on Windows or UNIX, refer to Knowledge Base article #3957.
  • “procob not found” and “library could not be found” errors: typically occur when the Oracle environment is not set correctly, or when Pro*COBOL has not been installed. See the Oracle environment section of this document.

Background

Pro*COBOL will precompile your EXEC SQL statements into one or more calls to internal Oracle APIs. In order to execute or debug your application, those calls will need to be resolved by the COBOL run-time.

If you are building your application to an executable, the link step will resolve these API calls. We recommend that you base your build command line on those shown by the Oracle sample programs, under $ORACLE_HOME/precomp/demo/procob2. Note that if you are using Oracle 10, the sample programs are only available from the Oracle Companion CD.

If, however, you wish to execute or debug your application as intermediate code (.int), generated code (.gnt), or as a callable shared object (.so, or .sl on HP-UX PA-RISC), you need to explicitly resolve those Oracle API calls prior to executing or debugging your application.

Oracle environment

Depending on the UNIX platform, Oracle may provide client-side support for 32-bit only, 64-bit only, or 32-bit and 64-bit. Under the Oracle installation directory, $ORACLE_HOME, you will typically see either:

lib32 lib

or

lib lib64

directories, which contain the 32-bit and 64-bit Oracle shared objects etc. respectively.

That is, the directory will be $ORACLE_HOME/lib for a 32-bit only Oracle product, and $ORACLE_HOME/lib32 or $ORACLE_HOME/lib for a combined 32/64 Oracle product, depending on whether you are operating in 32-bit or 64-bit mode.

We need to ensure that LD_LIBRARY_PATH/LIBPATH/SHLIB_PATH (whichever applies to the OS involved) points to the appropriate Oracle “lib” directory to match the effective working mode, i.e. cobmode, of the UNIX COBOL product.
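For example, on Linux with a combined 32/64-bit Oracle client and a 32-bit COBOL working mode (a sketch; the ORACLE_HOME path shown is hypothetical):

ORACLE_HOME=/opt/oracle/product/11.2.0; export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib32:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH

For 64-bit working mode, point LD_LIBRARY_PATH at $ORACLE_HOME/lib instead.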

If you invoke the “mfsupport” command, the resultant mfpoll.txt will show your current COBOL working mode, and also the setting of LD_LIBRARY_PATH/LIBPATH/SHLIB_PATH, to assist in verifying your current environment configuration.

Oracle’s COBOL precompiler, Pro*COBOL, will typically be provided as both a 32-bit and a 64-bit executable, named either procob32 and procob, or procob and procob64, for the 32-bit and 64-bit environments respectively.

The 32-bit and 64-bit versions of Pro*COBOL will generate different COBOL code, for example different sizes for data-items defined as USAGE POINTER, so it’s important that you use the correct version of Pro*COBOL for your target environment. If you end up with a mismatch of Pro*COBOL environment against the COBOL working mode, you will likely see RTS114 errors at execution time.

Note that Cobsql does some verification of the user environment, along with which Pro*COBOL executables it finds on the PATH, to invoke the correct version. If, when using Cobsql, you get an error such as

"procob" not found

then either :

  • The Oracle environment is not set up correctly
  • You have not installed Pro*COBOL as part of your Oracle client installation. Note that the option to install Pro*COBOL is only enabled if your COBOL environment is set up prior to invoking the Oracle installation program.

If you pass the SQLDEBUG option to Cobsql, it will output the exact command it is trying to invoke. You should confirm that the Pro*COBOL executable it is looking for resides within your $ORACLE_HOME/bin directory.

If you’re not sure which “bitness” the Pro*COBOL executables are, you should be able to use the OS command file to determine which is which. Here’s what you’ll see on some of the more common environments from executing:

file $ORACLE_HOME/bin/procob*
  • x86-64 / Linux
    procob:   ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped
    procob32: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped
  • IBM System p / AIX
    procob:   64-bit XCOFF executable or object module not stripped
    procob32: executable (RISC System/6000) or object module not stripped
  • Itanium / HP-UX
    procob:   ELF-64 executable object file - IA64
    procob32: ELF-32 executable object file - IA64
  • SPARC / Solaris
    procob:   ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, not stripped
    procob32: ELF 32-bit MSB executable SPARC Version 1, dynamically linked, not stripped
  • IBM System z / Linux
    procob:   ELF 64-bit MSB executable, IBM S/390, version 1 (SYSV), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped
    procob32: ELF 32-bit MSB executable, IBM S/390, version 1 (SYSV), for GNU/Linux 2.2.10, dynamically linked (uses shared libs), for GNU/Linux 2.2.10, not stripped
  • PA-RISC / HP-UX
    procob:   ELF-64 executable object file - PA-RISC 2.0 (LP64)
    procob32: PA-RISC1.1 shared executable dynamically linked -not stripped

If you have used the wrong Pro*COBOL executable to precompile the code (for example, you have used ‘procob’ but are running in 32-bit mode), you will most likely see RTS114 errors at execution time.

Resolution

On most UNIX platforms, the Oracle client-side libraries themselves require the threaded C run-time. You must therefore build and execute your application using the threaded COBOL run-time.

When you install the Pro*COBOL precompiler as part of your Oracle installation, the Oracle installer will relink the COBOL run-time to pull in Oracle client-side support. The resulting executables are located in your $ORACLE_HOME/bin directory and are typically named rtsora and rtsora32 (for 64-bit and 32-bit support respectively); on 32-bit only platforms, the executable is typically named rtsora. These relinked COBOL run-time modules resolve the calls to the Oracle APIs generated by Pro*COBOL.

These executables are effectively a replacement for the threaded COBOL run-time executables as provided with Server Express, i.e. $COBDIR/bin/rts32_t or $COBDIR/bin/rts64_t.

In order to execute your Pro*COBOL application, you need to utilise the rtsora* executables as your trigger program, for example

$ORACLE_HOME/bin/rtsora32 my32bitapp.int
$ORACLE_HOME/bin/rtsora my64bitapp.int

For debugging your application, you would have to set the COBSW environment variable to +A.
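For example (a sketch, using the 32-bit run-time):

COBSW=+A; export COBSW
$ORACLE_HOME/bin/rtsora32 my32bitapp.int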

Should you wish to, you can — after having made a backup! — replace these rts??_t modules with the rtsora equivalent files, e.g. (logged in as the root user):

mv $COBDIR/bin/rts32_t $COBDIR/bin/rts32.old
cp $ORACLE_HOME/bin/rtsora32 $COBDIR/bin/rts32_t

Having done this, you would be able to use the regular trigger programs, cobrun_t (to execute) or anim_t (to debug) your application.
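For example, after replacing the 32-bit threaded run-time as above (a sketch):

cobrun_t my32bitapp.int    # execute
anim_t my32bitapp.int      # debug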

To avoid rebuilding the COBOL run-time to resolve the Oracle API calls, there is an alternate approach you can use. This approach will create a callable shared object, which you can then preload from your application by compiling your source with the INITCALL "sharedobjectname" compiler directive.

If your application has a Java front-end — i.e. you use cobjrun as the trigger — or you are deploying your application under Enterprise Server , you must utilise this alternate method.

To create the callable shared object, firstly download makeorarts.zip from the Attachments link at the end of this article. You should then extract the makeorarts.sh script and execute it as follows, having first configured your COBOL and Oracle environment:

sh ./makeorarts.sh

The script will create rtsora_t.so (or .sl on HP-UX PA-RISC) in your current working directory, which is the callable shared object. It will also rebuild the ‘rtsora’ executable in the current directory, as rtsora32 (32-bit) or rtsora (64-bit).

You can then compile your program with -C 'initcall "rtsora_t.so"' (or .sl for HP-UX PA-RISC platforms), and you will be able to execute or debug your application.

NOTE: If you are working on HP-UX platforms (both Itanium and PA-RISC), when using this second approach you need to perform a couple of additional steps, otherwise you may receive an RTS118 error at execution time. This is documented within the Server Express product readme.

If you are creating a callable shared object to resolve the calls to Oracle APIs, and you are working on either HP-UX 11.0 or 11.11 (11i), you may firstly need to install an operating system patch from Hewlett Packard. For HP-UX 11.0, you need to install PHSS_30969, and for HP-UX 11.11, PHSS_30970. Note that these patches may have been superseded by later versions, so you should install the latest recommended patch. For later versions of the Operating System, e.g. HP-UX 11.23, no OS patches should be necessary.

Having installed the patch, you need to set an environment variable, LD_PRELOAD, to point to the Oracle client shared library, for example on HP/UX Itanium:

LD_PRELOAD=$ORACLE_HOME/lib32/libclntsh.so cobrun_t myapp.int

or on HP-UX PA-RISC:

LD_PRELOAD=$ORACLE_HOME/lib32/libclntsh.sl cobrun_t myapp.int

(for 64-bit HP-UX operation, use lib in place of lib32)

If you wish to work with both Java and Pro*COBOL on HP-UX, you therefore need to build your application as follows :

# Compile the Pro*COBOL application using Cobsql
cob -itk mycobapp.pco -C 'initcall "rtsora_t.so" p(cobsql)'
# Compile the main Java program
javac *.java
# Compile the application to be called directly from Java
cob -it mainapp.cbl
LD_PRELOAD=Oracle_lib_dir/libclntsh.so:rtsora_t.so cobjrun JavaClientApp

where Oracle_lib_dir will be $ORACLE_HOME/lib for a 32-bit only Oracle product, and $ORACLE_HOME/lib32 or $ORACLE_HOME/lib for a combined 32/64 Oracle product, depending on whether you are operating in 32-bit or 64-bit mode (see the Oracle environment section above for more details on this).

Related:

7020978: A sequence of sequence gives segmentation fault during code generation on Solaris.

This document (7020978) is provided subject to the disclaimer at the end of this document.

Environment

Orbix 3.3.8, Orbix 3.3.9, Orbix 3.3.10 on Solaris

Situation

The IDL compiler gives a segmentation fault during code generation on Solaris when an IDL file contains a sequence of a sequence.

Resolution

Summary: A sequence of sequence gives segmentation fault during code generation on Solaris.
Article Number: 18246
Environment: Orbix 3.3.8, Orbix 3.3.9, Orbix 3.3.10, Solaris
Question/Problem Description: A sequence of sequence gives a segmentation fault during code generation on Solaris.

The IDL compiler core dumps with a segmentation fault on Solaris.

A recursive type in IDL containing a sequence of a sequence does not compile on Solaris.

The IDL compiler on Solaris cannot deal with a sequence of sequence in the IDL file.

Defect/Enhancement Number: ORBTHREE-864
Resolution: This is a bug and was fixed in Orbix 3.3.10 TP7.

Here’s an example of an IDL that would cause the problem:

struct Parameter2
{
    sequence< sequence<Parameter2> > paramlist;
};

interface test
{
    // Either one of the following usages of Parameter2 would cause a coredump on Solaris
    //void getMethod(in Parameter2 a);
    Parameter2 getMethod();
};

Created date: 06 September 2011
Last Modified: 13 February 2013
Last Published: 23 June 2012
First Published date: 09 September 2011

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

7020977: What UNIX/Linux version is supported by this release of Micro Focus COBOL? Will the product run on unsupported UNIX/Linux versions?

The following web page shows Micro Focus product versions, and the general OS versions under which they are supported:

http://supportline.microfocus.com/prodavail.aspx

For more detail, on UNIX/Linux systems, there is a file named “env.txt” that is installed with each Micro Focus COBOL product. On disk, in the directory where the product is installed, there is a subdirectory named “docs”, and within that subdirectory, the file named “env.txt”. In other words, the file is:

$COBDIR/docs/env.txt

The “env.txt” file specifies the exact OS versions that Micro Focus used when producing and testing and certifying that particular Micro Focus product. More than one OS version is listed in the file, because certification testing is performed on more than one OS version.

Besides specifying OS versions, the file shows the commands used to reveal the OS versions. For example, the file may say:

Operating System
----------------
AIX 6.1.0.0 6100-00
uname -s
oslevel
oslevel -r

In the above, the OS version AIX 6.1.0.0 6100-00 is shown as being certified, and the commands:

uname -s
oslevel
oslevel -r

were the commands used to reveal the OS version.

Searching through the env.txt file for the pattern

also certified on

will reveal additional certified OS versions. The env.txt file also specifies particular versions of C/C++, the system assembler “as”, the system linker “ld”, and Java versions.
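For example (a sketch):

grep "also certified on" $COBDIR/docs/env.txt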

During installation, when the “install” script is run, the contents of the env.txt file are displayed on the screen, and if the OS version on the machine is different than the OS versions in the env.txt file, the script displays warning messages.

The question often comes up, “What if my OS version is different than the versions in the env.txt file? To what extent is the product supported? What are the chances the product will work on a different OS?”

Strictly speaking the OS versions specified in the env.txt file are the only OS versions supported.

This does not mean Micro Focus would refuse any kind of support for running under different OS versions. For customers with active maintenance agreements, Micro Focus is always willing to talk on the phone or respond in writing, to answer technical questions and give advice for solving or working around problems. But if a bug is discovered, such that a patch or a fix must be released, Micro Focus will provide the fix only if the bug reproduces under a supported OS version.

For example: an end-user encounters a situation where the compiler doesn’t work correctly or the compiled code doesn’t run correctly. The end-user experiences the bug under an OS version not listed in the env.txt file. The end-user submits an example test case to demonstrate the bug. Micro Focus tests and discovers that the bug also reproduces under a certified OS version. Thus it is a Micro Focus bug (not caused by OS incompatibility), and Micro Focus will provide a fix. The fix will be included in the next scheduled release of the product, or in a “hotfix” applicable to a current version of the product.

As another example, an end-user discovers a bug, but the bug occurs only under an uncertified OS, and when tested under a certified OS, the bug disappears. In that case the OS is too different than the certified OS, the bug is caused by OS incompatibility, and Micro Focus is not obligated to provide a fix.

Micro Focus products on UNIX/Linux often work successfully on OS versions different than those specified in the env.txt file. This is especially true when the OS versions are relatively close together and not too different.

For example, Server Express v5.1 WrapPack 5 is certified under Red Hat 5.5, but an end-user intends to run it under Red Hat 5.6. The two OS versions are similar enough that few problems are expected, and it will probably work. However, it is not guaranteed. The OS versions in env.txt are the only versions Micro Focus actually tested, and the only versions under which Micro Focus guarantees the behavior. Nevertheless, the chances are good that it will work.

Bugs that are actually the fault of the Micro Focus product nearly always reproduce under supported OS versions as well as similar and related OS versions, so if you decide to run on an unsupported OS version, as long as it is not too different than a supported OS, bugs that you encounter will likely reproduce on the supported OS as well, and Micro Focus will be obligated to fix them.

OS vendors sometimes provide a measure of backward-compatibility, so software that ran on a prior level of the OS will also run on the newer OS level. For example, Solaris 10 may claim a certain amount of “binary backward compatibility” with Solaris 9. One reason Micro Focus products often run successfully on OS versions not appearing in the env.txt, is because of OS compatibility features.

The greater the difference between OS versions, the greater the risk. For example, Server Express v4.0 SP2 is certified under AIX 5.1. What are the chances it will work under AIX 6.1? Server Express v4.0 SP2 is several years old; it reached its “end of service” December 2008 according to this web page:

http://supportline.microfocus.com/prodavail.aspx

Also, the difference between AIX 5.1 and AIX 6.1 is substantial; there is a “major number” change between them (5 versus 6). The old Micro Focus product may initially appear to work under AIX 6.1, but there is substantial risk of undefined and unpredictable behavior.

Micro Focus products that have reached “end of service” are no longer supported, but again this does not mean Micro Focus will refuse to provide service of any kind. For customers with active maintenance, Micro Focus is always willing to take phone calls or respond in writing to answer technical questions and provide “advice and avoidance”. If a bug is discovered in an “end of service” product, and a test case is submitted capable of demonstrating the bug, Micro Focus will test to determine whether the bug reproduces also in the latest version, and if it does, Micro Focus will fix it in the next scheduled release of the latest version. “End of service” means only that Micro Focus will no longer create fixes for the older version.

Running on an OS version not specified in the env.txt file is a calculated risk. If the OS version is relatively close to an OS version specified in the env.txt file, the risk is low. But we recommend that customers test application code thoroughly before putting applications into production.

Related: