Prevention policy for DB2

I need a solution

Under the AIX prevention policies, the only database prevention policy I see is for MySQL. Is there a best practice or template for DB2? We are trying to set up prevention for an AIX 7.2 server running IBM DB2. We have a policy in place for the OS, but we want one geared toward the database.


Related:

7022986: DB2 sqlcode -2044 error – backups fail

This document (7022986) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 11 Service Pack 3 (SLES 11 SP3)

SUSE Linux Enterprise Server 11 Service Pack 4 (SLES 11 SP4)

Situation

After updating to kernel version 3.0.101-108.38.1 or 3.0.101-108.41.1, storage backups started to fail. The DB2 errors pointed to IPC (interprocess communication) related problems.

The DB2 diagnostic log (db2diag.log) shows an error like this:
FUNCTION: DB2 UDB, database utilities, sqluBufInUse, probe:1453

MESSAGE : SQL2044N An error occurred while accessing a message queue.
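
To confirm a server is hitting this combination, it helps to check both the running kernel and the instance's diagnostic log. The following is a minimal Python sketch, assuming the default diagnostic path under the instance owner's home directory (~/sqllib/db2dump/db2diag.log); if DIAGPATH has been changed, adjust DIAG_LOG accordingly.

#!/usr/bin/env python3
"""Check for the affected kernel releases and recent SQL2044N entries.

Sketch only: the diagnostic log path assumes the default DIAGPATH
(<instance home>/sqllib/db2dump/db2diag.log).
"""
import platform
from pathlib import Path

DIAG_LOG = Path.home() / "sqllib" / "db2dump" / "db2diag.log"   # assumed default
AFFECTED = ("3.0.101-108.38.1", "3.0.101-108.41.1")

kernel = platform.release()
print(f"running kernel: {kernel}")
if any(kernel.startswith(v) for v in AFFECTED):
    print("-> this kernel is one of the releases reported as affected")

if DIAG_LOG.exists():
    hits = [line for line in DIAG_LOG.read_text(errors="replace").splitlines()
            if "SQL2044N" in line or "sqluBufInUse" in line]
    print(f"{len(hits)} matching entries in {DIAG_LOG}")
    for line in hits[-5:]:                     # show the most recent few
        print("  ", line.strip())
else:
    print(f"diagnostic log not found at {DIAG_LOG}")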

Resolution

The SLES 11 SP4 kernel update 3.0.101-108.52.1 and the SLES 11 SP3 LTSS kernel update 3.0.101-0.47.106.32.1, both released on May 31, include the patch that resolves the problem.
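
To verify that a host already runs a kernel at or above the fixed release, the numeric version components can be compared. The sketch below assumes the SLES 11 SP4 fixed release; substitute 3.0.101-0.47.106.32.1 when checking an SP3 LTSS system.

#!/usr/bin/env python3
"""Compare the running kernel against the release that carries the fix.

Sketch only: FIXED assumes SLES 11 SP4; use 3.0.101-0.47.106.32.1 when
checking a SLES 11 SP3 LTSS system.
"""
import platform
import re

FIXED = "3.0.101-108.52.1"

def numeric_parts(release):
    # "3.0.101-108.41.1-default" -> (3, 0, 101, 108, 41, 1)
    return tuple(int(p) for p in re.findall(r"\d+", release))

running = platform.release()
up_to_date = numeric_parts(running) >= numeric_parts(FIXED)
print(f"running: {running}")
print(f"fixed:   {FIXED}")
print("-> fix included" if up_to_date else "-> older than the fixed kernel, update needed")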

Cause

The cause is faulty parsing of msgctl() arguments, introduced by a fix for an IPC data structure inconsistency.
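
For context, msgctl() is the System V message-queue control call that DB2's backup utilities rely on. The following is an illustrative sketch only, not DB2 code: it assumes a glibc-based Linux system, creates a throwaway queue, and drives the same msgctl(IPC_STAT) path through ctypes.

#!/usr/bin/env python3
"""Drive the msgctl(IPC_STAT) path on a throwaway System V message queue.

Illustration only (glibc on Linux assumed); DB2 exercises the same
interface for the message queues its backup utilities use.
"""
import ctypes
import ctypes.util
import os

libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6", use_errno=True)

IPC_PRIVATE = 0
IPC_CREAT = 0o1000
IPC_RMID, IPC_STAT = 0, 2          # Linux command values

msqid = libc.msgget(IPC_PRIVATE, IPC_CREAT | 0o600)
if msqid == -1:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

stat_buf = ctypes.create_string_buffer(512)    # roomy enough for struct msqid_ds
rc = libc.msgctl(msqid, IPC_STAT, stat_buf)
print(f"msgctl(IPC_STAT) on queue {msqid}: {'ok' if rc == 0 else 'failed'}")

libc.msgctl(msqid, IPC_RMID, None)             # remove the queue again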

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

DB2 Load taking too much time

We are using DataStage version 11.3 running on Red Hat Enterprise Linux Server release 6.9, loading data into DB2 (DB2/AIX64, product version SQL10055, driver version 4.19.49). We use the load utility to load. In recent months the load processing has been taking too much time. In early December the issue became critical; the network teams came in (the DataStage server and the DB2 server are in two different geographical locations), analyzed the network by collecting dumps, and found no issues.
Eventually the Unix/Linux server was rebooted. This seemed to solve the issue: the load process greatly improved (under 20 minutes, compared to 4 hours while we were facing the issue).
Starting last week, the long load issue is back. We can't convince the Unix team to reboot the server on a regular basis; they won't buy that. No one seems to know exactly what the issue is. The record counts remain about the same, the data structures remain the same, and there have been no changes to the job designs.
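
One way to make progress without scheduled reboots is to collect evidence on the DB2 server while a slow load is in flight and compare it against a healthy run. Below is a rough Python sketch; it assumes it runs as the DB2 instance owner with the db2 command-line processor on the PATH, and the interval, duration, and output file name are placeholders to adjust.

#!/usr/bin/env python3
"""Snapshot LOAD progress and basic host statistics while a slow run is active.

Sketch only: assumes it runs on the DB2 server as the instance owner with
the db2 command-line processor on the PATH; interval, duration, and output
file are placeholders.
"""
import subprocess
import time
from datetime import datetime

INTERVAL = 60            # seconds between snapshots (placeholder)
RUNTIME = 4 * 3600       # stop after four hours (placeholder)
OUTFILE = "load_monitor.log"

def snap(cmd):
    try:
        return subprocess.run(cmd, capture_output=True, text=True,
                              timeout=30).stdout
    except (OSError, subprocess.TimeoutExpired) as exc:
        return f"<{cmd[0]} failed: {exc}>\n"

end = time.monotonic() + RUNTIME
with open(OUTFILE, "a") as log:
    while time.monotonic() < end:
        log.write(f"==== {datetime.now().isoformat()} ====\n")
        log.write(snap(["db2", "list utilities show detail"]))   # LOAD progress
        log.write(snap(["vmstat", "1", "2"]))                    # CPU / paging
        log.flush()
        time.sleep(INTERVAL)

Comparing the timestamps and utility progress in this file between a fast run and a slow run at least narrows the problem to either the database server or the transport in between, without needing another reboot first.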
