Excluding Software Backups From Scanning by ILMT

When updating their IBM products, our users insist on creating online backup copies of the old software “just in case”. Repeated requests not to do so, or at least to keep their backups offline, have no effect; they carry on doing it, and this skews the PVU calculations in ILMT for these products.

Is there any way to prevent these backups from being identified by the ILMT scanner? If the users zipped the files, would that work? If we told them to use a standard folder/directory name, could we exclude that directory from scanning?

Suggestions and recommendations please. Thanks in advance.


Data backup strategies in QRadar

Hi All,
@JonathanPechtaIBM ,

We need to develop a backup strategy for our data, following the documentation referenced below.


QRadar data backups
Data classification is an important consideration for backup strategies for the following reasons:

**Data such as personal identity information (PII) needs to be stored securely, and might need to be kept separate from bulk data backups, and retained for longer periods for compliance reasons.**
How can this be achieved in QRadar?

**Keep QRadar® system configuration data separate from your security data such as events and flows. It is safer to keep the system configuration separate, and easier to restore this data if it is stored separately.**
How can we configure separate storage so that configuration data and security data are backed up simultaneously but kept apart?

**Store data such as PCI data in a separate location so that you can easily access this data when auditors want to see it.**
How can we classify specific data so that it is stored for a longer period of time? How can this be achieved in QRadar?

**Think about types of data and retention periods when you develop your backup strategies.
You can back up some types of data more frequently than others and you can use offsite storage for some data to insure against data loss.**
How can the backup frequency be controlled? Can we configure the backup service to run multiple times a day at a specific interval? For example, we have observed that QRadar starts its data and config backup every night at 12 AM. We would also like some filtering of the data, e.g., we need to store only PCI-related data and make a backup of just that filtered data.

I hope this clarifies my queries.



DB2 LUW enable incremental backup – when to perform first FULL backup?

I’m planning to start using incremental backup on our DB2 database.
The procedure for enabling incremental backup is clear: set TRACKMOD ON and perform a FULL database backup after that. But the documentation is not very clear about the timing of that FULL backup. It says the full backup should be performed after TRACKMOD is set, but can I let the applications connect and start working with the database first, and then start the FULL ONLINE backup? Or do I have to wait for the backup to complete before starting the applications?
I’m asking this because we have a 24/7 service depending on the DB, and since the backup takes about an hour, we cannot afford to be down that long.
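For reference, here is the sequence I have in mind (SAMPLE and /backup are placeholders, not our real database name or path):

```shell
# Enable modified-page tracking; as far as I understand, this only
# takes effect once the database is deactivated and reactivated
# (i.e., all applications disconnected once)
db2 update db cfg for SAMPLE using TRACKMOD ON
db2 deactivate db SAMPLE
db2 activate db SAMPLE

# Baseline FULL ONLINE backup -- this is the step I'm asking about:
# can applications already be connected and working at this point?
db2 backup db SAMPLE online to /backup include logs

# Subsequent incremental (cumulative) online backups
db2 backup db SAMPLE online incremental to /backup
```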
Thanks !


Re: Is it possible to remove a file from a backup so it can never be restored?

It’s not possible to alter the original backups. The way Avamar stores data means that removing individual files from a backup isn’t possible because it would damage the referential integrity of the backup, leading to hfscheck errors.

I see two viable options if you really, really want to get rid of the affected files.

1. You could delete the original backups. This would prevent the files from being restored but it also means you would lose the rest of the backup content.

2. You could get in touch with your account team about setting up ADMe to restore the affected backups to a staging server, remove the affected files, write the backups back to the system, and delete the originals. There are some caveats with this process (e.g. the backups would be associated with a staging server instead of the original client system; it may take significant time to process the backups since they have to be restored in full, then backed up again) but it would allow you to retain the unaffected data while still purging the affected files. There may be some other caveats. Adam Kirkpatrick?


db2 incremental backup takes longer time than a full online backup


Our current setup is DB2 9.1 FP8 with a database size of around 1.4 TB. Our OS is AIX 6.1.
A daily full online backup to a media server (via a media agent) takes around 8 hours.
We tried to implement incremental backup, but the time it takes is well beyond 8 hours, sometimes up to 12 hours, even though the incremental backup size is only around 130 GB.

My question:

1. What is affecting the time for the incremental backup to be longer than a full online backup?
2. Is there a way to check the underlying factors in DB2 itself?
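As a starting point, a few commands that can be run from within DB2 to watch what a running backup is doing (SAMPLE is a placeholder for the database name):

```shell
# Shows the running backup utility, its phase, progress, and throughput
db2 list utilities show detail

# Configuration settings that influence backup behavior and performance
db2 get db cfg for SAMPLE | grep -i -E 'TRACKMOD|UTIL'

# Low-level view of active utilities from db2pd
db2pd -utilities
```

Comparing the throughput reported by `list utilities` during an incremental run versus a full run can show whether the incremental backup is reading at a similar rate but simply writing less.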


Can we copy DB2 archived logs from offsite location?

If we run an online backup of our database to an NFS drive on the local network, can we then also copy the backup images and archived logs to an offsite location for storage, so that we have a copy both locally and offsite? Then, if for some reason we have to restore from that offsite location, do we just copy the logs back to the log directory specified in the db cfg and run the restore?
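A sketch of what that flow could look like, assuming the backup is taken with INCLUDE LOGS (SAMPLE, /nfs, and /offsite are placeholder names):

```shell
# Online backup; INCLUDE LOGS stores the logs needed for rollforward
# inside the backup image itself
db2 backup db SAMPLE online to /nfs/backup include logs

# Keep a second copy of the image and of separately archived logs offsite
cp /nfs/backup/SAMPLE.0.* /offsite/backup/
cp /nfs/archlogs/*.LOG    /offsite/archlogs/

# Restoring from the offsite copy: LOGTARGET extracts the logs stored
# inside the image, and OVERFLOW LOG PATH lets rollforward find logs
# copied back from the offsite location
db2 restore db SAMPLE from /offsite/backup logtarget /tmp/logs
db2 rollforward db SAMPLE to end of logs and stop overflow log path (/tmp/logs)
```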



Do the daily Data backups backup all of the data on the QRadar console or is it just for the day the same as the config ?


Do the daily Data backups backup all of the event and flow data on the QRadar console or is it just for the day the same as the config ?

We have just configured the Backup and Recovery tab (under the Admin tab) to take a data backup as well as a config backup on a daily basis, and the data backup sizes have been decreasing since we enabled this.



What is the best strategy for backing up the UCD file system?

I’ve read the UCD documentation on backups and the good article Sean Wilbur wrote on backups. I plan on backing up the appdata, conf, and tomcat conf dirs, since they contain our keystore and a slightly modified server.xml. The database side is straightforward with DB2 backups.

1) I’m curious what the difference would be between an online and an offline backup of this file system if I’m rsync-ing the files to another server. Is the risk that I might miss a file that’s “in use” during an online backup? In that case, would I be better off with a monthly offline backup at the same time as the DB2 database backup, and incremental rsync backups in between?

2) Under ../appdata there’s a logs directory that’s quite large on our system. Is it recommended to back that up as well? Ours is currently over 1 GB in size.

Thanks for the help.


7020674: Advanced Backup Options: High Performance, Consistency, and Split OFUSER

This article will explain three Standard Backup Advanced Menu options:

  • High Performance Backups
  • Backup Consistency Level
  • Split OFUSER Directory Backup for Speed

These options affect the performance of backups; depending on the environment, the results can vary from slowing a backup down to speeding it up.

Each customer should keep a log that tracks backup times correlated with Reload settings to determine the optimal settings for their environment.

High (High Performance Standard Backups)

This setting allows Reload to launch multiple backup processes simultaneously. Its behavior depends upon the Consistency setting:

Consistency = High/Highest: It launches the backup of the user databases, followed by the indexes. Once that is complete, it begins the backup of the BLOBS (a.k.a. the “offiles” or “attachments” directory) as well as the message databases. These run simultaneously.

“Highest” differs from “High” in that it verifies that every user database in the post office got copied down to the backup.

Consistency = Normal: It launches the backup of the user databases and immediately launches the BLOBS backup as well. While the BLOBS are being backed up, the user databases typically finish first, so the index backup begins. The BLOBS may still be backing up when the indexes are finished; once the indexes are done, the message database backup begins.

In some environments, this setting speeds up the backup. If speed is the highest priority, then consider setting Consistency to “normal”.

When High Performance is disabled, each backup process runs serially one by one. The user databases are the first to be backed up, followed by the indexes, the message databases, and finally the BLOBS.

A slow WAN link or slow I/O may be factors that would cause an administrator to consider disabling this option. Another factor for considering disabling this option would be troubleshooting a backup. It is easier to read the logs if the processes are launched one after the other rather than simultaneously; in addition, having multiple processes running at once might be the issue under certain circumstances.

But again, experiment with your environment and be sure to log the results until you find the optimal setting.

Consistency (Backup Consistency Level)

As you can tell, this option affects the order of the processes within the backup. Consistency should be set to high in most circumstances, unless the POA is unloaded or there is a high level of confidence that the production message store will not change (additions/deletions) during the backup process. High is the default setting except for Reload-to-Reload profiles, where the “post office” being backed up is actually another Reload profile; even then, if there is a chance that the primary Reload profile you are backing up will change (e.g., Access Mode, Restore Mode, or DR), set Consistency to high.

Split (Split OFUSER Directory Backup for Speed)

With this setting enabled, Reload divides the backup of the OFUSER and OFUSER/INDEX directories into two separate backup processes. This allows Reload to start backing up the OFFILES directory sooner, which results in a quicker backup; however, when a post office has several thousand users, this setting can actually slow backup performance down. Consider disabling it if the post office has more than 3,000 users. But, again, we recommend testing performance with it both enabled and disabled to see what works best in your environment.