Re: Oracle wallet on RMAN backup thru avamar

OK, I'm not an Oracle guru by any means, but I worked on a project this year where I found out a few things.

1) Avamar itself does not have a way to use Oracle Wallet if you’re setting up an Avamar group/policy, as far as I know

2) However, if the customer has their own RMAN scripts and you set up Avamar as the target device, apparently the following lines assist in “magically” getting your credentials from Oracle Wallet (caveat – there may be other lines needed, but as far as I know, these are the important ones):

# ----------------------------------------------------------------
# Set the Oracle Home, the Oracle SID, and the Server Name.
# ----------------------------------------------------------------
export ORATAB=/etc/oratab
export ORACLE_SID=`egrep -v "^#" ${ORATAB} | awk -F: '{ print $1 }'`
export ORACLE_HOME=`cat $ORATAB | grep ^$ORACLE_SID: | cut -d":" -f2`
export LOG_DIR=/home/oracle/log
export ServerName=`hostname | awk -F. '{ print $1 }'`
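(One caveat with that first block: if /etc/oratab has more than one uncommented entry, that ORACLE_SID assignment will pick up all of them. A small tweak, assuming the script is only meant to back up a single instance on the host:)

# take only the first uncommented oratab entry
# (assumption: this host runs the one instance the backup targets)
export ORACLE_SID=`egrep -v "^#" ${ORATAB} | awk -F: 'NR==1 { print $1 }'`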

(then later on before you set the RMAN name)

# ----------------------------------------------------------------
# Set the target and catalog connect strings.
# Use the ORACLE WALLET for authentication.
# ----------------------------------------------------------------
TARGET_CONNECT_STR=${ORACLE_SID}_sys
CATALOG_CONNECT_STR=rcatowner

(and then later when you actually begin the backup section)

connect TARGET /@$TARGET_CONNECT_STR;
connect catalog /@$CATALOG_CONNECT_STR;

(additional advisory: I have no idea who "rcatowner" is; it comes from a customer script, though it is presumably the owner of the RMAN recovery catalog)
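For completeness, here is a rough sketch of the wallet plumbing that has to exist for those bare "/@alias" connects to work. This is not from the customer script; the wallet directory, the placeholder passwords, and the assumption that the TNS aliases match the connect strings are mine:

# ------------------------------------------------------------------
# Secure external password store setup (sketch only).
# ------------------------------------------------------------------
WALLET_DIR=/home/oracle/wallet        # assumed location

# create the wallet, then store one credential per TNS alias
mkstore -wrl $WALLET_DIR -create
mkstore -wrl $WALLET_DIR -createCredential ${ORACLE_SID}_sys sys 'sys_password_here'
mkstore -wrl $WALLET_DIR -createCredential rcatowner rcatowner 'rcat_password_here'

# sqlnet.ora (in $ORACLE_HOME/network/admin or $TNS_ADMIN) must point at the
# same directory and allow the wallet to satisfy logins:
#
#   WALLET_LOCATION =
#     (SOURCE = (METHOD = FILE)
#       (METHOD_DATA = (DIRECTORY = /home/oracle/wallet)))
#   SQLNET.WALLET_OVERRIDE = TRUE
#
# tnsnames.ora also needs entries named ${ORACLE_SID}_sys and rcatowner,
# because the wallet keys each stored credential by its connect alias.

With that in place, the "connect TARGET /@$TARGET_CONNECT_STR" lines pick up the stored credentials and no password ever appears in the script.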

Hope that helps in some manner.

Related:

Avamar VM image restores – disk provisioning overview?

Can someone provide an overview of Avamar VM image restores and how VM disks are provisioned relative to the data that was backed up?

That is, I want to better understand how a VM's disks will be created/provisioned by Avamar when a VM image backup is restored to a "new" VM, to an "existing" (but different) VM, and to the "original" VM.

I'm working with a customer, and it appears that when they restore a VM image at their DR site (one replicated from the primary site), all of the disks are created with thick provisioning. For some disks, depending on the VM image being restored, this results in substantially longer restore times because the disks have large provisioned capacities but not much actual data on them.

(e.g., a disk on the original VM that was thin-provisioned at 1 TB and holds only a few GB restores at the DR site as a full thick-provisioned 1 TB disk, or at least it seems to take as long as a full 1 TB disk would take to restore)

Posting this because I’ve tried to search for it and can’t seem to find anything definitive.

All comments/feedback appreciated – thanks.

Related:

Dell EMC X-Series Networking available exclusively through Distribution

Dell EMC's X-Series family of smart web-managed 1GbE and 10GbE Ethernet switches is now available exclusively through Dell EMC approved distributors. Partners can now order X-Series switches with quicker delivery times and a simple, predictable process.


Dell EMC X-Series Switches

The Dell EMC X-Series includes the industry's only web-managed 10GbE all-fibre switch and offers end users access to enterprise-class features in a simple, intuitive way. Its interface allows for easy setup and management from a single screen, and its ease of use and flexibility let customers run their businesses with minimal intervention and for a variety of needs.

By offering the X-Series through distribution, Dell EMC gives partners quicker delivery, a key benefit of working through distribution. Partners can also choose to work with any of Dell EMC's approved distributors and leverage the distribution partnerships that best meet their company's needs.

To learn more about the X-Series, download the Sales Battle Card.

Related:

Re: Networker memory requirement

I have NetWorker 8.2.2.6 on a server with a 2.5 GHz CPU, 24 cores, and 32 GB of memory, backing up to Data Domain. It has around 165 groups and 2,600 clients. Some jobs wait for days before starting. I have tried different combinations of server parallelism, savegroup parallelism, device target sessions, and max sessions, and I am still not getting good session starts; groups wait for one to two days before starting. What is the logic behind parallelism, and why am I not getting good sessions?
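(For reference, a quick way to dump the current parallelism-related settings; this assumes the standard nsradmin interactive syntax and that the attribute names below exist unchanged in 8.2:)

nsradmin                      # interactive session against the local server
# at the nsradmin> prompt:
#   show parallelism
#   print type: NSR                           -> server-wide parallelism
#   show name; target sessions; max sessions
#   print type: NSR device                    -> per-device session limits
# (roughly, the lowest applicable limit among server, group, and device wins,
# so a low value anywhere can hold sessions in waiting)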

I ran vmstat and see a lot of swap usage:

[amaurya@dccbrmsta ~]$ vmstat 1 10

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd    free   buff    cache   si   so    bi    bo   in   cs us sy id wa st
 4  0 669588 5051176 913076 21007152    0    0   135   178    2    1  8  2 90  0  0
 4  0 669588 5050272 913076 21007368    0    0     0  1360 8899 5073 15  3 82  0  0
 6  0 669588 5051460 913076 21007476    0    0     0   644 7882 4500 14  3 82  0  0
 4  0 669588 5053084 913084 21007600    0    0     0   456 7877 4836 14  3 83  0  0
 4  0 669588 5054036 913084 21007848    0    0     0  1244 8141 4309 14  4 82  0  0
 4  0 669588 5052812 913084 21007992    0    0     0    80 7364 4249 14  3 83  0  0
 3  0 669588 5052300 913084 21008028    0    0     0 14476 7325 4072 13  2 85  0  0
 5  0 669588 5053532 913084 21008256    0    0     0   812 5195 3168 13  0 87  0  0
 3  0 669588 5053260 913084 21008404    0    0     0   832 6366 3836 14  0 86  0  0
 4  0 669588 5054044 913084 21008484    0    0     0   776 5534 3517 13  0 87  0  0
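(A side note on reading that output: si/so are the columns that show active swap-in/swap-out, and a non-zero swpd with si/so at 0 only means pages were pushed to swap at some point. A rough way to watch for active swapping over a longer window, assuming the standard Linux vmstat layout above:)

# flag any sample where pages are actively moving to or from swap
# (si = column 7, so = column 8 in the layout above; -n prints the header once)
vmstat -n 5 60 | awk 'NR > 2 && ($7 + $8) > 0 { print "active swapping:", $0 }'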

Any comments?

Related:

Re: ScaleIO Node failure scenario question

Hi Gary,

Whether in a 3-node setup or a larger one, the answer is (most likely) going to be the same. In your 3-node setup, what you are describing is a double failure. If the rebuild from the first node failure has not completed by the time the HDD in node 2 fails, then you would likely be looking at a data-unavailable scenario, meaning there were blocks of data that existed only on that HDD in node 2 and had not yet been fully rebuilt before it failed. I say "likely" because all of the disks would be participating in the rebuild from the original failure (node 1), sending and receiving blocks of data equally to keep things balanced. If that rebuild has not finished, some primary data blocks that only that HDD could have served will be unavailable.

If the rebuild from node 1 finished and then the HDD in node 2 fails, and you don't have enough spare capacity, you won't be able to complete the second rebuild. But there will still be at least one copy of the data blocks from the failed disk residing elsewhere, and clients will still be able to access their data.

It all depends on the timing of the failures relative to the rebuild.

Hope that helps,

Rick

Related:

Re: isi_job_d problems…

Alright, if you get this issue again, it's the MCP process that is the problem. Here's how to fix it:



isi_for_array -sX ps auwwx | grep -v grep | grep isi_job_d

isi_for_array -sX ps auwwx | grep -v grep | grep isi_mcp

isi services -a isi_job_d disable

isi_for_array -sX ps auwwx | grep -v grep | grep isi_job_d

If processes are still running for isi_job_d after 60 seconds, run:

isi_for_array -sX killall -9 isi_job_d

isi_for_array -sX killall -6 isi_mcp

isi_for_array -sX ps auwwx | grep -v grep | grep isi_mcp

If processes are still running for isi_mcp after 60 seconds, run:

isi_for_array -sX killall -9 isi_mcp

Then, once the processes are truly dead, restart isi_mcp with:

isi_for_array -sX isi_mcp

Then re-enable isi_job_d:

isi services -a isi_job_d enable

Check ps auwwx again:

isi_for_array -sX ps auwwx | grep -v grep | grep isi_job_d

isi_for_array -sX ps auwwx | grep -v grep | grep isi_mcp

Also check the thread count for FSAnalyze.
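If you end up doing this more than once, the whole sequence can be wrapped in a small script. This is only a sketch built from the exact commands above; the 60-second waits mirror the manual steps, and it should be tested carefully before being pointed at a production cluster:

#!/bin/sh
# Restart isi_job_d/isi_mcp using the same sequence as the manual steps above.

still_running() {
    # returns success if any node still shows the named process
    isi_for_array -sX ps auwwx | grep -v grep | grep "$1"
}

isi services -a isi_job_d disable
sleep 60
still_running isi_job_d && isi_for_array -sX killall -9 isi_job_d

isi_for_array -sX killall -6 isi_mcp
sleep 60
still_running isi_mcp && isi_for_array -sX killall -9 isi_mcp

# once everything is truly dead, restart MCP and re-enable the job daemon
isi_for_array -sX isi_mcp
isi services -a isi_job_d enable

# final sanity check
still_running isi_job_d
still_running isi_mcp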

Related:

Re: VMAX -SRDF/A Write Pacing – Help

Hello Community,

I am looking for some detailed information on SRDF/A Write Pacing. We have a situation where cache utilization is high on the remote array (VMAX 20K, running 5876.288) and very frequently reaches the write-pending (WP) limit, causing the SRDF link to drop.

We have followed recommendations, implemented a DSE pool, and rectified any link-related problems, which has improved the situation a lot.

However, I would like to explore the Device Write Pacing option and see if it can improve cache utilization on the remote array. I did read the following in the SRDF CLI user guide:

” Device-level pacing is for SRDF/A solutions in which the SRDF/A R2 devices participate in TimeFinder copy sessions.

SRDF/A device-level write pacing addresses conditions that lead to cache overflow specifically due to TimeFinder/Snap and TimeFinder/Clone sessions on an R2 device running in asynchronous mode.”



All of our R2 devices have a Gold Copy associated with them in a precopy state. If I activate write pacing on the Gold Copy devices, would it improve cache utilization, and would it cause any performance impact on the hosts masked to them?
