LOB_TABLESPACE increasing drastically

I need a solution

DLP 15, Oracle 12 (OEM)

The issue is as the title says. I’ve contacted support, but it keeps going in circles, or it takes a day to get a response only to be told something that has already been tried or disproven.

The issue started about the beginning of this month (May 2019).

From the beginning.

“Event Code 2301” LOB_TABLESPACE is almost full. This event code has been appearing for about 3 years. We delete the complete incident, and according to Oracle this essentially clears up areas within the tablespace for reuse, even though the space is not reported as available. That is fine: as long as we keep deleting incidents, we should not have to add more data files to the tablespace. I have not needed to add data files for the past couple of years, yet suddenly I do.
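
To see what is actually consuming the space, it helps to list the largest segments in the tablespace. A quick check with the standard dictionary view (tablespace name taken from the event text; run as a DBA user):

SQL> select owner, segment_name, segment_type, round(bytes/1024/1024/1024) gb from dba_segments where tablespace_name = 'LOB_TABLESPACE' order by bytes desc;

The top LOBSEGMENT rows can then be mapped back to their owning table and column via dba_lobs.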

“Event Code 2316” Over 1,000,000 incidents currently contained in the database. This is another one that has been like this for a couple of years, but in early April 2019 I decided to stop following our previous procedure of keeping a couple of months’ worth of incidents on the Enforce server, and cleared out about 1,800,000 incidents to stop this event code from being generated. Logs show that the deletion completed successfully.

May 4th 2019 – “Event Code 1802” Corrupted incident received. This started firing constantly over the weekend.

May 7th 2019 – Finally realized that incidents were not coming in. A quick Google search turned up a solution:

 https://support.symantec.com/en_US/article.TECH221815.html

Added about 60 GB of data files to the Oracle database, then went into the folder and renamed all .bad files back to .idc. Incidents were then pushed into Oracle normally and appeared on the Enforce server.
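
If the Enforce server is on Linux, the bulk rename can be scripted; a minimal sketch, assuming the current directory is the folder containing the .bad files:

for f in *.bad; do mv "$f" "${f%.bad}.idc"; done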

May 8th 2019 – The 60 GB of tablespace was gone. So I added about 100 GB more and again renamed the files to get them pushed back into the database.

Checked to see if any policies were creating a large number of incidents.  NONE.

Decided to clear more incidents, down to about 300,000 incidents stored on the Enforce server, hoping that the LOB_TABLESPACE would stop growing. WRONG.

Every day the LOB_TABLESPACE grows by about 20 GB. I have added about 200 GB, hoping support would provide a resolution before it fills up again, and still nothing.

No major changes have been made to the Oracle database or to any of the DLP servers. What could be causing the LOB_TABLESPACE to grow so uncontrollably?
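
One diagnostic worth running before adding yet more data files is to check how much of the tablespace Oracle itself considers free, i.e. whether the deletes are releasing space that is simply not being reused. A sketch using a standard dictionary view:

SQL> select sum(bytes)/1024/1024/1024 free_gb from dba_free_space where tablespace_name = 'LOB_TABLESPACE';

If the reported free space is small even after the mass deletion, the deleted LOB space may still be held inside the LOB segments (subject to LOB retention/PCTVERSION) rather than returned to the tablespace, which would explain why new inserts keep extending it.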


Related:

Re: Need explanation on Parent and Child Storage Group in VMAX All Flash Array

One example, from the Oracle space, is to create a child SG (storage group) for the Oracle data files (e.g. data_sg) and another for the redo logs (e.g. redo_sg). Aggregate them under a parent SG (e.g. database_sg). Then use the parent for operations that relate to the whole database:

1. Masking view.

2. Remote replications with SRDF (create a consistency group that includes both data and logs).

3. Local database snapshots using SnapVX (a snapshot containing both data and log is considered ‘restartable’ and you can use it to instantiate new copies of the database, or as a ‘savepoint’ before patch update or for any other reason).

However, if you need to recover the production database, you only want to restore data_sg and not redo_sg (in case the online redo logs survived, as they contain the latest transactions). Therefore, although the snapshots are taken with the parent (database_sg), you can restore just the child (data_sg) and proceed with database recovery, all the way to the current redo logs.

Another advantage is separate performance monitoring of the database data files and logs. Using the child SGs, you can monitor each for performance KPIs without the redo log metrics mixing with the data file metrics.

Finally, with the introduction of Service Levels (SLs) back into the code, you can apply them differently to the child SGs if you want (e.g. Silver for data_sg, Gold for redo_sg).
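
As a rough Solutions Enabler sketch of the cascaded setup described above (the array ID 1234 is made up, and exact symsg options vary by SE version, so treat this as illustrative only):

symsg -sid 1234 create data_sg
symsg -sid 1234 create redo_sg
symsg -sid 1234 create database_sg
symsg -sid 1234 -sg database_sg add sg data_sg,redo_sg

The last command nests the two child SGs under the parent, after which the masking view, SRDF group, and SnapVX snapshots can all be defined against database_sg.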

Related:

SAV for Linux – Oracle DB server, suggested exclusions to on-access scanning

This article describes the suggested exclusions to Sophos Anti-Virus for Linux on-access scanning where an Oracle database is installed.


Applies to the following Sophos products and versions

Sophos Anti-Virus for Linux

Sophos Anti-Virus for Linux 9.15.0

Sophos Linux Security 10.4.0

When an Oracle database is installed and running on a Linux server, on-access scanning may cause a noticeable performance impact. This is because, during normal operation, many integral DB files are constantly being opened and written to, or used in processing the data. Sometimes these files are opened and scanned many hundreds of times a minute.

On Windows platforms these files can be excluded using their file extensions. On Linux this is not so easy, as file extensions are not significant to the OS, so the file exclusions should be made with the help of the local DB administrator.

The following table describes the Oracle file types that should be considered for exclusion, with reference to their Windows-equivalent extensions:

File Type: Data Files
Description: Oracle data files would have the extension “.dbf” on a Windows platform.
Example: These are generally found under …/oracle/oradata/

File Type: Log Files
Description: Log files may have a “.log” extension; these are created when creating/restoring database backup copies.
Example: These could be found under …/oracle/inventory/logs/

File Type: Redo Files
Description: Redo files are real-time Oracle execution files and may also have a “.log” extension, or a “.rdo” extension on a Windows platform. NOTE: Redo logs will exist if the Oracle Development toolkit or backup and recovery are used.

File Type: Control Files
Description: Oracle control files would have the extension “.ctl” on a Windows platform.
Example: These are often found under …/oracle/oradata/

The files included in the above file types should be identified by the local DBA so they can be considered for exclusion.

Exclusions can be made on an individual file-name basis, or as a block using wildcards and common name attributes. When any exclusions are made, it is recommended to review the file and consider whether a scheduled or named scan needs to be created to check the file/directory regularly.
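
On a managed computer, one way to add such path exclusions is with savconfig (a sketch assuming the default SAV for Linux install location and example Oracle paths; confirm the exact parameter names against the configuration guide):

/opt/sophos-av/bin/savconfig add ExcludeFilePaths /oracle/oradata/
/opt/sophos-av/bin/savconfig add ExcludeFilePaths /oracle/inventory/logs/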

Note:

Enterprise Console only supports path-based Linux and UNIX exclusions. You can also set up other types of exclusion directly on the managed computers; these can use regular expressions and can exclude file types and filesystems. For information on how to do this, see the Sophos Anti-Virus for Linux configuration guide or the Sophos Anti-Virus for UNIX configuration guide.

  • Please see the Enterprise Console Help documentation for details of adding exclusions on Linux servers through SEC
  • Please see the Sophos Central Admin help documentation for details of adding exclusions on Linux Servers managed by Central


Related:

Provisioning Services: PVS Servers May Stop Responding Or Target Devices May Freeze During Startup Due To Large Size Of MS SQL Transaction Logs

Back up the XenApp/XenDesktop Site and PVS databases and the transaction log file to trigger automatic transaction log truncation.

The transaction log should be backed up on a regular basis to avoid auto-growth operations and a full transaction log file.

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/back-up-a-transaction-log-sql-server?view=sql-server-2017
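
For example, a routine log backup looks like this (the database name and backup path are placeholders):

BACKUP LOG [ProvisioningServices] TO DISK = N'D:\Backup\ProvisioningServices_log.trn';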

ADDITIONAL INFORMATION

Ideally, the transaction log is truncated automatically after the following events:

  • Under the simple recovery model, unless some factor is delaying log truncation, an automatic checkpoint truncates the unused section of the transaction log. Under simple recovery there is little chance of the transaction log growing, except in specific situations where there is a long-running transaction or a transaction that creates many changes.
  • By contrast, under the full and bulk-logged recovery models, once a log backup chain has been established, automatic checkpoints do not cause log truncation. Under the full or bulk-logged recovery model, if a checkpoint has occurred since the previous backup, truncation occurs after a log backup (unless it is a copy-only log backup). There is no automated transaction log truncation; transaction log backups must be made regularly to mark unused space as available for overwriting. The bulk-logged recovery model reduces transaction log space usage by using minimal logging for most bulk operations.

The transaction log file size may not decrease even if the transaction log has been truncated automatically.

Log truncation deletes inactive virtual log files from the logical transaction log of a SQL Server database, freeing space in the logical log for reuse by the physical transaction log. Truncation is essential to keep the log from filling: if a transaction log were never truncated, it would eventually fill all the disk space allocated to its physical log files.
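
When a log refuses to truncate, SQL Server records the reason; a quick way to check it (standard catalog view):

SELECT name, recovery_model_desc, log_reuse_wait_desc FROM sys.databases;

A value such as LOG_BACKUP indicates the log is waiting for a log backup before it can truncate.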

It is also recommended to keep the transaction log file on a separate drive from the database data files, as placing both data and log files on the same drive can result in poor database performance.

Related:

An easy and simple way to establish Oracle ADG


Here is a simple and easy way to set it up.

Suppose the hosts and IPs are as follows:

150.150.186.16 linux3 (primary)

150.150.186.26 linux4 (secondary or standby)

On primary node:

(1) parameter:

SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      TEST

SQL> show parameter db_uniq

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      TEST

SQL> show parameter LOG_ARCHIVE_CONFIG

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_config                   string      DG_CONFIG=(TEST,TESTDR)

SQL> show parameter LOG_ARCHIVE_DEST_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_1                   string      LOCATION=/oracle/archive

SQL> show parameter LOG_ARCHIVE_DEST_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      SERVICE=TESTDR NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TESTDR

SQL> show parameter log_archive_dest_state_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_2             string      ENABLE

SQL> show parameter log_archive_format

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_format                   string      %t_%s_%r.arc

SQL> show parameter log_archive_max_processes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_max_processes            integer     30

SQL> show parameter remote_login_passwordfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile            string      EXCLUSIVE

SQL> show parameter fal_server

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_server                           string      TESTDR

SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO

SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      TEST

The redo logs should look like this (note the standby redo log groups):

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/oracle/system/redo01.log
/oracle/system/redo02.log
/oracle/system/redo03.log
/oracle/system/standby_redo01.log
/oracle/system/standby_redo02.log
/oracle/system/standby_redo03.log
/oracle/system/standby_redo04.log
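
If the standby redo logs do not exist yet, they can be added with something like the following (the 50M size is an assumption; standby redo logs must be the same size as the online redo logs, with one more group than the online groups):

SQL> alter database add standby logfile '/oracle/system/standby_redo01.log' size 50M;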

(2) listener and TNS

[oracle@linux3 admin]$ cat listener.ora

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = linux3)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /oracle/app/product/11.2.0)
      (SID_NAME = TEST)
    )
  )

ADR_BASE_LISTENER = /oracle

[oracle@linux3 admin]$ cat tnsnames.ora

TEST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = linux3)(PORT = 1521))
    )
    (CONNECT_DATA =
      # (SID = TEST)
      (SERVICE_NAME = TEST) (UR = A)
    )
  )

TESTDR =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = linux4)(PORT = 1521))
    )
    (CONNECT_DATA =
      # (SID = TEST)
      (SERVICE_NAME = TESTDR) (UR = A)
    )
  )
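
At this point it is worth confirming that the primary can resolve and reach the standby alias, e.g.:

[oracle@linux3 admin]$ tnsping TESTDR

A successful OK response means redo transport has a working path to the standby listener.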

On Secondary node:

(1) parameter

SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      TEST

SQL> show parameter db_uniq

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      TESTDR

SQL> show parameter LOG_ARCHIVE_CONFIG

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_config                   string      DG_CONFIG=(TEST,TESTDR)

SQL> show parameter LOG_ARCHIVE_DEST_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_1                   string      LOCATION=/oracle/archive

SQL> show parameter LOG_ARCHIVE_DEST_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      SERVICE=TEST NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TEST

SQL> show parameter log_archive_dest_state_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_2             string      ENABLE

SQL> show parameter log_archive_format

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_format                   string      %t_%s_%r.arc

SQL> show parameter log_archive_max_processes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_max_processes            integer     30

SQL> show parameter remote_login_passwordfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile            string      EXCLUSIVE

SQL> show parameter fal_server

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_server                           string      TEST

SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO

SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      TESTDR

The redo logs should look like this (note the standby redo log groups):

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/oracle/system/redo01.log
/oracle/system/redo02.log
/oracle/system/redo03.log
/oracle/system/standby_redo01.log
/oracle/system/standby_redo02.log
/oracle/system/standby_redo03.log
/oracle/system/standby_redo04.log

(2) listener and TNS

[oracle@linux4 admin]$ cat listener.ora

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = linux4)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = TESTDR)
      (ORACLE_HOME = /oracle/app/product/11.2.0)
      (SID_NAME = TEST)
    )
  )

ADR_BASE_LISTENER = /oracle

[oracle@linux4 admin]$ cat tnsnames.ora

TEST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = linux3)(PORT = 1521))
    )
    (CONNECT_DATA =
      # (SID = TEST)
      (SERVICE_NAME = TEST) (UR = A)
    )
  )

TESTDR =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = linux4)(PORT = 1521))
    )
    (CONNECT_DATA =
      # (SID = TEST)
      (SERVICE_NAME = TESTDR) (UR = A)
    )
  )
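
Note that the standby database files must already be in place before the next step, restored from a backup of the primary. One common way to instantiate them on 11.2 (assuming the password file has been copied to linux4 and the standby instance is started NOMOUNT) is RMAN active duplication, roughly:

$ rman target sys/<password>@TEST auxiliary sys/<password>@TESTDR
RMAN> duplicate target database for standby from active database nofilenamecheck;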

Finally, when both nodes are ready, execute the following on the standby:

SQL> startup nomount;

ORACLE instance started.

Total System Global Area 3657797632 bytes
Fixed Size                  2258600 bytes
Variable Size             805308760 bytes
Database Buffers         2835349504 bytes
Redo Buffers               14880768 bytes

SQL> alter database mount standby database;

Database altered.

SQL> alter database recover managed standby database disconnect from session;

Database altered.

Then you have a normal-style (mounted) Data Guard standby:

SQL> select name, open_mode, database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
TEST      MOUNTED              PHYSICAL STANDBY

If we want Active Data Guard instead, open the standby read-only before starting the apply:

SQL> startup nomount;

ORACLE instance started.

Total System Global Area 3657797632 bytes
Fixed Size                  2258600 bytes
Variable Size             805308760 bytes
Database Buffers         2835349504 bytes
Redo Buffers               14880768 bytes

SQL> alter database mount standby database;

Database altered.

SQL> alter database open read only;

Database altered.

SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL> select name, open_mode, database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
TEST      READ ONLY WITH APPLY PHYSICAL STANDBY
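
To confirm that redo is being shipped and applied, a query like this on the standby helps:

SQL> select sequence#, applied from v$archived_log order by sequence#;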

That’s it. Enjoy your new Active Data Guard standby.

Related:

XenDesktop 7.6 Studio Failed to Connect to Database Error Id:XDDS:9B560459

Step 1. Check the event log of the SQL Server, because a large transaction log may cause this error message. If the transaction log is full, try to shrink it:

  • For a single-database environment, use the method in this KB to shrink the transaction log: https://support.citrix.com/article/CTX126916
  • For a mirrored or AlwaysOn database environment:
  1. Use BACKUP LOG [databasename] TO DISK = 'nul'. For reference, see http://www.cnblogs.com/TeyGao/p/3519954.html or http://realit1.blogspot.com/2016/02/shrinking-database-log-files-in.html. This method may take a lot of time depending on the size of the log, so please be patient.
  2. Break the mirror configuration to shrink the transaction log quickly, following the steps below. This method carries higher risk, since it makes many changes to the environment and Citrix has no professional database support to deal with anything urgent caused by such big changes.

Remove mirroring from the primary SQL Server

Modify the recovery model to Simple

Right-click the database and shrink the transaction log

Back up the database and transaction log, and copy them to the mirror database server

Restore the backup with the NORECOVERY option on the mirror database server

Select the database and select the Mirror task on the primary database server

Click Configure Security to start the Mirroring wizard

Select Yes on the Witness Server page
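
A rough T-SQL sketch of the recovery-model and shrink steps above (the database name CitrixSite and logical log file name CitrixSite_log are hypothetical; look up the real logical name with sp_helpfile):

ALTER DATABASE [CitrixSite] SET RECOVERY SIMPLE;
USE [CitrixSite];
DBCC SHRINKFILE (N'CitrixSite_log', 1024);  -- target size in MB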

Step 2. Try to restart the SQL Server and the DDC if possible.

Step 3. Use ODBC to test whether the database accepts remote connections.

Open ODBC on any Windows system, click Add, select SQL Server, and click Finish to create a new Data Source to SQL Server.

Give the Data Source a name and select the SQL Server you want to connect to, then click Next > Next > Finish > Test Data Source. If the DB accepts remote connections, a TESTS COMPLETED SUCCESSFULLY message is displayed.


Other troubleshooting steps include, but are not limited to, checking port 1433 on the SQL Server and the database’s “Allow remote connections to this server” configuration.

Related:

Viability of Exadata/Dataguard/ASM setup with IIDR/CDC

Dear,

I would like to know if CDC can be integrated with the architecture below.

– Oracle (Exadata) primary and standby synchronized with Data Guard
– Standby configured with standby redo logs
– Transport in Max Performance mode (async) with real-time apply
– Use of Automatic Storage Management (ASM) for the log files
– CDC installed on a file system accessible from both the primary and the standby (NFS)

The Exadata system should be usable as a CDC source as well as a CDC target (not necessarily bidirectional, though).

Another must is resilience against data loss when performing a planned switch-over, and of course without the need for refreshes. In a planned switch-over there will be complete synchronisation, so the async transport above can be disregarded. We know that in the case of a real disaster the async character could lead to data consistency issues on fail-over (although the dmfailoverrecovery command might help us in that case).

I have clients using Db2 for LUW, where such a switch-over can be as easy as stopping CDC on the primary side, doing a takeover, and starting it up again on the new primary side (previously the standby side).

Having no previous experience with this setup, the use of ASM in particular looks like a point of attention to me. I know from the documentation that there are some extra parameters which need to be applied during instance setup to configure the ASM connection details.

We are a bit worried that, even if it works, it would be difficult to configure the CDC instance to operate on the former standby database after a planned switch-over.

In the ideal case, the needed configuration changes should be scriptable (automated), such that the CDC layer added to the system is transparent.

So the major question is whether such a setup is viable, and secondly what the pitfalls and needed instance configuration changes would be when performing such a switch-over.

Any input is welcome.

Kind regards,
Erwin

Related:

What is a SID, how to change it, how to find out what it is.

The SID is a system identifier. It and the ORACLE_HOME are hashed together on Unix to create a unique key name for attaching an SGA. If your ORACLE_SID or ORACLE_HOME is not set correctly, you’ll get “oracle not available”, since we cannot attach to a shared memory segment identified by this magic key. On NT we don’t use shared memory, but the SID is still important: we can have more than one database under the same Oracle home, so we need a way to identify them.

Changing it is harder than it looks. I know you are on Unix, so here are the steps for changing it (or the database name) under Unix – they are different on NT.

How to find the SID: “select instance from v$thread” will do that.


PURPOSE

This entry describes how to find and change the “db_name” for a database, or the ORACLE_SID for an instance, without recreating the database.

SCOPE & APPLICATION

For DBAs who need to find or change the db_name or ORACLE_SID.


To find the current DB_NAME and ORACLE_SID:

===========================================

Query the views v$database and v$thread.

V$DATABASE gives DB_NAME

V$THREAD gives ORACLE_SID

Assume in what follows that ORACLE_SID = DB_SID and db_name = DBNAME.

To find the current value of ORACLE_SID:

SVRMGR> select instance from v$thread;

INSTANCE

----------------

DB_SID

To find the current value of DB_NAME:

SVRMGR> select name from v$database;

NAME

---------

DBNAME


Modifying a database to run under a new ORACLE_SID:

===================================================

1. Shut down the instance.

2. Back up all control, redo, and data files.

3. Go through .profile, .cshrc, .login, oratab, and tnsnames.ora (for SQL*Net version 2), and redefine the ORACLE_SID environment variable to the new value.

For example, search through disks and do a grep ORACLE_SID *
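
For example (a sketch; the oratab location varies by platform, e.g. /etc/oratab or /var/opt/oracle/oratab):

% grep ORACLE_SID $HOME/.profile $HOME/.cshrc $HOME/.login /etc/oratab
% grep <old_sid> $ORACLE_HOME/network/admin/tnsnames.ora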

4. Change locations to the “dbs” directory

% cd $ORACLE_HOME/dbs

and rename the following files:

o init<sid>.ora (or use pfile to point to the init file)

o control file(s). This is optional if you do not rename any of the controlfiles and the control_files parameter is used.

The “control_files” parameter is set in the “init<SID>.ora” file or in a file it references with the ifile parameter. Make sure that the control_files parameter does not point to old file names, if you have renamed them.

o “crdb<sid>.sql” & “crdb2<sid>.sql”. This is optional; these are only used at database creation.

5. Change locations to the “rdbms/admin” directory

% cd $ORACLE_HOME/rdbms/admin

and rename the file:

o startup<sid>.sql. This is optional. On some platforms, this file may be in the “$ORACLE_HOME/rdbms/install” directory. Make sure that the contents of this file do not reference old init<SID>.ora files that have been renamed. This file simplifies the “startup exclusive” process to start your database.

6. To rename the database files and redo log files, you would follow the instructions in <Note:9560.1>.

7. Change the ORACLE_SID environment variable to the new value.

8. Check in the “$ORACLE_HOME/dbs” directory to see if the password file has been enabled. If enabled, the file “orapw<OLD_SID>” will exist and a new password file for the new SID must be created (renaming the old file will not work). If “orapw<OLD_SID>” does not exist, skip to step 9. To create a new password file, issue the following command as oracle owner:

orapwd file=orapw<NEWSID> password=?? entries=<number of users to be granted permission to start the database instance>

9. Start up the database and verify that it works. Once you have done this, shutdown the database and take a final backup of all control, redo, and data files.

10. When the instance is started, the control file is updated with the current ORACLE_SID.


Changing the “db_name” for a Database:

======================================

1. Login to Server Manager

% svrmgrl

SVRMGR> connect internal

2. Type

SVRMGR> alter system switch logfile;

to force a checkpoint.

3. Type

SVRMGR> alter database backup controlfile to trace resetlogs;

This will create a trace file containing the “CREATE CONTROLFILE” command to recreate the controlfile in its current form.

4. Shutdown the database and exit SVRMGR

SVRMGR> shutdown

SVRMGR> exit

The database must be shut down with SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE. It must not be shut down abnormally using SHUTDOWN ABORT.

5. Change locations to the directory where your trace files are located. They are usually in the “$ORACLE_HOME/rdbms/log” directory. If “user_dump_dest” is set in the “init<SID>.ora” file, then go to the directory listed in the “user_dump_dest” variable. The trace file will have the form “ora_NNNN.trc”, with NNNN being a number.

6. Get the “CREATE CONTROLFILE” command from the trace file and put it in a new file called something like “ccf.sql”.

7. Edit the “ccf.sql” file

FROM: CREATE CONTROLFILE REUSE DATABASE “olddbname” NORESETLOGS …

TO: CREATE CONTROLFILE set DATABASE “newdbname” RESETLOGS …

FROM:

# Recovery is required if any of the datafiles are restored backups,

# or if the last shutdown was not normal or immediate.

RECOVER DATABASE USING BACKUP CONTROLFILE

TO:

# Recovery is required if any of the datafiles are restored backups,

# or if the last shutdown was not normal or immediate.

# RECOVER DATABASE USING BACKUP CONTROLFILE

8. Save and exit the “ccf.sql” file

9. Rename the old control files for backup purposes and so that they do not exist when creating the new ones.

10. Edit the “init<SID>.ora” file so that db_name=”newdb_name”.

11. Login to Server Manager

% svrmgrl

SVRMGR> connect internal

12. Run the “ccf.sql” script

SVRMGR> @ccf

This will issue a startup nomount, and then recreate the controlfile.

If, at this point, you receive the error that a file needs media recovery, the database was not shut down normally as specified in step 4. You can try recovering the database using the redo in the current logfile by issuing:

SVRMGR> recover database using backup controlfile;

This will prompt for an archived redo logfile. It may be possible to open the database after applying the current logfile, BUT this is not guaranteed. If, after applying the current logfile, the database will not open, then it is highly likely that the operation must be restarted after shutting down the database normally.

To apply the necessary redo, you need to check the online logfiles and apply the one with the same sequence number as reported in the message. This usually is the logfile with status=CURRENT.

To find a list of the online logfiles:

SVRMGR> select group#, sequence#, status from v$log;

GROUP#     SEQUENCE#  STATUS
---------- ---------- ----------------
1          123        CURRENT    <== this redo needs to be applied
2          124        INACTIVE
3          125        INACTIVE
4          126        INACTIVE
5          127        INACTIVE
6          128        INACTIVE
7          129        INACTIVE

7 rows selected.

SVRMGR> select member from v$logfile where group# = 1;

MEMBER
------------------------------------
/u02/oradata/V815/redoV81501.log

The last command in ccf.sql should be:

SVRMGR> alter database open resetlogs;

13. You may also need to change the global database name:

alter database rename global_name to <newdb_name>.<domain>;

See <Note:1018634.102> for further detail.

14. Make sure the database is working.

15. Shutdown and backup the database.
