Monday, January 21, 2013

Benefits and consequences of the NOLOGGING option

I still find confusion out there about the LOGGING and NOLOGGING clauses when performing DML and DDL operations. The reality is that the NOLOGGING clause works only under particular conditions; regular inserts, updates, and deletes will still log their operations.
The benefits of the NOLOGGING option are:
  • It saves disk space when the archive option is enabled.
  • It greatly reduces I/O on the redo logs.
  • It reduces the time it takes to complete the operation.
Please note that NOLOGGING operations will only reduce, not eliminate, the logging.
Let's see an example:
-- First: we create an empty table with the nologging clause
SQL> create table logging_example nologging as select * from dba_objects where 1=2;

Table created.

-- Now let's enable the statistics and perform a couple of tests:

SQL> set autotrace on statistics
SQL> set timing on

-- insert the records the traditional way
SQL> alter system flush buffer_cache; -- clean the cache to compare the speeds in equal conditions

System altered.

Elapsed: 00:00:01.49

SQL> insert into logging_example select * from dba_objects;

50864 rows created.

Elapsed: 00:00:01.59

Statistics
----------------------------------------------------------
          0  recursive calls
       5250  db block gets
       6766  consistent gets
        982  physical reads
    5636712  redo size  -- without the APPEND hint
        670  bytes sent via SQL*Net to client
        586  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      50864  rows processed

-- insert the records with the APPEND hint (nologging)
SQL> alter system flush buffer_cache; -- clean the cache to compare the speeds in equal conditions

System altered.

Elapsed: 00:00:01.06

SQL> insert /*+ append */ into logging_example select * from dba_objects;

50864 rows created.

Elapsed: 00:00:00.59

Statistics
----------------------------------------------------------
          0  recursive calls
        743  db block gets
       5374  consistent gets
        944  physical reads
       2200  redo size  -- with the APPEND hint
        654  bytes sent via SQL*Net to client
        604  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      50864  rows processed

We can see that there is a big difference in the redo size generated by each insert. There are many posts and articles on the internet that show the speed benefits of the NOLOGGING option; here I mainly want to clarify that a regular insert (no APPEND hint) will still generate redo even if the table was created with the NOLOGGING option.
What happens to the data after a restore when a nologging operation was performed on it?

I will present some scenarios to show the consequences of performing a restore after a nologging transaction; this way we will know what to expect and can better prepare ourselves in case of a disaster.
I took a full database backup; now I will create several tables with different options to see what happens after a restore. You might see some surprises here!
Scenarios:
  • Table “create as select” with the nologging option (table_ctas_nologging).
  • Regular table “create as select” (table_ctas_logging).
  • A nologging table created empty, followed by a regular (logging) insert (table_ctas_nologging_insert).
  • Table created with nologging, then two inserts, one with and one without logging (table_insert_mixed).
  • Regular logging table, with a nologging index (table_ctas_index_nologging).
SQL> create table table_ctas_nologging nologging as select * from dba_objects;

Table created.

SQL> create table table_ctas_logging as select * from dba_objects;

Table created.

SQL> create table table_ctas_nologging_insert nologging as select * from dba_objects where 1=2;

Table created.

SQL> insert into table_ctas_nologging_insert select * from dba_objects;

50864 rows created.

SQL> commit;

Commit complete.

SQL> create table table_insert_mixed nologging as select * from dba_objects where 1=2;

Table created.

SQL> insert into table_insert_mixed select * from dba_objects;

50866 rows created.

SQL> insert into table_insert_mixed select /*+ append */ * from dba_objects;

50866 rows created.

SQL> commit;

Commit complete.

SQL> select count(*) from table_insert_mixed;

  COUNT(*)
----------
    101732

SQL> create table table_ctas_index_nologging as select * from dba_objects;

Table created.

SQL> create index IDXNOLOG on table_ctas_index_nologging (object_id) nologging;

Index created.

Now I will shut down the database and restore the tablespace from the backup.
Here is an extract from the RMAN session:
rman target /

Recovery Manager: Release 10.2.0.4.0 - Production on Wed Aug 25 17:32:20 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: ORCL (DBID=1247573001)

RMAN> shutdown immediate

using target database control file instead of recovery catalog
database closed
database dismounted
Oracle instance shut down

RMAN> startup mount;

Oracle instance started
database mounted

Total System Global Area     285212672 bytes

Fixed Size                     1267068 bytes
Variable Size                155191940 bytes
Database Buffers             125829120 bytes
Redo Buffers                   2924544 bytes

RMAN> restore tablespace users;

Starting restore at 25-AUG-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=152 devtype=DISK

channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00004 to +DATA/orcl/datafile/users.259.719792191
channel ORA_DISK_1: reading from backup piece +DATA/orcl/backupset/2010_08_25/nnndf0_tag20100825t171657_0.272.727982219
channel ORA_DISK_1: restored backup piece 1
piece handle=+DATA/orcl/backupset/2010_08_25/nnndf0_tag20100825t171657_0.272.727982219 tag=TAG20100825T171657
channel ORA_DISK_1: restore complete, elapsed time: 00:00:05
Finished restore at 25-AUG-10

RMAN> recover tablespace users;

Starting recover at 25-AUG-10
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:05

Finished recover at 25-AUG-10

RMAN> alter database open;

database opened

Now let's see the status of the tables:
SQL> select count(*) from table_ctas_nologging;
select count(*) from table_ctas_nologging
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 404)
ORA-01110: data file 4: '+DATA/orcl/datafile/users.259.719792191'
ORA-26040: Data block was loaded using the NOLOGGING option

That doesn't look good; let's see the next table.
SQL> select count(*) from table_ctas_logging;

  COUNT(*)
----------
     50863

Good, no problem here. The next scenario is more interesting: the table was created with the NOLOGGING option, but the insert was done without the APPEND hint.
SQL> select count(*) from table_ctas_nologging_insert;

  COUNT(*)
----------
     50864

Good, no problem here either. Now let's see our table with half the data inserted with logging and half with nologging.
SQL> select count(*) from table_insert_mixed;
select count(*) from table_insert_mixed
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 4363)
ORA-01110: data file 4: '+DATA/orcl/datafile/users.259.719792191'
ORA-26040: Data block was loaded using the NOLOGGING option

Wow, the whole table is unreadable!
Now let's see the table with the NOLOGGING index.
SQL> select count(*) from table_ctas_index_nologging;

  COUNT(*)
----------
     50865

OK, that's nice, the table is accessible. But what happens if we try to use the index?
SQL> select object_id from table_ctas_index_nologging where object_id=1;
select object_id from table_ctas_index_nologging where object_id=1
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 2821)
ORA-01110: data file 4: '+DATA/orcl/datafile/users.259.719792191'
ORA-26040: Data block was loaded using the NOLOGGING option

I tried to rebuild the index but I kept getting the same error message; in the end I was forced to drop and recreate it.
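For reference, the fix looked roughly like this (a sketch using the index from the example above; ALTER INDEX ... REBUILD kept failing with ORA-26040, so the index had to be rebuilt from the table data):

SQL> drop index IDXNOLOG;
SQL> create index IDXNOLOG on table_ctas_index_nologging (object_id);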
Conclusions:
  • Use the NOLOGGING option only on temporary/working/staging tables.
  • Always perform a backup after a NOLOGGING operation (a query to detect when one is needed follows this list).
  • Unless explicitly indicated otherwise, DDL such as CTAS and DML such as inserts will log all operations.
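A practical way to follow the second recommendation is to check which datafiles have received unlogged changes since their last backup; V$DATAFILE records the SCN and time of the last unrecoverable operation. A minimal sketch:

SQL> select file#, unrecoverable_change#, unrecoverable_time
  2  from v$datafile
  3  where unrecoverable_change# > 0;

Any datafile returned by this query contains changes that the archived redo cannot reproduce, so it should be backed up again.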
FROM ORACLE DOCUMENTATION:
NOLOGGING is supported in only a subset of the locations that support LOGGING. Only the following operations support the NOLOGGING mode:

DML:
  • Direct-path INSERT (serial or parallel) resulting either from an INSERT or a MERGE statement. NOLOGGING is not applicable to any UPDATE operations resulting from the MERGE statement.
  • Direct Loader (SQL*Loader)

DDL:
  • CREATE TABLE ... AS SELECT
  • CREATE TABLE ... LOB_storage_clause ... LOB_parameters ... NOCACHE | CACHE READS
  • ALTER TABLE ... LOB_storage_clause ... LOB_parameters ... NOCACHE | CACHE READS (to specify logging of newly created LOB columns)
  • ALTER TABLE ... modify_LOB_storage_clause ... modify_LOB_parameters ... NOCACHE | CACHE READS (to change logging of existing LOB columns)
  • ALTER TABLE ... MOVE
  • ALTER TABLE ... (all partition operations that involve data movement)
    • ALTER TABLE ... ADD PARTITION (hash partition only)
    • ALTER TABLE ... MERGE PARTITIONS
    • ALTER TABLE ... SPLIT PARTITION
    • ALTER TABLE ... MOVE PARTITION
    • ALTER TABLE ... MODIFY PARTITION ... ADD SUBPARTITION
    • ALTER TABLE ... MODIFY PARTITION ... COALESCE SUBPARTITION
  • CREATE INDEX
  • ALTER INDEX ... REBUILD
  • ALTER INDEX ... REBUILD [SUB]PARTITION
  • ALTER INDEX ... SPLIT PARTITION
For objects other than LOBs, if you omit this clause, then the logging attribute of the object defaults to the logging attribute of the tablespace in which it resides.
For LOBs, if you omit this clause, then:
  • If you specify CACHE, then LOGGING is used (because you cannot have CACHE NOLOGGING).
  • If you specify NOCACHE or CACHE READS, then the logging attribute defaults to the logging attribute of the tablespace in which it resides.
NOLOGGING does not apply to LOBs that are stored internally (in the table with row data). If you specify NOLOGGING for LOBs with values less than 4000 bytes and you have not disabled STORAGE IN ROW, then Oracle ignores the NOLOGGING specification and treats the LOB data the same as other table data.
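As an illustration of the LOB rules above, here is a hedged sketch (the table and column names are invented): a CLOB stored out of row with NOCACHE NOLOGGING, the combination where the LOB data itself can skip redo generation. Stored in row, the NOLOGGING specification would be ignored as described above.

SQL> create table doc_store (
  2    id   number,
  3    body clob
  4  )
  5  lob (body) store as (disable storage in row nocache nologging);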

 
Source : http://oracleexamples.wordpress.com/2010/08/28/benefits-and-consequences-of-the-nologging-option/

Tuesday, October 2, 2012

Logical vs Physical Standby database


A Quick Primer
 
Standby databases, in a nutshell, act as a failover for our mission-critical production databases. When the production database crashes, applications can quickly switch over to the standby database.
Oracle provides two types of standby databases:
1. Physical Standby Database
A standby database is called “physical” if its physical structure exactly matches the structure of the primary database. Archived redo logs transferred from the primary database are applied directly to the standby database.
2. Logical Standby Database
A standby database is called “logical” when the physical structures of the two databases do not match; SQL statements are built from the archived redo logs and then applied to the standby database.
Logical standby is not supported by SAP, because SAP uses LOB data types (such as photos) that cannot be reconstructed with SQL statements alone, without the underlying files.

Administer Standby Databases

I would say there are three ways we can administer a standby database.
1. SQL/Shell Scripts
This is the basic and simple method to create a standby database and to monitor the log transport and log apply services.
2. DGMGRL
DGMGRL is the Data Guard broker command-line interface, a tool provided by Oracle to administer standby databases.
How does it work?
In any typical standby environment there is only one primary database and one or more standby databases. With DGMGRL, we create one object for the primary database called a “configuration” and, for each standby database, one object called a “site”. These “configuration” and “site” objects are interrelated.
Things to remember when using DGMGRL (a short session sketch follows this list):
  • You cannot create standby databases through DGMGRL; you can only use this tool to monitor or switch over databases.
  • You need to have an spfile created for all primary/standby databases.
  • From DGMGRL, you always need to connect to the primary database to perform any activities, otherwise it is not going to work.
  • The parameter dg_broker_start must be set to TRUE.
3. OEM
We can use Oracle Enterprise Manager (OEM) to administer both physical and logical standby databases.
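To give a flavor of the DGMGRL option above, a monitoring/switchover session looks roughly like this (a sketch; exact command names vary slightly between releases, and the database names are the ones used in the example later in this post):

$ dgmgrl
DGMGRL> connect sys/password
DGMGRL> show configuration;
DGMGRL> switchover to 'TEST_S1';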
 
How to Create Physical Standby Database

Before you go through the steps listed below: in this example I call the primary database “TEST” and the standby database “TEST_S1”. Keep in mind that when you create the standby database, the environment variable ORACLE_SID will be the same as the primary database name (TEST in our case).

1) Ensure the Primary database is in ARCHIVELOG mode:
 SQL> archive log list
 Database log mode              No Archive Mode
 Automatic archival             Disabled
 Archive destination         /export/home/oracle/temp/oracle/arch
 Oldest online log sequence     7
 Current log sequence           9
 SQL> alter database close;    

 Database altered.

 SQL> alter database archivelog;    

 Database altered.

 SQL> shutdown immediate                     
 ORA-01109: database not open


 Database dismounted.
 ORACLE instance shut down.
Modify the primary database's init.ora so that log_archive_start=true and restart the instance. Verify that the database is in archivelog mode and that automatic archiving is enabled.
 SQL> archive log list
 Database log mode              Archive Mode
 Automatic archival             Enabled
 Archive destination            /export/home/oracle/temp/oracle/arch
 Oldest online log sequence     7
 Next log sequence to archive   9
 Current log sequence           9

2) Create a backup of the Primary database
You can use an existing backup of the primary database as long as you have the archived logs generated since that backup. You may also take a hot backup, as long as you have all archived logs through the end of the backup of the last tablespace. To create a cold backup, do the following:
  SQL> select name from v$datafile;

  NAME
  ----------------------------------------------------------------
          /export/home/oracle/temp/oracle/data/sys.dbf

  SQL> shutdown immediate
  Database closed.
  Database dismounted.
  ORACLE instance shut down.
  SQL> exit
Create a backup of all data files and online redo logs using an OS command or utility. A backup of the online redo logs is necessary to facilitate switchover.
Once complete, start up the instance:

  SQL> startup
  ORACLE instance started.

  Total System Global Area   80512156 bytes
  Fixed Size                   279708 bytes
  Variable Size              71303168 bytes
  Database Buffers            8388608 bytes
  Redo Buffers                 540672 bytes
  Database mounted.
  Database opened.
3) Connect to the primary database and create the standby control file:
 SQL> alter database create standby controlfile as
             '/export/home/oracle/temp/oracle/data/backup/standby.ctl';

 Database altered.

4) Copy files to the Standby host
Copy the backup data files, standby control file, all available archived redo logs, and online redo logs from the primary site to the standby site. Copying the online redo logs is necessary to facilitate switchover. If the standby is on a separate site with the same directory structure as the primary database, then you can use the same path names for the standby files as for the primary files; in this way, you do not have to rename the primary data files in the standby control file. If the standby is on the same site as the primary database, or on a separate site with a different directory structure, then you must rename the primary data files in the standby control file after copying them to the standby site. This can be done using the db_file_name_convert and log_file_name_convert parameters, or manually with alter database rename file statements.
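For example, when the directory structures differ, the conversion parameters in the standby init.ora might look like this (the paths here are hypothetical):

 db_file_name_convert=('/export/home/oracle/primary/data','/export/home/oracle/standby/data')
 log_file_name_convert=('/export/home/oracle/primary/data','/export/home/oracle/standby/data')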

5) Set the initialization parameters for the primary database
It is suggested that you maintain two init.ora files on both the primary and the standby databases. This allows you to facilitate role reversal during switchover operations more easily.
For the primary database ========
1. initTEST.ora
2. switch_to_stdby_initTEST.ora
For the standby database =======
1. initTEST_s1.ora
2. switch_to_primary_initTEST.ora
On primary initTEST.ora file add the following lines:
 log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
 log_archive_dest_2='SERVICE=TEST_s1 reopen=60'
 log_archive_dest_state_1=enable
 log_archive_dest_state_2=enable
 log_archive_format=%t_%s.dbf
 log_archive_start=true
 remote_archive_enable=true
Create the standby initialization parameter file and set the parameters for the standby database. Depending on your configuration, you may need to set filename conversion parameters.
Standby init.ora on Primary host:
 log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
 log_archive_dest_state_1=enable
 log_archive_format=%t_%s.dbf
 log_archive_start=true
 standby_archive_dest=/export/home/oracle/temp/oracle/arch
 standby_file_management=auto
 fal_server=TEST_s1
 fal_client=TEST
 remote_archive_enable=true
NOTE: In the above example, db_file_name_convert and log_file_name_convert are not needed as the directory structure on the two hosts are the same. If the directory structure is not the same then setting of these parameters is recommended.

6) Configure networking components
On the primary host, create a net service name that the primary database can use to connect to the standby database. Also on the primary host, create a net service name that the standby, when running on the primary host after a switchover, can use to connect to the primary, when it is running on the standby host. Example from the primary host's tnsnames.ora:
TEST =
    (DESCRIPTION =
        (ADDRESS= (PROTOCOL= TCP) (Host= 172.20.69.83) (Port= 1523))
        (CONNECT_DATA = (SID= TEST))
    )

 TEST_s1 =
    (DESCRIPTION =
        (ADDRESS= (PROTOCOL= TCP) (Host= 172.20.89.7) (Port= 1530))
        (CONNECT_DATA = (SID= TEST))
    )
The above two net service names must exist in the standby host's tnsnames.ora as well. You must also configure a listener on the standby database. If you plan to manage this standby database using the Data Guard broker, you must configure the listener to use the TCP/IP protocol and statically register the standby database service using its SID.
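A static registration entry in the standby host's listener.ora might look roughly like this (the listener name and ORACLE_HOME path are assumptions):

 SID_LIST_LISTENER =
     (SID_LIST =
         (SID_DESC =
             (SID_NAME = TEST)
             (ORACLE_HOME = /export/home/oracle/product/9.0.1)
         )
     )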

7) Start up and mount the stand by database
 oracle@hasunclu2:/export/home/oracle/temp/oracle> sqlplus "/ as  sysdba"   

 SQL*Plus: Release 9.0.1.0.0 - Production on Thu Mar 14 18:00:57 2002

 (c) Copyright 2001 Oracle Corporation.  All rights reserved.
 Connected to an idle instance.

 SQL> startup nomount pfile=?/dbs/initTEST_s1.ora
 ORACLE instance started.

 Total System Global Area   80512156 bytes
 Fixed Size                   279708 bytes
 Variable Size              71303168 bytes
 Database Buffers            8388608 bytes
 Redo Buffers                 540672 bytes
 SQL> alter database mount standby database;

 Database altered.

 SQL>
8) Create standby redo log files, if necessary

Standby redo logs are necessary for the higher protection modes such as Guaranteed, Instant, and Rapid. In these protection modes, LGWR on the primary host writes transactions directly to the standby redo logs. This enables no-data-loss solutions and reduces the amount of data loss in the event of failure. Standby redo logs are not necessary if you are using the delayed protection mode.
If you configure standby redo on the standby then you should also configure standby redo logs on the primary database. Even though the standby redo logs are not used when the database is running in the primary role, configuring the standby redo logs on the primary database is recommended in preparation for an eventual switchover operation.
Standby redo logs must be archived before the data can be applied to the standby database. The standby archival operation occurs automatically, even if the standby database is not in ARCHIVELOG mode. However, the archive process must be started on the standby database. Note that the use of the archiver process (ARCn) is a requirement for selection of a standby redo log.
You must have the same number of standby redo logs on the standby as online redo logs on the primary, and they must be exactly the same size.
The following syntax is used to create standby redo logs:
 SQL> alter database add standby logfile
   2  '/export/home/oracle/temp/oracle/data/srl_1.dbf' size 20m;

 Database altered.

 SQL> alter database add standby logfile
   2  '/export/home/oracle/temp/oracle/data/srl_2.dbf' size 20m;

 Database altered.

 SQL> alter database add standby logfile
   2  '/export/home/oracle/temp/oracle/data/srl_3.dbf' size 20m;

 Database altered.

 SQL>
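To double-check that the standby redo logs match the online redo logs in size, you can compare V$LOG on the primary with V$STANDBY_LOG on the standby (a quick sketch):

 SQL> select group#, bytes from v$log;                  -- run on the primary
 SQL> select group#, bytes, status from v$standby_log;  -- run on the standby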

9) Copy database files
Manually change the names of the primary data files and redo logs in the standby control file for all files not automatically renamed using db_file_name_convert and log_file_name_convert, as noted in step 5. Datafile names can be changed on the standby, while mounted, with the following syntax:
 SQL> alter database rename file
      '/export/home/oracle/temp/oracle/data/sys.dbf'
      to
      '/export/home/oracle/temp/oracle/data2/sys.dbf';

10) Restart the listeners
Restart the listener on the primary database, and start the listener on the standby database, so that changes made to the listener.ora files take effect.

11) Activate parameter changes
Manually enable initialization parameter changes on the primary database so that it can initiate archiving to the standby site.
At runtime, the LOG_ARCHIVE_DEST_n initialization parameter can be changed using ALTER SYSTEM and ALTER SESSION statements. Activate the changes made to these parameters by either bouncing the instance or activating via alter system.

For example:
 SQL> alter system set log_archive_dest_2='SERVICE=TEST_s1  reopen=60';

 System altered.

12) Verify Auto Archive
On the Primary database switch a log and verify that it has been shipped properly using the v$archive_dest view.
 SQL> alter system switch logfile;

 System altered.

 SQL> select status, error from v$archive_dest where dest_id=2;



 STATUS    ERROR
 --------- ------------------------------------------------------
 VALID

 SQL>

13) Put the standby database in managed recovery mode
 SQL> recover managed standby database disconnect;
 Media recovery complete.
 SQL> exit

14) Apply redo logs on the standby database
Remember that archived logs are transferred to the standby database, but we still need to apply them there. We can apply these archived logs either through SQL or through the Data Guard command-line tool (DGMGRL).
Let us see how we can apply them through SQL.
Go to standby database environment:
 SQL> startup nomount pfile=initTEST_s1.ora

 SQL> alter database mount standby database;

 SQL> recover managed standby database disconnect;
Warning! After executing the above statement you will get the SQL prompt back, but that does not mean the recovery is complete. The DISCONNECT option starts managed recovery in a background session.
SQL> recover managed standby database cancel;
The above statement stops managed recovery; at that point all received archived redo logs have been applied to the standby database.
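To see exactly which logs have been applied, query V$ARCHIVED_LOG on the standby (a quick sketch):

 SQL> select sequence#, applied from v$archived_log order by sequence#;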

15) Switchover databases
In a real environment, if the primary database crashes or becomes unavailable for some reason, you need to make the standby database the primary database.
• Switchover Standby to Primary
In order to do the switchover, the standby has to be in the mount state.
SQL> alter database commit to switchover to primary;
Sometimes this command will ask you to perform media recovery on the standby instance. Apply the logs as described above and rerun the command.
SQL> shutdown immediate;
SQL> startup pfile=switch_to_primary_initTEST.ora
• Switchover Primary to Standby
SQL> alter database commit to switchover to standby;
SQL> shutdown immediate;
SQL> startup pfile=switch_to_stdby_initTEST.ora
How to Create Logical Standby Database
Now let us talk about how to create and administer a logical standby database.
Before we create the logical standby, perform the following checks to make sure the primary database qualifies for one.
• Determine whether the primary database contains data types such as LONG, NCLOB, LONG RAW, or BFILE that are not supported by a logical standby database (a query sketch follows this list).
• Ensure that the tables in primary database can be uniquely identified.
• Ensure that the primary database is in ARCHIVELOG mode and that archiving is enabled.
• Ensure supplemental logging is enabled on the primary database. To see whether supplemental logging is enabled, start a SQL session and query the V$DATABASE fixed view. For example, enter:
SQL> SELECT SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI
FROM V$DATABASE;

SUP SUP
--- ---
YES YES
If supplemental logging is not enabled, execute the following statements:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
SQL> ALTER SYSTEM SWITCH LOGFILE;
• Ensure the LOG_PARALLELISM init.ora parameter is set to 1 (the default value). If you plan to perform switchover operations with the logical standby, then you must create an alternate tablespace in the primary database for the logical standby system tables. Use the DBMS_LOGMNR_D.SET_TABLESPACE procedure to move the tables into the new tablespace. For example:
SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logical_tblsp');
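Two of the checks above can be run against the data dictionary: DBA_LOGSTDBY_UNSUPPORTED lists columns whose data types SQL apply cannot handle, and DBA_LOGSTDBY_NOT_UNIQUE lists tables that cannot be uniquely identified. A minimal sketch:

SQL> SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported;
SQL> SELECT owner, table_name FROM dba_logstdby_not_unique;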
Steps to create Logical Standby Database
1. On the primary database, perform a cold backup.
2. Bring the primary database to the mount state to create a backup of the control file.
3. Open the primary database and build the LogMiner dictionary.
SQL> ALTER system enable restricted session;
SQL> ALTER DATABASE OPEN;
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
4. Archive the current online redo log and disable the restricted session.
5. Identify the archived redo log that contains the log miner dictionary

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES'
AND STANDBY_DEST='NO';
6. Copy all these files, along with the init parameter file, to the standby database.
7. On the standby system, modify the copied primary init file to support the logical standby feature. Some parameters affected are control_files, standby_archive_dest, parallel_max_servers, and instance_name.
8. Start and mount the standby database in exclusive mode.
9. Turn on the database guard.
ALTER DATABASE GUARD ALL;
10. Open the logical standby with RESETLOGS, followed by a shutdown (immediate or normal).
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> SHUTDOWN IMMEDIATE;
11. Open the logical standby database in exclusive mode.
12. Create temporary table space for the standby database.
13. On logical standby database, register the archived log identified in step 5 and run the following command.
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE
'/u01/oradata/stby/arch/arc1_28.arc';
14. Run the following ALTER DATABASE statement and include the INITIAL keyword to begin SQL apply operations for the first time on the logical standby.

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY INITIAL;
15. Configure listener on the standby database.
16. Once the listeners are running on both the primary and standby databases, run the following command.
ALTER SYSTEM REGISTER;
17. Create a tnsnames entry for the primary database on the standby host. Similarly, create a tnsnames entry for the standby database on the primary host.
18. Enable archiving in the logical standby database.
19. Archive the current online redo log file. Verify that the remote archiving succeeded by running the following select statement.
SQL> SELECT STATUS, ERROR FROM V$ARCHIVE_DEST;
20. Verify that the archived redo logs are being applied by querying the V$LOGSTDBY view. This view provides information about the processes that are reading redo log information and applying it to the logical standby database. You can also query the DBA_LOGSTDBY_PROGRESS view to find out the progress of SQL apply operations. The V$LOGSTDBY_STATS view shows the state of the coordinator process and information about the SQL transactions that have been applied to the logical standby database.
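A minimal version of those checks, run on the logical standby (a sketch; when APPLIED_SCN catches up to NEWEST_SCN, all available redo has been applied):

SQL> SELECT applied_scn, newest_scn FROM dba_logstdby_progress;
SQL> SELECT type, status FROM v$logstdby;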

Logical or Physical, What is your Choice?

You ask your DBA to implement a standby database for your production database. Guess what? You will get a physical standby database, because implementing a physical standby is the first choice for many DBAs; most of us do not realize the benefits we could get from a logical standby database.
Let us take a close look at the major bottleneck of a physical standby database.
We need to keep the physical standby database in recovery mode in order to apply the archived logs received from the primary database. We can open the physical standby database read-only and make it available to application users (only SELECT is allowed during this period), but once the database is opened read-only we cannot apply the redo logs received from the primary database.
We do not see such issues with a logical standby database: we can open the database in normal mode and make it available to users and, at the same time, apply the archived logs received from the primary database.

Real World Scenario 
 
One of my clients was facing a critical performance issue with their primary database, which had to support a pretty large user community for the OLTP system as well as a pretty large reporting group; the database could not cope with these two large user groups. So I configured a logical standby database for the primary and opened it up to the reporting user group. Now the load is effectively distributed between the primary and standby databases.

Conclusion

I will hazard a guess that you and many users now have a newfound respect for standby databases. In the future, if you need to create a standby database, ask yourself the following questions.
• Do I need just a failover database? Then choose a physical standby database.
• Do I need a failover database that I can also put to use, rather than have it sit idle on the server? Then choose a logical standby database.