
ORA-01578: ORACLE data block corrupted

Hi Techies.

Just a brainstorm about the LOGGING / NOLOGGING concept in Oracle.

This will help you understand the concept and why you should always run a Data Guard primary database in FORCE LOGGING mode; otherwise, the Disaster Recovery (DR) Oracle standby database may end up with data block corruption, which is a serious problem in high-availability, secured database environments.

First of all, my test database is in ARCHIVELOG mode, but FORCE LOGGING is not enabled at the database level.

Create a tablespace, a test user, and a table with default options, and insert some records.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oracle/SID/oraarch
Oldest online log sequence     205
Next log sequence to archive   208
Current log sequence           208
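
As a cross-check (this query is illustrative and was not part of the original session), the database-level settings can also be confirmed from v$database; on this test database LOG_MODE is ARCHIVELOG and FORCE_LOGGING is NO.

SQL> REM illustrative check: confirm database-level logging settings
SQL> select log_mode, force_logging from v$database;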

Created test tablespace and database user for testing.

SQL> create tablespace TEST datafile '/<file path>/test01.dbf' size 40M;
Tablespace created.

SQL> create user test identified by test default tablespace test;
User created.

SQL> grant connect, resource,dba to test;
Grant succeeded.

$> sqlplus test

SQL*Plus: Release 11.2.0.3.0 Production on Thu Apr 6 20:13:13 2017

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set timing on;

SQL> create table ABC tablespace TEST as select * from dba_objects;

Table created.

Elapsed: 00:00:02.05

SQL> select count(*) from abc;

  COUNT(*)
----------
     75105

Elapsed: 00:00:00.83

The table ABC is created with 75105 records.

In this test case, the datafile number is 12. Take an RMAN backup of this datafile, which contains all the test data.

$> rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Apr 6 20:22:20 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SID (DBID=4071548329)

RMAN> backup datafile 12;

Starting backup at 06-APR-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00012 name=/oracle/SID/<path>/test01.dbf
channel ORA_DISK_1: starting piece 1 at 06-APR-17
channel ORA_DISK_1: finished piece 1 at 06-APR-17
piece handle=/oracle/SID/oraflash/SID/backupset/2017_04_06/o1_mf_nnndf_TAG20170406T202319_dgf5f8oo_.bkp tag=TAG20170406T202319 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 06-APR-17

Create a new table ABC1 with the NOLOGGING option.

SQL> create table ABC1 nologging tablespace TEST as select * from dba_objects;

Table created.

Elapsed: 00:00:00.68

SQL> select TABLE_NAME,LOGGING from user_tables;

TABLE_NAME                     LOG
------------------------------ ---
ABC                            YES
ABC1                           NO

SQL>    select count(*) from ABC;

  COUNT(*)
----------
     75105

Elapsed: 00:00:00.01
SQL> select count(*) from ABC1;

  COUNT(*)
----------
     75106
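
As a side note (this query is illustrative and was not run in the original session), a NOLOGGING direct-path load is also recorded against the datafile in v$datafile, which is a handy way to spot datafiles that need a fresh backup after such operations:

SQL> REM illustrative check: datafiles with unrecoverable (NOLOGGING) changes
SQL> select file#, unrecoverable_change#, unrecoverable_time from v$datafile where unrecoverable_change# > 0;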

Now, let us check how exactly this can impact a Disaster Recovery database. I am simulating this on the same database.

Have a look at this series of steps.

SQL> alter database datafile 12 offline drop;

Database altered.

Now restore the previously backed-up datafile, which contains the ABC data but not the ABC1 data, and then perform recovery of this datafile.

RMAN> restore datafile 12;

Starting restore at 06-APR-17
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00012 to /oracle/SID/<path>/test01.dbf
channel ORA_DISK_1: reading from backup piece /oracle/SID/oraflash/SID/backupset/2017_04_06/o1_mf_nnndf_TAG20170406T202319_dgf5f8oo_.bkp
channel ORA_DISK_1: piece handle=/oracle/SID/oraflash/SID/backupset/2017_04_06/o1_mf_nnndf_TAG20170406T202319_dgf5f8oo_.bkp tag=TAG20170406T202319
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
Finished restore at 06-APR-17

RMAN> recover datafile 12;

Starting recover at 06-APR-17
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:02

Finished recover at 06-APR-17

SQL> alter database datafile 12 online;

Database altered.

Now that restore and recovery are complete, let us look at the data in the ABC and ABC1 tables.

SQL> select count(*) from ABC;

  COUNT(*)
----------
     75105

Elapsed: 00:00:00.67
SQL> select count(*) from ABC1;
select count(*) from ABC1
                     *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 12, block # 1155)
ORA-01110: data file 12: '/oracle/SID/sapdata2/sr3_2/test01.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option


Elapsed: 00:00:12.16
SQL> select TABLE_NAME,LOGGING from user_tables;

TABLE_NAME                     LOG
------------------------------ ---
ABC                            YES
ABC1                           NO

Since the database was not in FORCE LOGGING mode, Oracle did not generate redo change vectors for the direct-path load into the ABC1 table (which was created with NOLOGGING). While performing datafile recovery, Oracle could not find any redo to rebuild the ABC1 blocks, so they were marked as corrupt (ORA-26040).
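
If you want to see how widespread the damage is (an illustrative check, not part of the original test), you can validate the datafile with RMAN and then query v$database_block_corruption; blocks invalidated by NOLOGGING loads are reported there:

RMAN> validate datafile 12;

SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;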

So now you can see what went wrong.

Imagine the same configuration with a real DR standby: it can mess up the data of every table that was loaded in NOLOGGING mode.
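
The prevention is straightforward and worth sketching here (these statements are illustrative, not part of the original test): put the primary database, or at least the critical tablespaces, into FORCE LOGGING mode so that redo is generated even for NOLOGGING operations, and check which tables are currently set to NOLOGGING.

SQL> alter database force logging;

SQL> alter tablespace TEST force logging;

SQL> select owner, table_name from dba_tables where logging = 'NO';

On a standby that has already received invalidated blocks, enabling FORCE LOGGING is not enough; the affected datafiles typically have to be refreshed from a backup of the primary taken after the NOLOGGING operation.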


Hope this helps you. Please share your questions, if any.
