Direct Path Reads and Chained Rows

Another DBA was investigating a difference in the performance of a query between two systems and asked me to take a look. The execution plan, a full table scan, was the same in both cases. We ran Snapper on the problem system and observed that most of the time was spent on the event ‘db file sequential read’.

-- Session Snapper v4.24 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)

---------------------------------------------------------------------------------------------------------------
  ActSes   %Thread | INST | SQL_ID          | SQL_CHILD | EVENT                               | WAIT_CLASS
---------------------------------------------------------------------------------------------------------------
     .98     (98%) |    1 | 2kcksks9ht14t   | 0         | db file sequential read             | User I/O
     .03      (3%) |    1 | 2kcksks9ht14t   | 0         | ON CPU                              | ON CPU

--  End of ASH snap 1, end=2017-11-24 16:12:18, seconds=5, samples_taken=40, AAS=1

‘db file sequential read’ is a single-block access, not usually associated with a full table scan. However, from previous experience I guessed that we had chained rows (a single row split across multiple blocks), so we reran Snapper and checked the value of the statistic ‘table fetch continued row’.

-- Session Snapper v4.24 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SID, USERNAME  , TYPE, STATISTIC                                                 ,         DELTA, HDELTA/SEC,    %TIME, GRAPH       , NUM_WAITS,  WAITS/SEC,   AVERAGES
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
30, GLOGOWNER , STAT, session logical reads                                     ,          8477,      1.56k,         ,             ,          ,           ,      8.48k total buffer visits
30, GLOGOWNER , STAT, user I/O wait time                                        ,           531,      98.03,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, non-idle wait time                                        ,           531,      98.03,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, non-idle wait count                                       ,          8449,      1.56k,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, physical read total IO requests                           ,          8449,      1.56k,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, physical read total bytes                                 ,      69214208,     12.78M,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, cell physical IO interconnect bytes                       ,      69214208,     12.78M,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, consistent gets                                           ,          8477,      1.56k,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, consistent gets direct                                    ,          8477,      1.56k,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, physical reads                                            ,          8449,      1.56k,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, physical reads direct                                     ,          8449,      1.56k,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, physical read IO requests                                 ,          8449,      1.56k,         ,             ,          ,           ,      8.19k bytes per request
30, GLOGOWNER , STAT, physical read bytes                                       ,      69214208,     12.78M,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, file io service time                                      ,         16898,      3.12k,         ,             ,          ,           ,        2us bad guess of IO service time per IO request
30, GLOGOWNER , STAT, file io wait time                                         ,       5285794,    975.84k,         ,             ,          ,           ,   625.61us bad guess of IO wait time per IO request
30, GLOGOWNER , STAT, no work - consistent read gets                            ,          8478,      1.57k,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, table scan rows gotten                                    ,          3798,     701.17,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, table scan blocks gotten                                  ,           882,     162.83,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, table fetch continued row                                 ,          7567,       1.4k,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , STAT, buffer is not pinned count                                ,          8477,      1.56k,         ,             ,          ,           ,          ~ per execution
30, GLOGOWNER , TIME, DB CPU                                                    ,        716890,   132.35ms,    13.2%, [@@        ],          ,           ,
30, GLOGOWNER , TIME, sql execute elapsed time                                  ,       6001826,      1.11s,   110.8%, [##########],          ,           ,
30, GLOGOWNER , TIME, DB time                                                   ,       6001826,      1.11s,   110.8%, [##########],          ,           ,      -10.8 % unaccounted time
30, GLOGOWNER , WAIT, db file sequential read                                   ,       5300427,   978.54ms,    97.9%, [WWWWWWWWWW],      8409,      1.55k,   630.33us average wait

--  End of Stats snap 1, end=2017-11-24 16:12:29, seconds=5.4

We were starting to understand the problem we were facing, especially when the DBA observed that the table in question had more than 255 columns (rows in such tables will always be chained). However, we still didn’t understand why, on the well-performing system, which also had chained rows, the query ran very fast, mostly on CPU.
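
As an aside, candidate tables for this kind of chaining can be identified up front from the data dictionary. A minimal sketch (my addition, not part of the original investigation); I use dba_tab_cols rather than dba_tab_columns so that hidden and internal columns, which also count towards the limit, are included:

-- List tables whose column count exceeds 255; rows in such tables are stored
-- as multiple row pieces even when they fit within a single block
SELECT c.owner, c.table_name, COUNT(*) AS column_count
FROM   dba_tab_cols c
       JOIN dba_tables t ON t.owner = c.owner AND t.table_name = c.table_name
GROUP  BY c.owner, c.table_name
HAVING COUNT(*) > 255
ORDER  BY column_count DESC;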

I reran the query with SQL trace enabled on the problem system, and have attached a small portion of the trace below:

WAIT #139940851571352: nam='direct path read' ela= 3354 file number=11 first dba=257026 block cnt=62 obj#=131201 tim=1988902071975
WAIT #139940851571352: nam='db file sequential read' ela= 316 file#=11 block#=13034 blocks=1 obj#=131201 tim=1988902073088
WAIT #139940851571352: nam='db file sequential read' ela= 576 file#=11 block#=13034 blocks=1 obj#=131201 tim=1988902073711
WAIT #139940851571352: nam='db file sequential read' ela= 472 file#=11 block#=118100 blocks=1 obj#=131201 tim=1988902074209
WAIT #139940851571352: nam='db file sequential read' ela= 382 file#=11 block#=13034 blocks=1 obj#=131201 tim=1988902074629
WAIT #139940851571352: nam='db file sequential read' ela= 558 file#=11 block#=118140 blocks=1 obj#=131201 tim=1988902075214
WAIT #139940851571352: nam='db file sequential read' ela= 479 file#=11 block#=13034 blocks=1 obj#=131201 tim=1988902075732
WAIT #139940851571352: nam='db file sequential read' ela= 532 file#=11 block#=141498 blocks=1 obj#=131201 tim=1988902076290
WAIT #139940851571352: nam='db file sequential read' ela= 605 file#=11 block#=13034 blocks=1 obj#=131201 tim=1988902076933
WAIT #139940851571352: nam='db file sequential read' ela= 512 file#=11 block#=12950 blocks=1 obj#=131201 tim=1988902077472
WAIT #139940851571352: nam='direct path read' ela= 4326 file number=11 first dba=259074 block cnt=62 obj#=131201 tim=1988902081885
WAIT #139940851571352: nam='db file sequential read' ela= 754 file#=11 block#=13035 blocks=1 obj#=131201 tim=1988902082905
WAIT #139940851571352: nam='db file sequential read' ela= 544 file#=11 block#=13035 blocks=1 obj#=131201 tim=1988902083483
WAIT #139940851571352: nam='db file sequential read' ela= 617 file#=11 block#=12955 blocks=1 obj#=131201 tim=1988902084132
WAIT #139940851571352: nam='db file sequential read' ela= 546 file#=11 block#=13035 blocks=1 obj#=131201 tim=1988902084723
WAIT #139940851571352: nam='db file sequential read' ela= 976 file#=11 block#=141559 blocks=1 obj#=131201 tim=1988902085730
WAIT #139940851571352: nam='direct path read' ela= 3240 file number=11 first dba=264450 block cnt=62 obj#=131201 tim=1988902089070
WAIT #139940851571352: nam='db file sequential read' ela= 380 file#=11 block#=13036 blocks=1 obj#=131201 tim=1988902090605
WAIT #139940851571352: nam='db file sequential read' ela= 812 file#=11 block#=13036 blocks=1 obj#=131201 tim=1988902091452
WAIT #139940851571352: nam='db file sequential read' ela= 452 file#=11 block#=118118 blocks=1 obj#=131201 tim=1988902091936
WAIT #139940851571352: nam='db file sequential read' ela= 608 file#=11 block#=13036 blocks=1 obj#=131201 tim=1988902092588
WAIT #139940851571352: nam='db file sequential read' ela= 566 file#=11 block#=13008 blocks=1 obj#=131201 tim=1988902093186

While trying to figure out why the same block in the trace, block# 13034 for example, was being reread from disk on subsequent waits rather than being accessed from the buffer cache, another of my colleagues (thanks Kei) pointed out that the event associated with the table scan was a ‘direct path read’ rather than the usual ‘db file scattered read’.

‘direct path reads’ bypass the buffer cache, reading blocks directly into the session’s PGA. It seems this behavior also extends to the subsequent ‘db file sequential read’ operations needed to access the chained row pieces, so the same blocks end up being read from disk repeatedly, potentially causing significant performance issues.
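
One experiment this suggests (my own sketch, not something we ran as part of the investigation) is to disable serial direct path reads for the session and see whether the chained-row lookups then benefit from the buffer cache. Note that "_serial_direct_read" is an undocumented underscore parameter, so treat this purely as a diagnostic and check with Oracle Support before relying on it:

-- Force conventional (buffered) reads for serial full scans in this session only
ALTER SESSION SET "_serial_direct_read" = never;

-- ... rerun the problem query and re-check the wait events ...

-- Restore the default behaviour afterwards
ALTER SESSION SET "_serial_direct_read" = auto;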

As is often the case, Jonathan Lewis has already blogged about this issue; however, it’s always more rewarding and fun (re)discovering these issues oneself!

Slovenia & Croatia Oracle User Group Conferences 2017

The fact that these two conferences run back-to-back seemed a perfect opportunity to experience both at the expense of a single long-haul flight. Their taking place at amazing locations on the Istrian coast, which I had never visited before, sealed the deal. I submitted abstracts and was delighted when both conferences accepted me to speak.

I landed in Zagreb after eighteen hours in transit, picked up my rental car, and crossed the Croatian-Slovenian border on my way to the Slovenian capital, Ljubljana. The city was beautiful, but I was only able to spend about half an hour looking around before exhaustion kicked in and I retired for an early night.

The Slovenian conference was held in the town of Portoroz, and I had heard about some cave systems, Postojna and Skocjan, on the way. I had planned to visit both, but on the way to Postojna I saw a sign for a cave at Planina, so I decided to check that out. The publicly accessible area only extends a couple of hundred meters inside (much more is accessible with a guide), but it was really cool just being alone in the quiet and pitch darkness.

Postojna Cave is a complete contrast; you take a 20-minute train journey with hundreds of other sightseers deep into an amazing cave system, followed by a 40-minute guided tour, before taking the train back out again. After these two experiences I’d had more than enough of caves, so I decided to skip Skocjan and head directly to the conference venue.


The speakers’ dinner was held at a fantastic restaurant, Na Burji, and the meal was accompanied by bread baked by the multi-talented Jože Senegačnik. I was thrilled to finally catch up in person with Jure Bratina, the first subscriber to my blog (!).

Both were fantastic hosts, epitomizing the warmth of all the Slovenian and Croatian people I met on the trip.

After the two day conference I drove with ‘Mr SmartScan’ Roger McNicol back across the border to Croatia and the town of Rovinj.

The Croatian conference was held at Hotel Istra on Red Island, a 20-minute ferry trip from Rovinj. There are actually two islands joined together by a causeway, and you can walk the circumference of both in about forty minutes. They must have been spectacular before the hotel was built!


Most days during the conference I managed to swim in the morning and evening, and one morning I completed the 75-minute swim around the smaller island with Kamil "Let’s do it quickly before we realize it’s a stupid idea" Stawiarski!

One of the highlights was a guided trip to the historic town of Pula, followed by drinking and go-karting, organized by the larger-than-life figure of Davor Ranković.


The above description makes everything seem like an amazing holiday, which indeed it was. In addition, however, I saw so many excellent and informative presentations on topics such as the Oracle database, Application Express and Oracle JET. Being able to interact and make connections in person with so many incredibly knowledgeable and talented people from both inside and outside of Oracle Corporation is very empowering, and something that is just not possible through the internet. Without exception, I found that everyone was so welcoming to me as a new speaker.

Thanks to everyone who made it possible, or made it special just by being part of it!
Identifying invalid SQL with event 10035

I recently learned from Jonathan Lewis’s blog that in version 12.2 details of invalid SQL statements are written to the alert log (after every 100 occurrences of a particular SQL statement).

It’s worth mentioning that similar functionality can be achieved in earlier versions by setting event 10035 (not to be confused with its more (in)famous sibling, event 10053).

I will demonstrate with version 12.1 using the same invalid SQL from Jonathan’s post (I have tested that this works at least as far back as 10.2):

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> ALTER SESSION SET EVENTS='10035 trace name context forever, level 1';

Session altered.

SQL> select count(*) frm dual;

Error starting at line : 1 in command -
select count(*) frm dual
Error at Command Line : 1 Column : 21
Error report -
SQL Error: ORA-00923: FROM keyword not found where expected
00923. 00000 - "FROM keyword not found where expected"
*Cause:
*Action:

The following information is written to the alert log:

PARSE ERROR: ospid=14737, error=923 for statement:
Sun Oct 08 01:14:43 2017
select count(*) frm dual
Additional information: hd=0x6bffaa18 phd=0x71c88e18 flg=0x28 cisid=8 sid=8 ciuid=8 uid=8
PARSE ERROR: ospid=14737, error=923 for statement:
Sun Oct 08 01:14:43 2017
select count(*) frm dual
Additional information: hd=0x6bffaa18 phd=0x71c88e18 flg=0x28 cisid=8 sid=8 ciuid=8 uid=8
Sun Oct 08 01:14:43 2017
----- PL/SQL Call Stack -----
object line object
handle number name
0x6f9e6a10 1120 package body SYS.DBMS_SQL
0x727c2ba8 15 anonymous block

Due to the extra overhead this would add to the parsing of every invalid SQL statement, I would suggest only setting this event temporarily, at session or system level, after the statistic ‘parse count (failures)’ has identified that failed parses are actually a problem.
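
For reference, a quick instance-level check of that statistic looks something like this (a minimal sketch; the same statistic can also be tracked per session via v$sesstat):

-- System-wide count of SQL parse failures since instance startup
SELECT name, value
FROM   v$sysstat
WHERE  name = 'parse count (failures)';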

Update

I just found out that Tanel covered this back in 2009 as part of a deep-dive article into diagnostic events, which is well worth a read.

In-Memory Column Store and tables with more than 255 columns

Last week, developers reported some unexpected behavior on a test system. I thought I would take advantage of the long weekend here in Hong Kong to build the simplest possible reproducible testcase. I find this is useful not only in allowing others to easily test on their own systems, but also in distilling exactly what is and is not relevant to the problem, an essential first step towards understanding what is going on.

The result confirms something I think I heard from Jonathan Lewis: when you start to mix features that the majority of customers are not using, in this case the In-Memory Column Store, tables with more than 255 columns, and NCHAR columns, you can start to see some ‘interesting’ results.

For my testcase I am running 12.1.0.2 with the 170418 DBBP applied.

Observe the first part of the setup: I simply use a PL/SQL block to create a table with 256 columns (note that the first is an NCHAR) and populate it with a couple of rows. Both rows have c1 set to ‘X’; on one I populate the last column, c256, with the value 0, while on the other it is left NULL. A query from the table where c1 = ‘X’ returns both rows, as expected.

SQL> DECLARE
  2      sqltext   VARCHAR2(32767);
  3  BEGIN
  4      sqltext := 'CREATE TABLE IM256 (C1 NCHAR(1), ';
  5      FOR loopy IN 2..256 LOOP
  6          sqltext := sqltext
  7           || 'c'
  8           || loopy
  9           || ' NUMBER,';
 10      END LOOP;
 11      -- Remove trailing comma and add closing bracket
 12      sqltext := SUBSTR(sqltext, 1, LENGTH(SQLTEXT)-1) || ')' ;
 13      EXECUTE IMMEDIATE sqltext;
 14  END;
 15  /

PL/SQL procedure successfully completed.

SQL>
SQL> INSERT INTO IM256(C1, C256) VALUES ('X' , 0);

1 row inserted.

SQL> INSERT INTO IM256(C1) VALUES ('X');

1 row inserted.

SQL> COMMIT;

Commit complete.

SQL>
SQL> SELECT * FROM IM256 WHERE C1 = 'X';

X
[output for the intervening NULL columns trimmed]
                                        0

X
[output for the intervening NULL columns trimmed]

2 rows selected.

SQL>

As soon as I move the table into the In-Memory Column Store, the query with the predicate starts to return only one row. The query without the predicate still returns both rows.

SQL> ALTER TABLE IM256 INMEMORY NO MEMCOMPRESS;

Table IM256 altered.

SQL> SELECT * FROM IM256 WHERE C1 = 'X';

X
[output for the intervening NULL columns trimmed]
                                        0

1 row selected.

SQL> SELECT * FROM IM256;

X
[output for the intervening NULL columns trimmed]
                                        0

X
[output for the intervening NULL columns trimmed]
                                        0

2 rows selected.

SQL>
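
As a side note, before blaming the in-memory code path it is worth confirming that the segment really has been populated into the column store. A minimal check (my addition, not part of the original testcase):

-- Confirm the IM256 segment is populated in the In-Memory column store;
-- a non-zero BYTES_NOT_POPULATED means part of any scan still hits the row store
SELECT segment_name, populate_status, bytes_not_populated
FROM   v$im_segments
WHERE  segment_name = 'IM256';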

Stefan Koehler pointed me towards event 12099, which is mentioned in a couple of bug reports in this area as a way to disable ‘turboscan’, and indeed setting this event causes the expected results to appear.

SQL>  alter session set events '12099 trace name context forever, level 1';

Session altered.

SQL> SELECT * FROM IM256 WHERE C1 = 'X';

X
[output for the intervening NULL columns trimmed]
                                        0

X
[output for the intervening NULL columns trimmed]

2 rows selected.

Oracle Support have identified this as bug 20508559, which is described in further detail in document 20508559.8 – Wrong Results / ORA-600 [kdmv_check_row] with more than 255 columns with nulls.

Edit: I have just tested the patch and after applying it the issue is resolved.

High Five POUG

Earlier in the year, while digging into AWR Warehouse internals, I noticed that POUG (Pint with / Polish Oracle User Group, depending on who you ask) had opened their call for papers, so on impulse I submitted an abstract and thought no more about it.

I was surprised to receive a response saying that I could attend as a reserve speaker, and that they hoped I wasn’t too disappointed. I don’t think they realized I considered this the perfect outcome: an opportunity to attend a fantastic conference without the pressure of having to present.

A couple of weeks before the conference I received some disturbing news: one of their speakers had dropped out, and I would be presenting after all. I would actually have to work for my conference ticket!

The conference itself was everything I had hoped for and more. It was held in a microbrewery, so there was free-flowing beer for the duration, which really contributed to the relaxed social atmosphere. All the organizers and staff were great; in particular, Kamil’s enthusiasm was complemented by Luiza’s attention to detail. They were so accommodating and had added so many personalized touches to how I was treated. Likely because of this, they had also attracted an impressive line-up of A-list speakers.

One thing I hadn’t expected was the support from the other speakers. As a first-time presenter I received reassuring words from many, but in particular those from Neil Chandler and Jim Czuprynski really helped calm my nerves.


In summary, if you have yet to present at a conference then consider submitting a paper; you never know what might happen. Also, if you ever get the opportunity to attend a future POUG conference, don’t hesitate to go.


“ORA-20110: set stamp set count conflict” after upgrading to 12.2 Recovery Catalog

This is the second time we have hit this error after upgrading the recovery catalog to 12.2.

The first is now documented by Oracle as Document 2291791.1, and affects 12.1 multi-tenant instances with DBBP 12.1.0.2.161018 or later applied.
If you have applied that patchset, you can see the problem by checking the contents of the file $ORACLE_HOME/rdbms/admin/recover.bsq:

  cursor bs(low_recid number, high_recid number) is
    select bs.recid, bs.stamp, bs.set_stamp, bs.set_count, bs.backup_type,
           bs.incremental_level,
           bs.pieces, start_time, completion_time, controlfile_included,
           bs.input_file_scan_only, keep_until,
           decode (bs.keep_options, 'LOGS'      , KEEP_LOGS
                               , 'NOLOGS'       , KEEP_NOLOGS
                               , 'BACKUP_LOGS'  , KEEP_CONSIST
                                                , 0) keep_options,
           bs.block_size, bs.multi_section, bs.guid,
           decode(bs.con_id, invalid_pdbid, 1, 0) dropped_pdb
    from v$backup_set bs, v$containers pdb
    where bs.recid between low_recid and high_recid
      and (bs.stamp >= kccdivts OR bs.recid = high_recid)
      and bs.stamp >= resyncstamp
      and bs.for_xtts != 'YES'
    order by bs.recid;

Note that the join between v$backup_set and v$containers has been missed (!). The next cursor in the source, bp, has the same omission.
You can work around the issue by adding the following condition to the two cursors, but you should probably check with Oracle Support first.

    
 and bs.con_id = pdb.con_id(+) 

The latest issue is on an 11g database (hence, obviously, not a multi-tenant one) at a Data Guard standby site (in this case we take backups from both the primary and standby instances).
Tracing the resync of the catalog from the standby site, the issue begins when inserting into the bs (backup sets) table in the recovery catalog.
The insert fails because of a violation of the unique key BS_U2 on columns db_key, set_stamp and set_count.
Checking the recovery catalog, there is already a record for this combination of set_stamp and set_count, but it comes from the primary instance.
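
For what it’s worth, the clash can be seen with a query along the following lines, run as the recovery catalog owner. This is only a sketch; I am assuming the catalog base table bs and its site_key column, which may differ between catalog versions:

-- Backup sets registered more than once with the same set_stamp/set_count
-- for a given database, and how many distinct sites registered them
SELECT db_key, set_stamp, set_count,
       COUNT(*)                 AS rows_found,
       COUNT(DISTINCT site_key) AS distinct_sites
FROM   bs
GROUP  BY db_key, set_stamp, set_count
HAVING COUNT(*) > 1;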

My suspicion is that the unique key also needs to include the site_key column; however, I don’t have enough of an understanding of what the set_stamp, set_count combination represents to be sure.

The only references to these columns I can find are quotes in the documentation, such as the following:

The SET_STAMP value from V$BACKUP_SET. SET_STAMP and SET_COUNT form a concatenated key that uniquely identifies this record in the target database control file.

What does seem apparent is that the 12.2 recovery catalog is stricter in enforcing this uniqueness, thus revealing some bugs elsewhere in the codebase.

Instance crash when not restarted through clusterware after patching

We had some issues, fortunately on a test instance, after applying a database patch. With the help of a colleague (thanks Kei) I was able to identify what had gone wrong and set up a testcase to demonstrate the issue. I think it is worth sharing, as it reveals some of the complexities of the relationships between the components of the Oracle stack.

My setup consists of two database instances on the same host, configured with Oracle Restart. To start the patching process I stop both instances. Note in particular that the group owner of the oracle binary is asmadmin.

[oracle@hkexdb01 ~]$ srvctl stop database -db PVJA -o abort
[oracle@hkexdb01 ~]$ srvctl stop database -db PVJB -o abort
[oracle@hkexdb01 ~]$ ls -alrt $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle asmadmin 327575894 Jun 13 14:52 /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/oracle

Next I apply the database patch (I have omitted some details from the patch session as they are not relevant). Observe that after the patch completes, the oracle binary has been relinked and its group owner is now oinstall.

[oracle@hkexdb01 ~]$ cd ~/25397136
[oracle@hkexdb01 25397136]$ $ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0.2/dbhome_2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/dbhome_2/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/dbhome_2/cfgtoollogs/opatch/opatch2017-06-13_14-58-15PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   24732088  25397136  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0.2/dbhome_2')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying sub-patch '24732088' to OH '/u01/app/oracle/product/12.1.0.2/dbhome_2'

--DETAILS OMITTED

Patching component oracle.rdbms.install.plugins, 12.1.0.2.0...
Composite patch 25397136 successfully applied.
Log file location: /u01/app/oracle/product/12.1.0.2/dbhome_2/cfgtoollogs/opatch/opatch2017-06-13_14-58-15PM_1.log

OPatch succeeded.
[oracle@hkexdb01 25397136]$ ls -alrt $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 327813870 Jun 13 14:59 /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/oracle

After patching is complete, I restart PVJA outside of clusterware (note that the group owner of the oracle binary is still oinstall).

[oracle@hkexdb01 25397136]$ export ORACLE_SID=PVJA
[oracle@hkexdb01 25397136]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jun 13 15:00:54 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@PVJA> startup
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size                  2926472 bytes
Variable Size            1564297336 bytes
Database Buffers          570425344 bytes
Redo Buffers                9834496 bytes
Database mounted.
Database opened.
SYS@PVJA> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
[oracle@hkexdb01 25397136]$ ls -alrt $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 327813870 Jun 13 14:59 /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/oracle

I then restart the PVJB instance through clusterware. Note that clusterware updates the group owner of the oracle binary.

[oracle@hkexdb01 25397136]$ srvctl start database -db PVJB 
[oracle@hkexdb01 25397136]$ ls -alrt $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle asmadmin 327813870 Jun 13 14:59 /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/oracle
[oracle@hkexdb01 25397136]$ 

This change in group ownership ‘breaks’ the PVJA instance. There are lots of errors in the alert log.

Errors in file /u01/app/oracle/diag/rdbms/pvja/PVJA/trace/PVJA_j000_64968.trc:
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 1001 (oinstall), current egid = 1006 (asmadmin)

Note that the clusterware log seems to indicate the point at which this action took place:

2017-06-13 14:55:47.233925 :CLSDYNAM:4045649664: [ora.pvjb.db]{1:31716:24755} [start] Utils:execCmd action = 1 flags = 6 ohome = /u01/app/12.1.0.2/grid cmdname = setasmgidwrap

This behaviour seems to match Bug 9784037 : SETASMGID CAUSING ORA-27303, which was closed as “Not a Bug”. The provided workaround is to “always remember to execute setasmgidwrap after doing relink all / patch apply in RDBMS Oracle Home”. However, I think simply making sure that all instances are restarted through clusterware is probably a better way to handle this.

Update: it seems Frits Hoogland has run into this issue too; I now feel a bit better about my ‘mistake’.

OPatch 12.2.0.1.9 change in behaviour patching RAC nodes

We’ve had issues in the past patching the Oracle homes on our Exadata nodes because they were defined as part of a RAC cluster.
We don’t use RAC, so we have avoided such problems by unlinking them, as described in the documentation.

However, I have maintained a separate pair of linked Oracle homes for testing, and I have a reproducible testcase of OPatch getting into a mess under certain conditions.
Running this testcase, I noticed a change in behaviour in this area with the latest version of OPatch, 12.2.0.1.9, compared to the previous version, 12.2.0.1.8.

Review the following testcase using OPatch 12.2.0.1.8. There are a few things to note.
First, by default patches are applied to both nodes (this can be overridden by specifying the -local flag).
Second, the confirmation prompt regarding the remote node takes place after the local node has already been patched.

[oracle@hkexdb01 22652097]$ $ORACLE_HOME/OPatch.backup/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0.2/dbhome_2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/dbhome_2/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/dbhome_2/cfgtoollogs/opatch/opatch2017-05-25_15-36-03PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   22652097  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

This node is part of an Oracle Real Application Cluster.
Remote nodes: 'hkexdb02' 
Local node: 'hkexdb01'
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0.2/dbhome_2')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '22652097' to OH '/u01/app/oracle/product/12.1.0.2/dbhome_2'

Patching component oracle.rdbms, 12.1.0.2.0...

Patching component oracle.rdbms.dbscripts, 12.1.0.2.0...
Skip copying to "/u01/app/oracle/product/12.1.0.2/dbhome_2/rdbms/admin/catfusrg.sql" because it is the same as 
the file in incoming patch "/home/oracle/patchtest/22652097/files/rdbms/admin/catfusrg.sql"

Patching in rolling mode.


The node 'hkexdb02' will be patched next.


Please shutdown Oracle instances running out of this ORACLE_HOME on 'hkexdb02'.
(Oracle Home = '/u01/app/oracle/product/12.1.0.2/dbhome_2')

Is the node ready for patching? [y|n]
y
User Responded with: Y
Updating nodes 'hkexdb02' 
   Apply-related files are:
     FP = "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-36-03PM/rac/copy_files.txt"
     DP = "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-36-03PM/rac/copy_dirs.txt"
     MP = "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-36-03PM/rac/make_cmds.txt"
     RC = "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-36-03PM/rac/remote_cmds.txt"

Instantiating the file "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-36-03PM/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-36-03PM/rac/copy_files.txt" with actual path.
Propagating files to remote nodes...
Instantiating the file "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-36-03PM/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-36-03PM/rac/copy_dirs.txt" with actual path.
Propagating directories to remote nodes...
Instantiating the file "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-36-03PM/rac/make_cmds.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-36-03PM/rac/make_cmds.txt" with actual path.
Running command on remote node 'hkexdb02': 
cd /u01/app/oracle/product/12.1.0.2/dbhome_2/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2 || echo REMOTE_MAKE_FAILED::>&2 


The node 'hkexdb02' has been patched.  You can restart Oracle instances on it.


Finished patching in rolling mode.

Patch 22652097 successfully applied.
Log file location: /u01/app/oracle/product/12.1.0.2/dbhome_2/cfgtoollogs/opatch/opatch2017-05-25_15-36-03PM_1.log

OPatch succeeded.
[oracle@hkexdb01 22652097]$ 

Compare that with an attempt to apply the same patch using OPatch 12.2.0.1.9 with the same command; by default, only the local node is patched.

[oracle@hkexdb01 22652097]$ $ORACLE_HOME/OPatch/opatch apply       
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0.2/dbhome_2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/dbhome_2/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/dbhome_2/cfgtoollogs/opatch/opatch2017-05-25_15-39-06PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   22652097  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0.2/dbhome_2')


Is the local system ready for patching? [y|n]
n
User Responded with: N
NApply exits on request
UtilSession: NApply exits on user's request.
UtilSession system modification phase did not start: NApply exits on request
Log file location: /u01/app/oracle/product/12.1.0.2/dbhome_2/cfgtoollogs/opatch/opatch2017-05-25_15-39-06PM_1.log

OPatch stopped on request.
[oracle@hkexdb01 22652097]$ 

To get 12.2.0.1.9 to patch remote nodes as well, the -all_nodes switch must be provided, as demonstrated below. Observe also that the confirmation regarding patching all nodes takes place before any of them are patched.

[oracle@hkexdb01 22652097]$ $ORACLE_HOME/OPatch/opatch apply -all_nodes
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0.2/dbhome_2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/dbhome_2/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/dbhome_2/cfgtoollogs/opatch/opatch2017-05-25_15-40-05PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   22652097  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

This node is part of an Oracle Real Application Cluster.
Remote nodes: 'hkexdb02' 
Local node: 'hkexdb01'
Please shut down Oracle instances running out of this ORACLE_HOME on all the nodes.
(Oracle Home = '/u01/app/oracle/product/12.1.0.2/dbhome_2')


Are all the nodes ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '22652097' to OH '/u01/app/oracle/product/12.1.0.2/dbhome_2'

Patching component oracle.rdbms, 12.1.0.2.0...

Patching component oracle.rdbms.dbscripts, 12.1.0.2.0...

Patching in all-node mode.

Updating nodes 'hkexdb02' 
   Apply-related files are:
     FP = "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-40-05PM/rac/copy_files.txt"
     DP = "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-40-05PM/rac/copy_dirs.txt"
     MP = "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-40-05PM/rac/make_cmds.txt"
     RC = "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-40-05PM/rac/remote_cmds.txt"

Instantiating the file "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-40-05PM/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-40-05PM/rac/copy_files.txt" with actual path.
Propagating files to remote nodes...
Instantiating the file "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-40-05PM/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-40-05PM/rac/copy_dirs.txt" with actual path.
Propagating directories to remote nodes...
Instantiating the file "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-40-05PM/rac/make_cmds.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/product/12.1.0.2/dbhome_2/.patch_storage/NApply/2017-05-25_15-40-05PM/rac/make_cmds.txt" with actual path.
Running command on remote node 'hkexdb02': 
cd /u01/app/oracle/product/12.1.0.2/dbhome_2/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2 || echo REMOTE_MAKE_FAILED::>&2 


All nodes have been patched.  You may start Oracle instances on the local system and nodes 'hkexdb02' 

Patch 22652097 successfully applied.
Log file location: /u01/app/oracle/product/12.1.0.2/dbhome_2/cfgtoollogs/opatch/opatch2017-05-25_15-40-05PM_1.log

OPatch succeeded.
[oracle@hkexdb01 22652097]$ 

I think the change to prompt for confirmation before patching any nodes is an improvement. Regarding the change of default from patching all nodes to patching only the local node, this doesn’t affect us (as we’ve unlinked our homes), but it might cause issues for others.

Update 2017-05-26: this change in behaviour is documented in the following MOS note:

OPatch: Behavior Changes starting in OPatch 12.2.0.1.9 and 11.2.0.3.16 releases (Doc ID 2232156.1)

Changed READ ANY TABLE audit behavior in 12.2

We have noticed a change in the behavior of READ ANY TABLE auditing in 12.2.

The following initial setup is performed as SYS:

SQL> AUDIT
  2      SELECT ANY TABLE;

Audit succeeded.

SQL>
SQL> AUDIT
  2      READ ANY TABLE;

Audit succeeded.

SQL>
SQL> CREATE USER tst_table_owner IDENTIFIED BY tst_table_owner
  2      ACCOUNT LOCK;

User TST_TABLE_OWNER created.

SQL>
SQL> CREATE USER tst_view_owner IDENTIFIED BY tst_view_owner
  2      ACCOUNT LOCK;

User TST_VIEW_OWNER created.

SQL>
SQL> GRANT
  2      READ ANY TABLE
  3  TO tst_view_owner WITH ADMIN OPTION;

Grant succeeded.

SQL>
SQL> GRANT
  2      SELECT ANY TABLE
  3  TO tst_view_owner WITH ADMIN OPTION;

Grant succeeded.

SQL>
SQL> CREATE USER tst_login IDENTIFIED BY tst_login;

User TST_LOGIN created.

SQL>
SQL> GRANT
  2      CREATE SESSION
  3  TO tst_login;

Grant succeeded.

SQL>
SQL> CREATE TABLE tst_table_owner.tst_table
  2      AS
  3          SELECT
  4              *
  5          FROM
  6              dual
  7          WHERE
  8              1 = 0;

Table TST_TABLE_OWNER.TST_TABLE created.

SQL>
SQL> CREATE VIEW tst_view_owner.tst_view AS
  2      SELECT
  3          *
  4      FROM
  5          tst_table_owner.tst_table;

View TST_VIEW_OWNER.TST_VIEW created.

SQL>
SQL> GRANT SELECT ON tst_view_owner.tst_view TO tst_login;

Grant succeeded.

SQL>
SQL> GRANT SELECT ON dba_capture TO tst_login;

Grant succeeded.
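
Before running the test queries it is worth confirming which statement-level audit options are actually in effect; a simple check (my addition):

-- Statement-level audit options currently enabled (traditional auditing)
SELECT user_name, audit_option, success, failure
FROM   dba_stmt_audit_opts
ORDER  BY audit_option;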

The following SELECT statements are then run as the newly created tst_login user:

SELECT
    COUNT(*)
FROM
    tst_view_owner.tst_view;

SELECT
    COUNT(*)
FROM
    dba_capture;

Querying the audit trail in 12.1 shows no entries; however, in 12.2:

SQL> SELECT
  2      db_user,
  3      object_schema,
  4      object_name,
  5      sql_text,
  6      priv_used
  7  FROM
  8      dba_common_audit_trail
  9  WHERE
 10          extended_timestamp > SYSDATE - 2 / 24 / 60
 11      AND
 12          statement_type = 'SELECT';

DB_USER    OBJECT_SCHEMA    OBJECT_NAME               SQL_TEXT  PRIV_USED

TST_LOGIN  TST_TABLE_OWNER  TST_TABLE                           READ ANY TABLE
TST_LOGIN  TST_VIEW_OWNER   TST_VIEW
TST_LOGIN  SYS              XSTREAM$_SERVER
TST_LOGIN  SYS              STREAMS$_APPLY_PROCESS
TST_LOGIN  SYS              STREAMS$_APPLY_MILESTONE
TST_LOGIN  SYS              XSTREAM$_SERVER
TST_LOGIN  SYS              STREAMS$_APPLY_PROCESS
TST_LOGIN  SYS              STREAMS$_APPLY_MILESTONE
TST_LOGIN  SYS              XSTREAM$_SERVER
TST_LOGIN  SYS              XSTREAM$_SERVER
TST_LOGIN  SYS              XSTREAM$_SERVER
TST_LOGIN  SYS              XSTREAM$_SERVER
TST_LOGIN  SYS              XSTREAM$_SERVER
TST_LOGIN  SYS              PROPS$
TST_LOGIN  SYS              DUAL
TST_LOGIN  SYS              PROPS$
TST_LOGIN  SYS              DUAL
TST_LOGIN  SYS              STREAMS$_CAPTURE_PROCESS
TST_LOGIN  SYS              _DBA_CAPTURE
TST_LOGIN  SYSTEM           LOGMNR_SESSION$                     READ ANY TABLE
TST_LOGIN  SYS              DBA_LOGMNR_SESSION
TST_LOGIN  SYS              USER$
TST_LOGIN  SYS              _SXGG_DBA_CAPTURE
TST_LOGIN  SYS              PROPS$
TST_LOGIN  SYS              DUAL
TST_LOGIN  SYS              PROPS$
TST_LOGIN  SYS              DUAL
TST_LOGIN  SYS              STREAMS$_CAPTURE_PROCESS
TST_LOGIN  SYS              _DBA_CAPTURE
TST_LOGIN  SYSTEM           LOGMNR_SESSION$                     READ ANY TABLE
TST_LOGIN  SYS              DBA_LOGMNR_SESSION
TST_LOGIN  SYS              USER$
TST_LOGIN  SYS              _SXGG_DBA_CAPTURE
TST_LOGIN  SYS              DBA_CAPTURE

34 rows selected.

It appears that READ ANY TABLE is audited when querying a view if the privilege is required by the view owner to access the base table.

Oracle Support have now confirmed that this is not the expected behavior, and have created Bug 26035911 : AUDIT RECORDS GENERATED EVEN WHEN THE SYSTEM PRIVILEGE IS NOT EXERCISED IN 12.2.

12cR2 Incrementally Updated Backups and dropped datafiles

We have just noticed a difference in behavior in 12cR2 with regard to image copies being marked as obsolete after the backup is updated past the drop of a datafile.
I won’t describe the feature itself; if necessary you can read up on it at oracle-base or in the Oracle documentation.

First, review the output from the testcase on a 12.1 instance. Note that after dropping the datafile and updating the backup past this point, the datafile copy is marked as obsolete:

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ADAPTEST are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/snapcf_ADAPTEST.f'; # default

RMAN> create tablespace test datafile size 1m autoextend off;

Statement processed

RMAN> backup incremental level 1 for recover of copy with tag 'test_obsolete' database;

Starting backup at 2017-04-19 12:47:09
using channel ORA_DISK_1
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 4 found
no parent backup or copy of datafile 6 found
no parent backup or copy of datafile 5 found
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+DATAC1/ADAPTEST/DATAFILE/system.401.941128155
output file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-SYSTEM_FNO-1_0hs2302d tag=TEST_OBSOLETE RECID=19 STAMP=941719632
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATAC1/ADAPTEST/DATAFILE/sysaux.397.941128123
output file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-SYSAUX_FNO-3_0is2302k tag=TEST_OBSOLETE RECID=20 STAMP=941719639
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+DATAC1/ADAPTEST/DATAFILE/undotbs1.403.941128201
output file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-UNDOTBS1_FNO-4_0js2302r tag=TEST_OBSOLETE RECID=21 STAMP=941719644
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=+DATAC1/ADAPTEST/DATAFILE/users.399.941128199
output file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-USERS_FNO-6_0ks2302s tag=TEST_OBSOLETE RECID=22 STAMP=941719644
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+DATAC1/ADAPTEST/DATAFILE/test.1051.941719625
output file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-TEST_FNO-5_0ls2302t tag=TEST_OBSOLETE RECID=23 STAMP=941719645
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 2017-04-19 12:47:26

Starting Control File and SPFILE Autobackup at 2017-04-19 12:47:26
piece handle=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/c-757536437-20170419-00 comment=NONE
Finished Control File and SPFILE Autobackup at 2017-04-19 12:47:27

RMAN> drop tablespace test including contents and datafiles;

Statement processed

RMAN> backup incremental level 1 for recover of copy with tag 'test_obsolete' database;

Starting backup at 2017-04-19 12:47:41
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATAC1/ADAPTEST/DATAFILE/system.401.941128155
input datafile file number=00003 name=+DATAC1/ADAPTEST/DATAFILE/sysaux.397.941128123
input datafile file number=00004 name=+DATAC1/ADAPTEST/DATAFILE/undotbs1.403.941128201
input datafile file number=00006 name=+DATAC1/ADAPTEST/DATAFILE/users.399.941128199
channel ORA_DISK_1: starting piece 1 at 2017-04-19 12:47:41
channel ORA_DISK_1: finished piece 1 at 2017-04-19 12:47:42
piece handle=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/0ns2303d_1_1 tag=TEST_OBSOLETE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2017-04-19 12:47:42

Starting Control File and SPFILE Autobackup at 2017-04-19 12:47:42
piece handle=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/c-757536437-20170419-01 comment=NONE
Finished Control File and SPFILE Autobackup at 2017-04-19 12:47:43

RMAN> recover copy of database with tag 'test_obsolete';

Starting recover at 2017-04-19 12:47:55
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile copies to recover
recovering datafile copy file number=00001 name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-SYSTEM_FNO-1_0hs2302d
recovering datafile copy file number=00003 name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-SYSAUX_FNO-3_0is2302k
recovering datafile copy file number=00004 name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-UNDOTBS1_FNO-4_0js2302r
recovering datafile copy file number=00006 name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-USERS_FNO-6_0ks2302s
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/0ns2303d_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/0ns2303d_1_1 tag=TEST_OBSOLETE
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished recover at 2017-04-19 12:47:56

Starting Control File and SPFILE Autobackup at 2017-04-19 12:47:56
piece handle=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/c-757536437-20170419-02 comment=NONE
Finished Control File and SPFILE Autobackup at 2017-04-19 12:47:57

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Datafile Copy        23     2017-04-19 12:47:25 /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-TEST_FNO-5_0ls2302t
Backup Set           7      2017-04-19 12:47:26
  Backup Piece       7      2017-04-19 12:47:26 /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/c-757536437-20170419-00
Backup Set           8      2017-04-19 12:47:42
  Backup Piece       8      2017-04-19 12:47:42 /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/0ns2303d_1_1
Backup Set           9      2017-04-19 12:47:42
  Backup Piece       9      2017-04-19 12:47:42 /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/c-757536437-20170419-01

Compare this with the output from the same testcase on a 12.2 instance; this time the datafile copy is not marked as obsolete:

RMAN> show all;

RMAN configuration parameters for database with db_unique_name PVJTEST are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/12.2.0.1/dbhome/dbs/snapcf_PVJTEST.f'; # default

RMAN> create tablespace test datafile size 1m autoextend off;

Statement processed

RMAN> backup incremental level 1 for recover of copy with tag 'test_obsolete' database;

Starting backup at 19-APR-17
using channel ORA_DISK_1
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 4 found
no parent backup or copy of datafile 7 found
no parent backup or copy of datafile 5 found
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u02/oradata/PVJTEST/sysaux01.dbf
output file name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-SYSAUX_FNO-3_0vs2304k tag=TEST_OBSOLETE RECID=36 STAMP=941719708
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u02/oradata/PVJTEST/system01.dbf
output file name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-SYSTEM_FNO-1_10s23053 tag=TEST_OBSOLETE RECID=37 STAMP=941719720
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u02/oradata/PVJTEST/undotbs01.dbf
output file name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-UNDOTBS1_FNO-4_11s2305a tag=TEST_OBSOLETE RECID=38 STAMP=941719724
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=/u02/oradata/PVJTEST/users01.dbf
output file name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-USERS_FNO-7_12s2305d tag=TEST_OBSOLETE RECID=39 STAMP=941719726
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/u02/oradata/PVJTEST/datafile/o1_mf_test_dhfv0gk3_.dbf
output file name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-TEST_FNO-5_13s2305f tag=TEST_OBSOLETE RECID=40 STAMP=941719727
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 19-APR-17

Starting Control File and SPFILE Autobackup at 19-APR-17
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/c-2122366327-20170419-0e comment=NONE
Finished Control File and SPFILE Autobackup at 19-APR-17

RMAN> drop tablespace test including contents and datafiles;

Statement processed

RMAN> backup incremental level 1 for recover of copy with tag 'test_obsolete' database;

Starting backup at 19-APR-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/PVJTEST/sysaux01.dbf
input datafile file number=00001 name=/u02/oradata/PVJTEST/system01.dbf
input datafile file number=00004 name=/u02/oradata/PVJTEST/undotbs01.dbf
input datafile file number=00007 name=/u02/oradata/PVJTEST/users01.dbf
channel ORA_DISK_1: starting piece 1 at 19-APR-17
channel ORA_DISK_1: finished piece 1 at 19-APR-17
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/15s2305t_1_1 tag=TEST_OBSOLETE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 19-APR-17

Starting Control File and SPFILE Autobackup at 19-APR-17
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/c-2122366327-20170419-0f comment=NONE
Finished Control File and SPFILE Autobackup at 19-APR-17

RMAN> recover copy of database with tag 'test_obsolete';

Starting recover at 19-APR-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile copies to recover
recovering datafile copy file number=00001 name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-SYSTEM_FNO-1_10s23053
recovering datafile copy file number=00003 name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-SYSAUX_FNO-3_0vs2304k
recovering datafile copy file number=00004 name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-UNDOTBS1_FNO-4_11s2305a
recovering datafile copy file number=00007 name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-USERS_FNO-7_12s2305d
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/12.2.0.1/dbhome/dbs/15s2305t_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/15s2305t_1_1 tag=TEST_OBSOLETE
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished recover at 19-APR-17

Starting Control File and SPFILE Autobackup at 19-APR-17
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/c-2122366327-20170419-10 comment=NONE
Finished Control File and SPFILE Autobackup at 19-APR-17

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           22     19-APR-17         
  Backup Piece       22     19-APR-17          /u01/app/oracle/product/12.2.0.1/dbhome/dbs/15s2305t_1_1