Planned (Cursor) Obsolescence

I’ll start this blog post by posing a question: is it possible to have multiple records in v$sql for a given sql_id and child_number combination? While the title of this post may give you some clues, I’ll admit I’d always assumed that those two values uniquely identified a child cursor.

As a bit of background, we had a database availability incident this week, which we narrowed down to SGA issues, specifically bug 15881004 “Excessive SGA memory usage with Extended Cursor Sharing”. Some of our more complex SQL statements were getting many (more than 700) child cursors, and the reported reason for the new children was “Bind mismatch(33)”. This was probably caused by bug 14176247 “Many child cursors using Adaptive Cursor Sharing with binds (due to BIND_EQUIV_FAILURE)”, although that is listed as fixed in 12.1 and this instance is running 12.2.

We resolved the immediate issue by flushing the shared pool (admittedly not a great solution, but sometimes you’ve got to do what you’ve got to do), and created SQL Plan Baselines for the problem SQL statements so that each would get just one plan and one child cursor.

We plan to monitor more closely for SQL statements that accumulate many child cursors, but we also need to make sure that even if that does happen again it doesn’t break the system. One thing that seemed promising is the _cursor_obsolete_threshold parameter. We had already reduced this parameter to 1024 from its default of 8192 based on Mike Dietrich’s blog post, but after this incident we considered reducing it further. I think it’s wise to be wary of messing too much with underscore parameters, but per Doc ID 2431353.1 Oracle Support say “the … parameter can be adjusted case-to-case basis should there be a problem”. We certainly had a significant problem with the setting at 1024, so we plan to reduce it further to 512.
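As a starting point for that monitoring, a simple query along these lines can flag statements that are accumulating child cursors; the threshold of 100 is an arbitrary warning level picked for illustration, not an Oracle default:

```sql
-- Flag SQL statements accumulating many child cursors, well before
-- they approach _cursor_obsolete_threshold (100 is an arbitrary level)
SELECT sql_id,
       COUNT(*)          AS child_cursors,
       MAX(child_number) AS max_child_number
FROM   v$sql
GROUP  BY sql_id
HAVING COUNT(*) > 100
ORDER  BY COUNT(*) DESC;
```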

We involved super-consultant Stefan Koehler to review our findings and action plan. He was broadly in agreement, even recommending a further reduction of the parameter value to 256. However, one thing puzzling me, which I asked him, was: “What actually happens if the number of child cursors hits the value specified by this parameter?” His answer: “Well what happens is this … if your parent cursor got more than _cursor_obsolete_threshold child cursors it invalidates the parent (and in consequence all childs) and it starts from 0 again”.

I was skeptical; my expectation was that Oracle would just invalidate the oldest unused child cursor and re-use that child number. Another thing puzzling me was what happens if some of the child cursors are still held open? Time to test this out for myself…

First let me demonstrate how I can get 4 child cursors for a given SQL statement, using different values of optimizer_index_cost_adj as a quick hack.
SQL> alter system flush shared_pool;
SQL> select count(*) from all_objects;
  COUNT(*)
     74807

SQL> select prev_sql_id from v$session where sid=sys_context('userenv','sid');
PREV_SQL_ID
9tz4qu4rj9rdp

SQL> alter session set optimizer_index_cost_adj=1;
SQL> select count(*) from all_objects;
  COUNT(*)
     74807

SQL> alter session set optimizer_index_cost_adj=2;
SQL> select count(*) from all_objects;
  COUNT(*)
     74807

SQL> alter session set optimizer_index_cost_adj=3;
SQL> select count(*) from all_objects;
  COUNT(*)
     74807

SQL> select sql_id, child_number, executions from v$sql where sql_id = '9tz4qu4rj9rdp';
SQL_ID            CHILD_NUMBER   EXECUTIONS
9tz4qu4rj9rdp                0            1
9tz4qu4rj9rdp                1            1
9tz4qu4rj9rdp                2            1
9tz4qu4rj9rdp                3            1

Let me reduce _cursor_obsolete_threshold at session level and re-run the test.

SQL> alter system flush shared_pool;
SQL> alter session set "_cursor_obsolete_threshold"=2;
SQL> alter session set optimizer_index_cost_adj=100;
SQL> select count(*) from all_objects;
  COUNT(*)
     74807

SQL> alter session set optimizer_index_cost_adj=1;
SQL> select count(*) from all_objects;
  COUNT(*)
     74807

SQL> alter session set optimizer_index_cost_adj=2;
SQL> select count(*) from all_objects;
  COUNT(*)
     74807

SQL> alter session set optimizer_index_cost_adj=3;
SQL> select count(*) from all_objects;
  COUNT(*)
     74807

SQL> select sql_id, child_number, executions from v$sql where sql_id = '9tz4qu4rj9rdp';
SQL_ID            CHILD_NUMBER   EXECUTIONS
9tz4qu4rj9rdp                0            1
9tz4qu4rj9rdp                1            1
9tz4qu4rj9rdp                0            1
9tz4qu4rj9rdp                1            1

Whoa… each combination of sql_id and child_number has two entries (not what I was expecting to see). To get a fuller picture we need to look at a couple of additional columns, namely ADDRESS and IS_OBSOLETE.

SQL> select sql_id,  address, child_number, is_obsolete from v$sql where sql_id = '9tz4qu4rj9rdp';
SQL_ID          ADDRESS              CHILD_NUMBER IS_OBSOLETE
9tz4qu4rj9rdp   00000000610AB500                0 Y
9tz4qu4rj9rdp   00000000610AB500                1 Y
9tz4qu4rj9rdp   0000000073DDE788                0 N
9tz4qu4rj9rdp   0000000073DDE788                1 N

Although we tend to use sql_id as our handle for the parent cursor, Oracle actually uses the ADDRESS column, and when the _cursor_obsolete_threshold value is exceeded, Oracle allocates a new parent cursor with a new address. This explains how Oracle copes when old child cursors are held open: they stay in the shared pool, keeping their address, but are marked as obsolete, ready to be aged out once they are no longer in use.

The other lessons here: firstly, Stefan knows his stuff; but also, whenever someone tells you something, don’t just take it on trust. It’s normally easy to validate for yourself, and you may learn something about how Oracle works along the way.

Resource manager session_pga_limit has its own limits

Recently we hit an issue with a complex SQL statement (formatted, it ran to 44,000 lines; maybe the subject of a separate blog post) causing the CBO to struggle, consuming large amounts of PGA memory, and the host to start swapping and impacting other database users.

The pga_aggregate_limit parameter did not appear to be kicking in (maybe because this was happening during the parse phase), so while looking for a proper solution we considered other ways to limit the impact of this problem SQL.

As we are on release 12.2, one thing we tried was a relatively new resource manager feature, session_pga_limit. This should limit the PGA any one session can consume (as opposed to pga_aggregate_limit, which is instance-wide); however, new features can be a little temperamental, especially in the first few versions after they are introduced.

After a bit of trial and error we determined that setting it to 4096 MB (4 GB) or more causes the feature not to kick in at all.

The following is my testcase on a 12.2.0.1.180717 PDB. I could not reproduce the behavior on 18c (non-multitenant), implying this limitation (bug?) has likely been fixed.

First we create a resource manager plan, consumer groups and directives, and configure the instance to use this plan.

SQL> BEGIN
  2    sys.DBMS_RESOURCE_MANAGER.clear_pending_area();
  3
  4    sys.DBMS_RESOURCE_MANAGER.create_pending_area();
  5
  6    sys.DBMS_RESOURCE_MANAGER.create_plan(
  7      plan    => 'PGA_PLAN',
  8      comment => 'Plan to demonstrate behaviour with session_pga_limit >= 4096');
  9
 10    sys.DBMS_RESOURCE_MANAGER.create_consumer_group(
 11      consumer_group => 'PGA_LIMIT_4095_GROUP',
 12      comment        => '4095 MB PGA Limit');
 13
 14    sys.DBMS_RESOURCE_MANAGER.create_consumer_group(
 15      consumer_group => 'PGA_LIMIT_4096_GROUP',
 16      comment        => '4096 MB PGA Limit');
 17
 18    sys.DBMS_RESOURCE_MANAGER.create_plan_directive (
 19      plan              => 'PGA_PLAN',
 20      group_or_subplan  => 'PGA_LIMIT_4095_GROUP',
 21      session_pga_limit => 4095);
 22
 23    sys.DBMS_RESOURCE_MANAGER.create_plan_directive (
 24      plan              => 'PGA_PLAN',
 25      group_or_subplan  => 'PGA_LIMIT_4096_GROUP',
 26      session_pga_limit => 4096);
 27
 28    sys.DBMS_RESOURCE_MANAGER.create_plan_directive (
 29      plan              => 'PGA_PLAN',
 30      group_or_subplan  => 'OTHER_GROUPS');
 31
 32    sys.DBMS_RESOURCE_MANAGER.validate_pending_area;
 33
 34    sys.DBMS_RESOURCE_MANAGER.submit_pending_area();
 35  END;
 36  /

PL/SQL procedure successfully completed.

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = PGA_PLAN;

System altered.

Then we map my test user to the consumer group limiting it to 4095 MB of PGA.

SQL> BEGIN
  2    sys.DBMS_RESOURCE_MANAGER.clear_pending_area();
  3    sys.DBMS_RESOURCE_MANAGER.create_pending_area();
  4
  5    sys.DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
  6      attribute      => DBMS_RESOURCE_MANAGER.oracle_user,
  7      value          => 'TEST_PGA_USER',
  8      consumer_group => 'PGA_LIMIT_4095_GROUP');
  9
 10    sys.DBMS_RESOURCE_MANAGER.validate_pending_area;
 11    sys.DBMS_RESOURCE_MANAGER.submit_pending_area();
 12  END;
 13  /

We connect as the test user and execute some stupid PL/SQL that sits in a tight loop eating PGA. Observe that the resource manager directive is obeyed when the PGA hits 4095 MB.

SQL>   declare
  2          type vc_tt is table of VARCHAR2(32767);
  3          vc_t vc_tt := vc_tt() ;
  4      begin
  5          while TRUE
  6          loop
  7               vc_t.extend();
  8          end loop;
  9      end;
 10   /
  declare
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-10260: PGA limit (4095 MB) exceeded - process terminated
ORA-06512: at line 7

Let’s remap my test user to the consumer group limited to 4096 MB.

SQL> BEGIN
  2    sys.DBMS_RESOURCE_MANAGER.clear_pending_area();
  3    sys.DBMS_RESOURCE_MANAGER.create_pending_area();
  4
  5    sys.DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
  6      attribute      => DBMS_RESOURCE_MANAGER.oracle_user,
  7      value          => 'TEST_PGA_USER',
  8      consumer_group => 'PGA_LIMIT_4096_GROUP');
  9
 10    sys.DBMS_RESOURCE_MANAGER.validate_pending_area;
 11    sys.DBMS_RESOURCE_MANAGER.submit_pending_area();
 12  END;
 13  /

PL/SQL procedure successfully completed.

My test program runs unchecked:

SQL> declare
  2      type vc_tt is table of VARCHAR2(32767);
  3      vc_t vc_tt := vc_tt() ;
  4  begin
  5      while TRUE
  6      loop
  7           vc_t.extend();
  8      end loop;
  9  end;
 10  /

Checking the PGA allocation from another session, we can see it is up to 8546 MB, way past the 4096 MB it should be limited to.

SQL>  SELECT
  2      spid,
  3      resource_consumer_group,
  4      round(pga_used_mem / 1024 / 1024) pga_used_mb
  5  FROM
  6      v$session s,
  7      v$process p
  8  WHERE
  9      s.username LIKE 'TEST_PGA_USER'
 10      AND p.addr = s.paddr;

SPID                     RESOURCE_CONSUMER_GROUP          PGA_USED_MB
------------------------ -------------------------------- -----------
234513                   PGA_LIMIT_4096_GROUP                    8546

Note if you’re testing this yourself: be careful, don’t do it on a production instance, and be prepared to kill the runaway session forcefully.

Fun with SQL Translation Framework – By-Passing Parse Errors

Invalid SQL being sent to the database is something to watch out for. It’s a real performance killer because once a SQL statement is rejected by the parser it is not cached in the shared pool, resulting in a hard parse every time the statement is encountered.

One cool new feature of Oracle 12.2 is that such SQL parse errors are automatically logged to the alert log (by default, every 100 occurrences of a particular SQL).

(Random thought: the database must be storing this invalid SQL somewhere to keep track of the parse error count; I wonder where that is? I guess that even though it has cached the statement in its “invalid SQL” list, it will still have to re-parse every time the statement is encountered, as the statement may become valid if, say, a table it relied on gets created.) (Edit 2018-10-10: as is often the case, Jonathan Lewis seems to have the answer to this.)

A similar effect can be achieved on previous versions by setting event 10035.
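For reference, the event can be set along these lines; this is a sketch, and as with any diagnostic event it is worth validating with Oracle Support before use on a production system:

```sql
-- Log SQL statements that fail to parse (pre-12.2 equivalent of the
-- automatic alert log entries in 12.2)
ALTER SYSTEM SET EVENTS '10035 trace name context forever, level 1';

-- Switch the logging off again when done
ALTER SYSTEM SET EVENTS '10035 trace name context off';
```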

One instance that I’ve been monitoring has regular occurrences of the following SQL:

SELECT INVALID SELECT STATEMENT TO FORCE ODBC DRIVER TO UNPREPARED STATE

A quick Google returns lots of hits but no real solution. Oracle Support bug 8469553 also has some clues. Basically it seems to be a problem with older ODBC versions, and the likely solution is to upgrade the ODBC driver.

However I was thinking about a short-term fix, and recalled a presentation from Kerry Osborne on using the SQL Translation Framework to transform one SQL statement into another. Could I transform this invalid statement into a valid one?

To my surprise it worked, as I demonstrate below:

SQL> begin
  2     dbms_sql_translator.create_profile('odbc_profile');
  3     dbms_sql_translator.register_sql_translation( profile_name => 'odbc_profile',
  4                                                   sql_text => 'SELECT INVALID SELECT STATEMENT TO FORCE ODBC DRIVER TO UNPREPARED STATE',
  5                                                   translated_text => 'SELECT DUMMY FROM DUAL');
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> alter session set sql_translation_profile=odbc_profile;

Session altered.

SQL> alter session set events = '10601 trace name context forever, level 32';

Session altered.

SQL> SELECT INVALID SELECT STATEMENT TO FORCE ODBC DRIVER TO UNPREPARED STATE;
D
-
X

SQL>

Now I’m not sure whether this has any knock-on effects on the application in question, but it at least shows a usage of the SQL Translation Framework that I hadn’t seen or considered before.

I’m sure there are many more.

 

Autonomous MView Log Shrinkage

With version 12.2 there seems to have been a change in the behavior of materialised view logs which caught us by surprise; I haven’t seen it documented elsewhere, so let me demonstrate.

My test instance is un-patched 12.2.0.1 running on Oracle Linux.

For the setup, I create a table, an mview log on that table, an mview that can use the mview log, and a stored procedure that inserts and deletes rows from the table (thus populating the mview log).

SQL> CREATE TABLE detail (
  2      id        NUMBER
  3          GENERATED BY DEFAULT ON NULL AS IDENTITY
  4      PRIMARY KEY,
  5      padding   VARCHAR2(255)
  6  );

Table DETAIL created.

SQL> CREATE MATERIALIZED VIEW LOG ON detail WITH
  2      ROWID,
  3      SEQUENCE ( id,
  4                 padding )
  5      INCLUDING NEW VALUES;

Materialized view log DETAIL created.

SQL> CREATE MATERIALIZED VIEW summary AS
  2      SELECT
  3          COUNT(*)
  4      FROM
  5          detail;

Materialized view SUMMARY created.

SQL> CREATE PROCEDURE insert_and_delete AS
  2  BEGIN
  3      INSERT /*+append*/ INTO detail ( padding )
  4          SELECT
  5              lpad('X',255,'X')
  6          FROM
  7              dual
  8          CONNECT BY
  9              level <= 100000  -- "<=" and bound mangled by WordPress; 100000 is illustrative
 10              ;
 11  
 12      COMMIT; 
 13      DELETE FROM detail;  
 14  
 15      COMMIT;
 16  END;
 17  /
 Procedure INSERT_AND_DELETE compiled 
SQL>

Then I call the stored procedure to populate the mview log, and then perform a fast refresh.

SQL> EXEC insert_and_delete;

PL/SQL procedure successfully completed.

SQL> EXEC dbms_mview.refresh('summary','f');

PL/SQL procedure successfully completed.

SQL>

Observe from the alert log that, because the entries in the mview log were deleted as part of the fast refresh, Oracle determines that it is appropriate to enable row movement on the mview log table and perform a “shrink space” operation.

MVRF: kkzlShrinkMVLog: recommendations: Enable row movement of the table PATRICK.MLOG$_DETAIL and perform shrink, estimated savings is 38789120 bytes.
MVRF: kkzlShrinkMVLog: executed: alter table "PATRICK"."MLOG$_DETAIL" enable row movement
MVRF: kkzlShrinkMVLog: executed: alter table "PATRICK"."MLOG$_DETAIL" shrink space

Re-running the test, it is not necessary to re-enable row movement (the table retains that setting), so only the “shrink space” action is executed.

SQL> EXEC insert_and_delete;

PL/SQL procedure successfully completed.

SQL> EXEC dbms_mview.refresh('summary','f');

PL/SQL procedure successfully completed.

SQL>
MVRF: kkzlShrinkMVLog: recommendations: Perform shrink, estimated savings is 38789120 bytes.
MVRF: kkzlShrinkMVLog: executed: alter table "PATRICK"."MLOG$_DETAIL" shrink space

This behavior caused us some problems on our live system, because other sessions were blocked trying to refresh the mview while the shrink space was running.

Oracle Support Doc ID 2320441.1 describes this behavior and suggests that an underscore parameter, _mv_refresh_shrink_log, can be used to disable the shrink.

I re-run my testcase with this set at session level:

SQL> ALTER SESSION SET "_mv_refresh_shrink_log" = false;

Session altered.

SQL> EXEC insert_and_delete;

PL/SQL procedure successfully completed.

SQL> EXEC dbms_mview.refresh('summary','f');

PL/SQL procedure successfully completed.

SQL>

Monitoring the alert log, it can be seen that the “shrink space” operation no longer takes place.

If you apply this change you may want to keep an eye on your mview log sizes, and shrink manually if necessary.

Do you come here often? 12.2 change in behavior for DBA_USERS.LAST_LOGIN with Proxy Authentication

The behavior of the LAST_LOGIN column of DBA_USERS has changed with respect to proxy authentication (for the better, I think).

Proxy authentication is a feature of the Oracle Database that effectively allows you to be connected as one user (the client user, to use Oracle’s terminology) while authenticating with the credentials of another user (the proxy user). This is useful when personal accounts (one per person) are the proxy users and application accounts are the client users, avoiding the need for people to share application account passwords.

The test-case below demonstrates that when using proxy authentication in 12.1, the last login for the client user (only) is updated.

[oracle@lnx-ora121 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Sep 17 05:03:55 2018

Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create user proxy_user identified by proxy_user;

User created.

SQL> create user client_user identified by client_user;

User created.

SQL> grant create session to proxy_user;

Grant succeeded.

SQL> grant create session to client_user;

Grant succeeded.

SQL> alter user client_user grant connect through proxy_user;

User altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@lnx-ora121 ~]$ sqlplus proxy_user[client_user]/proxy_user

SQL*Plus: Release 12.1.0.2.0 Production on Mon Sep 17 05:05:13 2018

Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@lnx-ora121 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Sep 17 05:05:45 2018

Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select last_login from dba_users where username = 'PROXY_USER';

LAST_LOGIN
---------------------------------------------------------------------------
SQL> select last_login from dba_users where username = 'CLIENT_USER';

LAST_LOGIN
---------------------------------------------------------------------------
17-SEP-18 05.05.33.000000000 AM +00:00

SQL>

Contrast this with the same test running on 12.2, and you can see that now it is the proxy user whose last login time is updated.

[oracle@lnx-ora122 ~]$ sqlplus / as sysdba 

SQL*Plus: Release 12.2.0.1.0 Production on Mon Sep 17 05:04:22 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> create user proxy_user identified by proxy_user;

User created.

SQL> create user client_user identified by client_user;

User created.

SQL> grant create session to proxy_user;

Grant succeeded.

SQL> grant create session to client_user;

Grant succeeded.

SQL> alter user client_user grant connect through proxy_user;

User altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@lnx-ora122 ~]$ sqlplus proxy_user[client_user]/proxy_user

SQL*Plus: Release 12.2.0.1.0 Production on Mon Sep 17 05:06:25 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

[oracle@lnx-ora122 ~]$ sqlplus proxy_user[client_user]/proxy_user                     

SQL*Plus: Release 12.2.0.1.0 Production on Mon Sep 17 05:07:18 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

^CERROR:
ORA-28547: connection to server failed, probable Oracle Net admin error


[oracle@lnx-ora122 ~]$ sqlplus / as sysdba                                            

SQL*Plus: Release 12.2.0.1.0 Production on Mon Sep 17 05:07:42 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select last_login from dba_users where username = 'PROXY_USER';

LAST_LOGIN
---------------------------------------------------------------------------
17-SEP-18 05.07.05.000000000 AM +00:00

SQL> select last_login from dba_users where username = 'CLIENT_USER';

LAST_LOGIN
---------------------------------------------------------------------------


SQL>

 
Previously, if an account was only ever used as a proxy user, there was no way of knowing whether it was actually being used (without implementing a logon trigger and storing the login time in a separate table). With this change we can tell, for such an account, if and when it is being used.
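For completeness, the old logon-trigger workaround might have looked something like the sketch below; the table and trigger names are made up for illustration, and you would want to think about error handling and housekeeping before using anything like it:

```sql
-- Hypothetical audit table and logon trigger (names invented for
-- illustration) to record proxy logins on versions where LAST_LOGIN
-- is not updated for the proxy user
CREATE TABLE proxy_login_audit (
    proxy_user  VARCHAR2(128),
    client_user VARCHAR2(128),
    login_time  TIMESTAMP DEFAULT SYSTIMESTAMP
);

CREATE OR REPLACE TRIGGER trg_record_proxy_login
    AFTER LOGON ON DATABASE
BEGIN
    -- PROXY_USER in the USERENV context is only set for proxy connections
    IF SYS_CONTEXT('USERENV', 'PROXY_USER') IS NOT NULL THEN
        INSERT INTO proxy_login_audit (proxy_user, client_user)
        VALUES (SYS_CONTEXT('USERENV', 'PROXY_USER'),
                SYS_CONTEXT('USERENV', 'SESSION_USER'));
    END IF;
END;
/
```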

Problems with Kerberos Credentials to 12.2 Databases in Enterprise Manager

Just a quick note in case anyone else hits this issue. We make extensive use of Kerberos to give us Windows single sign-on. For database connections via Enterprise Manager we use the “Database Kerberos Credentials” credential type and have no problems connecting to 11.2 and 12.1 databases (apart from the slight pain points that it’s not possible to use them as preferred credentials, and the need to update the credential whenever the Windows password is updated).

However, as we have been migrating databases to 12.2, the credentials have not worked; testing against such a database sometimes gives the following error message:

Credentials could not be verified. EXCEPTION_WHILE_CREATING_CONN_FROMSUB

Surprisingly, sometimes testing the credential also completes successfully.

Testing through various combinations, I’ve discovered that the problem is the following line in the [libdefaults] section of the Kerberos configuration file, krb5.conf, on the Enterprise Manager server:

forwardable = true

After commenting out this line, Kerberos credentials test successfully against 12.2 databases. I have no idea what change in 12.2 makes this setting a problem; if anyone has any ideas, feel free to share.

12.2 controlfile backup never gets marked as obsolete

I’ve blogged about this bug in passing, but I thought it would be worthwhile to document my testcase. Basically, backups of controlfiles never get marked as obsolete. The issue reproduces for both image copies and backupsets; my testcase uses the latter for simplicity.

RMAN> show all;

RMAN configuration parameters for database with db_unique_name PVJTEST are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/12.2.0.1/dbhome/dbs/snapcf_PVJTEST.f'; # default

RMAN> list backup of controlfile;

specification does not match any backup in the repository

RMAN> backup database;

Starting backup at 2017-11-01 11:31:38
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/PVJTEST/sysaux01.dbf
input datafile file number=00001 name=/u02/oradata/PVJTEST/system01.dbf
input datafile file number=00004 name=/u02/oradata/PVJTEST/undotbs01.dbf
input datafile file number=00007 name=/u02/oradata/PVJTEST/users01.dbf
channel ORA_DISK_1: starting piece 1 at 2017-11-01 11:31:38
channel ORA_DISK_1: finished piece 1 at 2017-11-01 11:31:53
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/agsifi8q_1_1 tag=TAG20171101T113138 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 2017-11-01 11:31:54

Starting Control File and SPFILE Autobackup at 2017-11-01 11:31:54
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/c-2122366327-20171101-07 comment=NONE
Finished Control File and SPFILE Autobackup at 2017-11-01 11:31:55

RMAN> backup database;

Starting backup at 2017-11-01 11:31:59
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/PVJTEST/sysaux01.dbf
input datafile file number=00001 name=/u02/oradata/PVJTEST/system01.dbf
input datafile file number=00004 name=/u02/oradata/PVJTEST/undotbs01.dbf
input datafile file number=00007 name=/u02/oradata/PVJTEST/users01.dbf
channel ORA_DISK_1: starting piece 1 at 2017-11-01 11:31:59
channel ORA_DISK_1: finished piece 1 at 2017-11-01 11:32:14
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/aisifi9f_1_1 tag=TAG20171101T113159 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 2017-11-01 11:32:14

Starting Control File and SPFILE Autobackup at 2017-11-01 11:32:14
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/c-2122366327-20171101-08 comment=NONE
Finished Control File and SPFILE Autobackup at 2017-11-01 11:32:15

RMAN> delete obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
using channel ORA_DISK_1
Deleting the following obsolete backups and copies:
Type Key Completion Time Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set 207 2017-11-01 11:31:53
Backup Piece 207 2017-11-01 11:31:53 /u01/app/oracle/product/12.2.0.1/dbhome/dbs/agsifi8q_1_1

Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/agsifi8q_1_1 RECID=207 STAMP=958908699
Deleted 1 objects

 

RMAN> list backup of controlfile;

 

List of Backup Sets
===================

 

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
208 Full 10.47M DISK 00:00:00 2017-11-01 11:31:54
BP Key: 208 Status: AVAILABLE Compressed: NO Tag: TAG20171101T113154
Piece Name: /u01/app/oracle/product/12.2.0.1/dbhome/dbs/c-2122366327-20171101-07
Control File Included: Ckp SCN: 12426398 Ckp time: 2017-11-01 11:31:54

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
210 Full 10.47M DISK 00:00:00 2017-11-01 11:32:14
BP Key: 210 Status: AVAILABLE Compressed: NO Tag: TAG20171101T113214
Piece Name: /u01/app/oracle/product/12.2.0.1/dbhome/dbs/c-2122366327-20171101-08
Control File Included: Ckp SCN: 12426426 Ckp time: 2017-11-01 11:32:14

RMAN>

Oracle Support initially identified the following bug:
Bug 25943271 : RMAN REPORT OBSOLETE DOES NOT REPORT CONTROLFILE BACKUP AS OBSOLETE

However, after waiting for the patch, it still did not resolve the issue. It seems this may now be the following bug:
Bug 26771767 : SPFILE BACKUP OR BACKUP PIECE INCLUDES SPFILE ARE NOT REPORTED AS OBSOLETE

We are working around the issue at the moment by modifying our RMAN scripts to delete controlfile backups older than a certain age. Still waiting for the fix, after supplying a reproducible testcase over six months ago 🙂
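The workaround in our scripts is along these lines; the 14-day window is just an example value, not a recommendation, so size it to your own retention needs:

```sql
-- RMAN (not SQL): delete controlfile backups past an age threshold,
-- since the retention policy never marks them obsolete
DELETE NOPROMPT BACKUP OF CONTROLFILE COMPLETED BEFORE 'SYSDATE-14';
```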

“ORA-20110: set stamp set count conflict” after upgrading to 12.2 Recovery Catalog

This is the second time we have hit this error after upgrading the recovery catalog to 12.2.

The first is now documented by Oracle as Doc ID 2291791.1, and affects 12.1 multi-tenant instances with DBBP 12.1.0.2.161018 or later applied.
If you have applied that patchset, you can see the problem by checking the contents of the file $ORACLE_HOME/rdbms/admin/recover.bsq:

  cursor bs(low_recid number, high_recid number) is
    select bs.recid, bs.stamp, bs.set_stamp, bs.set_count, bs.backup_type,
           bs.incremental_level,
           bs.pieces, start_time, completion_time, controlfile_included,
           bs.input_file_scan_only, keep_until,
           decode (bs.keep_options, 'LOGS'      , KEEP_LOGS
                               , 'NOLOGS'       , KEEP_NOLOGS
                               , 'BACKUP_LOGS'  , KEEP_CONSIST
                                                , 0) keep_options,
           bs.block_size, bs.multi_section, bs.guid,
           decode(bs.con_id, invalid_pdbid, 1, 0) dropped_pdb
    from v$backup_set bs, v$containers pdb
    where bs.recid between low_recid and high_recid
      and (bs.stamp >= kccdivts OR bs.recid = high_recid)
      and bs.stamp >= resyncstamp
      and bs.for_xtts != 'YES'
    order by bs.recid;

Note that the join between v$backup_set and v$containers has been missed entirely(!). The next cursor in the source, bp, has the same omission.
You can work around the issue by adding the following condition to the two cursors, but you should probably check with Oracle Support first.

 and bs.con_id = pdb.con_id(+)

The latest issue is on an 11g database (hence single-tenant, obviously) at a Data Guard standby site (in this case we take backups from both the primary and standby instances).
Tracing the resync of the catalog from the standby site, the issue begins when inserting into the bs (backup sets) table in the recovery catalog.
The insert fails because of a violation of the unique key BS_U2 on columns db_key, set_stamp and set_count.
Checking the recovery catalog, there is already a record for this combination of set_stamp and set_count, but from the primary instance.

My suspicion is that the unique key also needs to include the site_key column; however, I don’t have a good enough understanding of what the set_stamp, set_count combination represents to be sure.
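If the same suspicion applies to your catalog, a query along these lines against the catalog’s bs table shows whether the clashing rows really belong to different sites. This is only a sketch based on the column names mentioned above (db_key, set_stamp, set_count, site_key); run it as the recovery catalog owner and adjust for your catalog version:

```sql
-- List set_stamp/set_count combinations that appear more than once
-- for the same db_key, and how many distinct sites contributed them.
select db_key, set_stamp, set_count,
       count(distinct site_key) as site_count,
       count(*)                 as row_count
  from bs
 group by db_key, set_stamp, set_count
having count(*) > 1;
```

Rows with site_count greater than 1 would support the theory that primary and standby backups are colliding on the key.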

The only references to these fields I can find are quotes in the documentation such as the following:

The SET_STAMP value from V$BACKUP_SET. SET_STAMP and SET_COUNT form a concatenated key that uniquely identifies this record in the target database control file.

What does seem apparent is that the 12.2 recovery catalog is stricter in enforcing this uniqueness, thus revealing some bugs elsewhere in the codebase.
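You can confirm exactly which columns the key covers in your own catalog. Assuming BS_U2 is enforced by an index of the same name (the usual Oracle arrangement), a standard dictionary query run as the recovery catalog owner will show it:

```sql
-- Show the column list of the unique index backing BS_U2.
select index_name, column_name, column_position
  from user_ind_columns
 where index_name = 'BS_U2'
 order by column_position;
```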

A Consolidated List of 12cR2 Issues

I thought it would be useful to have a consolidated list of the issues we have run into, some of which I have already blogged about and some of which I haven’t. I will try to keep this page up to date going forward. Note that we are not yet using 12cR2 extensively; so far we have only upgraded our OEM Repository and AWR Warehouse instances.

Recovery Manager

Upgrade Catalog Fails with RMAN-06444: error creating init_grsp_pdb_key

Oracle have now published a document about this issue:
UPGRADE CATALOG command from 12.1 to 12.2 Fails With RMAN-6004 and ORA-1422 (Doc ID 2252894.1)

Image Copies never marked as obsolete after datafile is removed

Bug 26115103 – REPORT OBSOLETE NOT SHOWING OLD DATAFILE COPY IN 12.2

Control File autobackups are never marked as obsolete

Bug 25943271: RMAN REPORT OBSOLETE DOES NOT REPORT CONTROLFILE BACKUP AS OBSOLETE

After upgrading the catalog to 12.2, resync failing with “ORA-20110: set stamp set count conflict”

Bug 26385473 : REGISTER TARGET DATABASE WITH 12.2 CATALOG FAILS WITH RMAN-03008 ORA-20110

This seems to be a problem specific to 12.1 multi-tenant instances registered in a 12.2 catalog. It appears that RMAN is attempting to catalog the same backup set once for each PDB.

Alain Fuhrer has run into this too and has more details.

Tracing on the target system reveals the problem seems to be caused by SQL like the following, which ‘forgets’ to provide the join condition between the two tables queried:

  SELECT BS.RECID, BS.STAMP, BS.SET_STAMP, BS.SET_COUNT, BS.BACKUP_TYPE,
         BS.INCREMENTAL_LEVEL, BS.PIECES, START_TIME, COMPLETION_TIME,
         CONTROLFILE_INCLUDED, BS.INPUT_FILE_SCAN_ONLY, KEEP_UNTIL,
         DECODE(BS.KEEP_OPTIONS, 'LOGS', :b1, 'NOLOGS', :b2,
                'BACKUP_LOGS', :b3, 0) KEEP_OPTIONS,
         BS.BLOCK_SIZE, BS.MULTI_SECTION, BS.GUID,
         DECODE(BS.CON_ID, :b4, 1, 0) DROPPED_PDB
  FROM V$BACKUP_SET BS, V$CONTAINERS PDB
  WHERE BS.RECID BETWEEN :b5 AND :b6
    AND (BS.STAMP >= :b7 OR BS.RECID = :b6)
    AND BS.STAMP >= :b8
    AND BS.FOR_XTTS != 'YES'
  ORDER BY BS.RECID


AWR and AWR Warehouse

AWR Warehouse load process failing with ORA-600 after upgrading repository

The error message is [kewrspbr_2: wrong last partition]. Doc ID 2020227.1 describes the fix for a similar issue. I found partitioning was wrong on some of the new 12.2 AWR tables, so I recreated them and added a partition for each database contributing to the AWR Warehouse. I have been unable to reproduce this issue on a test system.

WRH$_SGASTAT_U becoming unusable

This only affects databases upgraded from 12.1.0.2; the workaround is to re-create the index.
BUG 25954054 – WRH$_SGASTAT_U BECOMING UNUSABLE STATE IN UPGRADED DB
A patch is now available for this issue.
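For reference, re-creating the index amounts to a rebuild. A minimal sketch, run as SYS on the affected database (the index name comes from the bug title):

```sql
-- Rebuild the unusable AWR index.
alter index sys.wrh$_sgastat_u rebuild;
```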

AWR Transfer Task Failing with ORA-28040

Set SQLNET.ALLOWED_LOGON_VERSION_SERVER to 11
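That is a one-line entry in sqlnet.ora. Which side needs it depends on your topology; the assumption here is that it goes on the database server raising ORA-28040:

```
# sqlnet.ora on the server rejecting the connection:
# allow logons from older client/authentication versions.
SQLNET.ALLOWED_LOGON_VERSION_SERVER=11
```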

Enterprise Manager

TNS-12599: TNS:cryptographic checksum mismatch in alert log of 12.2 database from connections from OEM

No functional impact. Bug number 25915038 has been raised. Setting the following in the sqlnet.ora of the target database suppresses the message:

  • SQLNET.CRYPTO_CHECKSUM_SERVER=rejected
  • SQLNET.ENCRYPTION_SERVER=rejected

No Such Metadata in Enterprise Manager after upgrading database

This is a bug in the calculation of the database version by the DB plugin. The simplest solution is to upgrade the DB plugin to 13.2.2 or later on the database host.

Miscellaneous

ORA-20001: Statistics Advisor: Invalid task name for the current user

This seems to occur on freshly created databases. The solution per Doc ID 2127675.1 is to run dbms_stats.init_package.
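Per that document the fix is a single call, run as SYS in SQL*Plus on the affected database:

```sql
-- Creates the Statistics Advisor tasks that were skipped at
-- database creation time (per Doc ID 2127675.1).
exec dbms_stats.init_package();
```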

READ ANY TABLE audit records

Bug 26035911 : AUDIT RECORDS GENERATED EVEN WHEN THE SYSTEM PRIVILEGE IS NOT EXERCISED IN 12.2

12cR2 Incrementally Updated Backups and dropped datafiles

We have just noticed a difference in behavior in 12cR2 with regard to image copies being marked as obsolete after the backup is updated past the drop of a datafile.
I won’t describe the feature itself; if necessary you can read up at oracle-base or in the Oracle documentation.

First, review the output from the test case on a 12.1 instance. Note that after dropping the datafile and updating the backup past this point, the datafile copy is marked as obsolete:

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ADAPTEST are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/snapcf_ADAPTEST.f'; # default

RMAN> create tablespace test datafile size 1m autoextend off;

Statement processed

RMAN> backup incremental level 1 for recover of copy with tag 'test_obsolete' database;

Starting backup at 2017-04-19 12:47:09
using channel ORA_DISK_1
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 4 found
no parent backup or copy of datafile 6 found
no parent backup or copy of datafile 5 found
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+DATAC1/ADAPTEST/DATAFILE/system.401.941128155
output file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-SYSTEM_FNO-1_0hs2302d tag=TEST_OBSOLETE RECID=19 STAMP=941719632
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATAC1/ADAPTEST/DATAFILE/sysaux.397.941128123
output file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-SYSAUX_FNO-3_0is2302k tag=TEST_OBSOLETE RECID=20 STAMP=941719639
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+DATAC1/ADAPTEST/DATAFILE/undotbs1.403.941128201
output file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-UNDOTBS1_FNO-4_0js2302r tag=TEST_OBSOLETE RECID=21 STAMP=941719644
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=+DATAC1/ADAPTEST/DATAFILE/users.399.941128199
output file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-USERS_FNO-6_0ks2302s tag=TEST_OBSOLETE RECID=22 STAMP=941719644
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+DATAC1/ADAPTEST/DATAFILE/test.1051.941719625
output file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-TEST_FNO-5_0ls2302t tag=TEST_OBSOLETE RECID=23 STAMP=941719645
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 2017-04-19 12:47:26

Starting Control File and SPFILE Autobackup at 2017-04-19 12:47:26
piece handle=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/c-757536437-20170419-00 comment=NONE
Finished Control File and SPFILE Autobackup at 2017-04-19 12:47:27

RMAN> drop tablespace test including contents and datafiles;

Statement processed

RMAN> backup incremental level 1 for recover of copy with tag 'test_obsolete' database;

Starting backup at 2017-04-19 12:47:41
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATAC1/ADAPTEST/DATAFILE/system.401.941128155
input datafile file number=00003 name=+DATAC1/ADAPTEST/DATAFILE/sysaux.397.941128123
input datafile file number=00004 name=+DATAC1/ADAPTEST/DATAFILE/undotbs1.403.941128201
input datafile file number=00006 name=+DATAC1/ADAPTEST/DATAFILE/users.399.941128199
channel ORA_DISK_1: starting piece 1 at 2017-04-19 12:47:41
channel ORA_DISK_1: finished piece 1 at 2017-04-19 12:47:42
piece handle=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/0ns2303d_1_1 tag=TEST_OBSOLETE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2017-04-19 12:47:42

Starting Control File and SPFILE Autobackup at 2017-04-19 12:47:42
piece handle=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/c-757536437-20170419-01 comment=NONE
Finished Control File and SPFILE Autobackup at 2017-04-19 12:47:43

RMAN> recover copy of database with tag 'test_obsolete';

Starting recover at 2017-04-19 12:47:55
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile copies to recover
recovering datafile copy file number=00001 name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-SYSTEM_FNO-1_0hs2302d
recovering datafile copy file number=00003 name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-SYSAUX_FNO-3_0is2302k
recovering datafile copy file number=00004 name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-UNDOTBS1_FNO-4_0js2302r
recovering datafile copy file number=00006 name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-USERS_FNO-6_0ks2302s
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/0ns2303d_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/0ns2303d_1_1 tag=TEST_OBSOLETE
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished recover at 2017-04-19 12:47:56

Starting Control File and SPFILE Autobackup at 2017-04-19 12:47:56
piece handle=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/c-757536437-20170419-02 comment=NONE
Finished Control File and SPFILE Autobackup at 2017-04-19 12:47:57

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Datafile Copy        23     2017-04-19 12:47:25 /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/data_D-ADAPTEST_I-757536437_TS-TEST_FNO-5_0ls2302t
Backup Set           7      2017-04-19 12:47:26
  Backup Piece       7      2017-04-19 12:47:26 /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/c-757536437-20170419-00
Backup Set           8      2017-04-19 12:47:42
  Backup Piece       8      2017-04-19 12:47:42 /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/0ns2303d_1_1
Backup Set           9      2017-04-19 12:47:42
  Backup Piece       9      2017-04-19 12:47:42 /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/c-757536437-20170419-01

Compare this with the output from the same test case on a 12.2 instance; this time the datafile copy is not marked as obsolete:

RMAN> show all;

RMAN configuration parameters for database with db_unique_name PVJTEST are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/12.2.0.1/dbhome/dbs/snapcf_PVJTEST.f'; # default

RMAN> create tablespace test datafile size 1m autoextend off;

Statement processed

RMAN> backup incremental level 1 for recover of copy with tag 'test_obsolete' database;

Starting backup at 19-APR-17
using channel ORA_DISK_1
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 4 found
no parent backup or copy of datafile 7 found
no parent backup or copy of datafile 5 found
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u02/oradata/PVJTEST/sysaux01.dbf
output file name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-SYSAUX_FNO-3_0vs2304k tag=TEST_OBSOLETE RECID=36 STAMP=941719708
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u02/oradata/PVJTEST/system01.dbf
output file name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-SYSTEM_FNO-1_10s23053 tag=TEST_OBSOLETE RECID=37 STAMP=941719720
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u02/oradata/PVJTEST/undotbs01.dbf
output file name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-UNDOTBS1_FNO-4_11s2305a tag=TEST_OBSOLETE RECID=38 STAMP=941719724
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=/u02/oradata/PVJTEST/users01.dbf
output file name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-USERS_FNO-7_12s2305d tag=TEST_OBSOLETE RECID=39 STAMP=941719726
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/u02/oradata/PVJTEST/datafile/o1_mf_test_dhfv0gk3_.dbf
output file name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-TEST_FNO-5_13s2305f tag=TEST_OBSOLETE RECID=40 STAMP=941719727
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 19-APR-17

Starting Control File and SPFILE Autobackup at 19-APR-17
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/c-2122366327-20170419-0e comment=NONE
Finished Control File and SPFILE Autobackup at 19-APR-17

RMAN> drop tablespace test including contents and datafiles;

Statement processed

RMAN> backup incremental level 1 for recover of copy with tag 'test_obsolete' database;

Starting backup at 19-APR-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/PVJTEST/sysaux01.dbf
input datafile file number=00001 name=/u02/oradata/PVJTEST/system01.dbf
input datafile file number=00004 name=/u02/oradata/PVJTEST/undotbs01.dbf
input datafile file number=00007 name=/u02/oradata/PVJTEST/users01.dbf
channel ORA_DISK_1: starting piece 1 at 19-APR-17
channel ORA_DISK_1: finished piece 1 at 19-APR-17
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/15s2305t_1_1 tag=TEST_OBSOLETE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 19-APR-17

Starting Control File and SPFILE Autobackup at 19-APR-17
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/c-2122366327-20170419-0f comment=NONE
Finished Control File and SPFILE Autobackup at 19-APR-17

RMAN> recover copy of database with tag 'test_obsolete';

Starting recover at 19-APR-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile copies to recover
recovering datafile copy file number=00001 name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-SYSTEM_FNO-1_10s23053
recovering datafile copy file number=00003 name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-SYSAUX_FNO-3_0vs2304k
recovering datafile copy file number=00004 name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-UNDOTBS1_FNO-4_11s2305a
recovering datafile copy file number=00007 name=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/data_D-PVJTEST_I-2122366327_TS-USERS_FNO-7_12s2305d
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/product/12.2.0.1/dbhome/dbs/15s2305t_1_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/15s2305t_1_1 tag=TEST_OBSOLETE
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished recover at 19-APR-17

Starting Control File and SPFILE Autobackup at 19-APR-17
piece handle=/u01/app/oracle/product/12.2.0.1/dbhome/dbs/c-2122366327-20170419-10 comment=NONE
Finished Control File and SPFILE Autobackup at 19-APR-17

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           22     19-APR-17         
  Backup Piece       22     19-APR-17          /u01/app/oracle/product/12.2.0.1/dbhome/dbs/15s2305t_1_1