Do not set job_queue_processes at PDB level in 12.1.0.2

MOS Doc ID 2175443.1 goes into more detail, including the following quote:

It is possible to set job_queue_processes at pdb level on 12.1.0.2 multitenant database, but it does not work correctly.

I have managed to reproduce the actual behavior. Of particular concern is that this allows a local user in one PDB to disrupt job scheduling in other PDBs.

My test system is running 12.1.0.2, multi-tenant, with two PDBs, PDB1 and PDB2. Each PDB has a local user with DBA privileges: dba1 and dba2 respectively.
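
For reference, the environment can be created along the following lines; this is just a sketch with illustrative names and passwords (repeat for PDB2 and dba2):

-- From the root container, as a suitably privileged common user
CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdbadmin IDENTIFIED BY oracle;
ALTER PLUGGABLE DATABASE pdb1 OPEN;

-- Inside the PDB, create the local user with DBA privileges
ALTER SESSION SET CONTAINER = pdb1;
CREATE USER dba1 IDENTIFIED BY oracle;
GRANT DBA TO dba1;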

[oracle@awrw admin]$ sqlplus dba1/oracle@pdb1

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 2 13:34:41 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sun Apr 02 2017 13:33:46 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show parameter job_queue_processes

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes		     integer	 1000
SQL> alter system set job_queue_processes=0 container=current;

System altered.

SQL> show parameter job_queue_processes

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes		     integer	 0
SQL>

So dba1 appears to have changed job_queue_processes in their own PDB, pdb1.

The alert log reveals some cause for concern:

CJQ0 started with pid=34, OS id=31546
Stopping background process CJQ0
Sun Apr 02 13:35:01 2017
ALTER SYSTEM SET job_queue_processes=0 SCOPE=BOTH;

Sure enough, in PDB2, even though the job_queue_processes parameter appears to be set to its default value, we are unable to use the scheduler:

[oracle@awrw admin]$ sqlplus dba2/oracle@pdb2

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 2 13:44:19 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sun Apr 02 2017 13:43:25 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show parameter job_queue_processes

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes		     integer	 1000
SQL> exec sys.dbms_scheduler.create_job( 'DUMMY_JOB', job_type=>'PLSQL_BLOCK', job_action=>'BEGIN NULL; END;');

PL/SQL procedure successfully completed.

SQL> exec sys.dbms_scheduler.run_job( 'DUMMY_JOB', false);
BEGIN sys.dbms_scheduler.run_job( 'DUMMY_JOB', false); END;

*
ERROR at line 1:
ORA-27492: unable to run job "DBA2"."DUMMY_JOB": scheduler unavailable
ORA-06512: at "SYS.DBMS_ISCHED", line 209
ORA-06512: at "SYS.DBMS_SCHEDULER", line 594
ORA-06512: at line 1

I connect back to PDB1 and reset the parameter:

[oracle@awrw admin]$ sqlplus dba1/oracle@pdb1

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 2 13:46:40 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sun Apr 02 2017 13:34:41 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter system set job_queue_processes=1000 container=current;

System altered.

In the alert log I can see that the Job Queue Coordinator (CJQ0) process is restarted:

Starting background process CJQ0
Sun Apr 02 13:47:08 2017
ALTER SYSTEM SET job_queue_processes=1000 SCOPE=BOTH;
Sun Apr 02 13:47:08 2017
CJQ0 started with pid=34, OS id=2227

And sure enough, I am now able to run scheduler jobs in PDB2.

[oracle@awrw ~]$ sqlplus dba2/oracle@pdb2

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 2 13:48:51 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sun Apr 02 2017 13:44:19 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exec sys.dbms_scheduler.run_job( 'DUMMY_JOB', false);

PL/SQL procedure successfully completed.

Upcoming Adaptive Query Optimization Changes

I got a chance to discuss some of the issues we have been facing in this area with the CBO team at Oracle OpenWorld last week, and learned about some of the changes they have planned for the future. Please don't take this as fact; it is based only on my recollection of an informal chat.

The parameter OPTIMIZER_ADAPTIVE_FEATURES will be split into two parts, OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS. Franck Pachot has a blog post which goes into more detail.

Worth noting is that OPTIMIZER_ADAPTIVE_STATISTICS will be set to false by default. This change will be implemented in 12.2, but will also be made available as a patch for 12.1 (I believe with the same defaults).
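
Once on 12.2 (or on 12.1 with the patch applied), the new parameters should be visible and adjustable like any other optimizer parameter; a minimal sketch, assuming the 12.2 parameter names:

show parameter optimizer_adaptive

-- Re-enable adaptive statistics explicitly, if you decide you want them
alter system set optimizer_adaptive_statistics = true scope=both;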

With OPTIMIZER_ADAPTIVE_STATISTICS set to false, SQL Plan Directives will still be created when assumptions made at parse time are discovered to be wrong at execution time; however, they will not automatically be used to trigger dynamic sampling or the creation of column groups. I imagine the advantage of this is that the directives can be monitored manually, and root causes can be fixed by the DBA.
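
For that kind of monitoring, the directives are exposed through the DBA_SQL_PLAN_DIRECTIVES and DBA_SQL_PLAN_DIR_OBJECTS views; a query along the following lines (the schema name is a placeholder) shows which objects have directives and why they were created:

select o.owner, o.object_name, d.type, d.state, d.reason, d.created
  from dba_sql_plan_directives d
  join dba_sql_plan_dir_objects o
    on o.directive_id = d.directive_id
 where o.owner = 'APP_SCHEMA'   -- placeholder schema name
 order by d.created;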

Specific to 12.2, if dynamic sampling is triggered, it will use a completely new mechanism for caching results instead of the current mechanism that relies on the result cache. Results from dynamic sampling queries will be stored persistently (not lost after instance restart, and available to all nodes in a RAC cluster). There will be a mechanism to ensure that this data can be refreshed when it gets stale (somewhat similar to the existing mechanism for incremental statistics refresh, where the database stores the number of changed rows in base tables).

I am a little surprised at what appears to be a partial back-track from Oracle in this area; however, based on the number of issues we have found, I think it is the right thing to do. My guess is that many customers have simply been disabling ALL adaptive features when problems were encountered during upgrade. Separating the features out and providing more sensible defaults, to encourage more customers to use at least some of these features, should be beneficial to all.


Adaptive Dynamic Sampling and the Result Cache: RC Latch

After we migrated one of our instances from Oracle 11.2.0.4 to 12.1.0.2, we ran into some serious latch contention that basically took down the instance. From online research and discussion with other DBAs, I don't think we are the only customer to have run into this problem, so I wanted to share my investigation and findings in the hope that they will help others, or spur some fixes from Oracle in this area.

If you don't want to drill into the details, the TL;DR is that Adaptive Dynamic Sampling (ADS) and frequent hard parsing (for example, due to not using bind variables) do not play well together. The most likely way for ADS to be triggered is when a SQL Plan Directive exists on a table, and these are created automatically with default database settings.

For an excellent introduction to ADS, I suggest you check out this presentation on the subject by Christian Antognini.

To reproduce the behavior that I was seeing, the script below creates a procedure that I can call to perform lots of hard parsing, forcing ADS with a hint. The purpose of the identifier parameter is to include it in the SQL text, so that each time I execute the procedure I can pass in a different value, forcing a reparse on each execution.

create table t1 nologging as
select  rownum        id,
        rpad('x',100) padding
    from dual 
    connect by level <= 1e4;
               
create index i1 on t1(id);
 
CREATE OR REPLACE PROCEDURE lhps(identifier IN VARCHAR2, parse_count IN NUMBER) IS
BEGIN
   FOR LOOPY IN 1..PARSE_COUNT
   LOOP
      EXECUTE IMMEDIATE 'select /*+dynamic_sampling(11) '|| IDENTIFIER || ' */ * from t1 where id = ' || LOOPY;
   END LOOP;
END;
/

Using the mystats package, we can see that in this case, for 100 hard parses, the RC Latch is taken 401 times.

SQL> exec mystats_pkg.ms_start; 
PL/SQL procedure successfully completed.

SQL> EXEC lhps('A', 100);                                                                               
PL/SQL procedure successfully completed.    
                                                             
SQL> exec mystats_pkg.ms_stop(p_statname_like=>'%RC%');
 ==========================================================================================
 MyStats report : 04-SEP-2016 11:18:49
 ==========================================================================================
 2. Statistics Report
 ----------------------------------------------------------------
 Type    Statistic Name                                     Value
 ------  --------------------------------------  ----------------
 LATCH   Result Cache: RC Latch                                               401
 ----------------------------------------------------------------
 4. About
 ------------------------------------------------------------------------------------------
 - MyStats v3.0 by Adrian Billington (http://www.oracle-developer.net)
 - Based on the SNAP_MY_STATS utility by Jonathan Lewis
 ==========================================================================================

Examination of the SQL trace shows that, as you would expect from Christian's presentation, the high RC Latch usage is due to the internal ADS SQL statements.

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel result_cache(snapshot=3600) */ SUM(C1) FROM (SELECT /*+ qb_name("innerQuery") NO_INDEX_FFS( "T1")  */ 1 AS C1 FROM "T1" "T1" WHERE ("T1"."ID"=1)) innerQuery
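
If you want to confirm that these DS_SVC results are indeed landing in the result cache, a query along these lines against V$RESULT_CACHE_OBJECTS can be used (the LIKE filter simply matches the DS_SVC comment in the cached statements):

select status, creation_timestamp, name
  from v$result_cache_objects
 where type = 'Result'
   and name like '%DS_SVC%';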

MOS note 2002089.1 describes this behavior and suggests that it can be disabled (obviously there are pros and cons to doing so) with the following setting:

alter session set "_optimizer_ads_use_result_cache" = FALSE;

As a side note, it is slightly surprising that there appear to be 4 RC Latch gets per execution of the dynamic sampling SQL. Note that this database is multi-tenant. As there is only one RC Latch shared by all containers, my guess is that when the SQL is parsed, the RC Latch is taken once for each PDB.

Let’s retest after applying that underscore setting at session level:

SQL> alter session set "_optimizer_ads_use_result_cache" = FALSE;

SQL> exec mystats_pkg.ms_start;                                                                        
PL/SQL procedure successfully completed.                                                               

SQL> EXEC lhps('B', 100);                                                                             
PL/SQL procedure successfully completed.                                                               

SQL> exec mystats_pkg.ms_stop(p_statname_like=>'%RC%');
 ==========================================================================================
 MyStats report : 04-SEP-2016 11:28:53
 ==========================================================================================
 ------------------------------------------------------------------------------------------
 2. Statistics Report
 ------------------------------------------------------------
 Type    Statistic Name                                 Value
 ------  ----------------------------------  ----------------
 LATCH   Result Cache: RC Latch                            50
 ------------------------------------------------------------
 4. About
 ------------------------------------------------------------------------------------------
 - MyStats v3.0 by Adrian Billington (http://www.oracle-developer.net)
 - Based on the SNAP_MY_STATS utility by Jonathan Lewis
 ==========================================================================================

RC Latch usage is much reduced, but still seems quite high: the latch is taken 50 times for 100 hard parses.

Further analysis of the SQL Trace identifies the following SQL as the culprit:

SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE SHELFLIFE=30) */ SNAP_ID,DBID,INSTANCE_NUMBER,SQL_ID,PARSING_SCHEMA_NAME,END_OF_FETCH_COUNT_DELTA,EXECUTIONS_DELTA,PX_SERVERS_EXECS_DELTA,BUFFER_GETS_DELTA,CPU_TIME_DELTA,ELAPSED_TIME_DELTA,FLAG,CON_ID FROM "SYS"."INT$DBA_HIST_SQLSTAT" "INT$DBA_HIST_SQLSTAT" WHERE ("INT$DBA_HIST_SQLSTAT"."SQL_ID"=:1 AND "INT$DBA_HIST_SQLSTAT"."PARSING_SCHEMA_NAME"=:2 AND ("INT$DBA_HIST_SQLSTAT"."CON_ID"=0 OR "INT$DBA_HIST_SQLSTAT"."CON_ID"=3) AND BITAND(NVL("INT$DBA_HIST_SQLSTAT"."FLAG",0),1)=0) AND ("INT$DBA_HIST_SQLSTAT".CON_ID=0 OR "INT$DBA_HIST_SQLSTAT".CON_ID=3)

Paraphrasing Christian's presentation, this lookup checks AWR for whether the SQL statement has been seen previously, to determine how long to spend on ADS during optimization. I can't see any way to disable the Result Cache for this statement (apart from disabling the Result Cache itself). Note also that I suspect this behavior is related to multi-tenant, as I can't reproduce it (either the RC Latch usage or even the lookup itself) on a non-multi-tenant instance.

In our case we have disabled ADS at the application schema level by setting the following in a login trigger, while we work with the application teams to reduce hard parsing.

alter session set optimizer_dynamic_sampling = 0;
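
The login trigger itself is not shown above; a minimal sketch of the idea looks something like the following (the trigger name and schema check are illustrative, not the exact code we run):

CREATE OR REPLACE TRIGGER disable_ads_on_logon
AFTER LOGON ON DATABASE
BEGIN
   -- Placeholder schema name: only switch off dynamic sampling for the application user
   IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'APP_SCHEMA' THEN
      EXECUTE IMMEDIATE 'alter session set optimizer_dynamic_sampling = 0';
   END IF;
END;
/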


Incorrect Cardinality Adjustment caused by Dynamic Statistics

During migration testing to Oracle 12, one of our teams complained about the regression of a particular query.
Drilling into what was going on identified a problem with Adaptive Dynamic Sampling, and certainly not the first one we have encountered.
I will detail the steps to reproduce it, and my understanding of what is going on.

create table idxsel as
select rownum id,
dbms_random.value(0, 1000)  r,
rpad('X', 1000, 'X') padding
from dual connect by level <= 10000;

CREATE INDEX idxsel_id ON idxsel(id);
CREATE INDEX idxsel_r_id ON idxsel(r, id);
EXEC DBMS_STATS.GATHER_TABLE_STATS(NULL, 'IDXSEL');

OK, so we have created a table with 10000 rows. Column id is actually unique (not that the CBO knows this for sure), and column r is a random number between 0 and 1000.

Consider the following query:

SELECT NULL FROM IDXSEL WHERE id = 1 AND r > 500;

Amongst the options available to the optimizer to perform this query are:

  1. Use a range scan on index IDXSEL_ID to get the entries with id equal to 1 (actually there will be just one), then, for every index entry, access the table to apply the filter r > 500 (a fifty percent chance for each row).
  2. Use a range scan on IDXSEL_R_ID to get the entries with r > 500 (this will match about half of the index entries), then, as part of the index access operation, throw away all entries where id is not 1 (which will be all except maybe one).

Intuitively (to me at least), option 1 is the better plan, and indeed you can see this is what happens without Dynamic Sampling.

SELECT /*+ GATHER_PLAN_STATISTICS DYNAMIC_SAMPLING(0)  */ NULL FROM IDXSEL WHERE id = 1 AND r > 500;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST'));
-----------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |           |      1 |        |      0 |00:00:00.01 |       3 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| IDXSEL    |      1 |      1 |      0 |00:00:00.01 |       3 |
|*  2 |   INDEX RANGE SCAN                  | IDXSEL_ID |      1 |      1 |      1 |00:00:00.01 |       2 |
-----------------------------------------------------------------------------------------------------------

However, as soon as we use adaptive dynamic sampling (in the real-life case this kicked in because we had a SQL Plan Directive, but for simplicity of the test case I am forcing it with a hint), you can see that option 2 is chosen.
Note that nearly 10 times the number of buffer gets (28 instead of 3) tends to confirm that this is not the optimal plan.

SELECT /*+ GATHER_PLAN_STATISTICS DYNAMIC_SAMPLING(11)  */ NULL FROM idxsel WHERE id = 1 AND r > 500;
 SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST'));
 ------------------------------------------------------------------------------------------
 | Id  | Operation        | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
 ------------------------------------------------------------------------------------------
 |   0 | SELECT STATEMENT |             |      1 |        |      0 |00:00:00.01 |      28 |
 |*  1 |  INDEX RANGE SCAN| IDXSEL_R_ID |      1 |      1 |      0 |00:00:00.01 |      28 |
 ------------------------------------------------------------------------------------------

Drilling into the CBO trace (event 10053) gives some clues as to what is going on. Below are some extracts from the trace when evaluating IDXSEL_R_ID.
Firstly, without Dynamic Sampling, ix_sel and ix_sel_with_filters seem correct.
(For more background on ix_sel and ix_sel_with_filters, refer to Jonathan Lewis's book Cost-Based Oracle Fundamentals.)

****** Costing Index IDXSEL_R_ID
 Access Path: index (IndexOnly)
 Index: IDXSEL_R_ID
 ix_sel: 0.499931  ix_sel_with_filters: 4.9993e-05
 Cost: 27.037320  Resp: 27.037320  Degree: 1

However, with Dynamic Sampling, ix_sel_with_filters remains correct, but ix_sel is way off.
(For full disclosure, this behaviour is slightly different on the real-life data, where I actually saw ix_sel and ix_sel_with_filters both being adjusted to the same wrong value.)
This brings the cost of this access path down to 2, which causes it to be chosen ahead of the path using IDXSEL_ID.

****** Costing Index IDXSEL_R_ID
 >> Index Card adjusted from 4999.313010 to 1.000000 due to adaptive dynamic sampling
 Access Path: index (IndexOnly)
 Index: IDXSEL_R_ID
 ix_sel: 1.0000e-04  ix_sel_with_filters: 4.9993e-05
 Cost: 2.000452  Resp: 2.000452  Degree: 1

Update:

Oracle support have created bug 24596139 – DYNAMIC SAMPLING LEADS TO INCORRECT INDEX SELECTION DUE TO WRONG COST.

“rtld: 0712-001 Symbol CreateIoCompletionPort was referenced” installing Oracle 12.1.0.2 on AIX

I first encountered this error message when installing 12.1.0.2 on AIX, and have since hit it with other Oracle software, particularly the Enterprise Manager agent.

HKE1DEV2@[/home/oracle]#sqlplus
exec(): 0509-036 Cannot load program sqlplus because of the following errors:
rtld: 0712-001 Symbol CreateIoCompletionPort was referenced
from module /u01/app/oracle/product/12.1.0.2/client/lib/libttsh12.so(), but a runtime definition
of the symbol was not found.
rtld: 0712-001 Symbol GetMultipleCompletionStatus was referenced
from module /u01/app/oracle/product/12.1.0.2/client/lib/libttsh12.so(), but a runtime definition
of the symbol was not found.

The solution to this problem can be found in the Database Installation Guide:

"On IBM AIX on POWER Systems (64-Bit), enable I/O completion ports (IOCP) to ensure successful database and grid infrastructure installation."

You can check the current status with the following command:

/usr/sbin/lsdev -c iocp -F status

The result should be “Available”.
If it is not, you should change it, either through smitty:

smitty iocp
  • Select Change/Show Characteristics of I/O Completion Ports.
  • Change State to be configured at system restart from Defined to Available.
  • Reboot the AIX server.

Or you can make the change directly with the following command:

mkdev -l iocp0