Before describing the issue that led to this post, let’s briefly review how the handling of initialization parameters works in a multitenant environment.
- Initialization parameters exist at both the CDB level and the PDB level. This is a critical feature because it wouldn’t be acceptable to use the same set of initialization parameters for all PDBs (except, of course, if you have only one). For example, the initialization parameters used by the query optimizer primarily depend on the application using the database, not on the server running it.
- By default, initialization parameters at the PDB level inherit the value from the initialization parameters at the CDB level.
- At the PDB level, about half of the initialization parameters can override the value inherited from the CDB level (to find out which ones support that, check the V$SYSTEM_PARAMETER.ISPDB_MODIFIABLE flag; see the query sketch after this list).
- The initialization parameters set at the PDB level aren’t stored in the spfile. They are stored in a data dictionary table in the root of the CDB (SYS.PDB_SPFILE$).
- When a PDB is moved around (through an unplug/plug), the initialization parameters set at the PDB level are part of the metadata stored in the XML file used for the unplug/plug.
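For reference, here is a minimal query sketch covering the two points about V$SYSTEM_PARAMETER.ISPDB_MODIFIABLE and SYS.PDB_SPFILE$. The ISPDB_MODIFIABLE flag is documented; the SYS.PDB_SPFILE$ column names used below (NAME, VALUE$) are an assumption, so describe the table first if you want to rely on them.

-- Which initialization parameters can be overridden at the PDB level?
SELECT name
FROM v$system_parameter
WHERE ispdb_modifiable = 'TRUE'
ORDER BY name;

-- Where the PDB-level overrides are stored (run as SYSDBA in the CDB root);
-- NAME and VALUE$ are assumed column names.
SELECT name, value$
FROM sys.pdb_spfile$;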
The issue that I recently experienced while working in a multitenant environment for one of my customers can be summarized by the inconsistency in the output of the following queries (note that both are executed at the PDB level):
SQL> SELECT value FROM v$system_parameter WHERE name = 'optimizer_dynamic_sampling';

VALUE
------------------------------
0

SQL> SELECT value FROM v$sys_optimizer_env WHERE name = 'optimizer_dynamic_sampling';

VALUE
------------------------------
11
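To get an overview of how many parameters are affected, the two views can also be cross-checked directly. The following is only a sketch based on the columns used above (both views expose NAME and VALUE); since the two views may format some values differently (e.g. upper vs. lower case), the comparison is case-insensitive.

-- Run at the PDB level: list the optimizer parameters whose value differs
-- between the two dynamic performance views.
SELECT p.name,
       p.value AS system_parameter_value,
       o.value AS optimizer_env_value
FROM v$system_parameter p
JOIN v$sys_optimizer_env o ON o.name = p.name
WHERE p.name LIKE 'optimizer%'
AND UPPER(p.value) <> UPPER(o.value);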
Both queries should return the same value. Since that isn’t the case, there are two key questions that require an answer…
First question: in which situations is the output of the two queries inconsistent?
I’m able to observe such an inconsistency when three conditions are met. First, the initialization parameter has been set at the PDB level. Second, the value of the initialization parameter at the PDB level is different from the one at the CDB level. Third, the initialization parameter was set before the last instance bounce (unplug/plug and close/open don’t show this behavior). Note that I didn’t test whether it happens for all initialization parameters. In any case, I can confirm that it’s the case for the ones with the “optimizer” prefix. The following example illustrates this:
SQL> STARTUP
SQL> ALTER PLUGGABLE DATABASE pdb121 OPEN;
SQL> ALTER SESSION SET CONTAINER = cdb$root;
SQL> ALTER SYSTEM SET optimizer_dynamic_sampling = 2;
SQL> ALTER SESSION SET CONTAINER = pdb121;
SQL> ALTER SYSTEM SET optimizer_dynamic_sampling = 0;
SQL> ALTER SESSION SET CONTAINER = cdb$root;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> ALTER PLUGGABLE DATABASE pdb121 OPEN;
SQL> ALTER SESSION SET CONTAINER = pdb121;
SQL> SELECT value FROM v$system_parameter WHERE name = 'optimizer_dynamic_sampling';

VALUE
-------------------------
0

SQL> SELECT value FROM v$sys_optimizer_env WHERE name = 'optimizer_dynamic_sampling';

VALUE
-------------------------
2
Second question: which one of the two values is used?
According to my tests, in version 12.1 (more about 12.2 later on) the query optimizer uses the value returned by V$SYS_OPTIMIZER_ENV. The following example, which is a continuation of the previous one, illustrates this:
SQL> ALTER SESSION SET events 'trace[sql_optimizer.*]';
SQL> EXPLAIN PLAN FOR SELECT * FROM dual;
SQL> ALTER SESSION SET events 'trace[sql_optimizer.*]off';
SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';

VALUE
--------------------------------------------------------------------------------
/u00/oracle/app/oracle/diag/rdbms/dbm121/DBM121/trace/DBM121_ora_25742.trc

SQL> host grep "^optimizer_dynamic_sampling" /u00/oracle/app/oracle/diag/rdbms/dbm121/DBM121/trace/DBM121_ora_25742.trc
optimizer_dynamic_sampling = 2
optimizer_dynamic_sampling = 2
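As a side note, the optimizer environment can also be inspected per session through V$SES_OPTIMIZER_ENV. This is only a sketch for an additional cross-check (run it in the session you are examining); it doesn’t replace the optimizer trace shown above.

-- Show the optimizer_dynamic_sampling setting in the optimizer environment
-- of the current session.
SELECT value
FROM v$ses_optimizer_env
WHERE sid = TO_NUMBER(SYS_CONTEXT('USERENV', 'SID'))
AND name = 'optimizer_dynamic_sampling';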
The examples shown before were generated with version 12.1.0.2.160719. The same test in version 12.2.0.1.0 leads, in my opinion, to an even worse result. In fact, even though the value shown by the two dynamic performance views is consistent (i.e. both show the value 0), the value that is actually used by the query optimizer is the wrong one! The following example illustrates this:
SQL> ALTER SESSION SET CONTAINER = pdb1;
SQL> SELECT value FROM v$system_parameter WHERE name = 'optimizer_dynamic_sampling';

VALUE
-------------------------
0

SQL> SELECT value FROM v$sys_optimizer_env WHERE name = 'optimizer_dynamic_sampling';

VALUE
-------------------------
0

SQL> ALTER SESSION SET events 'trace[sql_optimizer.*]';
SQL> EXPLAIN PLAN FOR SELECT * FROM dual;
SQL> ALTER SESSION SET events 'trace[sql_optimizer.*]off';
SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/ORCL/trace/ORCL_ora_5518.trc

SQL> host grep "^optimizer_dynamic_sampling" /u01/app/oracle/diag/rdbms/orcl/ORCL/trace/ORCL_ora_5518.trc
optimizer_dynamic_sampling = 2
optimizer_dynamic_sampling = 2
I checked MOS and I found a number of bugs (all related to version 12.1) in this area:
- Bug 20700587 – Underscore parameters defined in pdb being ignored (Doc ID 20700587.8) (updated the name of the note on 2019-06-06)
- Bug 23557067 – V$SYS_OPTIMIZER_ENV DOES NOT USE VALUE SPECIFIED AT PDB LEVEL
- Bug 17805926 – Parameter changes at pdb level affect other pdbs
- Bug 24321547 – MODIFIED UNDERSCORE PARAMETER VALUES NOT VISIBLE IN PDB AFTER RESTART (bug reference added on 2016-11-16)
I’m not sure whether what I see is caused by one of these bugs. In any case, Oracle has to fix all of them straightaway! I’m actually surprised that version 12.2 shows the behavior I just described. It’s such basic functionality that it must work as expected!
In summary, if you work in a multitenant environment, after restarting an instance you have to make sure that all your PDBs are using the right settings. To do so, you either have to set the initialization parameters at the CDB level only, or you have to execute a number of ALTER SYSTEM statements at the PDB level. It goes without saying that the former is only an acceptable workaround if you have a single PDB.
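A minimal sketch of the second workaround, using the PDB and parameter from the examples above (adapt the container name and the list of parameters to your environment):

-- To be run (e.g. as part of a post-restart script) after every instance
-- restart: re-apply the PDB-level overrides explicitly.
ALTER SESSION SET CONTAINER = pdb121;
ALTER SYSTEM SET optimizer_dynamic_sampling = 0;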
Update 2019-06-06
Today I repeated the test above with the following versions:
- Database Bundle Patch : 12.1.0.2.181016
- Database Release Update : 12.2.0.1.190416
- Database Release Update : 18.6.0.0.190416
With all of them, the issue didn’t occur! Even though not all bugs in this area have been fixed (e.g. according to MOS, bug 23557067 hasn’t), several of them have. For example, in 12.1.0.2.181016 the bugs 17805926 and 24321547 are listed when executing “opatch lsinv”. In addition, bug 20700587 is marked as fixed in 12.2, and one-off patches for 12.1 exist.
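To carry out the same check in your own environment, this is roughly the inventory lookup I’m referring to (a sketch; the -bugs_fixed option and the output format depend on the OPatch version installed):

SQL> host $ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -E "17805926|24321547|20700587"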
Thanks Christian,
I’ll look into this.
Regards,
Nigel Bayliss (Oracle)
Hello Christian,
I’ve observed the same issue with underscore parameters in the same circumstances as you described (restart of a PDB or whole instance is essential to reproduce it).
Thank you for the information, Yuri!
Christian,
There’s a catch to this, as the outcome depends on how you connect to the pdb. Here’s what I got on a 12.2.0.1 (for 12.1.0.2, there are some patches that should be applied to get the proper behaviour, even on the latest BP):
SQL> connect / as sysdba
Connected.
SQL> alter session set container=ssb;
Session altered.
Elapsed: 00:00:00.07
SQL> ALTER SESSION SET events 'trace[sql_optimizer.*]';
Session altered.
Elapsed: 00:00:00.00
SQL> explain plan for select count(*) from ssb.lineorder;
Explained.
Elapsed: 00:00:00.07
SQL> ALTER SESSION SET events 'trace[sql_optimizer.*]off';
Session altered.
Elapsed: 00:00:00.00
SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
VALUE
--------------------------------------------------------------------------------
/home/oracle/app/diag/rdbms/cdb1/CDB1/trace/CDB1_ora_7047.trc
Elapsed: 00:00:00.01
SQL> host grep "^optimizer_dynamic_sampling" /home/oracle/app/diag/rdbms/cdb1/CDB1/trace/CDB1_ora_7047.trc
optimizer_dynamic_sampling = 2
optimizer_dynamic_sampling = 2
SQL> connect sys/……..@ssb as sysdba
Connected.
SQL> ALTER SESSION SET events 'trace[sql_optimizer.*]';
Session altered.
Elapsed: 00:00:00.01
SQL> explain plan for select count(*) from ssb.lineorder;
Explained.
Elapsed: 00:00:00.07
SQL> ALTER SESSION SET events 'trace[sql_optimizer.*]off';
Session altered.
Elapsed: 00:00:00.00
SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
VALUE
--------------------------------------------------------------------------------
/home/oracle/app/diag/rdbms/cdb1/CDB1/trace/CDB1_ora_7060.trc
Elapsed: 00:00:00.00
SQL> host grep "^optimizer_dynamic_sampling" /home/oracle/app/diag/rdbms/cdb1/CDB1/trace/CDB1_ora_7060.trc
optimizer_dynamic_sampling = 0
optimizer_dynamic_sampling = 0
Note that optimizer_dynamic_sampling is set to 0 at the pdb level, and to 2 at the cdb level.
So, it seems that changing the container does NOT reload the optimizer environment from the PDB. Should it? Not easy to decide.
Regards
Hi
Thank you for the information!
I verified it and, as you mentioned:
– in 12.2.0.1.0 it works correctly when I connect through SQL*Net.
– in 12.1.0.2.160719 without additional patches it works as described in the blog.
Best,
Chris
The patch for 17805926 actually solves that for 12cR1, independent of the way you are connecting.
When using scheduler jobs, an additional patch is needed; see here:
https://s234blog.wordpress.com/2017/01/29/dbms_scheduler-and-multitenant/
Best,
Chris
Thank you very much for your feedback. I’ll give it a try.
Best,
Chris