Starting with Oracle 12.1, SQL Monitor reports show you the reason why your parallel query has been downgraded. This blog post by Mohamed Houri demonstrates that the reason codes, along with a description, can be found in the fixed table x$qksxa_reason.
As the numbers can change from release to release and I don’t always have easy access to fixed tables, I decided to quickly query the reason codes for all major releases and list them here. Maybe they will come in handy in the future.
The following query is used to get the reason codes:
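(A sketch, run as SYS since an x$ table is involved; the LIKE filter is my assumption, any predicate that isolates the downgrade reasons will do.)

select indx, qksxareasons
from   x$qksxa_reason
where  qksxareasons like '%DOP downgrade%'
order  by indx;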
12cR1
INDX QKSXAREASONS
---- -----------------------------------------------------
350 DOP downgrade due to adaptive DOP
351 DOP downgrade due to resource manager max DOP
352 DOP downgrade due to insufficient number of processes
353 DOP downgrade because slaves failed to join
12cR2
INDX QKSXAREASONS
---- -----------------------------------------------------
351 DOP downgrade due to adaptive DOP
352 DOP downgrade due to resource manager max DOP
353 DOP downgrade due to insufficient number of processes
354 DOP downgrade because slaves failed to join
18c & 19c
INDX QKSXAREASONS
---- -----------------------------------------------------
352 DOP downgrade due to adaptive DOP
353 DOP downgrade due to resource manager max DOP
354 DOP downgrade due to insufficient number of processes
355 DOP downgrade because slaves failed to join
If no Real-Time SQL Monitoring report is available, I often sample v$session or do aggregations over v$active_session_history or dba_hist_active_sess_history, grouping by the column sql_plan_line_id to find out where most of the time is spent.
This blog post shows that you should be aware of whether or not an adaptive plan is used when interpreting this data. So, let’s create a small demo.
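The original setup code is not reproduced in this excerpt; a minimal sketch producing a comparable situation could look like this (table names t1 and t2 and index i2 are taken from the plans below, everything else, column names and row counts, is an assumption):

create table t1 as
  select rownum as id, rpad('x', 100) as pad
  from   dual connect by level <= 100000;

create table t2 as
  select rownum + 200000 as id, rpad('x', 100) as pad  -- id ranges do not overlap with t1
  from   dual connect by level <= 100000;

create index i2 on t2 (id);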
Now let’s display the execution plan of a simple query joining the two tables:
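The statement itself is not included in this excerpt; a hypothetical query matching the plans shown below would be:

explain plan for
select *
from   t1
join   t2 on t1.id = t2.id;

select * from table(dbms_xplan.display);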
This is the plan Oracle came up with:
Plan hash value: 1838229974
-----------------------------------
| Id | Operation | Name |
-----------------------------------
| 0 | SELECT STATEMENT | |
| 1 | HASH JOIN | |
| 2 | TABLE ACCESS FULL| T1 |
| 3 | TABLE ACCESS FULL| T2 |
-----------------------------------
Note
-----
- this is an adaptive plan
Now we execute the query:
The query returns 0 rows as the generated numbers in the join columns are not overlapping. In my case it ran for about 6 seconds. I then used the following query to get a rough idea where in the execution plan Oracle spent most of the time.
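The query is not reproduced in this excerpt; this is roughly what I use (the sql_id is a placeholder, and note that v$active_session_history requires the Diagnostics Pack license):

select sql_plan_line_id, count(*) as samples
from   v$active_session_history
where  sql_id = '&sql_id'
and    sample_time > systimestamp - interval '10' minute
group  by sql_plan_line_id
order  by samples desc;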
Oracle reports two samples where it spent time on plan line id 8, but no such plan line id exists in the execution plan shown above. As the plan is an adaptive plan, it’s pretty obvious that this must be the issue here. So we display the plan with the ADAPTIVE formatting option, which also displays skipped operations.
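For reference, the plan below can be produced like this (a sketch, assuming the cursor is still in the shared pool; the sql_id is again a placeholder):

select *
from   table(dbms_xplan.display_cursor(sql_id => '&sql_id', format => '+ADAPTIVE'));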
Plan hash value: 1838229974
----------------------------------------------
| Id | Operation | Name |
----------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | HASH JOIN | |
|- 2 | NESTED LOOPS | |
|- 3 | NESTED LOOPS | |
|- 4 | STATISTICS COLLECTOR | |
| 5 | TABLE ACCESS FULL | T1 |
|- 6 | INDEX RANGE SCAN | I2 |
|- 7 | TABLE ACCESS BY INDEX ROWID| T2 |
| 8 | TABLE ACCESS FULL | T2 |
----------------------------------------------
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)
Whether or not an operation is skipped in a specific execution plan is stored in the other_xml column of v$sql_plan. Therefore, we could extract this information to get a mapping between the reported plan line ids and the plan line ids shown when calling the dbms_xplan.display* functions without the ADAPTIVE formatting option.
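A sketch of such a mapping query (the display_map structure inside other_xml is undocumented; the attribute names op, dis and skp are assumptions that may differ between versions):

select x.op  as actual_line_id,    -- line id as reported in ASH/v$session
       x.dis as displayed_line_id, -- line id as displayed without +ADAPTIVE
       x.skp as skipped            -- 1 if the operation is inactive
from   v$sql_plan p,
       xmltable('/other_xml/display_map/row'
                passing xmltype(p.other_xml)
                columns op  number path '@op',
                        dis number path '@dis',
                        skp number path '@skp') x
where  p.sql_id = '&sql_id'
and    p.other_xml is not null;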
This demo showed an example where it was obvious that something was wrong, because the reported plan line id simply didn’t exist in the plan. Depending on the reported line ids, the mismatch is not necessarily that obvious at first. So if you’re using the sql_plan_line_id from v$session, v$active_session_history or dba_hist_active_sess_history to determine where in the execution plan Oracle is spending time, always make sure to check whether you’re looking at an adaptive plan or not.
Let’s suppose you have the following PL/SQL package source:
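The package source is not included in this excerpt; based on the type name used in the error below, it presumably looked something like this (whether the original used a nested table or an associative array is unknown; this sketch uses a nested table):

create or replace package my_pkg as
  type t_varchar2_300 is table of varchar2(300);
end my_pkg;
/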
In older versions of Oracle, if you tried to use the type t_varchar2_300 in Java, you got an exception.
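Presumably via a call like this (a sketch; conn is assumed to be an oracle.jdbc.OracleConnection):

Array array = conn.createOracleArray("MY_PKG.T_VARCHAR2_300",
                                     new String[] {"foo", "bar"});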
This would throw java.sql.SQLException: invalid name pattern: MY_PKG.T_VARCHAR2_300.
Until Oracle 12.1 types declared in PL/SQL packages were not supported via JDBC and you had to create a standalone type:
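For example, a standalone counterpart of the package type could look like this:

create or replace type t_varchar2_300 as table of varchar2(300);
/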
Luckily, with 12.1 you can now use types declared in PL/SQL packages. Both the database and the JDBC driver need to be 12.1 or newer.
This makes it possible to avoid ugly workarounds like creating standalone types that would logically belong to a package and also enables you to group related types together in a single package.
For the sake of completeness here’s the full example code:
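The full example is not reproduced in this excerpt; a self-contained sketch could look like the following (URL, credentials and the my_pkg package from above are assumptions):

import java.sql.Array;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import oracle.jdbc.OracleConnection;

public class PackageTypeExample {

    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/pdb1", "demo", "demo")) {
            OracleConnection conn = connection.unwrap(OracleConnection.class);
            // Works with a 12.1+ database and JDBC driver
            Array array = conn.createOracleArray("MY_PKG.T_VARCHAR2_300",
                    new String[] {"one", "two", "three"});
            // Depending on the driver version, table(cast(? as ...)) may be needed
            try (PreparedStatement ps = connection.prepareStatement(
                    "select column_value from table(?)")) {
                ps.setArray(1, array);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }
}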
Unfortunately, package types still only work in SELECT statements and cannot be used in DML. This is a general limitation in Oracle and is not specific to JDBC.
The Oracle JDBC driver offers two methods for client-side statement caching. Enabling statement caching changes the behavior when the client closes a prepared statement. With statement caching disabled, the driver simply closes the statement. This is a server call which closes the associated cursor on the database. When the same statement is executed again, it has to be at least soft parsed again. With statement caching enabled, the statement and its associated database cursor are not closed. Instead, the statement is put into a pool (basically an LRU list of statements) for reuse. This way soft parses can be avoided, saving database resources and improving performance, which is the main reason to use statement caching.
At first glance, the two statement caching methods differ mainly in the way a statement is retrieved from the cache, but they also differ in what kind of data is cached. There’s no detailed information about the latter. This post tries to shed some light on that area and reveals some interesting behavior. All tests were done with the 18.3 JDBC driver. If you want to test your driver version, check out the code on GitHub.
Cached Statement Retrieval
First, let’s have a look at some code to recap how these two methods are used.
Please note that while the examples always use PreparedStatement, caching is also done for CallableStatement but not for plain Statement.
Implicit Caching
The following code snippet demonstrates the use of implicit statement caching:
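(A minimal sketch, not the original snippet; the emp table and the cache size of 20 are arbitrary choices, and the connection is assumed to be obtained elsewhere.)

// Enable implicit statement caching on the unwrapped OracleConnection
OracleConnection conn = connection.unwrap(OracleConnection.class);
conn.setImplicitCachingEnabled(true);
conn.setStatementCacheSize(20);

// The statement handling code is unchanged: close() returns it to the cache
try (PreparedStatement ps = conn.prepareStatement("select * from emp where deptno = ?")) {
    ps.setInt(1, 10);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // process the row ...
        }
    }
}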
The code dealing with the prepared statement looks exactly the same as it would without caching enabled. When preparing a statement, the cache is searched for an already existing statement with the exact same SQL text, result set type and result set concurrency. This has the huge advantage that you can benefit from it without any code changes. It’s transparent to the client and can therefore be used with any framework. The only thing the client needs to do is enable it, either on the connection or on the data source.
Explicit Caching
Let’s have a look at code using explicit statement caching:
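(Again a sketch rather than the original example; the key name and query are arbitrary.)

OracleConnection conn = connection.unwrap(OracleConnection.class);
conn.setExplicitCachingEnabled(true);
conn.setStatementCacheSize(20);

// Look up the statement in the cache by key; returns null on a cache miss
PreparedStatement ps = conn.getStatementWithKey("emp-by-deptno");
if (ps == null) {
    ps = conn.prepareStatement("select * from emp where deptno = ?");
}
ps.setInt(1, 10);
try (ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        // process the row ...
    }
}
// Instead of closing, put the statement into the cache under the given key
((OraclePreparedStatement) ps).closeWithKey("emp-by-deptno");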
The main difference to implicit caching is how the statement is put into the cache and how it’s retrieved from it. The statement has to be closed by providing a key. This closeWithKey method is declared in OraclePreparedStatement and is not available in PreparedStatement. The cached statement can be retrieved with the same key. Again, getStatementWithKey needs an OracleConnection.
So explicit caching needs some more work to be used in your code and leads to tighter coupling to Oracle specific classes and methods. I’m not aware of any data access framework supporting it.
Cached Data
It looks like the two methods differ mainly in how statements are put into and retrieved from the cache. But there’s one more major difference. The Oracle JDBC Guide states the following:
Because explicit Statement caching retains statement data and state as well as metadata, it has a performance edge over implicit Statement caching, which retains only metadata. However, you must be cautious when using this type of caching, because explicit Statement caching saves all three types of information for reuse and you may not be aware of what data and state are retained from prior use of the statements.
The documentation talks about state, data and metadata. I don’t know what kind of data is supposed to fit into which of these categories. So, after some tests I defined the following categories to illustrate the caching behavior:
Properties - Properties set on the statement, which don’t retrieve or send any user data to the database server.
Data - User data or statement metadata sent to or retrieved from the database server.
The following table shows the different statement properties and whether these settings are retained when a statement is retrieved from the cache.
Property               | Used method to set the property            | Implicit | Explicit
-----------------------|--------------------------------------------|----------|---------
Result Set Type        | Connection#prepareStatement                | N/A      | ✓
Result Set Concurrency | Connection#prepareStatement                | N/A      | ✓
Fetch Size             | OraclePreparedStatement#setRowPrefetch     | ✗        | ✓
LOB Prefetch Size      | OraclePreparedStatement#setLobPrefetchSize | ✓        | ✓
Max Field Size         | OraclePreparedStatement#setMaxFieldSize    | ✗        | ✓
Query Timeout          | OraclePreparedStatement#setQueryTimeout    | ✗        | ✓
Result Set Type and Concurrency
These properties are set by calling Connection#prepareStatement. For implicitly cached statements they are, together with the SQL text, part of the cache key. Therefore, a prepared statement is only found in the cache when these two properties match.
LOB Prefetch Size
As the regular fetch size setting is not retained in implicitly cached statements, I expected the same for the LOB prefetch size. Interestingly, this is not the case.
Data
The table below shows a summary of which kind of data is cached by which method. There are some interesting irregularities explained afterwards.
Data                | Implicit | Explicit
--------------------|----------|---------
Result Set          | ✗        | ✗
Result Set Metadata | ✓        | ✓
Parameter Metadata  | ✓        | ✗
Update Count        | ✓        | ✓
Generated Keys      | ✗        | ✓
Bind Data           | ✗        | ✓
Batches             | ✗        | ✗
Result Set
When calling Statement#close the associated ResultSet is closed as well. This is also true when statement caching is enabled. Any subsequent calls on a result set returned by Statement#getResultSet will result in a SQLException.
Result Set Metadata
Result set metadata (e.g. retrieved by PreparedStatement#getMetaData) is retained by both explicit and implicit statement caching, as stated in the documentation.
Parameter Metadata
One would expect parameter metadata (retrieved by PreparedStatement#getParameterMetaData) to behave exactly the same as result set metadata regarding caching. But interestingly, parameter metadata is only cached when implicit caching is enabled. I assume this is because internally the JDBC driver executes some statements to retrieve this information from the database. With implicit caching enabled, these statements are automatically cached as well.
Update Count
After calling PreparedStatement#executeUpdate to insert or update records, the number of affected rows can be retrieved by calling Statement#getUpdateCount. The update count is retained for statements put into the cache. Interestingly, this is also the case for implicitly cached statements. I don’t see any reason why this value is not reset for implicitly cached statements, and it’s not the behavior described in the documentation.
This is only a problem if Statement#getUpdateCount is called after retrieving a statement from the cache but before doing an update. In that case the method reports the update count from the previous execution; otherwise the value simply gets overwritten when an update statement is executed.
The reported row count also depends on whether Statement#getUpdateCount was called on the statement before it was put back into the cache. It either reports the number of updated rows (if the method was not called previously) or -1, which means the value was already retrieved.
Generated Keys
By calling Statement#getGeneratedKeys one can retrieve the keys generated by an insert statement. In fact, any columns could be returned, but it makes most sense to return generated keys, as they are not already known by the client.
This method returns a regular ResultSet, but unlike the one returned by Statement#getResultSet this one is retained for explicitly cached statements.
Bind Data
Oracle uses three different ways for binding data depending on data size and type of SQL statement (SQL or PL/SQL): direct, stream or LOB binding. See the documentation for more information. Irrespective of the method used, bind data is always cached by explicit statement caching. You should be aware of this, as it can hugely impact your memory footprint when dealing with LOBs or other large data. Also keep it in mind when sizing the statement cache (OracleConnection#setStatementCacheSize).
Batches
To minimize round trips between the client and the database, a common technique is to batch bind data (PreparedStatement#addBatch) and send it to the database in one go. In contrast to non-batched statements where bind data is cached by explicit statement caching, it is not cached when using batches. Batches are always cleared when a statement is put into the cache.
Conclusion
Most data access frameworks nowadays take away the burden of SQL cursor handling. One of the benefits is that this avoids cursor leaks, but on the other hand it leads to more soft parses and cursor creation on the database side. JDBC statement caching mitigates these issues, so using it is strongly recommended.
Implicit caching is very easy to use and there is no reason not to enable it.
Explicit caching requires changes in the application code. According to Oracle it can give you better performance because more data is cached. Whether this is only true if you actually reuse data from previously cached statements or also when you override this data (think of bind data) has to be thoroughly tested.
After all, you are not limited to using only one of the caching methods but can use both simultaneously and benefit from the strengths of each method. For this reason, you should be aware of the different data that is retained by the two caching methods. The tests have shown that the behavior is not always what one would expect after reading the documentation.
The fixed table x$ksmmem (Kernel Layer, Service Layer, Memory Management, MEMory) is basically a memory map of your SGA. It contains a row for each memory address (addr) and the value stored at that location (ksmmmval). This post shows which areas were accessible through this fixed table in older versions of Oracle and, more importantly, which areas are no longer present in more recent versions.
In all examples ASMM is enabled, AMM is disabled and the platform is Linux x86_64.
Version 11.2
Let’s first have a look at the shared memory segments allocated by Oracle.
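(The output is not reproduced in this excerpt; the segment details are written to the session’s trace file by the following commands:)

SQL> oradebug setmypid
Statement processed.
SQL> oradebug ipc
Information written to trace file.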
As we can see, there are 3 different segments allocated (Shmid 32768, 65537 and 98306).
By taking the values from Actual Addr as the start address and Actual Addr + Total size as the end address, we can write a simple query to see which areas are externalized in x$ksmmem.
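The query itself is not included in this excerpt; a minimal sketch for a single segment could look like this (the two addresses are placeholders; in reality they come from the trace file above):

select count(*) as mapped_addresses
from   x$ksmmem
where  addr >= hextoraw('0000000060000000')   -- placeholder: Actual Addr
and    addr <  hextoraw('0000000097000000');  -- placeholder: Actual Addr + Total size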
As can be seen, all SGA areas except the skgm overhead can be accessed through this table.
Version 18.3
Let’s do the same thing in 18.3.
Instead of listing the shared memory segments via oradebug ipc, we can query the new fixed table x$ksmssinfo (probably a mnemonic for Kernel Layer, Service Layer, Memory Management, Shared memory Segment INFO).
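For example (kept generic, as the exact column layout may differ between versions):

select * from x$ksmssinfo;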
We execute the same query as before, with only the start and end addresses changed.
As can be seen, in Oracle 18.3 only the Fixed Size area is externalized in x$ksmmem. In fact, this has been the case ever since bug 19363432 (introduced in 12.1) was fixed later in 12.1.
The behavior in Oracle 12.1 before applying the bugfix is that x$ksmmem also displays addresses which are not currently mapped. This leads to segmentation faults when you try to access such a memory location by selecting x$ksmmem.ksmmmval. Unfortunately, after applying the bugfix, all areas except Fixed Size disappear from this fixed table.
Alternatives
In some cases, you want to have access to the contents of shared memory for troubleshooting. x$ksmmem provided a convenient way to do so, because you could use it in any query (e.g. by joining to other fixed views/tables containing memory addresses). Of course, you can still use oradebug peek or any other tool you like, but it’s definitely more cumbersome.