The dbms_xplan.display* functions have a lot of options to format the execution plan output via the format parameter. What often annoys me is the output of the SQL statement text. If, for example, you want to display all execution plans of a cursor with many child cursors and large SQL statement texts, the SQL text is returned for each child cursor and clutters your screen. Unfortunately there is no way to turn off this output with a format parameter.
Luckily, we can simply filter the rows returned by the dbms_xplan.display* functions with a custom query. This blog post shows you how. If you are just interested in what the final query looks like, jump to the end of the post.
Build the query
First, let’s have a look at a sample output of dbms_xplan.display_cursor for a cursor with 2 child cursors.
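As an illustration, the plans of all child cursors of a statement can be displayed like this (the SQL_ID and the format option are placeholders; passing NULL as the child number returns all children):

```sql
-- show the plans of all child cursors of one sql_id
select plan_table_output
  from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));
```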
The highlighted lines in the output above should always occur in the output, no matter what format options are defined. They are also very easy to identify with a simple regex, so the idea is to filter out everything in between them, with some exceptions (like the Plan hash value line or the header of the execution plan table).
A couple of ways to do that come to my mind:
Create a PL/SQL function which accepts the output of the DBMS_XPLAN.DISPLAY_CURSOR function and processes it procedurally, row by row. I don’t want to create any objects in the database; in 12c you could define the function in the WITH clause, but as I want a solution that also works on older Oracle versions, I don’t follow this approach.
The pattern matching functionality in 12c (MATCH_RECOGNIZE) is very powerful and could do this job, but again, I don’t want to depend on 12c.
There must be a way to do this with analytic functions. They have been available since Oracle 8i, so I’m able to use this solution in all environments I need.
First step: Use last_value analytic function
The idea is to add an additional column to the plan output which has the value 1 if the row should be displayed and 0 if it shouldn’t.
I use the CASE expression as a first step to mark a row to either be shown or not. In combination with the LAST_VALUE analytic function we can mark all subsequent rows the same way.
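A sketch of this first step could look as follows. The exact regular expressions are assumptions: the 0-marker matches the lines that introduce the SQL text section, and the 1-marker matches the rows of the execution plan table (lines starting with a vertical bar):

```sql
select plan_table_output,
       last_value(case
                    when regexp_like(plan_table_output,
                           '^(SQL_ID|HASH_VALUE|EXPLAINED SQL STATEMENT)') then 0
                    when regexp_like(plan_table_output, '^\|') then 1
                  end ignore nulls)
         over (order by rn) as show
  from (select rownum as rn, plan_table_output
          from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL')));
```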
Check the output below first, to see what’s returned.
The use of NULL and the IGNORE NULLS clause in the LAST_VALUE function is important. This way, every NULL returned by the CASE expression is ignored by the LAST_VALUE function, which instead carries forward the last non-null value, either 1 or 0.
Note that the regex also searches for HASH_VALUE and EXPLAINED SQL STATEMENT. This is because, depending on the detail level chosen in the format parameter of the DBMS_XPLAN function, the output looks a little bit different.
The column SHOW now correctly contains 0 for the SQL statement section, but we also have some false negatives:
The row showing the SQL_ID
The row after the SQL_ID, which acts as a separator
The row showing the plan hash value
The first row which is part of the execution plan table header
Second step: Refine query
To fix these false negatives, we again use a CASE expression in combination with regular expressions.
The LAG and LEAD analytic functions are used to decide what value the column has, depending on the next or previous row.
This is exactly what we want. All we have to do now is remove the projection of the SHOW column and filter out the rows with SHOW = 0.
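Putting the two steps together, the combined query could look like the following sketch. The regular expressions and the LAG/LEAD conditions that re-include the SQL_ID line, the separator after it, the plan hash value and the first line of the plan table header are assumptions and may need adjustment for your format options:

```sql
select plan_table_output
  from (select plan_table_output,
               -- step 1: hide everything from the SQL text markers
               -- until the first plan table row
               last_value(case
                            when regexp_like(plan_table_output,
                                   '^(SQL_ID|HASH_VALUE|EXPLAINED SQL STATEMENT)') then 0
                            when regexp_like(plan_table_output, '^\|') then 1
                          end ignore nulls) over (order by rn) as show,
               -- step 2: re-include the false negatives
               case
                 when regexp_like(plan_table_output,
                        '^(SQL_ID|HASH_VALUE|Plan hash value)') then 1
                 when regexp_like(lag(plan_table_output) over (order by rn),
                        '^(SQL_ID|HASH_VALUE)') then 1
                 when regexp_like(lead(plan_table_output) over (order by rn),
                        '^\|') then 1
               end as show_anyway
          from (select rownum as rn, plan_table_output
                  from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'))))
 where show = 1
    or show_anyway = 1;
```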
The query was tested with various format options and should work equally well under 12c and 11g and probably also on older versions.
I use a variation of it in a script where I can control whether the SQL statement text is shown or not when I invoke it.
If you want to do parallel DML (PDML) and use packaged functions in the statement, the functions need to fulfill certain requirements: they must be pure or declared parallel enabled. Otherwise you will get the following note in the plan and PDML is not used:
- PDML disabled because function is not pure and not declared parallel enabled
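A minimal sketch of such a declaration could look like this (the package and function names are made up). The function is declared PARALLEL_ENABLE, and its purity is additionally asserted with PRAGMA RESTRICT_REFERENCES:

```sql
create or replace package pkg_demo as
  function double_it(p_val number) return number parallel_enable;
  -- assert purity: reads/writes no database or package state
  pragma restrict_references(double_it, rnds, wnds, rnps, wnps);
end pkg_demo;
/

create or replace package body pkg_demo as
  function double_it(p_val number) return number parallel_enable is
  begin
    return p_val * 2;
  end double_it;
end pkg_demo;
/
```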
Temporal validity is a new feature in Oracle 12c. It makes querying effective date ranges much simpler. For more details refer to the Oracle documentation.
This blog post explains in which situation you get “wrong”, or at least unexpected, results when using temporal validity.
To demonstrate the situation let’s create a table with a time dimension and some records:
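A minimal sketch of such a table could look like this (the period name, column names and sample data are made up):

```sql
create table t1 (
  id         number,
  valid_from date,
  valid_to   date,
  -- the valid-time dimension of the table
  period for valid_time (valid_from, valid_to)
);

insert into t1 values (1, date '2015-01-01', date '2015-12-31');
insert into t1 values (2, date '2016-01-01', null);
commit;
```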
So if you want to get the records from table T1 which are effective as of the day after tomorrow you can query it with the following select:
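Assuming a valid-time dimension named valid_time (the name is an assumption), such a query could look like:

```sql
select *
  from t1 as of period for valid_time (sysdate + 2);
```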
The returned result is as expected
So what happens if you use a nonexistent time dimension:
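For example, with a period name that is not defined on the table (the name is made up):

```sql
select *
  from t1 as of period for no_such_period (sysdate + 2);
-- raises an error, since no_such_period is not defined on t1
```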
You get an error, which is the expected behaviour. But what happens when you drop the only existing time dimension on the table and query the table again?
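Dropping the dimension and re-running the query could look like this (the period name valid_time is an assumption):

```sql
alter table t1 drop (period for valid_time);

select *
  from t1 as of period for valid_time (sysdate + 2);
```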
Oracle successfully executes the query and simply ignores the temporal validity clause. If someone accidentally drops your time dimension you won’t notice but get wrong results.
In my opinion, the correct behaviour in this situation would be to return an error, as is the case when a time dimension exists on the table but you use a non-existing one in the query. I have filed a bug in MOS to ask whether this is a bug or a design decision, and I will keep this post updated on the outcome.
Update: this bug has finally been fixed in version 19.1. Oracle now correctly reports “ORA-55603: invalid flashback archive or valid time period command” when there is no time dimension on the table.
Most of the time you don’t have full control of how your data gets loaded into the database, because it is often imposed by business rules. But in the case you have full control, you should take advantage of it.
I recently came across an index organized table which was kind of a denormalized form of other tables and which was truncated and reloaded every day before some heavy batch jobs queried it.
The data was loaded into the IOT with an insert as select statement. Because the data returned by the select was not ordered (or at least not ordered by the primary key columns of the IOT), the insert caused a lot of 50/50 block splits in the IOT.
I added a simple order by clause to the statement with the effect that only 90/10 block splits were done afterwards. This also resulted in a 50% smaller index size.
Here’s a small test case for demonstration:
Create a heap table which is later used to load data into the IOT
The c1 column will be the pk column of the later created IOT. I used a random function so the values for this column are not ordered in any way.
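A sketch of this setup (table name, row count and the dbms_random call are assumptions):

```sql
-- heap table with random, unordered values for the future pk column
create table t_heap as
  select trunc(dbms_random.value(1, 10000000)) as c1,
         rpad('x', 100, 'x') as c2
    from dual
 connect by level <= 100000;
```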
Remove any duplicate values for c1 as this would lead to unique constraint violations later.
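One way to do this (a sketch, assuming the table sketched above):

```sql
-- keep one row per c1 value, delete the rest
delete from t_heap a
 where a.rowid > (select min(b.rowid)
                    from t_heap b
                   where b.c1 = a.c1);
commit;
```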
Create the IOT
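For example (the table name is an assumption):

```sql
create table t_iot (
  c1 number,
  c2 varchar2(100),
  constraint t_iot_pk primary key (c1)
) organization index;
```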
Check the index split statistics before inserting
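The session-level split counters can be read from v$mystat, for example:

```sql
select sn.name, ms.value
  from v$mystat ms
  join v$statname sn on sn.statistic# = ms.statistic#
 where sn.name like 'leaf node%splits';
```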
Insert without order by clause
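Assuming the tables sketched above:

```sql
insert into t_iot
  select c1, c2 from t_heap;
commit;
```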
About 3,000 block splits were done, all of them 50/50. The size of the index is about 23 MB.
Now insert with order by clause
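The only change is the order by on the primary key column:

```sql
insert into t_iot
  select c1, c2 from t_heap
   order by c1;
commit;
```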
(Open a new database session, so we have “fresh” counters in v$mystat)
The amount of block splits is cut in half and because of the order by clause, only 90/10 block splits were done. The index size is also considerably smaller (12 MB compared to 23 MB).
Oracle Wallets are used to store your database passwords in encrypted form. This is useful for application servers when you don’t want to store your passwords in cleartext. A wallet password protects the wallet’s entries from being read or modified. Each time your application needs to open a database connection, it has to access the wallet, which requires entry of the wallet password. If you want your application to read the database passwords from the wallet without entering the wallet password, you can create it with the autologin option (a so-called SSO wallet).
When you think about it a little, it should be clear that such an SSO wallet is not really encrypted anymore; otherwise it would not be possible to read passwords from it without authentication. In fact, the autologin option creates a decrypted, obfuscated copy (cwallet.sso) of the original encrypted wallet (ewallet.p12). The whole security benefit of using a wallet compared to storing the passwords in cleartext more or less vanishes with the autologin option. I think the Oracle documentation is not very clear about this.
In this blog post I would like to demonstrate that once you have access to an autologin wallet, you can extract all passwords very easily.
1. First let’s create a wallet with a sample entry
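For example, using orapki and mkstore (directory, TNS alias and credentials are made up):

```sh
# create a local autologin (SSO) wallet and add a credential
orapki wallet create -wallet /u01/app/oracle/wallet -auto_login_local
mkstore -wrl /u01/app/oracle/wallet -createCredential DB1 app_user secret_pwd
```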
2. Write a trivial java class to open a database connection
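A sketch of such a class (class name, wallet path and TNS alias are assumptions; the alias must resolve via your tnsnames.ora):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class WalletTest {
    public static void main(String[] args) throws Exception {
        // tell the driver where to find the wallet (path is an example)
        System.setProperty("oracle.net.wallet_location",
            "(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/wallet)))");
        // no username/password in the code: the driver reads the
        // credentials for the alias DB1 from the wallet
        try (Connection con =
                 DriverManager.getConnection("jdbc:oracle:thin:/@DB1")) {
            System.out.println("connected");
        }
    }
}
```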
3. After compiling the class, start it with Java Debugger (jdb)
ojdbc7_g.jar is the jdbc driver compiled with debugging information.
oraclepki.jar, osdt_cert.jar and osdt_core.jar are needed when working with Oracle wallets.
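Compiling and starting the class under jdb could then look like this (the class name WalletTest is an assumption):

```sh
javac WalletTest.java
jdb -classpath .:ojdbc7_g.jar:oraclepki.jar:osdt_cert.jar:osdt_core.jar WalletTest
```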
4. After starting the program with jdb, a breakpoint can be set and the program execution can be continued
The position of the breakpoint depends on the JDBC driver version used.
Shortly after running the program, jdb should output the following:
This is the point where you have access to all the entries in the wallet in cleartext
Dump the password
Dump the username
Dump the connection string
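Conceptually, the jdb session at this point looks like the following. The variable names are placeholders, not the real field names, since those depend on the driver version:

```
main[1] locals
main[1] dump <variable-holding-the-password>
main[1] dump <variable-holding-the-username>
main[1] dump <variable-holding-the-connection-string>
```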
Extracting passwords from an SSO wallet is easy and takes only a little more effort than extracting them from a cleartext property file. In this example the wallet was created with orapki and the -auto_login_local option, so the above steps have to be executed on the machine where the wallet was created. If the wallet was created with mkstore, it can be copied and the password extraction can later be performed on a different machine.
For security reasons you should consider the following points:
Restrict filesystem access to your wallet (this should be obvious).
If possible don’t use the autologin option. This means you have to manually enter a password each time you want to start your application, which is often not feasible.
If you really have to use a SSO wallet, create it with the -auto_login_local option, so it cannot be used after copying to other machines.
Prevent your application user from connecting from hosts other than the application server.