What is the Difference Between the v$sql* Views?

V$SQL

v$sql has a row per copy of a statement in the shared pool, so it will show multiple entries if there are multiple copies of the query “select * from T”.

This can happen if user U1 and user U2 both have a table T and both issue “select * from T”. Those are entirely different queries with different plans and so on, even though their text is identical.

In that case, v$sql will have 2 rows.

V$SQLAREA

v$sqlarea is an aggregate of v$sql.

It shows one row per distinct SQL text, so “select * from T” will appear there exactly once.

V$SQLTEXT

v$sqltext is simply a way to see the entire query.

The v$sql and v$sqlarea views only show the first 1000 bytes.

Newlines and other control characters are replaced with whitespace.

V$SQLTEXT_WITH_NEWLINES

v$sqltext_with_newlines is v$sqltext without the whitespace replacement.
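As a sketch, the four views can be compared like this (the table name T is carried over from the example above; the SQL_ID substitution variable is illustrative):

```sql
-- One row per copy: U1's and U2's cursors show up separately
SELECT sql_id, child_number, parsing_schema_name
  FROM v$sql
 WHERE sql_text = 'select * from T';

-- One row per distinct text, with a count of the copies
SELECT sql_id, version_count, executions
  FROM v$sqlarea
 WHERE sql_text = 'select * from T';

-- Full statement text, returned in 64-byte pieces
SELECT piece, sql_text
  FROM v$sqltext_with_newlines
 WHERE sql_id = '&sql_id'
 ORDER BY piece;
```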

How to Improve SQL Query Performance by Using Bind Variables in Oracle Database?

BIND VARIABLES INTRODUCTION

One of the most important practices in writing applications that use an Oracle database is to always use bind variables in your SQL statements. While easy to do, far too often this practice is overlooked or not understood. The advantage of using bind variables is that if a query is executed multiple times, Oracle does not need to expend resources to create an execution plan each time but instead can reuse the original execution plan. This saves a lot of resources, especially CPU resources on the database server, because most applications execute a relatively small number of statements over and over again.

Just as important, when Oracle is able to reuse an execution plan you avoid creating contention around some segments of shared memory in the Oracle database. If this contention does occur, it creates a bottleneck around how fast SQL statements can be processed throughout the entire database, which not only increases the amount of time it takes to process each statement, but can also greatly diminish the overall throughput of your Oracle database. Finally, using bind variables also provides protection against SQL injection attacks. While this is a security benefit rather than a performance benefit, it is still an important consideration, especially because SQL injection attacks always seem to be near the top of the list of application security threats.

SQL STATEMENTS AND EXECUTION PLANS

If we examine the life of a SQL statement, we are reminded that for every SQL statement Oracle executes, Oracle must have an execution plan which tells Oracle all the individual steps that must be performed in order to process the query. This execution plan is created by the Oracle Optimizer, as shown in step two of this diagram.

When the Oracle Optimizer creates an execution plan for a SQL statement, it must examine all of the different ways that it can process that statement in order to come up with the most efficient one. The first part of this process is for Oracle to gather from the Oracle Data Dictionary all of the information about the tables involved and any indexes that could potentially be used. To read this data, internally the Oracle Optimizer will execute over 50 queries against data dictionary objects, and possibly more depending upon the complexity of the SQL statement it is attempting to analyze. The Optimizer then takes all this data and evaluates all of the ways in which the SQL statement could be performed. This part of the process includes evaluating database statistics in order to make decisions about whether an index should be used, how to perform any join operations, and what order the tables should be joined in. The final phase is to take the most efficient plan and store it in an area of shared memory known as the shared SQL area.

In real time this process happens very quickly. However, parsing SQL and determining the best execution plan does tend to be CPU intensive. And you also have to remember, Oracle isn’t just processing a single SQL statement at a time. It is processing hundreds or perhaps thousands of statements every second. Taken together, parsing all of these SQL statements can take up a significant amount of system resources, especially CPU.

SHARED SQL AREA

The shared SQL area is a segment of shared memory in the database where Oracle caches execution plans. You can think of the shared SQL area as being similar to a large hash table. For each SQL statement, Oracle will calculate a unique key by using a hashing function and then store the parsed SQL and the execution plan in the shared SQL area under that key. The amount of memory allocated to the shared SQL area is finite, so Oracle manages the shared SQL area using a least recently used algorithm. This means that execution plans that haven’t been executed recently will be removed from the cache in order to make room for new incoming plans. Therefore, at any given time, the contents of the shared SQL area will vary. Frequently executed statements will most likely always be present, while SQL statements that are only executed a few times a day tend to have a shorter life span and be removed from the cache.

When a SQL statement is submitted to Oracle and the Oracle Optimizer needs to determine an execution plan, Oracle will first examine the shared SQL area to see if an execution plan already exists for this SQL statement. To do this, Oracle will use the same hashing function to calculate the hash key on the text of the submitted SQL statement, and then, much like you would in a hash table, perform a lookup by this hash key in the shared SQL area. If a matching execution plan is found, Oracle can simply read this execution plan out of the shared SQL area and continue on processing the query. If a matching execution plan is not found, then the Oracle Optimizer will go ahead and generate an execution plan for the SQL statement.

Why does Oracle cache execution plans in this way? The short answer is that generally applications contain a relatively small number of unique SQL statements, and they just execute these same SQL statements over and over again with different parameters each time. By caching SQL statements and their execution plans, we see two benefits. First, we save computing resources on the database server, because we don’t have to regenerate an execution plan each time. And this frees up these resources to perform other work in our database, like actually processing our SQL statements. Second, we’ll actually see each SQL statement run slightly faster after its first execution. The reason for this is that looking up an execution plan in the shared SQL area is faster than regenerating a plan from scratch. Oracle’s not the only database to cache SQL statements and their execution plans in this way. While some of the implementation details vary, you will find the same strategy used in all the major relational database systems in use today.

When Oracle is able to match an incoming SQL statement to an execution plan that is already in the shared SQL area, this is called a soft parse in Oracle terminology. If Oracle has to generate a new execution plan for a SQL statement, this is called a hard parse. You will see the terms soft parse and hard parse used consistently throughout the Oracle documentation, books written about Oracle, and blog posts, so it is important to be familiar with them. There will always be some level of hard parsing that needs to go on in an Oracle database. After all, there will always be a first time a statement is submitted to Oracle, and on that first time Oracle will have to hard parse the statement. The idea, though, is to minimize the amount of hard parsing you do and maximize the amount of soft parsing that goes on. You should check how much hard parsing your system is doing and what the ratio is between hard parsing and soft parsing. Being able to monitor this ratio on a system-wide level gives you a good idea of the health of your Oracle database and lets you understand if any bottlenecks are forming due to a high hard parse count. It is hard parsing that we want to avoid, and we want to make sure that Oracle can soft parse as many of our statements as possible.
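One way to check the parse activity on your system (a quick sketch; it requires SELECT access on the underlying V$ views) is to compare the system-wide parse counters:

```sql
-- Total vs hard parses since instance startup; the difference
-- between the two values is the soft parse count
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)');
```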

There are a few loose ends about the shared SQL area that we need to spend a moment to wrap up. First, you might be wondering about the size of the shared SQL area and how many statements can be cached. The answer is that the size will vary from system to system, based on factors like how much memory is available to Oracle and configuration parameters set by the DBA. But generally speaking, the shared SQL area is sized sufficiently that it can contain several thousand SQL statements and their execution plans. Second, it is important to know that the shared SQL area is shared across all users and all applications that are using a single Oracle instance. So if you have multiple applications that are using a single Oracle database as a back-end, all those applications are using the same shared SQL area. This has implications, because if one application is doing a lot of hard parsing, it can by itself create a lot of contention around the shared SQL area, and this will, in turn, affect other applications that are using that same Oracle database. Third, if you regenerate statistics on a database object, whether a table or an index, Oracle is smart enough to invalidate any execution plans that use that object, and remove those execution plans from the shared SQL area. The reason why is that now that new statistics are available, the Oracle Optimizer may determine a different execution plan is more efficient, so we don’t want to keep the old execution plan around. This is something that Oracle manages for you internally, and no action is required on your part. Finally, there is a way for a DBA to manually flush the shared SQL area while Oracle is running and remove all of the cached SQL statements and their execution plans. This is something that you probably never want to do on your production server, but it can be useful in test environments if you’re running some sort of performance test and you want to make sure that you’re starting from the same point each time.
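For completeness, the manual flush mentioned above is a single DBA command (again, best reserved for test environments):

```sql
-- Removes the cached SQL statements and execution plans from the shared pool
ALTER SYSTEM FLUSH SHARED_POOL;
```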

CONTENTION AND LATCH WAITS

There’s a second impact to having a large amount of hard parsing going on in your Oracle database. This impact is more subtle, but just as serious to consider. Oracle is a multi-user system, and at any given time there are many different SQL statements being processed by Oracle. If multiple processes are all hard parsing SQL statements, then each of these processes needs access to update the shared SQL area. And herein lies the problem: if multiple processes all access and modify a shared data structure at the same time without coordination, that data structure will become corrupt, and the shared SQL area is no exception. Therefore, in order to protect data integrity, Oracle needs to synchronize access to the shared SQL area amongst all these different processes.

If you’ve ever done any multi-threaded programming, you’re already familiar with this concept. An object that needs to be accessed and modified by multiple threads has to be protected, such that only one thread accesses the object at a time. In C# this is done using the lock keyword, and in Java we use the synchronized keyword. As a result, the compiler creates a critical section of code, and a serialization construct is created to make sure that only one thread can be running in that section of code at a time. In this way, the object is prevented from becoming corrupted, because only one thread can be in the critical section of code modifying the object at any time. In addition, while a thread is in the critical code section, no other thread can read data from the object. This makes sure that no other thread reads the object while it is being modified and potentially receives incomplete or inconsistent data. All of these same concepts apply in Oracle. In Oracle, we use the term latch to describe the serialization construct used to synchronize access to a shared resource. The process that currently has access to the shared resource holds the latch. A process that is waiting to gain access to the shared resource is said to be undergoing a latch wait. There are a number of resources that need to be shared across processes in Oracle, and therefore there are many different latches. For now though, we’ll concentrate on the latch used to synchronize access to the shared SQL area. Multi-threaded operations, how locking works, and why it is needed make up a difficult subject area, and the shared SQL area is no exception to this rule. Therefore, we’ll start out with an analogy to help us understand what’s happening here.

This analogy is very similar to what happens in Oracle when multiple processes are trying to hard parse SQL statements at the same time. Just like in our file example, we can’t have all these processes writing to the shared SQL area at the same time. So this is where Oracle introduces a latch, to make sure that only one of these processes is modifying the shared SQL area at any given time. All the other processes that need access to the shared SQL area will experience a latch wait. In this way, Oracle is able to protect the shared SQL area from becoming corrupted, like our file did in the file example. The trade-off, though, is that waiting on a latch to become available is in and of itself a very expensive operation.
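You can observe latch activity directly (a sketch; note that in recent Oracle versions much of the library cache protection is done with mutexes instead, so the relevant statistics vary by release):

```sql
-- Shared pool latch statistics: non-zero misses and sleeps
-- indicate contention on the latch
SELECT name, gets, misses, spin_gets, sleeps
  FROM v$latch
 WHERE name = 'shared pool';
```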

When an Oracle process tries to acquire a latch but is unable to, the process that is waiting for the latch does not go to sleep and then check again after some period of time, or go into a queue where it simply waits. Instead, the waiting process undergoes what is termed latch spinning, where it stays on the CPU and continually checks over and over again if it can acquire the latch. The reason Oracle does this is that going to sleep, giving up the CPU, and then getting back onto the CPU has a high cost. So if at all possible, the process wants to continue to loop and see if it can acquire the latch. What is happening here is that even though the process is waiting, it is consuming CPU while it loops. And this is CPU that isn’t really doing any useful work. At some point, usually after 2,000 iterations, the waiting process will give up trying to get the latch and put itself to sleep for a short period of time. When a process gives up the CPU like this and is replaced by another process, this activity is known as a context switch in operating system terminology. If you’ve studied operating systems, you know that context switching is something that you want to avoid, as there is a lot of computational cost in switching from one process to another on the CPU. After a brief period of time, the process will wake up and will again be switched back into running status. At this point, it will try to acquire the latch again. And if the latch is unavailable, say a different process acquired the latch while our waiting process was sleeping, then the latch spinning starts all over again. The waiting process will loop, consume CPU while trying to acquire the latch, and then potentially go to sleep again. And this cycle will continue until the process is able to acquire the latch.

What should be clear from this discussion is that contention is something we want to do everything we can to avoid. First, because of how processes spin while undergoing latch waits, we’re going to end up using a lot of CPU while these processes loop on the CPU. And these are CPU resources that are just wasted, not available to do other useful work in our database. The second impact of contention is that we’re really introducing a bottleneck into our system. Statements will only process as fast as they can gain access to the shared SQL area. And as we have discussed, if we have a lot of hard parsing going on, the responsiveness of these statements is impacted, because they have to spend time waiting to get access to the shared SQL area. This bottleneck matters because a system will only run as fast as its slowest component. So it doesn’t matter how many CPUs we have, how much memory we add, or how fast our SAN is. It is this bottleneck that is going to limit the performance and the throughput of our system. What we have here is the computer science equivalent of sending all of our SQL statements down a one-lane road. It doesn’t matter that we have a 12-lane super highway on the other side. Our ultimate throughput is limited by this single-lane road, which is gaining access to the shared SQL area, and all of our SQL statements need to travel down this road. So clearly, if we can design applications that avoid contention in the database, we want to learn how to do this.

MATCHING SQL STATEMENTS

In the last few segments, we’ve come to understand the effects that excessive hard parsing can have on the performance of your system. Let’s now turn our attention to how to write SQL statements that will be soft parsed rather than hard parsed, and how we can incorporate these statements into our applications. For an incoming SQL statement to be matched to a SQL statement and execution plan already in the shared SQL area, the two statements have to match exactly. This includes column order, the order of predicates in your WHERE clause, and even extends to white-space characters in the statement, and which characters are in upper and lowercase. You can think of this as if you called String.Equals on the two strings of the SQL statements, one string being the incoming SQL, and the other string being the statement in the shared SQL area. If String.Equals would return true, then the statements match. In effect, what we have here is a character-by-character comparison of the two strings. And only if each and every character matches are the two SQL statements considered to be equivalent. Let’s take a look at a few examples of this.

On this slide, we have two queries that are each retrieving grades for a student in a particular term. And in this case, the student and the term are the same between these two queries. These two queries are exactly the same, except for the fact that in the first query, the name of the view is in all lowercase, while in the second query, the name of the view is in all uppercase. For the purpose of determining if two SQL statements match, character casing counts. And so Oracle would regard these two SQL statements as different statements, and would hard parse each one.

In this slide, we have two SQL statements that are very similar to the one shown on the last slide. And once again, we’re querying the view for grades of a particular student. Notice, however, that the order of columns is different between the two queries. So even though these two queries are in fact returning the same columns and therefore the same data, the fact that the column lists are in different orders means that Oracle will regard these two queries as different, and hard parse each one. It is important to consider, though, that when we write an application, we define the SQL our application uses in some sort of string in the data access class. And then we use this string almost like a template that we plug values into to generate the SQL we’re going to run against the database over and over again. So generally speaking, we don’t see issues where a statement is run once in uppercase and the next time in lowercase and things like that. It’s just important to be aware that Oracle does match these statements exactly. And you do want to pay attention to all of the little details like character casing, column order and spacing, especially if you have any code that dynamically generates SQL.

What is more important for us to focus on is that using literal values directly in SQL statements causes the statements to differ, and therefore be hard parsed. Let’s see what we mean by this. Consider the two SQL statements shown on the slide. Once again, each of these statements is querying the view to get student grades for a particular semester, in this case the Fall of 2013. The only difference is that they are querying the grades for two different students. The query on the top is getting the grades for student 12345, and the query on the bottom is getting the grades for student 67890. Functionally, these two queries are exactly the same. They are returning the same columns in the same order. They have the same predicates in the WHERE clause. The only difference is which student grades will be returned for. However, since these queries directly specify the student ID in the WHERE clause as a literal value, Oracle will see these two queries as separate and distinct queries, and will hard parse each one. The reason why is that we have specified the literal values we want directly in the query text. And again, if we called String.Equals on these two strings, we would get false. So in the eyes of Oracle, these two statements will be regarded as different.

This applies not just to the student_id field, but to any literal value that can vary in any predicate in the WHERE clause. In this case, I’ve changed the query so that in both instances, we’re retrieving grade data for student 12345. The first query is getting grades for the Fall of 2013 term, and the second query, for the Spring of 2013 term. Again, these two queries are functionally equivalent; we’re just varying the term for which we are retrieving grades. However, Oracle will again see these two SQL statements as different statements, and hard parse both of them. You can start to see the road we are headed down here. If we write our SQL statements with literal values like this, every time a different student logs on to our web application to check their grades, or a student checks their grades for a different semester, we’re going to end up with what Oracle considers a unique SQL statement, and wind up hard parsing that statement. And as we’ve talked about, that is the situation we want to avoid.
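Since the slides themselves are not reproduced here, the pair of statements being described looks something like this (the view name student_grades and its columns are illustrative, not from the original slides):

```sql
-- Same shape, different literals: Oracle treats these as two
-- distinct statements and hard parses each one
SELECT term_code, course, grade
  FROM student_grades
 WHERE student_id = 12345 AND term_code = 'FALL2013';

SELECT term_code, course, grade
  FROM student_grades
 WHERE student_id = 12345 AND term_code = 'SPRING2013';
```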

What we need to do is to rewrite our query using bind variables. To do so, we’re going to replace the literal values in our WHERE clause with the syntax that represents a bind variable. In Oracle, the syntax that is used is a colon character followed by the name of the bind variable. So you can see in this statement, we’re specifying two bind variables, one for the student_id, and one for the term_code. When Oracle parses this statement, it sees the bind variables in the text of the SQL statement, and not the actual values. Therefore, the text of the SQL statement is the same each time. Consequently, the statement will be soft parsed each time it is run after its initial execution. What we want to do is to use a bind variable for any value in the WHERE clause that can vary on different executions of the statement. In this case, we would expect that both the student_id and the term_code would vary as the statement is run on behalf of different students looking at their grades in different terms. So both of these values need bind variables. When you submit a SQL statement that uses bind variables, you will also need to provide the actual values that Oracle will plug in when it executes the statement.
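In SQL*Plus, for example, the rewritten query can be sketched like this (the student_grades view and its columns are illustrative):

```sql
VARIABLE student_id NUMBER
VARIABLE term_code  VARCHAR2(10)
EXEC :student_id := 12345;
EXEC :term_code  := 'FALL2013';

-- The statement text is identical on every execution, so only the
-- first run is hard parsed; subsequent runs are soft parsed
SELECT term_code, course, grade
  FROM student_grades
 WHERE student_id = :student_id
   AND term_code  = :term_code;
```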

There are a couple of final items that we need to cover. First, pretty much all of the major object-relational mappers in use today already use bind variables implicitly. So if you are using Entity Framework or Hibernate, your application is already using bind variables and you should be in good shape. If you are using an ORM, you might be tempted to think that bind variables are a topic you don’t need to worry about. However, while new development might be taking place using an ORM, a lot of times we have to support existing applications in our jobs. And you want to make sure that these applications that are just using plain old ADO.NET or JDBC are using bind variables as well. And if they’re not, you want to get those applications uplifted. One reason why, which we just covered, is the security concern around SQL injection attacks. The other reason is that if you have one or two of these applications out there in your organization, these applications by themselves can use up a lot of resources on your Oracle server and cause contention around the shared SQL area. So, in that way, existing applications that don’t use good practices can impact the performance of all of your other applications that are using that same Oracle instance.

How to Grant Developers Access to Dynamic Performance Views in Oracle Database?

CREATE ROLE perf_viewer;

GRANT SELECT ON V_$LOCK TO perf_viewer;
GRANT SELECT ON V_$LOCKED_OBJECT TO perf_viewer;
GRANT SELECT ON USER_OBJECT_USAGE TO perf_viewer;
GRANT SELECT ON V_$SESSION TO perf_viewer;
GRANT SELECT ON V_$SESSION_LONGOPS TO perf_viewer;
GRANT SELECT ON V_$SESSMETRIC TO perf_viewer;
GRANT SELECT ON V_$SESSTAT TO perf_viewer;
GRANT SELECT ON V_$SQL TO perf_viewer;
GRANT SELECT ON V_$SQLAREA TO perf_viewer;
GRANT SELECT ON V_$SQLSTATS TO perf_viewer;
GRANT SELECT ON V_$SQLTEXT TO perf_viewer;
GRANT SELECT ON V_$SQLTEXT_WITH_NEWLINES TO perf_viewer;
GRANT SELECT ON V_$SQL_PLAN TO perf_viewer;
GRANT SELECT ON V_$SQL_PLAN_STATISTICS TO perf_viewer;
GRANT SELECT ON V_$SQL_PLAN_STATISTICS_ALL TO perf_viewer;
GRANT SELECT ON V_$SQL_WORKAREA TO perf_viewer;
GRANT SELECT ON V_$STATNAME TO perf_viewer;
GRANT SELECT ON V_$SYSSTAT TO perf_viewer;
GRANT SELECT ON V_$TRANSACTION TO perf_viewer;

GRANT perf_viewer TO <<user name>>;
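Note that the grants above are made on the underlying V_$ views; the familiar V$ names are public synonyms for them, and SELECT cannot be granted through the synonym name. Once the role is granted, the developer can sanity-check their access through the usual V$ names:

```sql
-- Run as the grantee: the public V$ synonyms should now resolve
SELECT sid, serial#, username, status
  FROM v$session
 WHERE username IS NOT NULL;
```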

How to Find and Resolve Blocked Sessions in Oracle RAC Environment?

Row-level locks are primarily used to prevent two transactions from modifying the same row. When a transaction needs to modify a row, a row lock is acquired. There is no limit to the number of row locks held by a statement or transaction, and Oracle does not escalate locks from the row level to a coarser granularity.

CREATE DATABASE LOCK

1. In the RAC Node-1, Create Below Table and Insert a Row Without Commit or Rollback.

SQL> CREATE TABLE RAC_LOCK_DEMO(
  ID# NUMBER PRIMARY KEY,
  VALUE VARCHAR2(20)
);

SQL> INSERT INTO RAC_LOCK_DEMO VALUES (1, 'INSERT RAC LOCK DEMO');

2. In the RAC Node-2, Run Same Insert Command.

SQL> INSERT INTO RAC_LOCK_DEMO VALUES (1, 'INSERT RAC LOCK DEMO');

3. Summary

  • Transaction was not committed in Rac Node-1 session
  • Rac Node-2 session is trying to insert a record with same primary key value
  • Rac Node-2 session will wait until Rac Node-1 session issues COMMIT or ROLLBACK

FIND BLOCKED SESSIONS

Below query can be used to find the blocking and blocked sessions.

SQL> SELECT SESLCK.USERNAME || '@' || SESLCK.MACHINE || '@INSTANCE' || SESLCK.INST_ID
       || ' (SID=' || SESLCK.SID || ' SERIAL=' || SESLCK.SERIAL# || ' STATUS=' || SESLCK.STATUS
       || ') IS BLOCKING '
       || SEWT.USERNAME || '@' || SEWT.MACHINE || '@INSTANCE' || SEWT.INST_ID
       || ' (SID=' || SEWT.SID || ' SERIAL=' || SEWT.SERIAL# || ' STATUS=' || SEWT.STATUS
       || ' SQLID=' || SEWT.SQL_ID || ')' AS LOCK_INFORMATION
  FROM GV$LOCK     WT,
       GV$LOCK     LCKR,
       GV$SESSION  SESLCK,
       GV$SESSION  SEWT
 WHERE     LCKR.ID1 = WT.ID1
       AND LCKR.ID2 = WT.ID2
       AND LCKR.SID = SESLCK.SID
       AND LCKR.INST_ID = SESLCK.INST_ID
       AND WT.SID = SEWT.SID
       AND WT.INST_ID = SEWT.INST_ID
       AND LCKR.REQUEST = 0
       AND WT.LMODE = 0;

FIND LOCK WAIT TIME

Below query can be used to find how long the blocked session has been waiting (in minutes).

SQL> SELECT BLOCKING_SESSION         "BLOCKING_SESSION",
         SID                      "BLOCKED_SESSION",
         SECONDS_IN_WAIT / 60     "WAIT_TIME(MINUTES)"
    FROM GV$SESSION
   WHERE BLOCKING_SESSION IS NOT NULL
ORDER BY BLOCKING_SESSION;

FIND BLOCKING SQL

Below query can be used to try to find blocking SQL.

SQL> SELECT *
    FROM GV$OPEN_CURSOR OC
   WHERE     OC.INST_ID = :TYPE_BLOCKING_INSTANCE_ID
         AND OC.SID = :TYPE_BLOCKING_SID
         AND (   OC.SQL_TEXT LIKE 'INSERT%'
              OR OC.SQL_TEXT LIKE 'UPDATE%'
              OR OC.SQL_TEXT LIKE 'DELETE%')
         AND OC.CURSOR_TYPE = 'OPEN'
ORDER BY OC.LAST_SQL_ACTIVE_TIME;

FIND BLOCKED SQL

Below query can be used to find what SQL is being run by the blocked session inside the database, i.e. which SQL command is waiting.

SQL> SELECT SES.SID,
       SES.SERIAL#     SER#,
       SES.PROCESS     OS_ID,
       SES.STATUS,
       SQL.SQL_FULLTEXT
  FROM GV$SESSION SES, GV$SQL SQL, GV$PROCESS PRC
 WHERE     SES.SQL_ID = SQL.SQL_ID
       AND SES.SQL_HASH_VALUE = SQL.HASH_VALUE
       AND SES.INST_ID = SQL.INST_ID
       AND SES.PADDR = PRC.ADDR
       AND SES.INST_ID = PRC.INST_ID
       AND SES.SID = &ENTER_BLOCKED_SESSION_SID;

FIND LOCKED TABLE

Below query can be used to find the locked table, the table owner, the lock mode held and other details.

SQL> SELECT LO.SESSION_ID,
         LO.ORACLE_USERNAME,
         LO.OS_USER_NAME,
         LO.PROCESS,
         DO.OBJECT_NAME,
         DO.OWNER,
         DECODE (LO.LOCKED_MODE,
                 0, 'NONE',
                 1, 'NULL',
                 2, 'ROW SHARE (SS)',
                 3, 'ROW EXCL (SX)',
                 4, 'SHARE',
                 5, 'SHARE ROW EXCL (SSX)',
                 6, 'EXCLUSIVE',
                 TO_CHAR (LO.LOCKED_MODE))
             MODE_HELD
    FROM GV$LOCKED_OBJECT LO, DBA_OBJECTS DO
   WHERE LO.OBJECT_ID = DO.OBJECT_ID
ORDER BY 5;

RESOLVING LOCKS IN ORACLE

As per Oracle, the blocked (or waiting) session will continue to wait until

  • Blocking session issues a COMMIT
  • Blocking session issues a ROLLBACK
  • Blocking session disconnects from the database

As a DBA, the only way you can help resolve a lock conflict is by killing either the blocking session or the blocked (waiting) session.

Before you decide which session to kill, you should send the information to the application team and get their approval on which session is to be killed.

Below query can be used to generate the command to kill the blocking session inside Oracle.

SQL> SELECT 'ALTER SYSTEM KILL SESSION ''' || SESLCK.SID || ',' || SESLCK.SERIAL# || ',@' || SESLCK.INST_ID || ''' IMMEDIATE;'
           AS BLOCKING_SESSION_KILL_COMMAND
  FROM GV$LOCK     WT,
       GV$LOCK     LCKR,
       GV$SESSION  SESLCK,
       GV$SESSION  SEWT
 WHERE     LCKR.ID1 = WT.ID1
       AND LCKR.ID2 = WT.ID2
       AND LCKR.SID = SESLCK.SID
       AND LCKR.INST_ID = SESLCK.INST_ID
       AND WT.SID = SEWT.SID
       AND WT.INST_ID = SEWT.INST_ID
       AND LCKR.REQUEST = 0
       AND WT.LMODE = 0;

SELECT FOR UPDATE

The SELECT FOR UPDATE command selects the specified records from the table and locks them. This prevents other users from updating those records, as the lock on them is held by the current session.

SQL> SELECT * FROM EMP WHERE DEPTNO=10 FOR UPDATE;

Let us assume that some other session is holding a lock on the records returned by SELECT * FROM EMP WHERE DEPTNO=10. The above query will wait until that lock is released.
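If an indefinite wait is not acceptable, Oracle offers variants of the locking clause, shown here against the same EMP example:

```sql
-- Raise ORA-00054 immediately if any target row is already locked
SELECT * FROM EMP WHERE DEPTNO=10 FOR UPDATE NOWAIT;

-- Give up with an error after 5 seconds if the lock cannot be acquired
SELECT * FROM EMP WHERE DEPTNO=10 FOR UPDATE WAIT 5;

-- Return and lock only the rows that no other session has locked
SELECT * FROM EMP WHERE DEPTNO=10 FOR UPDATE SKIP LOCKED;
```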

How to Use Virtual Indexes in Oracle Database?

A virtual index is a “fake” index whose definition exists in the data dictionary but has no associated index segment. The purpose of virtual indexes is to simulate the existence of an index without actually building a full index. This allows developers to run an explain plan as if the index were present, without waiting for the index creation to complete and without using additional disk space. This is useful when the optimizer is producing an expensive plan and the SQL Tuning Advisor suggests creating an index on a column: in a production database it may not always be feasible to create an index just to test the change, and we need to make sure that the new index will not have a negative impact on the execution plans of other queries running in the database.

Let’s test it 😊

CREATE A TEST TABLE

SQL> select count(1) from dba_objects;

COUNT(1)
----------
52220

SQL> create sequence sys.sequencetest cache 10000;
Sequence created.
 
SQL> alter session enable parallel dml;
Session altered.

SQL> alter session enable parallel ddl;
Session altered.

SQL> set timing on;
SQL>
SQL> create /*+ parallel */ table sys.test as
  2    select sequencetest.nextval ID, d.*
  3    from dba_objects d, (select 1 from dba_objects where rownum < 500)r;
Table created.
Elapsed: 00:00:57.39
SQL>

SQL> select count(1) from sys.test;

COUNT(1)
----------
26057281

SQL> select ds.owner, ds.segment_name, ds.bytes/1024/1024/1024 size_gb from dba_segments ds where ds.segment_name='TEST';

OWNER		        SEGMENT_NAME		        SIZE_GB
--------------------	--------------------		----------
SYS 			TEST				3.84020996

SELECT A VALUE FROM THE TABLE

SQL> select * from sys.test where object_name = 'STANDARD';

CHECK THE EXPLAIN PLAN FOR THE SELECT QUERY

SQL> set autotrace traceonly explain
SQL>
SQL> select * from sys.test where object_name = 'STANDARD';

CREATE A VIRTUAL INDEX ON THE TABLE CREATED

Remember, in order to create a virtual index you need to specify the NOSEGMENT clause in the CREATE INDEX statement. Also note that executing the statement below does not create an index segment.

SQL> create index test_index on sys.test (object_name) NOSEGMENT;
Index created.
SQL>

GENERATE STATISTICS FOR YOUR VIRTUAL INDEX

SQL> exec dbms_stats.generate_stats('SYS', 'TEST_INDEX');

CHECK DBA_INDEXES AND DBA_OBJECTS

So, the object exists in the database, but there is no segment for it.

SQL> set autotrace off
SQL>
SQL> select index_name from dba_indexes where table_name = 'TEST' and index_name = 'TEST_INDEX';
no rows selected
SQL>
SQL> col OBJECT_NAME format a20;
SQL>
SQL> select object_name, object_type from dba_objects where object_name = 'TEST_INDEX';

OBJECT_NAME             OBJECT_TYPE
-------------------- 	-----------------------
TEST_INDEX           	INDEX

CHECK AGAIN THE EXPLAIN PLAN FOR THE SELECT QUERY

We can clearly observe that the index is not being used.

SQL> set autotrace traceonly explain
SQL>
SQL> select * from sys.test where object_name = 'STANDARD';

TO MAKE USE OF THE VIRTUAL INDEX CREATED, WE NEED TO SET _USE_NOSEGMENT_INDEXES PARAMETER TO TRUE

SQL> alter session set "_use_nosegment_indexes" = true;
Session altered.
SQL>

CHECK AGAIN THE EXPLAIN PLAN FOR THE SELECT QUERY

The cost of the query has decreased from 72076 to 5.

This saves a lot of time. After checking your execution plans, if you are happy with them you can actually create the index.

To turn this virtual index into a real one, you must drop it and re-create it without the NOSEGMENT clause.

SQL> select * from sys.test where object_name = 'STANDARD';

Once you set this hidden parameter, the optimizer will start using the virtual index you created on this table.

If you run this query from any other session, it will not use the virtual index (since we set the parameter with an “alter session” statement).
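
To stop the optimizer from considering virtual indexes again in the current session, simply set the parameter back:

SQL> alter session set "_use_nosegment_indexes" = false;
Session altered.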

Notes

  • It is available only in Enterprise Edition, not Standard Edition. Standard Edition has no deferred segment creation, so you cannot create a data object without a data segment.
  • You can analyze virtual indexes.
  • You cannot rebuild a virtual index; doing so throws ORA-08114: “can not alter a fake index”.
  • You can drop the index just as a normal index.
SQL> drop index <index_name>;

DROP TEST TABLE & INDEX & SEQUENCE

SQL> drop table sys.test cascade constraints;
Table dropped.

SQL> drop sequence sequencetest;
Sequence dropped.   

How to Flashback Oracle Table to Before Drop Using Recycle Bin?

When you drop a table, the database does not immediately remove the space associated with the table. The database renames the table and places it and any associated objects in a recycle bin, where, in case the table was dropped in error, it can be recovered at a later time. This feature is called Flashback Drop, and the FLASHBACK TABLE statement is used to restore the table.

ENABLING AND DISABLING THE RECYCLE BIN

When the recycle bin is enabled, dropped tables and their dependent objects are placed in the recycle bin. When the recycle bin is disabled, dropped tables and their dependent objects are not placed in the recycle bin; they are just dropped, and you must use other means to recover them (such as recovering from backup).

Disabling the recycle bin does not purge or otherwise affect objects already in the recycle bin. The recycle bin is enabled by default.
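
You can verify the current setting with the SHOW PARAMETER command (output shown for a database where it is enabled):

SQL> show parameter recyclebin

NAME         TYPE     VALUE
------------ -------- ------
recyclebin   string   on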

You enable and disable the recycle bin by changing the recyclebin initialization parameter.

This parameter is not dynamic, so a database restart is required when you change it with an ALTER SYSTEM statement.

To disable the recycle bin:

  1. Issue one of the following statements:

ALTER SESSION SET recyclebin = OFF;

ALTER SYSTEM SET recyclebin = OFF SCOPE = SPFILE;

  2. If you used ALTER SYSTEM, restart the database.

To enable the recycle bin:

  1. Issue one of the following statements:

ALTER SESSION SET recyclebin = ON;

ALTER SYSTEM SET recyclebin = ON SCOPE = SPFILE;

  2. If you used ALTER SYSTEM, restart the database.

FLASHBACK TABLE TO BEFORE DROP

From Oracle 10g onward, the default action of a DROP TABLE command is to logically move the table to the recycle bin by renaming it, rather than actually dropping it. The DROP TABLE … PURGE option can be used to permanently drop a table.

SQL> DROP TABLE my_table PURGE;

The recycle bin is a logical collection of previously dropped objects, with access tied to the DROP privilege. The contents of the recycle bin can be shown using the SHOW RECYCLEBIN command and purged using the PURGE TABLE command. As a result, a previously dropped table can be recovered from the recycle bin.

Create a test table

SQL> CREATE TABLE flashback_drop_test (
  id  NUMBER(10)
);

SQL> INSERT INTO flashback_drop_test (id) VALUES (1);
COMMIT;

Drop the table without the PURGE keyword, and check the contents of the recycle bin using the SHOW RECYCLEBIN command.


SQL> DROP TABLE flashback_drop_test;

SQL> SHOW RECYCLEBIN;

ORIGINAL NAME    	RECYCLEBIN NAME                		 OBJECT TYPE    DROP TIME
-----------------	------------------------------ 		 ------------ 	-------------------
FLASHBACK_DROP_T 	BIN$fjdkfjdkfjkdfjkdfjkdjfkddjfkdtr==$0	 TABLE        	2022-03-29:11:09:07

A privileged user can display all recycle bins by querying the DBA_RECYCLEBIN view.

Restore the dropped table using the FLASHBACK TABLE command.

SQL> FLASHBACK TABLE flashback_drop_test TO BEFORE DROP;

SQL> SELECT * FROM flashback_drop_test;

ID
----------
1

Tables in the recycle bin can be queried like any other table. The following example drops the table again, and queries the renamed table from the recycle bin.

SQL> DROP TABLE flashback_drop_test;

SQL> SHOW RECYCLEBIN;

ORIGINAL NAME      RECYCLEBIN NAME		OBJECT TYPE	DROP TIME
----------------   -----------------		------------ 	-------------------
FLASHBACK_DROP_T   BIN$JKJlnmnmhjhvmbh==$0 	TABLE        	2022-03-29:11:18:39

SQL> SELECT * FROM "BIN$JKJlnmnmhjhvmbh==$0";

ID
----------
1

If an object is dropped and recreated multiple times, all dropped versions are kept in the recycle bin, subject to space. Where multiple versions are present, it is best to reference the tables via the RECYCLEBIN_NAME; any reference to the ORIGINAL_NAME is assumed to refer to the most recently dropped version of the object.

During the flashback operation the table can be renamed.

SQL> FLASHBACK TABLE flashback_drop_test TO BEFORE DROP RENAME TO flashback_drop_test_old;

Purge Objects From the Recycle Bin

Several purge options exist.

PURGE TABLE tablename;                   -- Specific table.
PURGE INDEX indexname;                   -- Specific index.
PURGE TABLESPACE ts_name;                -- All tables in a specific tablespace.
PURGE TABLESPACE ts_name USER username;  -- All tables in a specific tablespace for a specific user.
PURGE RECYCLEBIN;                        -- The current user's entire recycle bin.
PURGE DBA_RECYCLEBIN;                    -- The whole recycle bin.

Views

In addition to the SHOW command, you can get information about the contents of the recycle bin using the following views.

  • USER_RECYCLEBIN : Displays items in the current user’s recycle bin.
  • DBA_RECYCLEBIN : Displays items in the recycle bin of all users in the database.
  • CDB_RECYCLEBIN : When queried from the root container of a container database, it displays items in the recycle bin of all users in the containers. From any container other than the root container, it acts like the DBA_RECYCLEBIN view.
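
For example, these views also expose flags showing whether each dropped object can still be restored or purged:

SQL> SELECT object_name, original_name, type, can_undrop, can_purge FROM user_recyclebin;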

Restrictions

Several restrictions apply relating to the recycle bin.

  • Only available for non-system, locally managed tablespaces.
  • There is no fixed size for the recycle bin. The time an object remains in the recycle bin can vary. The recycle bin is cleared down if the tablespace is under space pressure. The recycle bin will not cause data files to autoextend, so in some situations objects can be purged very quickly.
  • The objects in the recycle bin are restricted to query operations only (no DDL or DML).
  • Flashback query operations must reference the recycle bin name.
  • Tables and all dependent objects are placed into, recovered from, and purged from the recycle bin at the same time.
  • Tables with Fine Grained Access policies are not protected by the recycle bin.
  • Partitioned index-organized tables are not protected by the recycle bin.
  • The recycle bin does not preserve referential integrity.
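
Since the recycle bin is cleared under space pressure, the SPACE column of the recycle bin views (in database blocks) can be used to estimate how much space dropped objects are currently consuming:

SQL> SELECT original_name, space FROM user_recyclebin ORDER BY space DESC;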

What is the Difference Between Oracle NULL and EMPTY_BLOB() ?

I searched for the difference between null and empty_blob() for clearing BLOB data but couldn't find an exact answer.

Then I opened an SR (Service Request) to find out. The answer is as follows.

There is no difference between null and empty_blob() for deleting BLOB data. Both work in a similar way.

UPDATE table_name
SET blob_column = EMPTY_BLOB()
WHERE id = your_id;

OR

UPDATE table_name
SET blob_column = NULL
WHERE id = your_id;
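
Note that although either statement clears the data, the resulting states differ when read back: setting the column to NULL removes the LOB locator entirely, while EMPTY_BLOB() leaves a zero-length locator (so the column is not null). Using the same placeholder names as above:

SELECT COUNT(*) FROM table_name WHERE blob_column IS NULL;                  -- rows cleared with NULL
SELECT COUNT(*) FROM table_name WHERE DBMS_LOB.GETLENGTH(blob_column) = 0;  -- rows cleared with EMPTY_BLOB()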

But this answer was not enough for me, so I tested both of them on a 60GB BLOB_DELETE_TEST table.

Test results are like below.

In this test, EMPTY_BLOB() cleared the data faster than NULL.

-- DELETE TEST WITH NULL ON NEWLY CREATED BLOB_DELETE_TEST TABLE
-- SIZE: 60GB   867094 rows updated.    Elapsed: 00:02:00.41
UPDATE BLOB_DELETE_TEST SET BLOB_DATA = NULL;

-- DELETE TEST WITH EMPTY_BLOB()  ON NEWLY CREATED BLOB_DELETE_TEST TABLE
-- SIZE: 60GB   867112 rows updated.    Elapsed: 00:01:38.85
UPDATE BLOB_DELETE_TEST SET BLOB_DATA = EMPTY_BLOB();

How to Open Physical Standby Database in READ-WRITE Mode for Test Purpose?

This article shows how to open a standby database in read-write mode for reporting or testing, and then convert it back to a standby database using flashback technology.

Using a combination of Data Guard, restore points, and Flashback Database, a physical standby database can be opened temporarily in read/write mode for development, reporting, or testing purposes, and then flashed back to a point in the past to be reverted back to a physical standby database. When the database is flashed back, Data Guard automatically synchronizes the standby database with the primary database, without the need to re-create the physical standby database from a backup copy of the primary database.

Perform the following steps to activate the physical standby database as a production database and later resynchronize it with the primary database.

Since Oracle 11g there is a new feature in this area called the snapshot standby database. A snapshot standby database is a fully updatable standby database that is created by converting a physical standby database into a snapshot standby database.

A snapshot standby is open in read-write mode, so it is possible to process transactions independently of the primary database. At the same time, it maintains protection by continuing to receive redo data from the production database, archiving it for later use.

With a single command, changes made while in read-write mode can be discarded and the standby quickly resynchronized with the primary database.

FIRST METHOD – SHORT WAY – MY FAVORITE

Please note that all operations will be done on the standby database.

CONVERT PHYSICAL STANDBY TO SNAPSHOT STANDBY

Check Database Status-Role-MRP

SQL> select a.status, b.database_role, c.process from v$instance a, v$database b, v$managed_standby c where c.process like 'MRP%';

STATUS		DATABASE_ROLE		PROCESS
------------	-----------------	---------
MOUNTED		PHYSICAL STANDBY	MRP0

Cancel Recovery Process

SQL>alter database recover managed standby database cancel;
Database altered.

Convert Physical Standby To Snapshot Standby

SQL> alter database convert to snapshot standby;
Database altered.

Open Database On Both Nodes

SQL> alter database open;
Database altered.

Set job_queue_processes Value As 0 (Optional)

SQL> alter system set job_queue_processes=0 scope=both sid='*';

Check Database Status-Role-Open Mode

SQL> select a.status, b.database_role, b.open_mode  from v$instance a, v$database b;

STATUS     		DATABASE_ROLE		OPEN_MODE
------------ 		-----------------	-------------------
OPEN     		SNAPSHOT STANDBY	READ WRITE

Check Restore Point

Note: It is created automatically!

SQL> select NAME,GUARANTEE_FLASHBACK_DATABASE from v$restore_point;

NAME                                            GUA
------------------------------------------	------
SNAPSHOT_STANDBY_REQUIRED_03/17/2022 01:45:55	YES

CONVERT SNAPSHOT STANDBY TO PHYSICAL STANDBY

Shutdown Database

srvctl stop database -d <type_your_db_name>

Mount Database

srvctl start database -d <type_your_db_name> -o mount

Convert Snapshot Standby To Physical Standby

Notes:
1. This process may take a long time; it is necessary to wait without interrupting the command.
2. Only a single instance should be mounted, otherwise the error below occurs.

SQL> alter database convert to physical standby;
alter database convert to physical standby
*
ERROR at line 1:
ORA-38777: database must not be started in any other instance

Stop Second Instance

srvctl stop instance -d <type_your_db_name> -i <type_your_instance_name>

Try Again Converting Snapshot Standby To Physical Standby

SQL> alter database convert to physical standby;
Database altered.

Check Database Status-Role

SQL> select a.status, b.database_role from v$instance a, v$database b;

STATUS		DATABASE_ROLE
------------	----------------
MOUNTED		PHYSICAL STANDBY

Start Recovery Process

SQL>alter database recover managed standby database using current logfile disconnect;

Check Database Status-Role-MRP

SQL> select a.status, b.database_role, c.process from v$instance a, v$database b, v$managed_standby c where c.process like 'MRP%';

STATUS		DATABASE_ROLE		PROCESS
------------	-----------------	---------
MOUNTED		PHYSICAL STANDBY	MRP0

Check Dataguard GAP On Primary

SQL> SELECT a.resetlogs_id,
         a.thread#
             THREAD_ID,
         b.last_seq
             last_occured,
         a.applied_seq
             last_applied,
         (b.last_seq - a.applied_seq)
             difference,
         TO_CHAR (a.last_applied_time, 'YYYY-MM-DD HH24:MI:SS')
             last_applied_time,
         dest_id
    FROM (  SELECT resetlogs_id,
                   thread#,
                   MAX (sequence#)     applied_seq,
                   MAX (next_time)     last_applied_time,
                   dest_id
              FROM gv$archived_log
             WHERE     applied = 'YES'
                   AND resetlogs_id =
                       (SELECT MAX (resetlogs_id) FROM gv$archived_log)
          GROUP BY resetlogs_id, thread#, dest_id) a,
         (  SELECT resetlogs_id, thread#, MAX (sequence#) last_seq
              FROM gv$archived_log
             WHERE resetlogs_id =
                   (SELECT MAX (resetlogs_id) FROM gv$archived_log)
          GROUP BY resetlogs_id, thread#) b
   WHERE     a.thread# = b.thread#
         AND dest_id IN (SELECT DEST_ID
                           FROM SYS.GV_$ARCHIVE_DEST
                          WHERE TARGET = 'STANDBY')
ORDER BY dest_id;

SECOND METHOD – LONG WAY

Step 1 – In Standby Database

A) Cancel Redo Apply

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

B) Create a Guaranteed Restore Point

SQL> CREATE RESTORE POINT standby_flashback_testing GUARANTEE FLASHBACK DATABASE;

C) Check Restore Point

SQL> select NAME,SCN,TIME from v$restore_point;

Step 2 – In Primary Database

A) On the primary database, switch logs so the SCN of the restore point will be archived on the physical standby database. When using standby redo log files, this step is essential to ensure the database can be properly flashed back to the restore point.

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

B) Defer log archive destinations pointing to the standby that will be activated.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

Step 3 – In Standby Database

A) Activate the Physical Standby Database

SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;

Once this is done, you can check that the controlfile status has changed from STANDBY to CURRENT.

SQL> select CONTROLFILE_TYPE from v$database;

CONTROL
-------
CURRENT

B) Then open the database.

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
SQL> ALTER DATABASE OPEN;

Step 4 – In Standby Database

Once the standby database has been activated, you can run reporting tools or perform other testing and activities for days or even weeks, independent of the primary database.

Any results stored in the activated database will be lost when you later flash back the database. Results that should be saved must be copied out of the activated database before flashing it back.

For example:

SQL> create table test ( col1 varchar2 (100));
Table created.

SQL> insert into test values ( 'test data');
1 row created.

SQL> commit;
Commit complete

Step 5 – In standby Database

A) Revert the Snapshot Standby Database Back to Physical Standby Database

A1)  Mount the Database

SQL> STARTUP MOUNT FORCE;
ORACLE instance started.
Database mounted. 

A2) Flashback the Database to Restore Point

SQL> FLASHBACK DATABASE TO RESTORE POINT standby_flashback_testing ;

You can confirm this by checking the controlfile status. It will now be BACKUP.

SQL> select controlfile_type from v$database;

CONTROL
--------------
BACKUP

B) Convert to Standby Database

SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SQL> STARTUP MOUNT FORCE;

SQL> select controlfile_type from v$database;

CONTROL
--------------
STANDBY

Step 6 – In Standby Database

A) Put the standby database in managed recovery mode. Let archive gap resolution fetch all missing archived redo log files and allow Redo Apply to apply the gap.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

Step 7 – In Primary Database

A) Re-Enable Archiving to the Physical Standby Database

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

Step 8 – In Standby Database

A) Open the database in read-only mode and verify that the transactions made while the database was activated are gone.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;

SQL> select * from test; 
select * from test 
* 
ERROR at line 1: 
ORA-00942: table or view does not exist 

B) Drop the Restore Point

SQL> STARTUP FORCE MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
SQL> DROP RESTORE POINT standby_flashback_testing;

Note: While the database is activated, it is not receiving redo data from the primary database and cannot provide disaster protection. It is recommended that there be at least two physical standby databases participating in the configuration so that the primary database remains protected against data loss.

How To Change Oracle Compatible Parameter in Primary and Standby Environments?

Suppose your Primary and Physical Standby were running 12.2.0.1 and, after upgrading the software, you need to change the COMPATIBLE parameter to 19.0.0.0 on both databases.

Let’s assume the current COMPATIBLE setting on both Primary and Standby databases is at 12.2.0.1.

STEPS

The value of the COMPATIBLE parameter on the Standby should always be higher than or equal to the value of COMPATIBLE on the Primary.

In order to change the COMPATIBLE parameter, make the changes in Standby first and then on Primary.
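
Before making any change, you can check the current value on both databases:

SQL> show parameter compatible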

Steps to Raise the COMPATIBLE Parameter on the Standby

1. Change the value of the compatible parameter on standby

SQL> ALTER SYSTEM SET COMPATIBLE='19.0.0.0' SCOPE=SPFILE;

If you are using a pfile, then edit the pfile and change the value of the parameter:
*.COMPATIBLE='19.0.0.0'

2. Shutdown and Restart the standby database in mount stage

3. Restart Managed recovery process.
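
Steps 2 and 3 can be performed as follows (a sketch for a single-instance standby; on RAC use the equivalent srvctl commands):

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;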

Steps to Raise the COMPATIBLE Parameter on the Primary

1. Perform a backup of your database before you raise the COMPATIBLE initialization parameter (optional).

2. If you are using a server parameter file, then complete the following steps:

a. Update the server parameter file to set or change the value of the COMPATIBLE initialization parameter. For example, to set the COMPATIBLE initialization parameter to 19.0.0.0, issue the following statement:

SQL> ALTER SYSTEM SET COMPATIBLE = '19.0.0.0' SCOPE=SPFILE;

b. Shut down and restart the database.
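
For example:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;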

3. If you are using an initialization parameter file, then complete the following steps:

a. Shut down the database if it is running:

SQL> SHUTDOWN IMMEDIATE;

b. Edit the initialization parameter file to set or change the value of the COMPATIBLE initialization parameter.

For example, to set the COMPATIBLE initialization parameter to 19.0.0.0, enter the following in the initialization parameter file:

COMPATIBLE = 19.0.0.0

c. Start the database using STARTUP.