Mar 23, 2016

High Water Mark

The high water mark is the highest number of database blocks ever used by a segment. Delete operations do not reset this mark.


When an INSERT statement cannot find a free block to populate, it goes straight to the High Water Mark and allocates a new block. When rows are deleted, the High Water Mark does not come down.

In order to remove the empty blocks below the HWM, you need to issue the following two commands:


ALTER TABLE emp ENABLE ROW MOVEMENT;
ALTER TABLE emp SHRINK SPACE;

ROWIDs are normally assigned to a row at insert time, for the lifetime of the row. The first command allows Oracle to move rows within the table.

The shrink reads the table, deletes rows at the bottom of the table and re-inserts them higher up, generating redo and undo in the process. It can therefore relocate and reorganize rows in the table, and it will change their rowids.


The second step adjusts the high water mark. For this Oracle needs an exclusive table lock, but only for a very short moment.
 - can be done online
 - indexes will be maintained and remain usable
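
One way to observe the effect, assuming the EMP table above lives in the current schema, is to compare the segment's block count before and after the shrink:

-- Run once before and once after SHRINK SPACE; BLOCKS should drop
-- once space above the new high water mark has been released
SELECT segment_name, blocks, extents
  FROM user_segments
 WHERE segment_name = 'EMP';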

Monitoring Index

CREATE TABLE t1 (c1 NUMBER);
CREATE INDEX t1_idx ON t1(c1);

-- start recording whether the index is used
ALTER INDEX t1_idx MONITORING USAGE;
SELECT table_name, index_name, monitoring, used FROM v$object_usage;   -- USED = NO so far

-- a query that uses the index
SELECT * FROM t1 WHERE c1 = 1;

-- USED shows YES once the index has been used by a statement
SELECT table_name, index_name, monitoring, used FROM v$object_usage;

-- stop or restart monitoring
ALTER INDEX indexname NOMONITORING USAGE;
ALTER INDEX indexname MONITORING USAGE;

Some points in Performance Tuning


Try to avoid using the following in triggers:

1) UTL_ packages. Their effects (for example file writes or HTTP calls) do not roll back.
2) PRAGMA AUTONOMOUS_TRANSACTION; use it only when it is unavoidable.


Wrong cardinality = Wrong Plan
Check constraints give the optimizer information it can use to rewrite queries

If you want to limit the number of indexes you create but still improve a query with degraded performance, start from the explain plan. Look for the step with the highest cardinality that is doing a full table scan, check the access predicates shown for that step, and try to create an index on the join columns listed in those access predicates.
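A sketch of that workflow with DBMS_XPLAN (the EMP/DEPT tables and the join below are only placeholders):

EXPLAIN PLAN FOR
SELECT e.emp_id, d.dept_name
  FROM emp e
  JOIN dept d ON d.dept_id = e.dept_id
 WHERE d.dept_name = 'SALES';

-- Look for the full-scan step with the highest Rows (cardinality) estimate
-- and note its access predicates
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);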

Creating an index can improve query performance even when the join is a left or right outer join.

Dimensions convey information to the optimizer. The presence of dimensions opens up access paths that would not otherwise be available.
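
For reference, a dimension is declared with CREATE DIMENSION; the sketch below assumes a TIMES table with DAY_ID, MONTH_ID and YEAR_ID columns (all names illustrative):

CREATE DIMENSION times_dim
  -- each LEVEL names a column (or columns) at one grain of the hierarchy
  LEVEL day   IS (times.day_id)
  LEVEL month IS (times.month_id)
  LEVEL year  IS (times.year_id)
  -- the HIERARCHY tells the optimizer how the levels roll up
  HIERARCHY calendar_rollup (
    day   CHILD OF
    month CHILD OF
    year
  );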


Forms PL/SQL code runs in the Forms PL/SQL engine, which is separate from the PL/SQL engine in the database. This is why you cannot reference a database package's variables directly from Forms; doing so results in the following error:
Implementation Restriction: 'Cannot directly access remote package variable or cursor' while compiling Oracle Forms
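
A common workaround is to keep the variable private to the database package and expose it through getter/setter subprograms that Forms can call; the PKG_STATE package below is only an illustrative sketch:

CREATE OR REPLACE PACKAGE pkg_state AS
  FUNCTION  get_batch_id RETURN NUMBER;
  PROCEDURE set_batch_id(p_batch_id IN NUMBER);
END pkg_state;
/
CREATE OR REPLACE PACKAGE BODY pkg_state AS
  g_batch_id NUMBER;   -- the package variable itself stays private to the body

  FUNCTION get_batch_id RETURN NUMBER IS
  BEGIN
    RETURN g_batch_id;
  END;

  PROCEDURE set_batch_id(p_batch_id IN NUMBER) IS
  BEGIN
    g_batch_id := p_batch_id;
  END;
END pkg_state;
/
-- In Forms, call pkg_state.get_batch_id instead of referencing pkg_state.g_batch_id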

Mar 17, 2016

More about Indexes

In a B-tree index, single-column indexes do not include entries for NULL values; bitmap indexes and function-based indexes, however, can store NULL entries.
Multi-column B-tree indexes will store NULLs, but only if at least one of the indexed columns is not NULL.


1) If all indexed columns contain NULLs, the entry is not stored in the index.

2) Adding a frequently NULL column to a composite index can help performance by allowing the index to be used when filtering on those NULLs.

3) If you range scan a column (i.e. with < or >), put that column at the end of the composite index, as in the sketch below.
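
For example (table and column names are illustrative), if queries use an equality predicate on STATUS and a range predicate on HIRE_DATE:

-- Equality column first, range-scanned column last
CREATE INDEX emp_status_hire_idx ON emp (status, hire_date);

-- Both index columns can then be used efficiently for:
--   WHERE status = 'ACTIVE' AND hire_date > DATE '2015-01-01'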

Oracle Trace file





Mar 11, 2016

New Features in Oracle 11g

Oracle Lock

Oracle locking policy consists of the following


•Oracle locks data at the row level on modification only. There is no lock escalation to a block or table level, ever.
•Oracle never locks data just to read it. There are no locks placed on rows of data by simple reads.
•A writer of data does not block a reader of data.
•A writer of data is blocked only when another writer of data has already locked the row it was going after.


There are two mechanisms for locking data in a database


Pessimistic Locking

In pessimistic locking, a record or page is locked immediately when the lock is requested.

Optimistic Locking

In optimistic locking, the record or page is locked only at the moment the changes are actually applied, typically after verifying that the row has not changed since it was read.
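
A minimal sketch of both approaches, assuming an EMP table with EMP_ID, SAL and a LAST_UPD timestamp column (all names and values illustrative):

-- Pessimistic: lock the row as soon as the user signals intent to change it
SELECT sal
  FROM emp
 WHERE emp_id = 7369
   FOR UPDATE;            -- the row stays locked until COMMIT or ROLLBACK

-- Optimistic: no lock until the update, which verifies the row is unchanged
UPDATE emp
   SET sal      = 5000,
       last_upd = SYSTIMESTAMP
 WHERE emp_id   = 7369
   AND last_upd = TIMESTAMP '2016-03-11 09:30:00';  -- value read earlier; 0 rows updated means someone else changed it first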


Blocked Inserts

The only time an INSERT will block is when the table has a primary key or unique constraint and two sessions simultaneously attempt to insert a row with the same value. This is best avoided by using Oracle sequences to generate primary keys, as they are a highly concurrent method of generating unique keys in a multi-user environment.
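
A minimal sketch of the sequence approach (object names are illustrative):

CREATE SEQUENCE emp_seq START WITH 1 INCREMENT BY 1 CACHE 100;

-- Each session gets its own value, so concurrent inserts do not collide
INSERT INTO emp (emp_id, ename)
VALUES (emp_seq.NEXTVAL, 'SCOTT');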


Blocked Updates and Deletes

To avoid update and delete blocking, use one of the two locking methods described above: pessimistic or optimistic.


Deadlocks

Deadlocks occur when two sessions each hold a resource that the other wants. Oracle records all deadlocks in a trace file. The number one cause of deadlocks is un-indexed foreign keys:
•if you update the parent table's primary key, the child table will be locked in the absence of an index on the foreign key
•if you delete from the parent table, the entire child table will be locked, again in the absence of an index (see the sketch below).
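
The fix is simply to index the foreign key column; a sketch with hypothetical PARENT/CHILD tables:

-- Without this index, updating parent.parent_id or deleting from parent
-- takes a full table lock on child
CREATE INDEX child_parent_fk_idx ON child (parent_id);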

Lock Escalation

In other RDBMSs, when a user locks around 100 rows (the threshold varies), the lock is escalated to a table lock. Oracle never escalates a lock.


Type of Locks

There are a number of different types of locks, as listed below:
•DML Locks - DML (data manipulation language), in general SELECT, INSERT, UPDATE and DELETE. DML locks will be locks on a specific row of data, or a lock at the table level, which locks every row in the table.
•DDL locks - DDL (data definition language), in general CREATE, ALTER and so on. DDL locks protect the definition of the structure of objects.
•Internal locks and latches - These are locks that Oracle uses to protect its internal data structure.
•Distributed Locks - These are used by OPS to ensure that different nodes are consistent with each other.
•Deadlocks - Occurs when two sessions block each other while each waits for a resource that the other session is holding.
•PCM - PCM (Parallel Cache Management) locks protect one or more cached data blocks in the buffer cache across multiple instances.


DML Locks

There are two main types of DML locks: TX (transaction) and TM (DML enqueue).


A TX lock is acquired when a transaction initiates its first change and is held until the transaction performs a COMMIT or ROLLBACK. It is used as a queuing mechanism so that other sessions can wait for the transaction to complete.


 A TM lock is used to ensure that the structure of the table is not altered while you are modifying its contents.


The complete set of DML lock modes is:
Row Share (RS), Row Exclusive (RX), Share (S), Share Row Exclusive (SRX), Exclusive (X)


The lock types used by DML statements are:
RS (table) and RX (row)  -  SELECT ... FOR UPDATE; LOCK TABLE ... IN ROW SHARE MODE
RX (table) and RX (row)  -  any INSERT, UPDATE or DELETE
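
The TM and TX locks held by current sessions can be inspected in V$LOCK, for example:

-- TM = DML enqueue on the table, TX = transaction lock
-- LMODE 3 = row exclusive (RX), 6 = exclusive
SELECT sid, type, id1, id2, lmode, request, block
  FROM v$lock
 WHERE type IN ('TM', 'TX');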



DDL Locks

DDL locks are automatically placed against objects during a DDL operation to protect them from changes by other sessions.


There are three types of DDL locks


Exclusive DDL Locks - these prevent other sessions from gaining a DDL lock or TM lock themselves. You can query a table but not modify it. Exclusive locks will normally lock the object until the statement has finished.


Share DDL Locks - These protect the structure of the referenced object against modification by other sessions, but allow modification of the data. Shared DDL locks let you modify the contents of a table but not its structure.


Breakable Parse Locks - These allow an object, such as a query plan cached in the shared pool, to register its reliance on other objects. If you perform DDL against one of those objects, Oracle reviews the list of objects that have registered their dependence and invalidates them. Hence these locks are breakable; they do not prevent the DDL from occurring. A breakable parse lock is taken when a session parses a statement: a parse lock is placed on every object referenced by that statement. These locks are taken so that the parsed, cached statement can be invalidated (flushed) from the shared pool if a referenced object is dropped or altered in some way. The SQL below can be used to identify parse locks on views, procedures, grants, etc.
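
One query commonly used for this purpose reads the DBA_DDL_LOCKS view; a sketch:

SELECT session_id sid, owner, name, type,
       mode_held held, mode_requested request
  FROM dba_ddl_locks
 ORDER BY owner, name;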

Latches and Internal Locks (Enqueues)

Latches are locks that are held for a short period of time, for example the time it takes to modify an in-memory data structure such as the buffer cache or the library cache in the shared pool. They are lightweight. They do not protect database objects such as tables or data files.


Enqueues are another, more sophisticated serialization device, used for example when updating rows in a database table. The requestor queues up and waits for the resource to become available, so they are not as fast as a latch. You can also lock manually with the SELECT ... FOR UPDATE or LOCK TABLE statements, or create your own locks using the DBMS_LOCK package.
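
A sketch of a user-defined lock with DBMS_LOCK (the lock name is arbitrary, and EXECUTE on DBMS_LOCK must have been granted to the schema):

DECLARE
  l_lockhandle VARCHAR2(128);
  l_status     INTEGER;
BEGIN
  -- Map an application-level lock name to a lock handle (note: ALLOCATE_UNIQUE commits)
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'MY_APP_LOCK', lockhandle => l_lockhandle);

  -- Request the lock in exclusive mode, waiting up to 10 seconds
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_lockhandle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 10,
                                release_on_commit => TRUE);

  IF l_status = 0 THEN
    dbms_output.put_line('Lock acquired');
    -- ... serialized work here; the lock is released on COMMIT
  ELSIF l_status = 1 THEN
    dbms_output.put_line('Timed out waiting for the lock');
  END IF;
END;
/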

Mar 4, 2016

Bulk Processing in Oracle

 Bulk Processing with BULK COLLECT and FORALL


  • BULK COLLECT: SELECT statements that retrieve multiple rows with a single fetch, improving the speed of data retrieval


  • FORALL: INSERTs, UPDATEs, and DELETEs that use collections to change multiple rows of data very quickly


  • BULK COLLECT

    declare
      -- associative array matching the employee table's row structure
      type t_plsql_type is table of employee%rowtype index by pls_integer;
      vt_plsql_type t_plsql_type;
    begin
      -- a single fetch pulls every row into the collection
      select * bulk collect into vt_plsql_type from employee;

      for v_typ_idx in 1..vt_plsql_type.count loop
        dbms_output.put_line(vt_plsql_type(v_typ_idx).emp_id);
      end loop;
    end;
    /

    FORALL
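
    A minimal FORALL sketch, assuming the same EMPLOYEE table as above plus a SALARY column (the column is only illustrative):

    declare
      type t_emp_ids is table of employee.emp_id%type index by pls_integer;
      vt_emp_ids t_emp_ids;
    begin
      -- collect the ids to update in a single fetch
      select emp_id bulk collect into vt_emp_ids from employee;

      -- one context switch sends all the updates to the SQL engine
      forall v_idx in 1..vt_emp_ids.count
        update employee
           set salary = salary * 1.1          -- salary column is assumed
         where emp_id = vt_emp_ids(v_idx);

      commit;
    end;
    /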





    Context Switches and Performance

    Almost every program PL/SQL developers write includes both PL/SQL and SQL statements. PL/SQL statements are run by the PL/SQL statement executor; SQL statements are run by the SQL statement executor. When the PL/SQL runtime engine encounters a SQL statement, it stops and passes the statement over to the SQL engine. The SQL engine executes it and returns information back to the PL/SQL engine. This transfer of control is called a context switch, and each of these switches incurs overhead that slows down the overall performance of programs.




    Bulk Processing in PL/SQL

    The bulk processing features of PL/SQL are designed specifically to reduce the number of context switches required to communicate between the PL/SQL engine and the SQL engine, as the sketch below illustrates.
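
    When the source table is large, BULK COLLECT with a LIMIT clause fetches in batches, keeping memory bounded while still cutting down context switches; a sketch assuming the same EMPLOYEE table:

    declare
      cursor c_emp is select * from employee;
      type t_emp_tab is table of employee%rowtype index by pls_integer;
      vt_emp t_emp_tab;
    begin
      open c_emp;
      loop
        -- one context switch fetches up to 100 rows
        fetch c_emp bulk collect into vt_emp limit 100;
        exit when vt_emp.count = 0;

        for v_idx in 1..vt_emp.count loop
          dbms_output.put_line(vt_emp(v_idx).emp_id);
        end loop;
      end loop;
      close c_emp;
    end;
    /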