Aug 15, 2015

Outer Join restriction

create table TAB1
( col1 VARCHAR2(5),col2 VARCHAR2(5), col3 VARCHAR2(5),  col4 VARCHAR2(5),
  col5 VARCHAR2(5)) ;

create table TAB2
(  col1 VARCHAR2(5), col2 VARCHAR2(5), col3 VARCHAR2(5),
  col4 VARCHAR2(5),  col5 VARCHAR2(5)) ;


insert into TAB1 (COL1, COL2, COL3, COL4, COL5)
values ('a', 'a', 'a', 'a', 'a');

insert into TAB1 (COL1, COL2, COL3, COL4, COL5)
values ('b', 'b', 'b', 'b', 'b');

insert into TAB1 (COL1, COL2, COL3, COL4, COL5)
values ('c', 'c', 'c', null, 'c');

insert into TAB1 (COL1, COL2, COL3, COL4, COL5)
values ('d', 'd', 'd', 'd', 'd');

insert into TAB1 (COL1, COL2, COL3, COL4, COL5)
values ('e', 'e', 'e', 'e', 'e');


insert into TAB2 (COL1, COL2, COL3, COL4, COL5)
values ('a', 'a', 'a', 'a', 'a');

insert into TAB2 (COL1, COL2, COL3, COL4, COL5)
values ('b', 'b', 'b', 'b', 'b');

insert into TAB2 (COL1, COL2, COL3, COL4, COL5)
values ('c', 'c', 'c', 'c', 'c');

insert into TAB2 (COL1, COL2, COL3, COL4, COL5)
values ('c', 'c', 'c', 'c', 'c');

commit;
---------------------------

select tab1.*,tab2.* from tab1 left join tab2
on tab1.col1 = tab2.col1
and tab1.col2 = tab2.col2
and tab1.col3 <> 'c';

select tab1.*,tab2.* from tab1 left join tab2
on tab1.col1 = tab2.col1
and tab1.col2 = tab2.col2
and tab2.col3 <> 'c';

select tab1.*,tab2.* from tab1 , tab2
where tab1.col1 = tab2.col1 (+)
and tab1.col2 = tab2.col2 (+)
and tab2.col3 (+)<> 'c';

select tab1.*,tab2.* from tab1 , tab2
where tab1.col1 = tab2.col1 (+)
and tab1.col2 = tab2.col2 (+)
and tab1.col3 (+)<> 'c';

Conventional & Direct-Path INSERT


•During conventional INSERT operations, Oracle reuses free space in the table. During such operations, Oracle also maintains referential integrity constraints.

Eg : INSERT INTO t SELECT ...


•During direct-path INSERT operations, Oracle appends the inserted data after the existing data in the table. Data is written directly into the datafiles, bypassing the buffer cache. Free space in the existing data is not reused, and referential integrity constraints are ignored. Together, these behaviors can significantly improve performance.


Eg : INSERT /*+ APPEND */ INTO t SELECT ...

When we insert rows into a data block, Oracle has the choice of reusing a data block from the freelist or grabbing a "brand new" empty data block, extending the table into the tablespace and raising the high water mark (HWM), usually in chunks of five data blocks.

•Direct-path inserts are performed into blocks above the table's high water mark.
•The APPEND hint tells Oracle to extend the table and grab new, empty data blocks for the insert.
•Existing free space is not reused. So if you direct-path insert a batch of rows, delete them, and insert them again, the space freed by the delete will not be reused.
•Referential integrity (foreign key) constraints and triggers must be disabled before running a direct-path insert.
•To make sure your SQL is using direct-path insert, run it through Explain Plan. You should see a line reading LOAD AS SELECT.
•Direct-path insert cannot occur on: index-organized tables, tables with LOB columns, tables with object columns, or clustered tables.

Direct-path insert is a faster way of running an INSERT statement. It is particularly useful for inserting large numbers of rows.
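As a sketch (the table names here are illustrative, not from the original post):

```sql
-- Hypothetical source and destination tables.
create table t_src  (id number, val varchar2(20));
create table t_dest (id number, val varchar2(20));

-- Direct-path insert: APPEND loads new blocks above the high water mark,
-- bypassing the buffer cache and existing free space.
insert /*+ APPEND */ into t_dest
select id, val from t_src;

-- Until we commit, reading t_dest in the same transaction raises
-- ORA-12838 (cannot read/modify an object after modifying it in parallel).
commit;
```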

Jun 25, 2015

Oracle Hints



1) Hint to use with Like clause

If your SQL contains a LIKE clause, you could consider using a hint on the column where an index exists.

Use the /*+ INDEX(table index_name) */ hint. The LIKE clause may otherwise prevent the use of the index. If the index is physically smaller than the table, it will likely be faster to scan the index rather than the table.
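For example (the table and index names here are hypothetical):

```sql
create table employees_demo (emp_id number, last_name varchar2(50));
create index emp_name_ix on employees_demo (last_name);

-- Ask the optimizer to drive the LIKE predicate through the index.
select /*+ INDEX(e emp_name_ix) */ emp_id, last_name
from employees_demo e
where last_name like 'SM%';
```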

2)  APPEND

APPEND only works with INSERT INTO table1 SELECT * FROM table2;
It really doesn't make sense for a single-row INSERT ... VALUES statement.



Jun 21, 2015

Oracle Questions


1) Please find in this link some Oracle questions you can practice on yourself.


2) Questions




Apr 29, 2015

ORA-01555 : snapshot too old

ORA-01555 : snapshot too old : rollback segment number 60 with name

This happens because undo (rollback) records needed by a reader for a consistent read have been overwritten by other writers.
This error normally occurs if you are running a very big transaction (like activating millions of records in an ODS) and the undo retention or undo tablespace is too small. Check the current setting, then enlarge undo and run your transaction again.

sql> show parameter undo_retention

1- Increase the setting of UNDO_RETENTION:

SQL> alter system set undo_retention=(new-value);

2- Increase the UNDO tablespace that contains the rollback segments, or set its datafiles to autoextend:

alter tablespace UNDO add datafile '(name-of-the-new-datafile)' size (size-value)M;
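To judge how large UNDO_RETENTION actually needs to be, one common check is V$UNDOSTAT, which records undo statistics in ten-minute intervals (a sketch):

```sql
-- UNDO_RETENTION (in seconds) should comfortably exceed the longest-running
-- query observed; peak undo block usage helps size the undo tablespace.
select max(maxquerylen) longest_query_sec,
       max(undoblks)    peak_undo_blocks
from v$undostat;
```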

Crazy Query

Do you have any idea how these queries work?


 select * from user_details d where exists (select 1/0 from dual x where d.first_name =x.dummy) ;

 select * from user_details d where exists (select 1/0 from dual ) ;

Query to get Hr, Mi, Sec
 

select trunc((sysdate-created)*24) "Hr",trunc(mod( (sysdate-created)*24*60,60)) "Mi",
 trunc( mod( (sysdate-created)*24*60*60,60 )) "Sec"
from all_users where rownum < 10 ;

ORA-01450: maximum key length

create table ir_temp_feeload (s_category varchar2(800),s_cov varchar2(800),v_rate varchar2(10) ) ;

 alter table ir_temp_feeload  add constraint uk_ unique(s_category,s_cov) ;

ORA-01450: maximum key length (6398) exceeded

This error usually points to a bad database design.
It occurs when the key length of an index exceeds the limit (6398 bytes here; the limit depends on the database block size).

key length = sum of all column lengths in index + number of columns + 2 (key length) + 6 (restricted ROWID) + 1 (ROWID field length)


Here = (800+800) + 2 + 2 + 6 + 1 = 1611 (the unique key has two columns)

select * from nls_database_parameters where parameter like '%SET%';

AL32UTF8 is a multi-byte character set with up to 4 bytes per character

So the key length = 1611*4 = 6444, which exceeds the 6398 limit

 Solution :
---------

As a general rule, indexing very large columns (RAW, LONG, CLOB) is rarely useful, because the optimizer will almost always find a full-table scan cheaper than invoking an index on a long column value.

It is better to create a lookup table that holds all the distinct values of the large text, keyed by a numeric id. Then you reference that id from the main table with a foreign key, or use it as the primary key.
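A sketch of that design (all names here are illustrative):

```sql
-- Lookup table holding the distinct large-text values, keyed by a number.
create table category_lkp (
  cat_id     number primary key,
  s_category varchar2(800),
  s_cov      varchar2(800)
);

-- Main table references the short numeric key instead of the long strings,
-- so any unique index on it stays far below the 6398-byte limit.
create table ir_fee_main (
  cat_id number references category_lkp (cat_id),
  v_rate varchar2(10)
);
```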

Apr 25, 2015

Oracle Forms to Oracle ADF

It's high time for Oracle Forms developers to learn Oracle ADF or Oracle APEX if they want to stay in front-end development; otherwise, they can polish their PL/SQL and DBA skills.

Those who know Java and have ample time and patience to study should go for Oracle ADF, so they can become stars in both Oracle ADF and Java.

Those who don't have the patience or time to study should at least try to learn Oracle APEX. These are my suggestions; take them if they are useful, else please excuse me.

Here I will try to give some idea about Oracle ADF.

First, we can check the task of a framework (or, what is a framework?)


Oracle ADF Framework


Migrating Oracle Forms to ADF


Features both ADF & Forms has 


Is it Simple to create Web Page in ADF

ORACLE ARCHITECTURE

Normally, one database is serviced by one instance, but in the case of clustering there can be many instances running for a single database.
The System Identifier, or SID, is the name given to the database instance.

There are two major components to the software: the TNS Listener (Transparent Network Substrate) and the relational database management system (RDBMS) itself.
The TNS Listener is the hub of all Oracle communications.
When a database instance is brought up or started, it registers with the TNS Listener.
When a client wishes to access the database, it first connects to the Listener, and the Listener then redirects the client to the database server. When the RDBMS wants to launch an external procedure, it also connects first to the Listener, which in turn launches a program called extproc. One exception to this is the external jobs launched by the database's job scheduler; here the RDBMS connects directly to the external job server.

The Listener comprises two binaries: (1) tnslsnr, which is the Listener itself, and (2) the Listener Control Utility
(lsnrctl), which is used to administer the Listener on the server or remotely.

 Transparent Network Substrate (TNS) is the network protocol used by Oracle for connectivity to Oracle Databases.

The lsnrctl program is the mechanism for starting and stopping the listener process (tnslsnr).
On startup, tnslsnr reads the listener.ora and sqlnet.ora files for configuration information, such as
port numbers and database service names.

The tnslsnr process starts with the same process owner as the lsnrctl program.


A database instance describes all the processes and memory structures that provide access to the database.
There are two kinds of processes - background and shadow, or server. Shadow or server processes serve client requests. In other words, when a client connects
to the TNS Listener and requests access to database services, the Listener hands them off to a server process. This server process takes SQL queries and
executes them on behalf of the client. Background processes exist to support this. There are a number of different background processes each with a different role,
including the Database Writer, the Log Writer, the Archiver, the System Monitor, and the Process Monitor, among others.

When a local user attempts to connect to Oracle on Windows, it does so over named pipes. Four threads are created in the main server process to handle the
communication between the client and the server. These four threads have a Discretionary Access Control List (DACL) that gives the user permission to open the thread.

On Unix platforms, each of these background processes is a separate operating-system process. On Windows, they're all
wrapped up into one larger process - namely, oracle.exe. There is a special area of memory that is mapped among all processes on Unix called the System Global Area
(SGA). The SGA is implemented as a memory-mapped file (section) and contains information pertaining to the instance and the database. It also contains an area known
as the shared pool, which contains structures shared among all users, such as table definitions and the like.

Materialized view

A materialized view is a stored summary containing precomputed results. Materialized views allow for significantly faster data warehouse query processing. The Oracle database server automatically rewrites queries to use the summary data, rather than retrieving data from detail tables by doing expensive joins and aggregate operations. This query rewrite facility is totally transparent to the application, which is not aware of the existence of the materialized view.

The DBA's first step in creating materialized views is to define dimensions. These represent the hierarchies that are present in the real world; for instance, multiple months make up a quarter, multiple districts make up a region, etc. The CREATE MATERIALIZED VIEW statement is used to create a materialized view. This statement includes a subquery, typically a join or a data aggregation (GROUP BY), the results of which comprise the materialized view.

A materialized view is maintained by a refresh process. The refresh process can be done automatically when a commit is done on a detail table, or it can be controlled manually by the DBA. A refresh is specified as complete or incremental. A complete refresh truncates existing data, then repopulates the summary with new data from the detail tables. An incremental refresh updates only changed data.
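A minimal sketch, assuming a hypothetical sales detail table (fast refresh on commit has additional prerequisites that vary by version, such as the materialized view log shown here):

```sql
-- Log on the detail table, required for incremental (fast) refresh.
create materialized view log on sales
  with rowid (prod_id, amount) including new values;

-- Precomputed summary: refreshed incrementally on commit and
-- eligible for transparent query rewrite.
create materialized view sales_by_prod
  build immediate
  refresh fast on commit
  enable query rewrite
as
select prod_id,
       sum(amount)   total_amount,
       count(amount) cnt_amount,
       count(*)      cnt
from sales
group by prod_id;
```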


To create a materialized view in a user's own schema, the user must have the CREATE MATERIALIZED VIEW, CREATE TABLE, CREATE INDEX, and CREATE VIEW system privileges. To create a materialized view in another user's schema, a user must have the CREATE ANY MATERIALIZED VIEW system privilege.

Feb 4, 2015

Integration of Hadoop with Oracle


Oracle Database provides the flexibility to leverage programming language functionality within the database without having to write complex SQL statements by using user defined functions known as Table Functions.

The Map Reduce programming model can be implemented within the Oracle Database using Parallel Pipelined Table Functions and parallel operations. It is possible to write parallel processing tasks as database queries using user defined Table Functions to aggregate or filter the data. Pipelined Table Functions were introduced in Oracle 9i as a way of embedding procedural logic within a data flow. At a logical level, a Table Function is a user defined function that appears in the FROM clause of a SQL statement and operates like a table returning a stream of rows.

This mechanism provides an alternative for SQL developers to perform very complex processing in a procedural way, not easily expressed with SQL. It also follows the Map Reduce paradigm, enabling massively parallel processing within the realm of the database.
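As an illustration of the idea, here is a toy "map" step that filters even numbers procedurally inside a data flow (all names are made up):

```sql
create type num_tab as table of number;
/
-- Pipelined, parallel-enabled table function: procedural logic in the FROM clause.
create or replace function keep_evens(p_cur sys_refcursor)
  return num_tab pipelined
  parallel_enable (partition p_cur by any)
is
  l_n number;
begin
  loop
    fetch p_cur into l_n;
    exit when p_cur%notfound;
    if mod(l_n, 2) = 0 then
      pipe row (l_n);   -- emit rows as they are produced
    end if;
  end loop;
  return;
end;
/
-- Used like a table in a SQL statement.
select * from table(keep_evens(cursor(
  select level from dual connect by level <= 10)));
```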

Feeding Hadoop Data to the Database for Further Analysis

External tables present data stored in a file system in a table format, and can be used in SQL queries transparently. Hadoop data stored in HDFS can be accessed from inside the Oracle Database by using External Tables through the use of FUSE (File system in User Space) project driver to provide the application programming interface between HDFS and the External Table infrastructure. Using the External Table makes it easier for non-programmers to work with Hadoop data from inside an Oracle Database.
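A sketch of the external-table side (the FUSE mount point, file name, and columns are assumptions for illustration):

```sql
-- Directory object pointing at an HDFS path exposed through a FUSE mount.
create directory hdfs_dir as '/mnt/hdfs/user/logs';

create table hdfs_logs_ext (
  log_ts  varchar2(30),
  message varchar2(4000)
)
organization external (
  type oracle_loader
  default directory hdfs_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('logs.csv')
);

-- Hadoop data is now queryable with plain SQL.
select count(*) from hdfs_logs_ext;
```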

Leveraging Hadoop Processing From the Database

In the event that you need to process some data from Hadoop before it can be correlated with the data from your database, you can control the execution of the Map Reduce programs through a table function, using the DBMS_SCHEDULER framework to asynchronously launch an external shell script that submits a Hadoop Map Reduce job. The table function and the Map Reduce program communicate using Oracle's Advanced Queuing feature.

By leveraging Hadoop processing from the database, you can take advantage of the power of the Oracle RDBMS while simplifying the analysis of data stored in a Hadoop cluster, streaming it directly from Hadoop with Oracle queues and table functions.