Wednesday, 23 August 2017

printing values from ref cursor using anonymous block

------------------------------------simple cursor------------------------------------------------------------------
set serveroutput on;
declare
cursor c_emp is
select ename,deptno,hiredate from emp
where months_between(sysdate,hiredate)/12 > 30
order by hiredate desc;
begin
for c_var in c_emp
loop
dbms_output.put_line(c_var.ename||' '||c_var.deptno||' '||c_var.hiredate);
end loop;
end;
/
-------------------------------------using for loop sub-query ------------------------------------------------------
set serveroutput on;
declare

begin
for c_var in (
select ename,deptno,hiredate from emp
where months_between(sysdate,hiredate)/12 > 30
order by hiredate desc)
loop
dbms_output.put_line(c_var.ename||' '||c_var.deptno||' '||c_var.hiredate);
end loop;
end;
/
----------------------using a cursor for DML operations across multiple rows---------------------------------
set serveroutput on;
declare

begin
for c_var in (
select empno,sal,ename,deptno,hiredate from emp
where months_between(sysdate,hiredate)/12 > 30
order by hiredate desc)
loop
update emp set sal=c_var.sal+100 where empno=c_var.empno;
--dbms_output.put_line(c_var.empno||'  '||c_var.sal||' '||c_var.ename||' '||c_var.deptno||' '||c_var.hiredate);
end loop;
end;
/
select * from emp;
-----------------------------------displaying multiple rows using a ref cursor----------------------------
create or replace
procedure ref_proc
(po_result out SYS_REFCURSOR )

as
begin
open po_result for
select empno,sal,ename,deptno,hiredate from emp
where months_between(sysdate,hiredate)/12 > 30
order by hiredate desc;

end;
/
-- -----------------displaying values from ref cursor using anonymous block---------------------
set serveroutput on;
set autoprint on;
var r refcursor;
BEGIN
  -- SET AUTOPRINT ON (above) makes SQL*Plus display the result set held in :r automatically
  REF_PROC(
    PO_RESULT => :r
  );
END;
/
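If you want to print the rows yourself instead of relying on AUTOPRINT, you can fetch from the ref cursor inside the block (a sketch; the fetch list mirrors the column order of the SELECT inside ref_proc):

declare
  l_cur      sys_refcursor;
  l_empno    emp.empno%type;
  l_sal      emp.sal%type;
  l_ename    emp.ename%type;
  l_deptno   emp.deptno%type;
  l_hiredate emp.hiredate%type;
begin
  ref_proc(po_result => l_cur);
  loop
    fetch l_cur into l_empno, l_sal, l_ename, l_deptno, l_hiredate;
    exit when l_cur%notfound;
    dbms_output.put_line(l_empno||' '||l_ename||' '||l_sal||' '||l_deptno||' '||l_hiredate);
  end loop;
  close l_cur;
end;
/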


How to see and delete duplicate rows

select mgr,count(*) from emp
group by mgr
having count(*) >1;

MGR   COUNT(*)
----- --------
7839         3
7698         5
7566         2

DELETE FROM your_table
WHERE rowid not in
(SELECT MIN(rowid)
FROM your_table
GROUP BY column1, column2, column3);

Example:
select * from emp
where rowid not in (select min(rowid) from emp group by mgr);
EMPNO ENAME  JOB       MGR  HIREDATE   SAL  COMM DEPTNO
----- ------ --------- ---- ---------- ---- ---- ------
 7844 TURNER SALESMAN  7698 08-09-1981 1500    0     30
 7900 JAMES  CLERK     7698 03-12-1981  950          30
 7654 MARTIN SALESMAN  7698 28-09-1981 1250 1400     30
 7698 BLAKE  MANAGER   7839 01-05-1981 2850          30
 7902 FORD   ANALYST   7566 03-12-1981 3000          20
 7521 WARD   SALESMAN  7698 22-02-1981 1250  500     30
 7782 CLARK  MANAGER   7839 09-06-1981 2450          10
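The matching DELETE keeps the row with the lowest ROWID for each MGR value and removes the rest (a sketch; the ROLLBACK at the end is there so the demo data is not lost):

delete from emp
where rowid not in (select min(rowid) from emp group by mgr);
rollback;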

Wednesday, 14 June 2017

CLUSTER


CLUSTER
Definition:
In Oracle, clustering is a way of ordering the physical placement of tables on a disk drive in such a way as
to speed up I/O access times. This is accomplished by sharing data blocks on the disk. Anything that
minimizes I/O or reduces contention in the physical file system will improve the overall performance of
the database.
A table cluster can be described as a set of tables that share the same data blocks, and that are grouped
together because they share common columns that are often used together. Index cluster tables can be
either multi-table or single-table.
When cluster tables are created, Oracle stores all of the rows for each of the tables in the same data
blocks. The cluster key value is the value of the cluster key columns for a particular row.
An index cluster uses an index to maintain row sequence, as opposed to a hash cluster, where the
symbolic key is hashed to the data block address.
A cluster provides an optional method of storing table data. A cluster is made up of a group of tables
that share the same data blocks. The tables are grouped together because they share common columns
and are often used together. For example, the emp and dept table share the deptno column. When you
cluster the emp and dept tables, Oracle Database physically stores all rows for each
department from both the emp and dept tables in the same data blocks.
Because clusters store related rows of different tables together in the same data blocks, properly used
clusters offer two primary benefits:
• Disk I/O is reduced and access time improves for joins of clustered tables.
• The cluster key is the column, or group of columns, that the clustered
tables have in common. You specify the columns of the cluster key when
creating the cluster. You subsequently specify the same columns when
creating every table added to the cluster. Each cluster key value is stored
only once each in the cluster and the cluster index, no matter how many rows
of different tables contain the value.
Therefore, less storage might be required to store related table and index data in a cluster than is
necessary in non-clustered table format. For example, in Figure 1, notice how each cluster key (each
deptno) is stored just once for many rows that contain the same value in both the emp and dept tables.
After creating a cluster, you can create tables in the cluster. However, before any rows can be inserted
into the clustered tables, a cluster index must be created. Using clusters does not affect the creation of
additional indexes on the clustered tables; they can be created and dropped as usual.
You should not use clusters for tables that are frequently accessed individually.

Data Dictionary Objects
user_clusters
user_clu_columns
user_cluster_hash_expressions
user_all_tables
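For example, once the cluster from the example below has been created, you can confirm it through the dictionary (a sketch; verify the exact column names against your database version):

select cluster_name, tablespace_name
from user_clusters;

select cluster_name, table_name, tab_column_name
from user_clu_columns;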


Clustering is a method of storing tables that are intimately related and often joined together into the
same area on disk. The cluster key is the column or columns by which the tables are usually joined in a
query
By storing the columns comprising the cluster key once instead of multiple times, storage is saved. The
arguably more significant advantage of clustering is to expedite join queries. When a query joins the
two tables on the cluster key, the joined rows can be fetched with a single I/O operation.
A cluster is a data structure that improves retrieval performance
Example:
----------------------------------------------------------

create cluster empdept (did number(2));
----------------------------------------------------------
create index empdept_indx on cluster empdept;
----------------------------------------------------------
create table emp
(
eid number(10),
ename varchar2(100),
did number(2)
)
cluster empdept(did);
----------------------------------------------------------
create table dept
(
did number(2),
dname varchar2(100)
)
cluster empdept(did);
----------------------------------------------------------
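To see the intended benefit, you could load a few rows and join the two clustered tables on the cluster key; rows with the same did value from both tables sit in the same data blocks, so the join needs fewer block reads (a sketch; the sample data is made up):

insert into dept values (10, 'ACCOUNTING');
insert into emp values (1, 'KING', 10);
insert into emp values (2, 'BLAKE', 10);
commit;

select e.ename, d.dname
from emp e, dept d
where e.did = d.did
and d.did = 10;
----------------------------------------------------------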

Monday, 5 June 2017

PL/SQL packages


PL/SQL packages are schema objects that group logically related PL/SQL types, variables, and subprograms.
A package will have two mandatory parts:
•    Package specification
•    Package body or definition
Package Specification
The specification is the interface to the package. It just DECLARES the types, variables, constants, exceptions, cursors, and subprograms that can be referenced from outside the package. In other words, it contains all information about the content of the package, but excludes the code for the subprograms.
All objects placed in the specification are called public objects. Any subprogram not in the package specification but coded in the package body is called a private object.
The following code snippet shows a package specification having a single procedure. You can have many global variables defined and multiple procedures or functions inside a package.
CREATE PACKAGE cust_sal AS
   PROCEDURE find_sal(c_id customers.id%type);
END cust_sal;
/
When the above code is executed at SQL prompt, it produces the following result:
Package created.
Package Body
The package body has the codes for various methods declared in the package specification and other private declarations, which are hidden from code outside the package.
The CREATE PACKAGE BODY statement is used for creating the package body. The following code snippet shows the package body declaration for the cust_sal package created above. It assumes that we already have the CUSTOMERS table created in our database, as mentioned in the PL/SQL - Variables chapter.
CREATE OR REPLACE PACKAGE BODY cust_sal AS
   PROCEDURE find_sal(c_id customers.id%TYPE) IS
   c_sal customers.salary%TYPE;
   BEGIN
      SELECT salary INTO c_sal
      FROM customers
      WHERE id = c_id;
      dbms_output.put_line('Salary: '|| c_sal);
   END find_sal;
END cust_sal;
/
When the above code is executed at SQL prompt, it produces the following result:
Package body created.
Using the Package Elements
The package elements (variables, procedures or functions) are accessed with the following syntax:
package_name.element_name;
Assuming we have already created the above package in our database schema, the following program uses the find_sal method of the cust_sal package:
DECLARE
   code customers.id%type := &cc_id;
BEGIN
   cust_sal.find_sal(code);
END;
/
When the above code is executed at the SQL prompt, it prompts you to enter a customer ID; when you enter an ID, it displays the corresponding salary as follows:
Enter value for cc_id: 1
Salary: 3000

PL/SQL procedure successfully completed.
Example:
The following program provides a more complete package. We will use the CUSTOMERS table stored in our database with the following records:
Select * from customers;

+----+----------+-----+-----------+----------+
| ID | NAME     | AGE | ADDRESS   | SALARY   |
+----+----------+-----+-----------+----------+
|  1 | Ramesh   |  32 | Ahmedabad |  3000.00 |
|  2 | Khilan   |  25 | Delhi     |  3000.00 |
|  3 | kaushik  |  23 | Kota      |  3000.00 |
|  4 | Chaitali |  25 | Mumbai    |  7500.00 |
|  5 | Hardik   |  27 | Bhopal    |  9500.00 |
|  6 | Komal    |  22 | MP        |  5500.00 |
+----+----------+-----+-----------+----------+
THE PACKAGE SPECIFICATION:
CREATE OR REPLACE PACKAGE c_package AS
   -- Adds a customer
   PROCEDURE addCustomer(c_id   customers.id%type,
   c_name  customers.name%type,
   c_age  customers.age%type,
   c_addr customers.address%type,
   c_sal  customers.salary%type);
 
   -- Removes a customer
   PROCEDURE delCustomer(c_id  customers.id%TYPE);
   --Lists all customers
   PROCEDURE listCustomer;

END c_package;
/
When the above code is executed at SQL prompt, it creates the above package and displays the following result:
Package created.
CREATING THE PACKAGE BODY:
CREATE OR REPLACE PACKAGE BODY c_package AS
   PROCEDURE addCustomer(c_id  customers.id%type,
      c_name customers.name%type,
      c_age  customers.age%type,
      c_addr  customers.address%type,
      c_sal   customers.salary%type)
   IS
   BEGIN
      INSERT INTO customers (id,name,age,address,salary)
         VALUES(c_id, c_name, c_age, c_addr, c_sal);
   END addCustomer;
 
   PROCEDURE delCustomer(c_id   customers.id%type) IS
   BEGIN
       DELETE FROM customers
         WHERE id = c_id;
   END delCustomer;

   PROCEDURE listCustomer IS
   CURSOR c_customers is
      SELECT  name FROM customers;
   TYPE c_list is TABLE OF customers.name%type;
   name_list c_list := c_list();
   counter integer :=0;
   BEGIN
      FOR n IN c_customers LOOP
      counter := counter +1;
      name_list.extend;
      name_list(counter)  := n.name;
      dbms_output.put_line('Customer(' ||counter|| '): '||name_list(counter));
      END LOOP;
   END listCustomer;
END c_package;
/
The above example makes use of a nested table, which we will discuss in the next chapter. When the above code is executed at the SQL prompt, it produces the following result:
Package body created.
USING THE PACKAGE:
The following program uses the methods declared and defined in the package c_package.
DECLARE
   code customers.id%type:= 8;
BEGIN
      c_package.addcustomer(7, 'Rajnish', 25, 'Chennai', 3500);
      c_package.addcustomer(8, 'Subham', 32, 'Delhi', 7500);
      c_package.listcustomer;
      c_package.delcustomer(code);
      c_package.listcustomer;
END;
/
When the above code is executed at SQL prompt, it produces the following result:
Customer(1): Ramesh
Customer(2): Khilan
Customer(3): kaushik   
Customer(4): Chaitali
Customer(5): Hardik
Customer(6): Komal
Customer(7): Rajnish
Customer(8): Subham
Customer(1): Ramesh
Customer(2): Khilan
Customer(3): kaushik   
Customer(4): Chaitali
Customer(5): Hardik
Customer(6): Komal
Customer(7): Rajnish

PL/SQL procedure successfully completed

Before we write any code, we need to define what our package actually does; here is a description for
our package:-
• Maintain a bank balance
• Make withdrawals (reduce balance)
• Make deposits (increase balance)
• Query balance
Also, we can define some business rules at this point:-
• Balance cannot be overdrawn
• There should be some kind of limit to how much can be withdrawn per transaction.
We now have the definition of the bank account package; now for its implementation.
First, you need to create the package specification; this is done with the following statement:-
CREATE OR REPLACE PACKAGE package_name
IS|AS
<package-specification>
END [package name]
The name can be almost anything you like; the same restrictions apply as with all identifiers. Try to make the name useful: generally the package name will relate to what its contents do. For example, if a package contains functions and procedures for working with bank accounts, such as withdraw and deposit, then a good package name might be bank_account. The package specification can contain anything that would normally appear in the declarative section of a block of code (except for function and procedure bodies; that code goes in the body).
The package specification and body can appear in a single file or in separate files. The usual practice is to give the source filename an extension of pkg for a single file, or pks (pkh) and pkb for separate files.
Looking at the description for our bank account
package, we can see that:-

• We require 3 public functions or procedures:-
1. Withdrawal
2. Deposit
3. Balance query

• We require private data
1. Current balance
2. Maximum allowed per withdrawal

Using packages, we can protect our data; by making data such as balance private and providing procedures or functions to access this data, we are assured that nothing else can modify this data.
Now, let's create our package specification:-
CREATE OR REPLACE PACKAGE bank_account
IS
-- Procedure for making a withdrawal
PROCEDURE withdraw(p_amount IN NUMBER);
-- Procedure for making a deposit
PROCEDURE deposit(p_amount IN NUMBER);
-- Function to query the balance
FUNCTION balance RETURN NUMBER;
END;
That's it! Our package specification is done. This can be thought of as the public interface for our package, i.e. anything that appears in here is visible and within scope for all programs outside the package. Keeping the package specification in a separate file facilitates modifications later: you can change the body of a package (so long as the spec is not affected) without affecting any programs which use the package. It also allows you to hide your implementation. We now need to create the package body; this is where the actual code appears for all public subprograms as well as any other private objects. You create the body with the following statement:-
CREATE OR REPLACE PACKAGE BODY package_name
IS|AS
<package-body>
END [package name]
As you can see, the only major difference for the body is the inclusion of the word BODY in the first line.

The following code will create our package body:-
CREATE OR REPLACE PACKAGE BODY bank_account
IS
--
-- Private data objects
--
-- Hold account balance, initialised to 0
v_balance NUMBER := 0;
-- Hold maximum withdrawal amount
v_max_withdrawal CONSTANT NUMBER := 250;
--
-- Private functions/procedures
--
-- Print text to screen
PROCEDURE p(p_text IN VARCHAR2)
IS
BEGIN
DBMS_OUTPUT.put_line(p_text);
END;
--
-- Public function/procedure definitions
--
-- Procedure for making a withdrawal
PROCEDURE withdraw(p_amount IN NUMBER)
IS
l_new_balance NUMBER:= v_balance - p_amount;
BEGIN
IF p_amount > v_max_withdrawal THEN
-- Ensure amount is within limit
p('Withdrawals limited to '||
TO_CHAR(v_max_withdrawal)||
' per transaction');
ELSIF l_new_balance < 0 THEN
-- No overdrafts allowed
p('No overdraft available');
p('Cash available : '
||TO_CHAR(v_balance));
ELSE
v_balance := v_balance - p_amount;
p('Here is your cash!');
END IF;
END withdraw;
-- Procedure for making a deposit
PROCEDURE deposit(p_amount IN NUMBER)
IS
BEGIN
IF p_amount <= 0 THEN
p('Deposit must be at least £1');
ELSE
v_balance := v_balance + p_amount;
p('Thank you!');
END IF;
END;
-- Function to query the balance
FUNCTION balance RETURN NUMBER
IS
BEGIN
RETURN v_balance;
END;
END;
To make use of this package, you might do the following:-
BEGIN
DBMS_OUTPUT.put_line(BANK_ACCOUNT.balance);
BANK_ACCOUNT.withdraw(100);
BANK_ACCOUNT.deposit(500);
DBMS_OUTPUT.put_line(BANK_ACCOUNT.balance);
BANK_ACCOUNT.withdraw(275);
DBMS_OUTPUT.put_line(BANK_ACCOUNT.balance);
END;

Wednesday, 31 May 2017

PRAGMA EXCEPTION_INIT in oracle pl/sql

PRAGMA EXCEPTION_INIT
A pragma is a special instruction to the PL/SQL compiler. It is important to note that pragmas
are processed at the time of the compilation. The EXCEPTION_INIT pragma allows you to associate an
Oracle error number with the name of a user-defined error. After you associate an error name with an
Oracle error number, you can reference the error and write a handler for it.
/*
PLSQL Program to demonstrate how to handle a non-predefined Oracle error using PRAGMA EXCEPTION_INIT
*/
DECLARE
E_EmpRemaining EXCEPTION;
PRAGMA EXCEPTION_INIT ( E_EmpRemaining , -2292 );
V_Deptno Emp.Deptno%TYPE := &Deptno;
BEGIN
DELETE FROM Dept
WHERE Deptno = V_Deptno;
IF SQL%NOTFOUND THEN
DBMS_OUTPUT.PUT_LINE('The given information is not found in the Database. Check
for proper values........');
ELSE
NULL;
END IF;
ROLLBACK;
EXCEPTION
WHEN E_EmpRemaining THEN
DBMS_OUTPUT.PUT_LINE ( 'Unable to delete the Department Number ' || V_Deptno ||
' as Employees still exist in it. Validate your relations and then try once again.' );
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE( ' The given information is missing in the Database . Check for
proper values.......' );
END;

/
/*
PLSQL Program to demonstrate PRAGMA.
*/
DECLARE
E_EmpExists EXCEPTION;
V_Count NUMBER(2);
V_SalSum NUMBER(6);
PRAGMA EXCEPTION_INIT ( E_EmpExists , -2292);
V_Empno Emp.Empno%TYPE := &Empno;
BEGIN
DELETE FROM Emp
WHERE Empno = V_Empno;
IF SQL%NOTFOUND THEN
DBMS_OUTPUT.PUT_LINE (' The given Employee Number '|| V_Empno || ' is missing
in the Database . Check for proper values.......');
ROLLBACK;
ELSE
COMMIT;
END IF;
EXCEPTION
WHEN E_EmpExists THEN
DBMS_OUTPUT.PUT_LINE( ' Unable to delete the Employee Details '|| V_Empno|| ' as
dependent records exist for this Employee. Validate your relations and then try once again...........' );
WHEN NO_DATA_FOUND THEN

DBMS_OUTPUT.PUT_LINE ( ' The given information is missing in the Database . Please check for
proper values................' );
END;
/
/*
PLSQL Program to demonstrate PRAGMA.
*/
DECLARE
E_NotNullViolation EXCEPTION;
PRAGMA EXCEPTION_INIT( E_NotNullViolation , -1400 );
BEGIN
INSERT INTO Emp ( Empno , Ename , Job , Sal , Comm , Deptno )
VALUES( &Empno , 'HIMAN' , 'ANALYST', 2500 , NULL, &Deptno);
COMMIT;
EXCEPTION
WHEN E_NotNullViolation THEN
DBMS_OUTPUT.PUT_LINE(' A field which cannot be NULL is not attended . Please check
properly......');
END;
/
/*
PLSQL Program to demonstrate PRAGMA.
*/
DECLARE
E_NotNullViolation EXCEPTION;
PRAGMA EXCEPTION_INIT( E_NotNullViolation , -1400);

E_CheckViolation EXCEPTION;
PRAGMA EXCEPTION_INIT( E_CheckViolation , -2290);
BEGIN
INSERT INTO Emp VALUES
( &Empno , '&Ename' , '&Job' , &MGR , SYSDATE , &Sal , &Comm , &Deptno);
COMMIT;
EXCEPTION
WHEN E_CheckViolation THEN
DBMS_OUTPUT.PUT_LINE ( ' A field with CHECK Constraint is not attended
properly..............' );
WHEN E_NotNullViolation THEN
DBMS_OUTPUT.PUT_LINE( ' A Field with NOT NULL Constraint is not attended
properly...........' );
END;
/

DDL and DML operation in oracle

DDL
Data Definition Language
An Oracle database is comprised of the physical structures used to store,
manage, secure, and retrieve data.
A tablespace is the logical structure that allows you to manage a group of
datafiles.
Datafiles are the physical datafiles on disk.
The table is the object that stores data in a database.
Oracle provides a wide variety of table types. The default table type is heap-organized.
For most applications, a heap-organized table is an effective
structure for storing and retrieving data.
More on Datatypes---
Note: If you insert values into a NUMBER(n,m) column and you exceed precision n, you get an error
message. If you exceed scale m, the Oracle DBMS rounds the value.
The Oracle DBMS supports many datatype synonyms for portability with other DBMS
implementations and for compliance with the ANSI/ISO standard. For example, CHARACTER is
identical to CHAR; DECIMAL(n,m) is identical to NUMBER(n,m); and NUMBER even has multiple synonyms,
such as INTEGER, REAL, and SMALLINT.
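As a quick illustration of the Note above (a sketch; the table name and values are made up for the demonstration):

SQL> CREATE TABLE num_demo (amount NUMBER(5,2));
SQL> INSERT INTO num_demo VALUES (123.456);   -- exceeds scale 2: the value is rounded and stored as 123.46
SQL> INSERT INTO num_demo VALUES (123456.7);  -- exceeds precision 5: fails with ORA-01438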

Each Oracle datatype has its own precision or length limits; for example, the CHAR and VARCHAR2
datatypes are limited to 2000 and 4000, respectively. You might wonder in which unit these numbers
are expressed. That depends on the value of the NLS_LENGTH_SEMANTICS parameter. The default for the
Oracle DBMS is to use BYTE length semantics.
Creating a Heap-Organized Table
You use the CREATE TABLE statement to create tables. When creating a table, at minimum you must
specify the table name, column name(s), and data types associated with the columns. The Oracle default

table type is heap-organized. The term heap means that the data isn't stored in a specific order in the
table (instead, it's a heap of data). Here's a simple example of creating a heap-organized table with four
columns:
create table d_sources(
d_source_id number not null,
source_type varchar2(32),
create_dtt date default sysdate not null,
update_dtt timestamp(5)
);
If you donft specify a tablespace, then the table is created in the default permanent tablespace of the
user that creates the table
Implementing Virtual Columns
With Oracle Database 11g and higher, you can create a virtual column as part of your table
definition. A virtual column is based on one or more existing columns from the same table
and/or a combination of constants, SQL functions, and user-defined PL/SQL functions.
Virtual columns arenft stored on disk; theyfre evaluated at runtime when the SQL query
executes. Virtual columns can be indexed and can have stored statistics.
create table inv(
inv_id number
,inv_count number
,inv_status as (
case when inv_count <= 100 then 'GETTING LOW'
when inv_count > 100 then 'OKAY'
end)
);
SQL> insert into inv (inv_id, inv_count) values (1,100);
SQL> select * from inv;
Here is some sample output:
INV_ID INV_COUNT INV_STATUS
---------- ---------- -----------
1 100 GETTING LOW
Creating a Table from a Query
create table cwp_user_profile_101910
as select * from cwp_user_profile;
create table cwp_user_profile_test
as select * from cwp_user_profile
where 1=2;
Modifying a Table
Renaming a Table
There are a couple of reasons for renaming a table:
• To make the table conform to standards
• To better determine whether the table is being used before you drop it
This example renames a table from INV_MGMT to INV_MGMT_OLD:
SQL> rename inv_mgmt to inv_mgmt_old;
Adding a Column
Use the ALTER TABLE ... ADD statement to add a column to a table. This example adds a column to the
INV table:
SQL> alter table inv add(inv_count number);
Altering a Column
Occasionally, you need to alter a column to adjust its size or change its data type. Use the
ALTER TABLE ... MODIFY statement to adjust the size of a column. This example changes the size of a
column to 256 characters:
SQL> alter table inv modify inv_desc varchar2(256);
If any rows have a NULL value for the column you're modifying to NOT NULL, then you must
first update the column to contain a value. Here is an example of modifying a column to
NOT NULL:
SQL> alter table inv modify(inv_desc not null);
You can also alter the column to have a default value. The default value is used any time a
record is inserted into the table but no value is provided for the column:
SQL> alter table inv modify(inv_desc default 'No Desc');
If you want to remove the default value of a column, then set it to be NULL:
SQL> alter table inv modify(inv_desc default NULL);
Renaming a Column
There are a couple of reasons to rename a column:
• Sometimes requirements change, and you want to modify the column name to better reflect what the column is used for.
• If you're planning to drop a column, it doesn't hurt to rename the column first to better determine whether any users or applications are accessing the column.
SQL> alter table inv rename column inv_count to inv_amt;
Dropping a Column
SQL> alter table inv drop (inv_name);
SQL> alter table inv set unused (inv_name);

SQL> alter table inv drop unused columns;
Displaying Table DDL
SQL> set long 10000
SQL> select dbms_metadata.get_ddl('TABLE','EMP') from dual;
Dropping a Table
SQL> drop table inventory;
If you attempt to drop a parent table that has a primary key or unique keys referenced as a foreign
key in a child table, you see an error such as
ORA-02449: unique/primary keys in table referenced by foreign keys
You need to either drop the referenced foreign-key constraint(s) or use the CASCADE CONSTRAINTS
option when dropping the parent table:
SQL> drop table inventory cascade constraints;
You must be the owner of the table or have the DROP ANY TABLE system privilege to drop a table. If
you have the DROP ANY TABLE privilege, you can drop a table in a different schema by prepending the
schema name to the table name:
SQL> drop table inv_mgmt.inventory;
Undropping a Table
Suppose you accidentally drop a table, and you want to restore it. First, verify that the table you want to
restore is in the recycle bin:
SQL> show recyclebin;
Here is some sample output:
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
---------------- ------------------------------ ------------ -------------------
PURCHASES BIN$YzqK0hN3Fh/gQHdAPLFgMA==$0 TABLE 2009-02-18:17:23:15
Next, use the FLASHBACK TABLE...TO BEFORE DROP statement to recover the dropped table:
SQL> flashback table purchases to before drop;
Note You canft FLASHBACK TABLE...TO BEFORE DROP for a table created in the SYSTEM tablespace
Removing Data from a Table
You can use either the DELETE statement or the TRUNCATE statement to remove records from a table. You
need to be aware of some important differences between these two approaches
Features of DELETE and TRUNCATE

Using DELETE
One big difference is that the DELETE statement can be either committed or rolled back. Committing a
DELETE statement makes the changes permanent:
SQL> delete from inv;
SQL> commit;
If you issue a ROLLBACK statement instead of COMMIT, the table contains data as it was before the
DELETE was issued.
Using TRUNCATE
TRUNCATE is a DDL statement. This means Oracle automatically commits the statement (and the current
transaction) after it runs, so there is no way to roll back a TRUNCATE statement. If you need the option of
choosing to roll back (instead of committing) when removing data, then you should use the DELETE
statement. However, the DELETE statement has the disadvantage that it generates a great deal of undo
and redo information. Thus for large tables, a TRUNCATE statement is usually the most efficient way to
remove data.
SQL> truncate table computer_systems;
Data Manipulation
This section covers how to change the contents of an Oracle database. The SQL commands that change the database contents are commonly referred to as Data Manipulation Language (DML) commands.
The following sections cover the DML commands INSERT, UPDATE, DELETE, and MERGE.
The INSERT Command
Standard INSERT Commands
The standard INSERT command supports the following two ways to insert rows:
. Use the VALUES clause, followed by a list of column values (between parentheses).
This method allows you to insert only one row at a time per execution of the INSERT
command.
. Formulate a subquery, thus using existing data to generate new rows.
You can use the reserved word NULL to specify a null value for a specific column. You can also
specify the reserved word DEFAULT to instruct the Oracle DBMS to insert the default value associated
with the corresponding column.
If you don't specify a value for a specific column in your INSERT statement, there are two possibilities:
. If the column has an associated DEFAULT value, the Oracle DBMS will insert that value.
. If you did not define a DEFAULT value for the column, the Oracle DBMS inserts a null value (provided, of course, that the column allows null values).
SQL> CREATE TABLE SampleUPD
2 (
3 SampID NUMBER(2)
4 DEFAULT 99,
5 SampName VARCHAR2(10)
6 DEFAULT 'NOT GIVEN',
7 SampDate DATE
8 DEFAULT SYSDATE
9 );
Table created.

SQL> INSERT INTO SampleUPD
2 VALUES(1, 'SAMPLE01', '26-JUN-09');
1 row created.
SQL> INSERT INTO SampleUPD
2 VALUES(2, 'SAMPLE02', '27-JUN-09');
1 row created.
1 INSERT INTO SampleUPD(SampID, SampName)
2* VALUES(3, 'SAMPLE03')
SQL> /
1 row created.
SQL> ed
Wrote file afiedt.buf
1 INSERT INTO SampleUPD(SampID, SampDate)
2* VALUES(3, '21-JUN-09')
SQL> /
1 INSERT INTO SampleUPD(SampName, SampDate)
2* VALUES('SAMPLE04', '21-JUN-09')
SQL> /
1 row created.
SQL> ed
Wrote file afiedt.buf

1 INSERT INTO SampleUPD(SampID, SampName, SampDate)
2* VALUES(4, 'SAMPLE04', DEFAULT)
SQL> /
1 row created.
SQL> SELECT * FROM SampleUPD;
SAMPID SAMPNAME SAMPDATE
---------- ---------- ---------
1 SAMPLE01 26-JUN-09
2 SAMPLE02 27-JUN-09
3 SAMPLE03 29-JUN-09
3 NOT GIVEN 21-JUN-09
99 SAMPLE04 21-JUN-09
4 SAMPLE04 29-JUN-09
SQL> CREATE TABLE MyBonus
2 (
3 Empno NUMBER,
4 Bonus NUMBER DEFAULT 100
5 );
Table created.
SQL> SELECT E.Empno
2 FROM Emp E
3 WHERE Job = 'SALESMAN';
EMPNO
----------
7654

7499
7844
7521
SQL> INSERT INTO MyBonus(Empno)
2 (SELECT E.Empno
3 FROM Emp E
4 WHERE Job = 'SALESMAN');
4 rows created.
SQL> SELECT * FROM MyBonus;
EMPNO BONUS
---------- ----------
7654 100
7499 100
7844 100
7521 100
The UPDATE Command
You can change column values of existing rows in your tables with the UPDATE command
. UPDATE: The table you want to update
. SET: The change you want to apply
. WHERE: The rows to which you want to apply the change

the SET clause offers two alternatives:
. You can specify a comma-separated list of single-column changes. With this approach, you can use the DEFAULT keyword as
an expression. This allows you to change column default values in the data dictionary at any point in time without the need to
change the UPDATE commands in your applications.
. You can drive the change with a subquery. The subquery must provide the right number of values for the list of column names
specified between the parentheses. Of course, the datatypes should also match, or the Oracle DBMS should at least be able to
convert values to the appropriate datatypes on the fly.
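For example, the DEFAULT keyword mentioned in the first alternative can be used to reset a column to its data-dictionary default (a sketch; it reuses the SampleUPD table created earlier in this section):
SQL> UPDATE SampleUPD
2 SET SampName = DEFAULT
3 WHERE SampID = 1;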
1 UPDATE EMP
2 SET
3* Sal = NULL
SQL> UPDATE EMP
2 SET
3 Comm = NULL
4 WHERE Job = 'CLERK';
SQL> ROLLBACK;
SQL> SELECT Ename, Sal, Deptno, Job
2 FROM Emp
3 WHERE Ename = 'ALLEN';
ENAME SAL DEPTNO JOB
---------- ---------- ---------- ---------
ALLEN 1600 30 SALESMAN
SQL> UPDATE Emp

2 SET
3 Job = 'MANAGER',
4 Sal = 2500,
5 Deptno = 10
6 WHERE Ename = 'ALLEN';
1 row updated.
SQL> ed
Wrote file afiedt.buf
1 UPDATE Emp
2 SET
3 Job = 'MANAGER',
4 Sal = 2500,
5* Deptno = 10
SQL> /
14 rows updated.
SQL> SELECT Ename, Sal, Deptno, Job
2 FROM Emp;
ENAME SAL DEPTNO JOB
---------- ---------- ---------- ---------
KING 2500 10 MANAGER
BLAKE 2500 10 MANAGER
CLARK 2500 10 MANAGER
JONES 2500 10 MANAGER
MARTIN 2500 10 MANAGER
ALLEN 2500 10 MANAGER
TURNER 2500 10 MANAGER

JAMES 2500 10 MANAGER
WARD 2500 10 MANAGER
FORD 2500 10 MANAGER
SMITH 2500 10 MANAGER
ENAME SAL DEPTNO JOB
---------- ---------- ---------- ---------
SCOTT 2500 10 MANAGER
ADAMS 2500 10 MANAGER
MILLER 2500 10 MANAGER
14 rows selected.
SQL> ROLLBACK;
Rollback complete.
SQL> UPDATE Emp
2 SET
3 Job = 'CLERK',
4 Sal = 600,
5 Deptno = 10
6 WHERE Ename = 'ALLEN';
1 row updated.
SQL> UPDATE Emp
2 SET
3 Job = 'ANALYST',
4 Sal = Sal + 1000
5 WHERE Ename = 'BLAKE'
6
SQL> SELECT Ename, Sal

2 FROM Emp
3 WHERE Ename = 'BLAKE';
ENAME SAL
---------- ----------
BLAKE 2850
SQL> UPDATE Emp
2 SET
3 Job = 'ANALYST',
4 Sal = Sal + 1000
5 WHERE Ename = 'BLAKE';
1 row updated.
SQL> SELECT Ename, Sal
2 FROM Emp
3 WHERE Ename = 'BLAKE';
ENAME SAL
---------- ----------
BLAKE 3850
SQL> ed
Wrote file afiedt.buf
1 UPDATE Emp
2 SET
3 Job = (SELECT Job
4 FROM Emp
5 WHERE Ename = 'MILLER'),
6 Sal = 1000 + (SELECT Sal

7 FROM Emp
8 WHERE Ename = 'ALLEN'),
9 Deptno = (SELECT Deptno
10 FROM Emp
11 WHERE Ename = 'TURNER')
12* WHERE Ename = 'BLAKE'
SQL> /
1 row updated.
SQL> UPDATE Emp
2 SET
3 (Job, Deptno) = (SELECT Job, Deptno
4 FROM Emp
5 WHERE Empno = 7499)
6 WHERE Empno = 7698;
1 row updated.
The DELETE Command
The simplest data manipulation command is DELETE, This command also operates at the table level, and you use the WHERE
clause to restrict the set of rows you want to delete from the table. If you omit the WHERE clause, the DELETE command
results in an empty table.
Note the difference between the following two commands:
SQL> drop table departments;
SQL> delete from departments;

The DROP TABLE command not only removes the contents of the table, but also the table
itself, including all dependent objects/structures such as indexes and privileges. DROP TABLE is a
data definition (DDL) command. The DELETE command does not change the database structure,
but only the contents; it is a data manipulation (DML) command. Moreover, the effects of a
DROP TABLE command cannot be undone with a ROLLBACK command, as opposed to the effects of
a DELETE command, which can.
Example of a DELETE Command
SQL> delete from salgrades
2 where grade = 5;
1 row deleted.
SQL> rollback;
Rollback complete.
SQL>
Alternative DELETE Command, Using a Subquery
SQL> delete from (select *
2 from salgrades
3 where grade = 5);
1 row deleted.
SQL> rollback;
Rollback complete.
SQL>
The TRUNCATE command allows you to delete all rows of a table in a
more efficient way than with the DELETE command. The TRUNCATE command belongs to the
category of the data definition (DDL) commands.
The MERGE Command
It is able to perform insertions, updates, and deletions in a single statement. This makes
the MERGE command very efficient in data warehouse environments, where the tables
are often populated/updated from external sources.
SQL> cl scr
SQL> CREATE TABLE MyBonus
2 (
3 Empno NUMBER,
4 Bonus NUMBER DEFAULT 100
5 );
Table created.
SQL> SELECT E.Empno
2 FROM Emp E
3 WHERE Job = 'SALESMAN';
EMPNO

----------
7654
7499
7844
7521
SQL> INSERT INTO MyBonus(Empno)
2 (SELECT E.Empno
3 FROM Emp E
4 WHERE Job = 'SALESMAN');
4 rows created.
SQL> SELECT * FROM MyBonus;
EMPNO BONUS
---------- ----------
7654 100
7499 100
7844 100
7521 100
SQL> SELECT Empno, Sal, Deptno
2 FROM Emp
3 WHERE Deptno = 30;
EMPNO SAL DEPTNO
---------- ---------- ----------
7698 2850 30
7654 1250 30
7499 1600 30
7844 1500 30
7900 950 30
7521 1250 30
6 rows selected.
SQL> MERGE INTO MyBonus B
2 USING (SELECT Empno, Sal, Deptno
3 FROM Emp
4 WHERE Deptno = 30) S
5 ON (B.Empno = S.Empno)
6 WHEN MATCHED THEN
7 UPDATE
8 SET B.Bonus = B.Bonus + S.Sal * 0.1
9 DELETE
10 WHERE (S.Sal > 4000)
11 WHEN NOT MATCHED THEN
12 INSERT(B.Empno, B.Bonus)
13 VALUES(S.Empno, S.Sal * 0.1)
14 WHERE(S.Sal <= 4000);
6 rows merged.

SQL> SELECT * FROM MyBonus;
EMPNO BONUS
---------- ----------
7654 225
7499 260
7844 250
7521 225
7698 285
7900 95
6 rows selected
Savepoints
You can define ginterim points,h known as savepoints, within your transactions. During a
transaction, you can roll back to such savepoints without rolling back the transaction as a whole,
thus maintaining the changes you made before the savepoint.
Using Savepoints Within a Transaction
SQL> delete from history where empno=7654;
2 rows deleted.
SQL> delete from employees where empno=7654;
1 row deleted.
SQL> savepoint ONE;
Savepoint created.
SQL> delete from offerings where course='ERM';
1 row deleted.
SQL> delete from courses where code ='ERM';
1 row deleted.
SQL> rollback to savepoint ONE;
Rollback complete.
SQL> select description
2 from courses
3 where code='ERM';
DESCRIPTION
----------------------------------------------
Data modeling with ERM
SQL> rollback;
Rollback complete.

Monday, 29 May 2017

bulk collect and forall in oracle pl/sql

Using BULK COLLECT and FORALL for better performance
The PL/SQL engine sends SQL statements to the SQL engine, which returns results to the PL/SQL engine.
The communication between the PL/SQL and SQL engines is called a context switch, and a certain amount of performance overhead is associated with these context switches. However, PL/SQL has a set of features, known as bulk SQL, that can minimize this overhead. Generally, if a SQL
statement affects four or more rows, bulk SQL may improve performance significantly. Bulk SQL supports batch processing of SQL statements and their results. It consists of two features: the FORALL statement and the BULK COLLECT clause.


FORALL is:

"The keyword FORALL instructs the PL/SQL engine to bulk-bind input collections before sending them to the SQL engine. Although the FORALL statement contains an  iteration scheme, it is not a FOR loop.

Its syntax follows:

FORALL index IN lower_bound..upper_bound
   sql_statement;

The index can be referenced only within the FORALL statement and only as a collection subscript. The SQL statement must be an INSERT, UPDATE, or DELETE statement that
references collection elements. And, the bounds must specify a valid range of consecutive index numbers. The SQL engine executes the SQL statement once for each index
number in the range."

DECLARE
 TYPE NumList IS VARRAY(20) OF NUMBER;
 Depts   NumList := NumList(10, 30, 70); -- department numbers
 BEGIN
 FORALL i IN depts.FIRST..depts.LAST
 DELETE FROM emp WHERE deptno = depts(i);
 END;

BULK COLLECT is:

"The keywords BULK COLLECT tell the SQL engine to bulk-bind output collections before returning them to the PL/SQL engine. You can use these keywords in the SELECT  INTO, FETCH INTO, and RETURNING INTO clauses.

Here is the syntax:

... BULK COLLECT INTO collection_name[, collection_name] ..."
Here are some examples of BULK COLLECT:

DECLARE
TYPE NumTab IS TABLE OF emp.empno%TYPE;
TYPE NameTab IS TABLE OF emp.ename%TYPE;
enums NumTab; -- no need to initialize
names NameTab;
BEGIN
SELECT empno, ename BULK COLLECT INTO enums, names FROM emp;

END;
…….
DECLARE
TYPE SalList IS TABLE OF emp.sal%TYPE;
sals SalList;
BEGIN
SELECT sal BULK COLLECT INTO sals FROM emp
WHERE ROWNUM <= 100;
...
END;
………
Example of a Bulk Fetch from a Cursor
The following example shows that you can bulk-fetch from a cursor into one or
more collections:
DECLARE
TYPE NameList IS TABLE OF emp.ename%TYPE;
TYPE SalList IS TABLE OF emp.sal%TYPE;
CURSOR c1 IS SELECT ename, sal FROM emp WHERE sal > 1000;
names NameList;
sals SalList;
BEGIN
OPEN c1;
FETCH c1 BULK COLLECT INTO names, sals;

END;
…………………
Retrieving DML Results into a Collection with the RETURNING INTO Clause
You can use the BULK COLLECT clause in the RETURNING INTO clause of an
INSERT, UPDATE, or DELETE statement, as the following example shows:
DECLARE
TYPE NumList IS TABLE OF emp.empno%TYPE;
enums NumList;

BEGIN
DELETE FROM emp WHERE deptno = 20
RETURNING empno BULK COLLECT INTO enums;
-- if there were five employees in department 20,
-- then enums contains five employee numbers
END;

Using FORALL and BULK COLLECT Together

FORALL j IN depts.FIRST..depts.LAST
DELETE FROM emp WHERE deptno = depts(j)
RETURNING empno BULK COLLECT INTO enums;
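A complete, runnable version of the fragment above might look like the following (a sketch: the collection type, the sample department numbers, and the ROLLBACK at the end are assumptions added so the block can be executed safely against the demo emp table):

DECLARE
 TYPE NumList IS TABLE OF NUMBER;
 depts NumList := NumList(10, 20, 30);  -- department numbers to delete
 enums NumList;                         -- receives the deleted empnos
BEGIN
 FORALL j IN depts.FIRST..depts.LAST
  DELETE FROM emp WHERE deptno = depts(j)
  RETURNING empno BULK COLLECT INTO enums;
 DBMS_OUTPUT.PUT_LINE(enums.COUNT || ' employees deleted');
 ROLLBACK;  -- demo only: undo the deletes
END;
/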

You cannot use the SELECT ... BULK COLLECT statement in a FORALL statement.
Otherwise, you get the error implementation restriction: cannot use FORALL and BULK
COLLECT INTO together in SELECT statements.




SQL> create table t_all_objects as select * from all_objects;

Table created.

SQL> insert into t_all_objects select * from t_all_objects;

3332 rows created.

SQL> r
1* insert into t_all_objects select * from t_all_objects

6664 rows created.

---replicated a couple of times

SQL> select count(*) from t_all_objects;

COUNT(*)
----------
213248

SQL> declare
  2  cursor c1 is select object_name from t_all_objects;
  3  rec1 c1%rowtype;
  4  begin
  5  open c1;
  6  loop
  7     fetch c1 into rec1;
  8     exit when c1%notfound;
  9     null;
 10  end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:44.75

SQL> declare
2  cursor c1 is select object_name from t_all_objects;
3  type c1_type is table of c1%rowtype;
4  rec1 c1_type;
5  begin
6  open c1;
7
8     fetch c1 bulk collect into rec1;
9
10
11  end;
12  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.32
As can be clearly seen, bulk collecting the rows shows a huge performance improvement over fetching row by row.

The above method (which fetched all the rows) may not be applicable in all cases. When there are many rows to process, we can limit the number of rows to bulk collect, process those rows, and fetch again. Otherwise, process memory grows as you fetch more and more rows.
SQL> declare
2  cursor c1 is select object_name from t_all_objects;
3  type c1_type is table of c1%rowtype;
4  rec1 c1_type;
5  begin
6  open c1;
7  loop
8     fetch c1 bulk collect into rec1 limit 200;
9     for i in 1..rec1.count loop
10             null;
11     end loop;
12     exit when c1%notfound;
13  end loop;
14
15
16  end;
17  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.07

INDEX in oracle

INDEX
An index is an optionally created database object used primarily to increase query performance.
Similar to a book index, a database index stores the column value of interest along
with its row identifier (ROWID). The ROWID contains the physical location of the table row on disk that
stores the column value. With the ROWID in hand, Oracle can efficiently retrieve table data with a
minimum of disk reads. In this way, indexes function like a shortcut to the table data. If there is no
available index, then Oracle reads each row in the table to determine if the row contains the desired
information.
Improving Performance with Indexes----
When a SQL select statement executes, the Oracle query optimizer quickly calculates a
step-by-step execution plan detailing how it will retrieve column values specified in the query. In
calculating the plan, the optimizer determines which tables and indexes will be used to retrieve data.
When no index exists, the table itself is the only access path available to satisfy the results of the
query. In this scenario, Oracle has no choice but to inspect every row within every used block in the
table (this is known as a full table scan).
SQL> SET AUTOTRACE ON EXPLAIN
SQL> SELECT Ename, Sal, Job
2 FROM Emp
3 WHERE Ename = 'ALLEN';
ENAME SAL JOB
---------- ---------- ---------
ALLEN 1600 SALESMAN
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=1 Bytes=26)
1 0 TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=1 Bytes=
26)

SQL> CREATE INDEX EmpEnameIDX
2 ON Emp(Ename);
Index created.
SQL> SELECT Ename, Sal, Job
2 FROM Emp
3 WHERE Ename = 'ALLEN';
ENAME SAL JOB
---------- ---------- ---------
ALLEN 1600 SALESMAN
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=26)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=2 Car
d=1 Bytes=26)
2 1 INDEX (RANGE SCAN) OF 'EMPENAMEIDX' (INDEX) (Cost=1 Card
=1)

SQL> ed
Wrote file afiedt.buf
1 SELECT Ename, Sal, Job
2 FROM Emp
3* WHERE Empno = 7839 OR Ename = 'KING'
SQL> /
ENAME SAL JOB
---------- ---------- ---------
KING 5000 PRESIDENT
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=1 Bytes=39)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=3 Car
d=1 Bytes=39)
2 1 BITMAP CONVERSION (TO ROWIDS)
3 2 BITMAP OR
4 3 BITMAP CONVERSION (FROM ROWIDS)
5 4 INDEX (RANGE SCAN) OF 'EMP_PRIMARY_KEY' (INDEX (UN
IQUE)) (Cost=1)
6 3 BITMAP CONVERSION (FROM ROWIDS)
7 6 INDEX (RANGE SCAN) OF 'EMPENAMEIDX' (INDEX) (Cost=

1)
SQL> SELECT TABLE_NAME, INDEX_NAME
2 FROM USER_INDEXES
3 WHERE TABLE_NAME = 'EMP';
SQL> SELECT TABLE_NAME, CONSTRAINT_NAME, CONSTRAINT_TYPE, SEARCH_CONDITION
2 FROM USER_CONSTRAINTS
3 WHERE TABLE_NAME = 'EMP';
SQL> DROP INDEX EMPENAMEIDX;
When the SELECT clause columns are all contained within an index, this is known
as a covering index. These indexes are particularly efficient because only the index blocks need to be
read.
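For example (a sketch; the index name emp_cover_idx is made up), an index that contains all three columns referenced by the earlier query allows Oracle to answer it from the index alone, without visiting the table:
SQL> CREATE INDEX emp_cover_idx
2 ON Emp(Ename, Sal, Job);
With this index in place, the execution plan for the earlier query would typically show an INDEX RANGE SCAN with no TABLE ACCESS step.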
• Indexes are optional objects defined on a table and one or more columns.
• Indexes consume resources.
• A B-tree index is the default index type in Oracle.
• A fairly unique column value compared to all other rows in a table results in a
more efficient B-tree index.
• When appropriately created, indexes improve performance
• In some scenarios, the query optimizer will choose not to use an index. In other
words, the query optimizer calculates that the cost of a full table scan is less than
the cost when using an index.
• In some situations, Oracle can retrieve data for a query by only accessing the
index; the table doesn’t have to be accessed
Determining Which Type of Index to Use-----

Guidelines for When to Index
You want to consider indexing columns frequently used in the WHERE clause of SQL statements and
foreign key columns. Oracle automatically creates a unique index to
enforce the primary key constraint and the unique constraint. The following are some general guidelines
when an index is typically useful.
Frequently accessed columns containing highly selective data for B-tree indexes.
Columns frequently accessed with a small range of values for bitmap indexes.
Columns that are frequently accessed and that contain many null values, but the query is looking for the
NOT NULL values.
Frequent queries against large tables retrieving less than 5 to 15 percent of the rows. The percentage
may vary, depending on a number of factors, including the size of the table.
Building an index is often useless if:
The table is small, but you should nevertheless create unique and primary constraints to enforce
business rules.
The query retrieves more than 5 to 15 percent of the rows.
The indexed column is part of an expression. In this case, consider creating a function-based index
instead.
Keep in mind that adding indexes may increase the time required for data manipulation operations, such
as INSERT, UPDATE, and DELETE. If you primarily query the table,
then creating the index may offset the disadvantage of additional time required for DML statements
B-tree Indexes
the default index type in Oracle is a B-tree index. This index type is very efficient for high cardinality column values.
Prerequisites
To create an index in your own schema, one of the following conditions must be true:
The table or cluster to be indexed must be in your own schema.
You must have the INDEX object privilege on the table to be indexed.
You must have the CREATE ANY INDEX system privilege.
Without specifying any options, a B-tree is created with the CREATE INDEX statement; all you need to
provide is the index name, table name, and column(s).

create index cust_idx2
on cust(first_name);
SQL> SELECT TABLE_NAME, INDEX_NAME
2 FROM USER_INDEXES
3 WHERE TABLE_NAME = 'EMP';

There are several subtypes of B-tree indexes.
• Index-organized table
• Unique
• Reverse key
• Key compressed
• Descending
Index-Organized Table
An index-organized table (IOT) stores the entire contents of the table’s row in a B-tree index structure.
An IOT provides fast access for queries that have exact matches and/or range searches on the primary
key.
create table prod_sku
(prod_sku_id number
,sku varchar2(256),
constraint prod_sku_pk primary key(prod_sku_id, sku)
) organization index;
Unique Indexes
When creating a B-tree index you can define it to be a unique index. In this regard it acts like a unique

key constraint. When inserting into the corresponding table, the unique index will guarantee that any
non-null values inserted into the table are distinct. For this reason, unique indexes are commonly used
in conjunction with primary key and unique key constraints
A unique index is specified via the CREATE UNIQUE INDEX statement.
create unique index cust_uidx1
on cust(last_name, first_name);
Reverse Key Indexes
Reverse key indexes are useful to balance I/O in an index that has many sequential inserts. These
indexes can perform better in scenarios where you need a way to evenly distribute index data that would
otherwise have similar values clustered together. Thus, when using a reverse-key index, you avoid
having I/O concentrated in one physical disk location within the index during large inserts of sequential
values
create index cust_ridx1
on cust(cust_id) reverse;
NULLs and Indexes
NULL values are not stored in a B-tree index, unless it is a composite index where at least one
column of the index contains a value. The following query does not make use of the single-column index
on the FIRST_NAME column.
SELECT student_id, first_name
FROM student
WHERE first_name IS NULL
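A common workaround (shown here as a sketch, not something from the original example) is to add a small constant as a trailing index column so that rows with a NULL first_name still get an index entry:
CREATE INDEX student_fn_idx
ON student(first_name, 0);
With such a composite index in place, a query using WHERE first_name IS NULL can be resolved with an index range scan.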
Specialized Index Types
Sometimes a B-tree index isn’t enough to provide the desired performance improvement. The following
are indexes that should be used under specialized circumstances:
• Bitmap
• Function-based
• Invisible
Bitmap Index
Bitmap indexes are commonly used in data-warehouse environments. These indexes are recommended
for columns with a relatively low number of distinct values (low cardinality)
efficient for SQL statements that use multiple AND or OR join operators in the WHERE clause (which is
typical in a data warehouse environment).
You should not use bitmap indexes in OLTP databases with high INSERT/UPDATE/DELETE activities.
This is because the structure of the bitmap index results in many locked rows during singular DML
operations (which results in locking problems for high-transaction OLTP systems).
A bitmap index is created using the keyword BITMAP. For completeness, we also show the table
creation script upon which the bitmap index is built.
create table f_sales(
sales_amt number
,d_date_id number
,d_product_id number
,d_customer_id number);
create bitmap index f_sales_fk1
on f_sales(d_date_id);

The following statement creates a bitmap index on the GENDER column of a CUSTOMER table.
CREATE BITMAP INDEX customer_bm_gender_i
ON customer(gender)
Function-Based Indexes
Function-based indexes are created with SQL functions or expressions in their definitions. Function-based
indexes allow index lookups on columns referenced by SQL functions in the WHERE clause of a
query. Here's an example of creating a function-based index:
create index cust_fidx1
on cust(upper(last_name));
These types of indexes are necessary because Oracle won’t use a normal B-tree index when a query
references a column with a SQL function applied to it.
SQL> CREATE INDEX EmpTotSalIDX
2 ON Emp(Sal + NVL(Comm, 0));
Index created.
SQL> SELECT Ename, Sal, Comm, Sal + NVL(Comm, 0) TotSal, Deptno, Job
2 FROM Emp
3 WHERE Sal + NVL(Comm, 0) > 2500;
ENAME SAL COMM TOTSAL DEPTNO JOB
---------- ---------- ---------- ---------- ---------- ---------
MARTIN 1250 1400 2650 30 SALESMAN
BLAKE 2850 2850 30 MANAGER
JONES 2975 2975 20 MANAGER
FORD 3000 3000 20 ANALYST
SCOTT 3000 3000 20 ANALYST
KING 5000 5000 10 PRESIDENT
6 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=6 Bytes=312
)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=2 Car
d=6 Bytes=312)
2 1 INDEX (RANGE SCAN) OF 'EMPTOTSALIDX' (INDEX) (Cost=1 Car
d=1)
Invisible Index
An invisible index means the optimizer doesn’t use the index when retrieving data for a query. However,
the index structure is still maintained as the underlying table has records inserted, updated, or deleted.
This feature is used when you want to test the viability of an index without impacting existing
application code. Use the INVISIBLE keyword to create an invisible index.
create index cust_iidx1
on cust(last_name) invisible;
Note Invisible indexes are only available in Oracle Database 11g and higher.
Indexes and Constraints
When you create a primary key constraint or a unique constraint, Oracle creates the index automatically
unless a suitable index already exists
The index NEW_TERM_PK is created as part of the CREATE TABLE statement and is associated with the
primary key constraint.
SQL> CREATE TABLE new_term
(term_no NUMBER(8) NOT NULL PRIMARY KEY USING INDEX
(CREATE INDEX new_term_pk ON new_term(term_no)
STORAGE (INITIAL 100K NEXT 100K)),
season_tx VARCHAR2(20),
sequence_no NUMBER(3))
If you already have an existing index and you want to associate a constraint with it, you can use a
statement similar to the following. It assumes an existing index called
SEMESTER_SEMESTER_ID_I, based on the SEMESTER_ID column.
ALTER TABLE semester
ADD CONSTRAINT semester_pk PRIMARY KEY (semester_id)
USING INDEX semester_semester_id_i
The next statement shows an example of a unique constraint that is associated with a unique index.

CREATE TABLE semester
(semester_id NUMBER(8),
semester_name VARCHAR2(8) NOT NULL,
year_no NUMBER(4) NOT NULL,
CONSTRAINT semester_uk UNIQUE (semester_name, year_no)
USING INDEX
(CREATE UNIQUE INDEX semester_sem_yr_uk
ON semester(semester_name, year_no)))
Note: When disabling a unique or primary key, you can keep the index if you specify the KEEP INDEX
clause in an ALTER TABLE statement.
Indexes and Foreign Keys
You should almost always index foreign keys because they are frequently used in joins.
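For example (a sketch using the emp and dept tables referenced throughout this post; the index name is made up), the emp.deptno foreign key column is a typical candidate:
create index emp_deptno_idx
on emp(deptno);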
Dropping an Index
To drop an index, use the DROP INDEX command. You might drop an index if queries in your applications
do not utilize the index. You find out which indexes are used by
querying the V$OBJECT_USAGE data dictionary view.
SQL> DROP INDEX sect_location_i;
Index dropped.
---------When you drop a table, all associated indexes are dropped automatically
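To find out whether an index is actually being used, you can enable index usage monitoring before the workload runs and then check V$OBJECT_USAGE (a sketch; cust_idx2 is the index created earlier in this post):
SQL> alter index cust_idx2 monitoring usage;
-- run the application workload, then check:
SQL> select index_name, used from v$object_usage where index_name = 'CUST_IDX2';
SQL> alter index cust_idx2 nomonitoring usage;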
Loading Large Amounts of Data
When you insert or update large amounts of data, you might want to consider dropping certain indexes
not used for the DML operation’s WHERE clause to improve performance. After the operation is
complete, you can re-create the appropriate indexes.
One fast way to re-create indexes is by using the NOLOGGING option. It avoids writing to the redo log,
which keeps track of all the database changes. If you incur a fatal database error and you need to
recover from the redo log, the index will not be recovered. This may be fine because an index can always
be re-created.

You can also create an index by using the PARALLEL option. This allows parallel scans of the table to
create the index and can make index creation much faster, provided that you have the appropriate
hardware configuration, such as multiple CPUs.
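For example, re-creating the cust_idx2 index from earlier with both options might look like this (a sketch; the degree of parallelism is an arbitrary choice and should match your hardware):
create index cust_idx2
on cust(first_name)
nologging parallel 4;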

Tuesday, 23 May 2017

Autonomous Transactions in oracle

Autonomous Transactions
Autonomous transactions allow you to leave the context of the calling transaction, perform an independent transaction, and return to the calling transaction without affecting its state. The autonomous transaction has no link to the calling transaction, so only committed data can be shared by both transactions.
The following types of PL/SQL blocks can be defined as autonomous transactions:
•    Stored procedures and functions.
•    Local procedures and functions defined in a PL/SQL declaration block.
•    Packaged procedures and functions.
•    Type methods.
•    Top-level anonymous blocks.
The easiest way to understand autonomous transactions is to see them in action. To do this, we create a test table and populate it with two rows. Notice that the data is not committed.
CREATE TABLE at_test (
  id           NUMBER       NOT NULL,
  description  VARCHAR2(50) NOT NULL
);

INSERT INTO at_test (id, description) VALUES (1, 'Description for 1');
INSERT INTO at_test (id, description) VALUES (2, 'Description for 2');

SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         1 Description for 1
         2 Description for 2

2 rows selected.

SQL>
Next, we insert another 8 rows using an anonymous block declared as an autonomous transaction, which contains a commit statement.
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  FOR i IN 3 .. 10 LOOP
    INSERT INTO at_test (id, description)
    VALUES (i, 'Description for ' || i);
  END LOOP;
  COMMIT;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         1 Description for 1
         2 Description for 2
         3 Description for 3
         4 Description for 4
         5 Description for 5
         6 Description for 6
         7 Description for 7
         8 Description for 8
         9 Description for 9
        10 Description for 10

10 rows selected.

SQL>
As expected, we now have 10 rows in the table. If we now issue a rollback statement we get the following result.
ROLLBACK;
SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         3 Description for 3
         4 Description for 4
         5 Description for 5
         6 Description for 6
         7 Description for 7
         8 Description for 8
         9 Description for 9
        10 Description for 10

8 rows selected.

SQL>
The 2 rows inserted by our current session (transaction) have been rolled back, while the rows inserted by the autonomous transaction remain. The presence of the PRAGMA AUTONOMOUS_TRANSACTION compiler directive made the anonymous block run in its own transaction, so the internal commit statement did not affect the calling session. As a result, the rollback was still able to undo the DML issued by the current session.
Autonomous transactions are commonly used by error-logging routines, where the error messages must be preserved regardless of the commit/rollback status of the transaction. For example, the following table holds basic error messages.
CREATE TABLE error_logs (
  id             NUMBER(10)     NOT NULL,
  log_timestamp  TIMESTAMP      NOT NULL,
  error_message  VARCHAR2(4000),
  CONSTRAINT error_logs_pk PRIMARY KEY (id)
);

CREATE SEQUENCE error_logs_seq;
We define a procedure to log error messages as an autonomous transaction.
CREATE OR REPLACE PROCEDURE log_errors (p_error_message  IN  VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO error_logs (id, log_timestamp, error_message)
  VALUES (error_logs_seq.NEXTVAL, SYSTIMESTAMP, p_error_message);
  COMMIT;
END;
/
The following code forces an error, which is trapped and logged.
BEGIN
  INSERT INTO at_test (id, description)
  VALUES (998, 'Description for 998');

  -- Force invalid insert.
  INSERT INTO at_test (id, description)
  VALUES (999, NULL);
EXCEPTION
  WHEN OTHERS THEN
    log_errors (p_error_message => SQLERRM);
    ROLLBACK;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test WHERE id >= 998;

no rows selected

SELECT * FROM error_logs;

        ID LOG_TIMESTAMP
---------- ---------------------------------------------------------------------------
ERROR_MESSAGE
----------------------------------------------------------------------------------------------------
         1 28-FEB-2006 11:10:10.107625
ORA-01400: cannot insert NULL into ("TIM_HALL"."AT_TEST"."DESCRIPTION")


1 row selected.

SQL>
From this we can see that the LOG_ERRORS transaction was separate from the anonymous block. If it weren't, we would expect the first insert in the anonymous block to be preserved by the commit statement in the LOG_ERRORS procedure.
Be careful how you use autonomous transactions. If they are used indiscriminately, they can lead to deadlocks and cause confusion when analyzing session traces. To hammer this point home, here's a quote from Tom Kyte.
"... in 999 times out of 1000, if you find yourself "forced" to use an autonomous transaction - it likely means you have a serious data integrity issue you haven't thought about.
Where do people try to use them?
•    in that trigger that calls a procedure that commits (not an error logging routine). Ouch, that has to hurt when you rollback.
•    in that trigger that is getting the mutating table constraint. Ouch, that hurts *even more*
Error logging - OK.
Almost everything else - not OK."

The MERGE Statement in oracle

The MERGE Statement
The MERGE statement is a single command that combines the ability to update or insert rows into a table
by conditionally deriving the rows to be updated or inserted from one or more sources.
This allows you to avoid issuing multiple INSERT, UPDATE, and DELETE statements.

The syntax for the MERGE statement is:

MERGE <hint>
INTO <table_name>
USING <table_view_or_query>
ON (<condition>)
WHEN MATCHED THEN <update_clause>
DELETE <where_clause>
WHEN NOT MATCHED THEN <insert_clause>
[LOG ERRORS <log_errors_clause> <reject limit <integer | unlimited>];

 MERGE Statement Example


SQL> create table dept60_bonuses
2 (employee_id number
3 ,bonus_amt number);
Table created.
SQL> insert into dept60_bonuses values (103, 0);
1 row created.
SQL> insert into dept60_bonuses values (104, 100);
1 row created.
SQL> insert into dept60_bonuses values (105, 0);
1 row created.
SQL> commit;
Commit complete.
SQL> select employee_id, last_name, salary
2 from employees
3 where department_id = 60 ;
EMPLOYEE_ID LAST_NAME SALARY
--------------- ------------------------- ---------------
103     Hunold     9000
104     Ernst     6000
105     Austin     4800
106     Pataballa  4800
107     Lorentz        4200

SQL> select * from dept60_bonuses;
EMPLOYEE_ID BONUS_AMT
--------------- ---------------
103     0
104     100
105     0

3 rows selected.


SQL> merge into dept60_bonuses b
2 using (
3 select employee_id, salary, department_id
4 from employees
5 where department_id = 60) e
6 on (b.employee_id = e.employee_id)
7 when matched then
8 update set b.bonus_amt = e.salary * 0.2
9 where b.bonus_amt = 0
10 delete where (e.salary > 7500)
11 when not matched then
12 insert (b.employee_id, b.bonus_amt)
13 values (e.employee_id, e.salary * 0.1)
14 where (e.salary < 7500);

4 rows merged.
SQL> select * from dept60_bonuses;
EMPLOYEE_ID BONUS_AMT
--------------- ---------------
104     100
105     960
106     480
107     420

4 rows selected.

SQL> rollback;
Rollback complete.

The MERGE accomplished the following:
• Two rows were inserted (employee_ids 106 and 107).
• One row was updated (employee_id 105).
• One row was deleted (employee_id 103).
• One row remained unchanged (employee_id 104).
Without the MERGE statement, you would have had to write at least three different statements to
complete the same work.
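For comparison, an equivalent approach without MERGE might look roughly like the following sketch (the exact predicates depend on the semantics you want; in particular, the MERGE above only deletes rows it has just updated):

-- Update matching rows that currently have a zero bonus
UPDATE dept60_bonuses b
   SET bonus_amt = (SELECT e.salary * 0.2
                      FROM employees e
                     WHERE e.employee_id = b.employee_id)
 WHERE b.bonus_amt = 0
   AND EXISTS (SELECT 1 FROM employees e
                WHERE e.employee_id = b.employee_id
                  AND e.department_id = 60);

-- Remove rows for employees earning more than 7500
DELETE FROM dept60_bonuses b
 WHERE EXISTS (SELECT 1 FROM employees e
                WHERE e.employee_id = b.employee_id
                  AND e.salary > 7500);

-- Insert department 60 employees not yet in the bonus table
INSERT INTO dept60_bonuses (employee_id, bonus_amt)
SELECT e.employee_id, e.salary * 0.1
  FROM employees e
 WHERE e.department_id = 60
   AND e.salary < 7500
   AND NOT EXISTS (SELECT 1 FROM dept60_bonuses b
                    WHERE b.employee_id = e.employee_id);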

Monday, 22 May 2017

Materialized Views in oracle

Materialized Views
materialized views are mainly used in complex data warehousing environments, where the tables
grow so big that the data volume causes unacceptable performance problems. An important
property of data warehousing environments is that you don’t change the data very often.
Typically, there is a separate Extraction, Transformation, Loading (ETL)
process that updates the data warehouse contents.
Materialized views are also often used with distributed databases. In such
environments, accessing data over the network can become a performance bottleneck. You can
use materialized views to replicate data in a distributed database. Materialized views are a data
warehousing/decision support system tool that can increase by many orders of magnitude the
speed of queries that access a large number (maybe many hundreds of thousands or millions) of
records. In basic terms, they allow a user to query potentially terabytes of detail data in seconds
(or less). They accomplish this by transparently using pre‐computed summarizations and joins
of data. These pre‐computed summaries would typically be very small compared to the original
source data.
Say, for example, your company has a sales database loaded with the details of a million orders,
and you want to get a breakdown of sales by region (a common enough query).
Each and every record would be scanned, the data aggregated to the region level, and the
calculation performed. Using a materialized view, we can store a summary of the sales data by
region, and have the system maintain these summaries for us. If you have ten sales regions, this
summary will have ten records, so instead of sifting through a million detail
records, we'll query only ten. Furthermore, if someone asks a slightly different question,
say for the sales in a specific region, then that query can also obtain the answer from the
materialized view.
You will need a user account with the following privileges (at least):
• GRANT CREATE SESSION
• GRANT CREATE TABLE
• GRANT CREATE MATERIALIZED VIEW
• GRANT QUERY REWRITE
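For example, a DBA might grant these to a new account in a single statement (mv_user is a hypothetical user name):

GRANT CREATE SESSION, CREATE TABLE, CREATE MATERIALIZED VIEW, QUERY REWRITE TO mv_user;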
An Example
We'll start with a large table that contains a list of owners of objects, and the objects they own.
This table is based on the ALL_OBJECTS data dictionary
view:
> create table my_all_objects
2 nologging
3 as
4 select * from all_objects
5 union all
6 select * from all_objects
7 union all
8 select * from all_objects
9 /
Table created.
> insert /*+ APPEND */ into my_all_objects
2 select * from my_all_objects;
65742 rows created.
> commit;
Commit complete.
> insert /*+ APPEND */ into my_all_objects
2 select * from my_all_objects;
131484 rows created.
> commit;
Commit complete.
> analyze table my_all_objects compute statistics;
Table analyzed
> set autotrace on
> set timing on
> select owner, count(*) from my_all_objects group by owner;
OWNER                          COUNT(*)
------------------------------ ----------
A 36
B 24
CTXSYS 2220
DBSNMP 48
DEMO 60
DEMO11 36
DEMO_DDL 108
MDSYS 2112
MV_USER 60
ORDPLUGINS 312
ORDSYS 2472
OUR_TYPES 12
OUTLN 60
PERFSTAT 636
PUBLIC 117972
SCHEDULER 36
SCOTT 84
SEAPARK 36
SYS 135648
SYSTEM 624
TESTING 276
TKYTE 12
TTS_USER 48
TYPES 36
24 rows selected.
Elapsed: 00:00:03.35
> set timing off
> set autotrace traceonly
> select owner, count(*) from my_all_objects group by owner;
24 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2525 Card=24 Bytes=120)
1 0 SORT (GROUP BY) (Cost=2525 Card=24 Bytes=120)
2 1 TABLE ACCESS (FULL) OF 'MY_ALL_OBJECTS' (Cost=547 Card=262968
Statistics
----------------------------------------------------------
0 recursive calls
27 db block gets
3608 consistent gets
3516 physical reads
0 redo size
1483 bytes sent via SQL*Net to client
535 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
24 rows processed
In order to get the aggregate count, we must count 250000+ records on over 3600 blocks.
Unfortunately, in our system we ask this question frequently, dozens of times every day.
We are scanning almost 30MB of data. We could avoid counting the details each and every
time by creating a materialized view of the data. The following demonstrates the basic
steps needed to perform this operation. We'll start with the GRANT and ALTER SESSION statements needed to enable query rewrite:
> grant query rewrite to tkyte;
Grant succeeded.
> alter session set query_rewrite_enabled=true;
Session altered.
> alter session set query_rewrite_integrity=enforced;
Session altered.
> create materialized view my_all_objects_aggs
2 build immediate
3 refresh on commit
4 enable query rewrite
5 as
6 select owner, count(*)
7 from my_all_objects
8 group by owner
9 /
Materialized view created.
> analyze table my_all_objects_aggs compute statistics;
Table analyzed.
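With the materialized view in place and query rewrite enabled, the original aggregate query should now be answered from the small summary table rather than by scanning the base table. A quick way to check (a sketch; plan output varies by version) is to look at the plan only:

> set autotrace traceonly explain
> select owner, count(*) from my_all_objects group by owner;

The execution plan should now reference MY_ALL_OBJECTS_AGGS instead of a full scan of MY_ALL_OBJECTS.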
Creating a Complete-Refreshable Materialized View
Complete refreshes are appropriate for MVs that have base tables in which significant portions of the rows change
from one refresh interval to the next. Complete refreshes are also required in situations where a fast refresh isn’t
possible (due to restrictions imposed by Oracle).
Note To create an MV, you need both the CREATE MATERIALIZED VIEW system privilege and the CREATE
TABLE system privilege.
The example in this section uses the following base table named SALES:
create table sales(
sales_id number
,sales_amt number
,region_id number
,sales_dtt timestamp
,constraint sales_pk primary key(sales_id)
);
Insert some sample data into the SALES table:
insert into sales values(1,101,100,sysdate-50);
insert into sales values(2,511,200,sysdate-20);
insert into sales values(3,11,100,sysdate);
commit;
The next step is to create the MV, using a CREATE MATERIALIZED VIEW...AS SELECT statement. This
statement names the MV, specifies its attributes, and defines the SQL query on which the MV is based:
create materialized view sales_mv
segment creation immediate
refresh
complete
on demand
as
select
sales_amt
,sales_dtt
from sales;
The SEGMENT CREATION IMMEDIATE clause is available with Oracle 11g release 2 and higher. It instructs
Oracle to create the segment and allocate an extent when you create the MV. This was the behavior in
previous versions of Oracle. If you don’t want immediate segment creation, use the SEGMENT CREATION
DEFERRED clause. If the newly created MV has any rows in it, then segments are created and extents are
allocated regardless of whether you use SEGMENT CREATION DEFERRED.
Let’s look at the USER_MVIEWS data dictionary to verify that the MV was created as expected. Here’s
the query to run:
select
mview_name
,refresh_method
,refresh_mode
,build_mode
,fast_refreshable
from user_mviews
where mview_name = 'SALES_MV';
Here’s the output for this MV:
MVIEW_NAME REFRESH_ REFRESH_MODE BUILD_MOD FAST_REFRESHABLE
------------ -------- ------------ --------- --------------------
SALES_MV COMPLETE DEMAND IMMEDIATE NO
It’s also informative to inspect the USER_OBJECTS and USER_SEGMENTS views to see what has been
created. When you query USER_OBJECTS, notice that several objects have been created:
select
object_name
,object_type
from user_objects
where object_name like 'SALES%'
order by object_name;
Here’s the corresponding output:
OBJECT_NAME OBJECT_TYPE
-------------------- --------------------
SALES TABLE
SALES_MV MATERIALIZED VIEW
SALES_MV TABLE
SALES_PK INDEX
The MV is a logical container that stores data in a regular database table. Querying the
USER_SEGMENTS view shows the base table, its primary-key index, and the table that stores data returned
by the MV query:
select
segment_name
,segment_type
from user_segments
where segment_name like 'SALES%'
order by segment_name;
Here’s the output for this example:
SEGMENT_NAME SEGMENT_TYPE
-------------------- --------------------
SALES TABLE
SALES_PK INDEX
SALES_MV TABLE
Note Oracle sometimes automatically creates an index on the MV. In this scenario, no index for the MV
was created.
Next, insert some additional data into the base SALES table:
SQL> insert into sales values(4,99,200,sysdate);
SQL> insert into sales values(5,127,300,sysdate);
Now you attempt to initiate a fast refresh of the MV using the REFRESH procedure of the DBMS_MVIEW
package. This example passes two parameters to the REFRESH procedure: the name and the refresh
method. The name is SALES_MV, and the parameter is F (for fast):
SQL> exec dbms_mview.refresh('SALES_MV','F');
Because this MV wasn’t created in conjunction with an MV log, a fast refresh isn’t possible. The
following error is thrown:
ORA-12004: REFRESH FAST cannot be used for materialized view "MV"."SALES_MV"
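A complete refresh, by contrast, works for this MV, because COMPLETE ON DEMAND is how it was defined; pass C (for complete) instead:

SQL> exec dbms_mview.refresh('SALES_MV','C');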

Creating a Fast-Refreshable Materialized View
When you create a fast-refreshable MV, it initially populates the MV table with the entire result set of the
MV query. After the initial result set is in place, only data modified (in the base table) since the last
refresh needs to be applied to the MV. In other words, any updates, inserts, or deletes from the master
table that have occurred since the last refresh are copied over. This feature is appropriate when you have
a small amount of changes to a base table over a period of time compared to the total number of rows in
the table.
Here are the steps to implement a fast-refreshable MV:
1. Create a base table (if it’s not already created).
2. Create an MV log on the base table.
3. Create an MV as fast-refreshable.
This example first creates a base table. In most environments, you already have a base table in
place. For illustrative purposes, here’s the table-creation script and some sample data:
create table sales(
sales_id number
,sales_amt number
,region_id number
,sales_dtt timestamp
,constraint sales_pk primary key(sales_id)
);
--
insert into sales values(1,101,100,sysdate-50);
insert into sales values(2,511,200,sysdate-20);
insert into sales values(3,11,100,sysdate);
commit;
A fast-refreshable MV requires an MV log on the base table. When a fast refresh occurs, the MV log
must have a unique way to identify which records have been modified and thus need to be refreshed.
There are two different approaches. One method is to specify the PRIMARY KEY clause when you
create the MV log, and the other is to specify the ROWID clause. If the underlying base table has a primary
key, then use the primary key–based MV log. If the underlying base table has no primary key, then you
have to create the MV log using ROWID. In most cases, you’ll probably have a primary key defined for
every base table. However, the reality is that some systems are poorly designed or have some rare reason
for a table not to have a primary key.
In this example, a primary key is defined on the base table, so you create the MV log with the
PRIMARY KEY clause:
SQL> create materialized view log on sales with primary key;
If no primary key is defined on the base table, this error is thrown when you attempt to create
the MV log:
ORA-12014: table does not contain a primary key constraint
If the base table has no primary key, and you don’t have the option to add one, you must specify
ROWID when you create the MV log:
SQL> create materialized view log on sales with rowid;
Now that you’ve created the MV log, you can create the fast-refreshable MV. There are a couple of
interesting architectural considerations to point out. If there is no MV log on the base table, then Oracle
doesn’t allow you to create a fast-refreshable MV. This error is thrown:
ORA-23413: table does not have a materialized view log.
Also, when you create the fast-refreshable MV, you must specify whether it’s refreshed via the
PRIMARY KEY (which is the default) or via the ROWID. This example uses a table with a primary key and an
MV log created with a primary key. In this example, the MV is fast-refreshed via a primary key. When you
use a primary key–based fast-refreshable MV, the primary-key column(s) of the base table must be part
of the fast-refreshable MV SELECT statement:
create materialized view sales_mv
segment creation immediate
refresh
with primary key
fast
on demand
as
select
sales_id
,sales_amt
,sales_dtt
from sales;
At this point, it’s useful to inspect the objects that are associated with the MV. The following query
selects from USER_OBJECTS:
select
object_name
,object_type
from user_objects
where object_name like '%SALES%'
order by object_name;
Here are the objects that have been created:
OBJECT_NAME OBJECT_TYPE
-------------------- --------------------
MLOG$_SALES TABLE
RUPD$_SALES TABLE
SALES TABLE
SALES_MV MATERIALIZED VIEW
SALES_MV TABLE
SALES_PK INDEX
SALES_PK1 INDEX
SQL> select sales_amt, to_char(sales_dtt,'dd-mon-yyyy') from sales_mv;
Here’s the output:
SALES_AMT TO_CHAR(SALES_DTT
---------- -----------------
101 12-jun-2010
511 12-jul-2010
11 01-aug-2010
Let’s add two records to the base SALES table:
insert into sales values(4,99,200,sysdate);
insert into sales values(5,127,300,sysdate);
commit;
At this point, it's instructive to inspect the MLOG$_SALES table. You should see two records that identify
how the data in the SALES table has changed:
SQL> select count(*) from mlog$_sales;
There are two records:
COUNT(*)
----------
2
Next, let’s refresh the MV. This MV is fast-refreshable, so you call the REFRESH procedure of the
DBMS_MVIEW package with the F (for fast) parameter:
SQL> exec dbms_mview.refresh('SALES_MV','F');
A quick inspection of the MV shows two new records:
SQL> select sales_amt, to_char(sales_dtt,'dd-mon-yyyy') from sales_mv;
SALES_AMT TO_CHAR(SALES_DTT
---------- -----------------
101 12-jun-2010
511 12-jul-2010
11 01-aug-2010
99 01-aug-2010
127 01-aug-2010
In addition, the row count of MLOG$_SALES has dropped to zero. After the MV refresh is complete, those
records are no longer required:
SQL> select count(*) from mlog$_sales;
Here’s the output:
COUNT(*)
----------
0
You can verify the last method in which an MV was refreshed by querying the USER_MVIEWS view:
select
mview_name
,last_refresh_type
,last_refresh_date
from user_mviews
order by 1,3;
Here’s some sample output:
MVIEW_NAME LAST_REF LAST_REFR
------------------------------ -------- ---------
SALES_MV FAST 01-AUG-10

Friday, 19 May 2017

html code for header, sidebar, footer, content

<html>
  <head>
        <style>
            #header{
                background-color: lightblue;
                width:100%;
                height:50px;
                text-align: center;
            }
            #sidebar-left{
                float:left;
                width:15%;
                background-color: red;
            }
            #main{
                float:left;
                width:70%;
                background-color: lightgray;
            }
            #sidebar-right{
                float:left;
                width:15%;
                background-color: red;
            }
            #footer{
                clear:both;
                height: 50px;
                width: 100%;
                text-align: center;
                background-color: lightblue;
            }
            #sidebar-left, #main, #sidebar-right{
                min-height: 600px               
            }
        </style>
    </head>
    <body>
        <div id="header">ITGENIE4U</div>
                <div id="sidebar-left">Left</div>
        <div id="main">
        <p><strong>CONTENT</strong> This is a test. This is a test. This is a test. This is a test.
This is a test. This is a test. This is a test. This is a test. This is a test.
This is a test. This is a test. This is a test. This is a test. This is a test.
This is a test. This is a test. This is a test. This is a test. This is a test.

</p>

<p>This is a test. This is a test. This is a test. This is a test.
This is a test. This is a test. This is a test. This is a test. This is a test.
This is a test. This is a test. This is a test. This is a test. This is a test.
This is a test. This is a test. This is a test. This is a test. This is a test. This is a test.
This is a test. This is a test. This is a test. This is a test. This is a test.
</p>
</div>
       
        <div id="sidebar-right">Right</div>
        <div id="footer">Footer</div>
    </body>
</html>

LOGICAL & PHYSICAL STRUCTURE OF DATABASE

[Diagram: logical and physical structure of an Oracle database]

       BLOCK
A block is the smallest storage unit into which data can be stored.
The typical database block size is 8 KB.
• 1 Byte = 8 bits
• 1 KB = 1024 bytes
• 1 MB = 1048576 bytes (1024 KB = 1 MB)
• 1 GB = 1073741824 bytes (1024 MB = 1 GB)
Typical default block sizes by platform: 2 KB (Windows), 4 KB (Solaris), 8 KB (Linux), 16 KB (HP-UX), 32 KB (AIX).
# mount      (shows all mounted file systems)
# df -k      (shows free space in all file systems)
A block has two storage parameters, PCTFREE and PCTUSED:
• PCTFREE: the percentage of space in the block reserved for future updates of existing rows.
• PCTUSED: the threshold of used space below which the block again becomes available for new inserts.
A database block is a combination of OS blocks. For example, if the OS block size
is 4 KB, then two OS blocks combine to form one 8 KB database block.
• Up to Oracle 9i the minimum block size was 2 KB; in Oracle 10g it is 8 KB.
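As an illustration, these parameters can be set when a table is created, and the configured block size can be checked from SQL*Plus, privileges permitting (demo_storage is a hypothetical table; PCTUSED is ignored for tables in ASSM tablespaces):

CREATE TABLE demo_storage (
  id  NUMBER,
  txt VARCHAR2(100)
) PCTFREE 20 PCTUSED 40;   -- keep 20% free for updates; reuse the block once usage falls below 40%

SHOW PARAMETER db_block_size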
Physical structure
The collection of data files in the operating system is called the physical structure; in other
words, the set of operating system files that make up the database.
Logical Structure
The logical structure of the Oracle architecture defines how the physical space of a
database is to be used.



When you create a table, its data is stored in a table segment.
Tablespaces: A tablespace may consist of one or more segments.
Segments: A segment is made up of extents.
Extents: An extent is made up of blocks.
Block: A block is the smallest storage unit of read and write operations.
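This hierarchy is visible in the data dictionary; for example, a sketch run as the owning user (EMP is used here because it appears elsewhere in these notes):

SELECT segment_name, segment_type, tablespace_name, extents, blocks
FROM   user_segments
WHERE  segment_name = 'EMP';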
STRUCTURE OF A BLOCK
Header
Contains information regarding the type of block (table, index, and so on) and
transaction information regarding active and past transactions on the block.
Table Directory
Contains information about the tables that store rows in this block.
Row Directory
Contains information describing the rows that are to be found on the block. This
is an array of pointers to where the rows are to be found in the data portion of the
block.
Block Overhead
The three pieces above are collectively known as the block overhead and are used by Oracle
to manage the block itself.