DBA best practices

Hi DBAs,

I would like to share a nice presentation given by Mr. Arup Nanda, President of Proligence. In recognition of his outstanding accomplishments and contributions to the user community, Oracle awarded Arup the coveted DBA of the Year award at Oracle OpenWorld in September 2003.

  The following five essential best practices were derived from his years of experience working with Oracle systems large and small.

#1: Multiple Oracle Homes

My favorite best practice is the one about multiple Oracle Homes. Here is how it goes. When applying a patch or a patchset, I recommend against applying it to the existing Oracle Home. Instead, create a new Oracle Home and apply the patches there. I create the first Oracle Home at /app/oracle/db_1, for instance. When a patch comes out, I install the whole Oracle software in a different home, /app/oracle/db_2, and then apply the patch there. During the installation and patch application, the database stays up and running, since it runs out of /db_1. When the outage window comes, all I have to do is shut down Oracle, change the Oracle Home to db_2, and bring the database up. If there is a problem, I can reset the Oracle Home back to the old one.

So, here is the conventional approach:
1. Shut down the database
2. Apply the patch to the Oracle Home
3. Start the database
In case of problems:
4. Shut down the database
5. Roll back the patch
6. Start the database

Steps 2 and 5 could each take as much as three hours depending on the amount of patching, and the database is down during that time.

In the new approach:
1. Install the new Oracle Home
2. Apply the patch to the new Home
3. Shut down the database
4. Change the Oracle Home to the new location
5. Start the database
In case of problems:
6. Shut down the database
7. Change the Oracle Home back to the old one
8. Start the database

The database is down only while the Oracle Home is being switched, which takes a couple of minutes at most, not hours.

So, here are the advantages:

1. The downtime is significantly reduced, to one sixtieth of the original time.
2. The risk is reduced significantly, as there is no rolling back a patch; you just go back to the older Home.
3. You can perform a "diff" on the two homes to see what changed. You can see the differences across multiple homes as well.
4. You can move several databases running on the same server to the new Oracle Home one by one.
5. You can see the various Oracle Homes and their patch levels using the inventory.
The only drawback is space consumption: you need room for two Oracle Homes. But considering that a typical Oracle Home takes about 4 GB or less, this aspect of the suggestion is trivial.

 #2: Set audit trail to DB

Set the audit trail to DB by placing the parameter AUDIT_TRAIL = DB in the initialization parameter file when the database is created. Setting this parameter does not itself start any auditing; an explicit AUDIT command must be issued on the object. But the parameter must be set to a value other than NONE (the default; FALSE is a synonym) for the AUDIT command to take effect. Because AUDIT_TRAIL is a static parameter, the database must be bounced to change its value. To save the trouble and avoid an outage, always set the value to DB, even if you never intend to audit anything. It does not break anything, and you will always be ready to audit when the time comes.
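A minimal sketch of the practice, assuming an spfile is in use and auditing is later enabled on the standard HR.EMPLOYEES table:

```sql
-- AUDIT_TRAIL is static, so set it in the spfile; it takes effect after the next restart.
ALTER SYSTEM SET audit_trail = db SCOPE = SPFILE;

-- Even after the bounce, nothing is audited until an explicit AUDIT command is given,
-- for example auditing every SELECT against HR.EMPLOYEES:
AUDIT SELECT ON hr.employees BY ACCESS;

-- Audit records land in SYS.AUD$ and are visible through DBA_AUDIT_TRAIL.
SELECT username, action_name, timestamp FROM dba_audit_trail;
```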

#3: Don’t use .log

Don't use .log as the extension of redo logs. Someone may run a script to remove all the *.log files assuming they are redundant trace or application logs, and you will end up losing the online redo logs as well, forcing a database recovery. Instead, name them with the extension "redo" or "rdo."
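For example, a new log group whose members use the .rdo extension (paths, group number, and size here are hypothetical):

```sql
-- Members named *.rdo cannot be caught by a cleanup script that deletes *.log
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/PROD/redo04a.rdo',
   '/u02/oradata/PROD/redo04b.rdo') SIZE 100M;
```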

#4: Preview RMAN Restore

Preview an RMAN restore to identify all the backup pieces that will be used in the recovery process, without doing an actual restore. This eliminates any surprises from missing pieces during an actual recovery.
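From the RMAN prompt (this is RMAN command syntax, not SQL; PREVIEW is available in 10g and later):

```sql
# Lists every backup piece the restore would use, without restoring anything
RESTORE DATABASE PREVIEW;

# Condensed output, one line per backup set
RESTORE DATABASE PREVIEW SUMMARY;
```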

#5: Create a new Oracle user for clients running on the same server as the DB

The Oracle Database server software also contains the client piece, which allows clients on the same server to connect to the database. But as a best practice, do not use the same user or the same software; use a new one. For instance, if "oracle" is the user that installs the Oracle software, create a new user called, say, "oraapp" and install the client-only software as that user. The user "oraapp" should not be part of the dba or oinstall groups, so it cannot log on to the database as sysdba. Create a new group called "appgrp" and assign the user oraapp to it. All the application users on the box should also be part of the appgrp group. This way they can use sqlplus, sqlldr, and the other executables on the server, but not connect as sysdba. The common practice is to run the client software as the same user that owns the database software; but starting with 10.2, Oracle changed the security policy and removed the global execute permissions from the Oracle Home. With a single installation, the only options are to put the app users in the dba group or to loosen the permissions on the Oracle Home, and both make the database extremely vulnerable.


Materialized Views

A materialized view is a database object that contains the results of a query. Materialized views are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views that store data based on remote tables were also known as snapshots in previous versions of Oracle. Materialized views can speed up queries that access huge numbers of rows many times over.

When you create a materialized view, Oracle Database creates one internal table and at least one index, and may create one view, all in the schema of the materialized view. Oracle Database uses these objects to maintain the materialized view data. You must have the privileges necessary to create these objects.
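These supporting objects can be inspected in the data dictionary; MV_SAMPLE here is the materialized view created in the performance test later in this post, and the generated index name pattern (I_SNAP$_...) is what Oracle typically produces, not something you choose:

```sql
-- The internal container table shares the materialized view's name,
-- and any maintenance indexes Oracle generated show up alongside it.
SELECT object_name, object_type
FROM   user_objects
WHERE  object_name LIKE '%MV_SAMPLE%'
ORDER BY object_type;
```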

The following types of refresh methods are supported by Oracle.

Complete – build from scratch
Fast – only apply the data changes
Force – try a fast refresh, if that is not possible, do a complete refresh
Never – never refresh the materialized view
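The refresh method can be declared in the CREATE MATERIALIZED VIEW statement, or chosen at refresh time with the DBMS_MVIEW.REFRESH procedure ('C' for complete, 'F' for fast, '?' for force). The view name below is hypothetical:

```sql
-- Force a complete refresh of a materialized view named MV_EMP_SUMMARY
BEGIN
  DBMS_MVIEW.REFRESH('MV_EMP_SUMMARY', method => 'C');
END;
/
```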

Materialized View Log:

When DML changes are made to master table data, Oracle Database stores rows describing those changes in the materialized view log and then uses the materialized view log to refresh materialized views based on the master table. This process is called incremental or fast refresh. Without a materialized view log, Oracle Database must re-execute the materialized view query to refresh the materialized view. This process is called a complete refresh. Usually, a fast refresh takes less time than a complete refresh.

A materialized view log is located in the master database in the same schema as the master table. A master table can have only one materialized view log defined on it. Oracle Database can use this materialized view log to perform fast refreshes for all fast-refreshable materialized views based on the master table.

To fast refresh a materialized join view, you must create a materialized view log for each of the tables referenced by the materialized view.
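For instance, to make a join of the standard HR tables employees and departments fast-refreshable, each master table needs its own log; WITH ROWID is required because join views are refreshed by rowid, and the view must select the rowid of every master table (a sketch, with a hypothetical view name):

```sql
CREATE MATERIALIZED VIEW LOG ON employees   WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON departments WITH ROWID;

CREATE MATERIALIZED VIEW mv_emp_dept
REFRESH FAST
AS SELECT e.rowid AS emp_rid, d.rowid AS dept_rid,
          e.employee_id, e.last_name, d.department_name
   FROM   employees e, departments d
   WHERE  e.department_id = d.department_id;
```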

Privileges required:

create materialized view
create any materialized view
drop any materialized view
delete any table
insert any table
lock any table
select any table
under any table
update any table
create table
create view

There is one mandatory INIT.ORA parameter necessary for materialized views to function: COMPATIBLE. The value of COMPATIBLE should be set to 8.1.0 or above in order for query rewrite to be functional. If this value is not set appropriately, query rewrite will not be invoked.

There are two other relevant parameters that may be set either at the system level via the INIT.ORA file, or at the session level via the ALTER SESSION command.

QUERY_REWRITE_ENABLED: unless the value of this parameter is set to TRUE, query rewrites will not take place. The default value is FALSE.

QUERY_REWRITE_INTEGRITY: this parameter controls how Oracle rewrites queries and may be set to one of three values:

ENFORCED – Queries will be rewritten using only constraints and rules that are enforced and guaranteed by Oracle. There are mechanisms by which we can tell Oracle about other, inferred relationships, which would allow more queries to be rewritten; but since Oracle does not enforce those relationships, it will not make use of them at this level.

TRUSTED – Queries will be rewritten using the constraints that are enforced by Oracle, as well as any relationships existing in the data that we have told Oracle about, but are not enforced by the database.

STALE_TOLERATED – Queries will be rewritten to use materialized views even if Oracle knows the data contained in the materialized view is 'stale' (out of sync with the detail tables). This might be useful in an environment where the summary tables are refreshed on a recurring basis rather than on commit, and a slightly out-of-sync answer is acceptable.

Syntax (Fast Refresh)

CREATE MATERIALIZED VIEW <materialized_view_name>
PCTFREE <integer>
PCTUSED <integer>
TABLESPACE <tablespace_name>
INITRANS <integer>
REFRESH FAST [ON COMMIT | ON DEMAND]
AS (<SQL statement>);

Example (here we need to create a materialized view log first):

CREATE MATERIALIZED VIEW LOG ON employees;

CREATE MATERIALIZED VIEW mv_employees
REFRESH FAST
AS SELECT * FROM employees;

Syntax (Force Refresh)

CREATE MATERIALIZED VIEW <materialized_view_name>
PCTFREE <integer>
PCTUSED <integer>
TABLESPACE <tablespace_name>
REFRESH FORCE
AS (<SQL statement>);


Example:

CREATE MATERIALIZED VIEW mv_employees_force
REFRESH FORCE
AS SELECT * FROM employees;

Syntax (Complete Refresh)

CREATE MATERIALIZED VIEW <materialized_view_name>
PCTFREE <integer>
PCTUSED <integer>
TABLESPACE <tablespace_name>
REFRESH COMPLETE
START WITH <date> NEXT <date_calculation>
AS (<SQL statement>);


Example:

CREATE MATERIALIZED VIEW mv_emp_dept
REFRESH COMPLETE
NEXT SYSDATE + 1
AS SELECT a.employee_id, a.last_name, a.salary, b.department_id
   FROM employees a, departments b
   WHERE a.department_id = b.department_id;
Materialized views significantly improve the performance of Oracle systems that must process complex SQL statements.

——————— A Performance Test on Materialized Views ———————

SQL> create table sample as select rownum as id, owner,object_name , object_type from all_objects ;

Table created.
SQL> CREATE INDEX ind ON sample(ID);

Index created.
SQL> analyze table sample compute statistics;

Table analyzed.
SQL> set autotrace traceonly;

SQL> set timing on

SQL>  select owner,count(object_name) from sample group by owner;

24 rows selected.

Elapsed: 00:00:00.10

Execution Plan
Plan hash value: 3791231185

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |    24 |   120 |    87  (11)| 00:00:02 |
|   1 |  HASH GROUP BY     |        |    24 |   120 |    87  (11)| 00:00:02 |
|   2 |   TABLE ACCESS FULL| SAMPLE | 49717 |   242K|    80   (3)| 00:00:01 |
-----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        337  consistent gets
          0  physical reads
          0  redo size
        955  bytes sent via SQL*Net to client
        392  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         24  rows processed

—————————–Login as a SYS user—————————————————–
SQL> conn / as sysdba

SQL> grant query rewrite to hr;

Grant succeeded.


—————————–Login Back to HR user—————————————————

SQL> conn hr/hr;

SQL> alter session set query_rewrite_enabled=true;

Session altered.
SQL>  alter session set query_rewrite_integrity=enforced;

Session altered.


SQL> create materialized view mv_sample
  2  build immediate
  3  refresh on commit
  4  enable query rewrite
  5  as
  6 select owner,count(object_name) from sample group by owner;

Materialized view created.

Elapsed: 00:00:00.65
SQL> analyze table mv_sample compute statistics;

Table analyzed.

Elapsed: 00:00:00.03
SQL> set autotrace traceonly
SQL>  select owner,count(object_name) from sample group by owner;

24 rows selected.

Elapsed: 00:00:00.18

Execution Plan
Plan hash value: 1060132680

--------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |    24 |   216 |     3   (0)| 00:00:01 |
|   1 |  MAT_VIEW REWRITE ACCESS FULL| MV_SAMPLE |    24 |   216 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          5  consistent gets
          0  physical reads
          0  redo size
        955  bytes sent via SQL*Net to client
        392  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         24  rows processed