Caching a Sequence – Will It Help?

Yesterday, while reviewing the AWR report of a busy OLTP database, I came across a dictionary-related UPDATE statement. It was listed in the top 10 SQLs under the "SQL ordered by Parse Calls" and "SQL ordered by Executions" sections, and was parsed and executed 698 times during the 30-minute snapshot interval of the report.

Here is the complete SQL Statement:

update seq$ set increment$=:2, minvalue=:3, maxvalue=:4, cycle#=:5, order$=:6, cache=:7, highwater=:8, audit$=:9, flags=:10 where obj#=:1

I immediately queried the Oracle data dictionary for the list of sequences in one of our most important schemas. Here is the query and its output:

SQL> select sequence_name, cache_size, last_number
  2  from dba_sequences
  3  where sequence_owner = 'OM_TEST';

SEQUENCE_NAME                  CACHE_SIZE LAST_NUMBER
------------------------------ ---------- -----------
SEQ_AUDIT_DETAILS                       0     6728991
SEQ_MTR_LOG                             0           1
SEQ_TRANS_LOG                          20         991

Sequence SEQ_AUDIT_DETAILS looks like the culprit here: it has been used more than 6 million times and is not cached.

Upon further investigation, it turned out that this sequence is used by an audit detail table to generate unique serial numbers.

I altered the sequence definition to cache 1,000 values. Caching instructs Oracle to pre-allocate a range of values in memory (1,000 in my case), so the seq$ row is updated only once per range instead of on every NEXTVAL call.

SQL> alter sequence seq_audit_details cache 1000;

The above ALTER command worked like magic: the UPDATE statement vanished from subsequent AWR reports.

We should cache sequences whenever possible to avoid this recursive update to seq$ (and the associated I/O) on every NEXTVAL call.
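To spot other candidates proactively, a dictionary query along these lines can help. This is just a sketch: the OM_TEST owner filter and the cache threshold of 20 are example values from this post, so adjust them to your environment.

```sql
-- Sketch: list uncached (or lightly cached) sequences, busiest first.
-- Owner filter and cache threshold are example values, not a recommendation.
select sequence_owner, sequence_name, cache_size, last_number
from   dba_sequences
where  sequence_owner = 'OM_TEST'
and    cache_size < 20
order  by last_number desc;

-- Then cache the heavy hitters, e.g.:
-- alter sequence om_test.seq_audit_details cache 1000;
```

LAST_NUMBER is only a rough activity indicator (it reflects the highwater mark, not exact usage), but it is usually enough to find the sequences worth caching.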


Export Error

Oracle version: 10.2.0.4

I took an export of a particular user whose dump size was close to 14 GB.

While importing it, I got an import error that says:

illegal lob length marker 65279
bytesread = 00000000000
TABLE =cm_test
IMP-00098: INTERNAL ERROR: impgst2
IMP-00018: partial import of previous table completed: 2031183 rows imported
IMP-00008: unrecognized statement in the export file:
ÿþÿþÿþÿ
IMP-00008: unrecognized statement in the export file:
IMP-00008: unrecognized statement in the export file:
IMP-00008: unrecognized statement in the export file:
IMP-00008: unrecognized statement in the export file:
IMP-00008: unrecognized statement in the export file:

Every Google hit and forum post suggested that the export dump file was corrupted.

So I took the export again with a larger buffer size (buffer=300000000), and this time it completed successfully.

I then ran the import with the same buffer parameter (buffer=300000000) and the error message was gone.

The same issue was reported as a bug by Oracle Support in 10.1.0.3; the bug was fixed in 10.1.0.4 and later.

Bug 3890213: IMP-98 possible when importing zero-length LOBs with CHUNKSIZE > 32767
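For reference, the workaround amounted to re-running exp and then imp with the larger buffer explicitly set. The command lines below are only a sketch; the username, password, schema name and file names are placeholders, not the ones actually used.

```
exp username/password owner=SCOTT file=scott.dmp log=exp.log buffer=300000000
imp username/password fromuser=SCOTT touser=SCOTT file=scott.dmp log=imp.log buffer=300000000
```

The BUFFER parameter controls the size of the array-fetch buffer the original export/import utilities use, which is why changing it altered how the LOB data was written to the dump file.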

Is Index Expensive in DML?

Indexes are used to make expensive queries run more quickly; they provide a faster access path to table data. But there is a trade-off: DML statements against the table take longer, because every index on the table must be maintained during the DML.

Here is a small test I performed to see how the presence of indexes affects DML. For the test case, I created a small table without indexes, inserted 100,000 records and measured the time taken. I then repeated the test with indexes on all columns. The difference in timing shows how expensive the indexes are.

Test 1: Without Indexes

SQL> set serveroutput on
SQL> create table t(a number, b varchar2(30), c date);

Table created.

SQL> declare
  2     x number;
  3   begin
  4     x := dbms_utility.get_time;
  5     for i in 1..100000 loop
  6       insert into t values(i, 'value = ' || i, sysdate + mod(i,365));
  7     end loop;
  8     dbms_output.put_line('Time taken WITHOUT indexes : ' || to_char(dbms_utility.get_time - x));
  9   end;
 10  /
Time taken WITHOUT indexes : 1200

PL/SQL procedure successfully completed.

Now, repeat the same test after creating indexes on all three columns.

Test 2: With Indexes:


SQL> drop table t purge;

Table dropped.

SQL> create table t(a number, b varchar2(30), c date);

Table created.

SQL> create index t_idx1 on t(a);

Index created.

SQL> create index t_idx2 on t(b);

Index created.

SQL> create index t_idx3 on t(c);

Index created.

SQL> declare
  2     x number;
  3   begin
  4     x := dbms_utility.get_time;
  5     for i in 1..100000 loop
  6       insert into t values(i, 'value = ' || i, sysdate + mod(i,365));
  7     end loop;
  8     dbms_output.put_line('Time taken WITH indexes : ' || to_char(dbms_utility.get_time - x));
  9   end;
 10  /
Time taken WITH indexes : 4772

PL/SQL procedure successfully completed.

Well, it's evident from the tests above that having too many indexes hurts DML performance. With three indexes, inserting 100,000 rows took 4772 centiseconds (DBMS_UTILITY.GET_TIME reports hundredths of a second, so about 48 seconds) versus 1200 (about 12 seconds) without them: roughly four times as long. The moral of the test: create indexes when required and avoid over-creating them.

GATHER_FIXED_OBJECT_STATS and GATHER_DICTIONARY_STATS

There are two procedures in the DBMS_STATS package for gathering statistics on Oracle native objects:

GATHER_FIXED_OBJECT_STATS

GATHER_DICTIONARY_STATS

GATHER_FIXED_OBJECT_STATS collects statistics for the fixed (X$) objects. It should be re-run whenever a significant init.ora parameter is changed. Fixed objects are the "magic" tables that are not real tables; they are not dictionary tables. The X$ tables generally change size in response to init.ora settings: setting PROCESSES higher, for example, adds rows to various X$ views.

So gathering fixed object statistics can be considered a one-time task, unless you make a big change to your parameter settings.
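The fixed-object gather itself is a single call. A minimal sketch, run as a DBA, ideally while a representative workload is active so the X$ row counts are realistic:

```sql
-- Gather statistics on the fixed (X$) objects
exec dbms_stats.gather_fixed_objects_stats;
```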

GATHER_DICTIONARY_STATS collects statistics for the SYS schema. It should be run whenever you do "big things" to the dictionary: not loading data into a schema, but operations that change the dictionary itself, such as creating and dropping many objects. After that kind of activity, you should consider gathering statistics.

Look at LAST_ANALYZED on the SYS tables to see when dictionary statistics were last gathered. MetaLink Note 281793.1 states that the DBA_OPTSTAT_OPERATIONS view can be used to determine the start and end times of all DBMS_STATS operations executed at the schema and database level.
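Putting that into practice might look like the following sketch (DBA privileges assumed; DBA_OPTSTAT_OPERATIONS exists in 10g and later):

```sql
-- Gather dictionary statistics after heavy DDL activity
exec dbms_stats.gather_dictionary_stats;

-- When were the SYS tables last analyzed?
select table_name, last_analyzed
from   dba_tables
where  owner = 'SYS'
order  by last_analyzed nulls first;

-- Start/end times of DBMS_STATS operations (10g and later)
select operation, start_time, end_time
from   dba_optstat_operations
order  by start_time desc;
```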

Materialized view with Unique Index

One of the uses of materialized views is replication. Mviews can be used to replicate a table to another database, so that users do not have to reach across database links to query it. Removing the latency of the database link can improve the performance of queries that frequently access that table.

Today the refresh job of one of the mviews we use for this purpose started failing with an "ORA-00001: unique constraint violated" error. This was an mview with a unique index on it. After a quick search on MetaLink I found the reason for this behavior.

I will provide a simple test case to reproduce the problem.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
PL/SQL Release 9.2.0.8.0 - Production
CORE    9.2.0.8.0       Production
TNS for Solaris: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production

SQL> create table master as select rownum r, 'ROW'||rownum line from all_objects where rownum<=5;
Table created.

SQL> alter table master add primary key(r);

Table altered.

SQL> create unique index master_ind on master(line);

Index created.

SQL> create materialized view log on master with primary key;

Materialized view log created.

SQL> create materialized view snap refresh fast with primary key as select * from master;

Materialized view created.

SQL> create unique index snap_ind on snap(line);

Index created.

SQL> select * from master;

         R LINE
---------- -------------------------------------------
         1 ROW1
         2 ROW2
         3 ROW3
         4 ROW4
         5 ROW5

After these steps we have one master table, an mview log on it, and a fast-refreshable mview of the master table. Now we update the master table so that two rows swap their LINE values.

SQL> update master set line='DUMMY' where r=1;

1 row updated.

SQL> update master set line='ROW1' where r=2;

1 row updated.

SQL> update master set line='ROW2' where r=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from master;

         R LINE
---------- -------------------------------------------
         1 ROW2
         2 ROW1
         3 ROW3
         4 ROW4
         5 ROW5

Now when we try to refresh the mview we get the error.

SQL> exec dbms_mview.refresh('SNAP');
BEGIN dbms_mview.refresh('SNAP'); END;

*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-00001: unique constraint (YAS.SNAP_IND) violated
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 820
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 877
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 858
ORA-06512: at line 1

If we drop the unique index and recreate it as a nonunique index, the mview refreshes without errors.

SQL> drop index snap_ind;

Index dropped.

SQL> create index snap_ind on snap(line);

Index created.

SQL> exec dbms_mview.refresh('SNAP');

PL/SQL procedure successfully completed.

SQL> select * from snap;

         R LINE
---------- -------------------------------------------
         1 ROW2
         2 ROW1
         3 ROW3
         4 ROW4
         5 ROW5

The reason for this error is explained in MetaLink note 284101.1. According to that note, the refresh mechanism may temporarily leave the data in an inconsistent state during the refresh: the modifications are not applied in the same order as they were made on the master table. Because of this, creating unique indexes, foreign keys or primary keys on mviews can cause problems in the refresh process. This makes sense, as it is better to keep the integrity checks on the master table rather than on the mview. An mview is just a copy of the master table, so if we define the integrity constraints on the master table, the mview will take care of itself.

This behaviour also reproduces in 10g.