Reverse Key Index

Oracle gives you the option of storing index entries with their bytes reversed; such indexes are called reverse key indexes. Oracle reverses the bytes of each indexed column value before storing the entry (the rowid, of course, is not reversed).

There are a few cases where reverse key indexes can help improve performance. One is RAC environments. If a column is populated by an increasing sequence, the new entries all go to the same index blocks as you insert rows. If many concurrent sessions insert rows from different RAC instances, the nodes contend for those same index blocks. With a reverse key index the new entries go to different blocks and the contention is reduced.

There is also a case in single-instance databases where reverse key indexes can be helpful. If you have a column populated by an increasing sequence, you delete some old rows from the table, you do not do range scans on that column, and you have contention issues on the index blocks, a reverse key index can be considered. It scatters the entries across different blocks during inserts, so your many concurrent sessions will not have index block contention issues.

If you are deleting some old rows, the blocks of a normal index on that column will contain a mix of used and empty space, but they will not be put on the freelist because they are not completely free. That empty space will never be reused, because the sequence values are always increasing and new entries never go back to those old blocks. With a reverse key index you will be able to use that space for new values.

One thing to be careful about with reverse key indexes is that you cannot perform range scans on them. Because the entries are stored reversed, the index no longer preserves the ordering of the original values.
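A small Python sketch can show why reversing the key bytes both scatters consecutive values and breaks range scans. The 4-byte encoding below is purely illustrative; it is not Oracle's internal key format.

```python
# Toy model of a reverse key: encode the value on 4 bytes, then
# reverse the byte order (illustrative only, not Oracle's format).
def reverse_key(n: int) -> bytes:
    return n.to_bytes(4, "big")[::-1]

keys = list(range(250, 261))  # consecutive sequence values

# In a normal index all these keys share the same leading bytes,
# so they land next to each other, in the same leaf blocks.
normal = [k.to_bytes(4, "big") for k in keys]
print(len({b[:2] for b in normal}))        # 1 distinct prefix

# Reversed, the leading byte differs for every key, so the entries
# scatter across many different blocks.
leading = {reverse_key(k)[0] for k in keys}
print(len(leading))                        # 11 distinct leading bytes

# The price: reversed keys do not sort like the original values,
# which is why range scans cannot use a reverse key index.
print(sorted(keys, key=reverse_key) == keys)   # False
```

In a real index the same effect means adjacent sequence values no longer compete for the same leaf block during concurrent inserts.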

I had a situation that fit the reasons to use a reverse key index exactly. We have many concurrent programs that insert into the same table. The table has a primary key column populated by an increasing sequence. There are no range scans on that column. Data is deleted from time to time according to rules that leave some old data in the table. When these programs are running, Statspack reports show high buffer busy waits for the index segment (more than 900,000 waits in a 30-minute period, accounting for 85% of all buffer busy waits). Also, as this database will soon be converted to a RAC database, the case seemed very appropriate for a reverse key index on the related column.

To rebuild an existing index as a reverse key index you can use the ALTER INDEX statement:

alter index indexname rebuild reverse;

After this change the index size was reduced from 250MB down to 16MB. The buffer busy waits on the index blocks disappeared, and the program run time dropped from about 40 minutes to about 25 minutes.


RMAN Catalog Database Creation

These are the steps for creating a recovery catalog database.

SAMPLE is my production database. I created a separate database for the catalog using DBCA.

target db_name =SAMPLE
catalog db_name =CATALOG

1. Connect to the catalog database.

a) Create a tablespace for the catalog:

sql> create tablespace catalog
datafile '/……………………/catalog01.dbf' size 250M;

b) Create a user to own the recovery catalog:

sql> create user rmancat identified by rman
default tablespace catalog
temporary tablespace temp
quota unlimited on catalog;

c) Grant the appropriate privileges:

sql> grant connect, resource, recovery_catalog_owner to rmancat;

 

2. Log in to the operating system, point your environment at the catalog database, and create the catalog:

export ORACLE_SID=catalog

rman
rman> connect catalog rmancat/rman
rman> create catalog tablespace "CATALOG";
rman> exit

 

3. Configure the listener.

a) Edit the listener.ora file. Add the entries for the sample and catalog SIDs shown below, then restart the listener:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_4)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_4)
      (SID_NAME = sample)
    )
    (SID_DESC =
      (ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_4)
      (SID_NAME = catalog)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
    )
  )

 

—————————————————————————————————–

>lsnrctl stop

>lsnrctl start

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 13-OCT-2008 13:31:14
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Starting /home/oracle/oracle/product/10.2.0/db_4/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /home/oracle/oracle/product/10.2.0/db_4/network/admin/listener.ora
Log messages written to /home/oracle/oracle/product/10.2.0/db_4/network/log/listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                13-OCT-2008 13:31:15
Uptime                    0 days 0 hr. 0 min. 3 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /home/oracle/oracle/product/10.2.0/db_4/network/admin/listener.ora
Listener Log File         /home/oracle/oracle/product/10.2.0/db_4/network/log/listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "catalog" has 1 instance(s).
  Instance "catalog", status UNKNOWN, has 1 handler(s) for this service...
Service "sample" has 1 instance(s).
  Instance "sample", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

 

You should see the sample and catalog services in the listener output above.

 

b) Log in to the operating system, connect to the target and the catalog, and register the database:

export ORACLE_SID=sample
rman

rman> connect target sys/sys
rman> connect catalog rmancat/rman@catalog
rman> register database;

Your catalog database is now configured.

Redo Generation in Hot Backup

Many of you must have heard or experienced that while taking a hot backup of a database the LGWR process writes aggressively, meaning more redo data is written to the redo log files and consequently more archive logs are generated.

Here is the common misconception we have in mind. If someone asks why excessive redo and archive logs are generated when we start a hot backup of a database, we quickly answer: it's simple, when we put a tablespace in hot backup mode Oracle takes a checkpoint of the tablespace and the datafiles belonging to it are frozen. Any user activity on objects in that tablespace won't write to those datafiles; instead it writes to the redo log files, so obviously there is more redo log generation.

Well, for the most part this is COMPLETELY WRONG !!!

I will come straight to the point and explain what happens when we put a tablespace in hot backup mode.

The first assumption, that the datafiles belonging to a tablespace in hot backup mode are frozen, is wrong. The datafiles are not frozen; only the datafile headers are!! When you put the tablespace in backup mode, Oracle takes a checkpoint and updates the datafile headers with the checkpoint SCN, and the headers then stay frozen until you take the tablespace out of backup mode.

The rest of the datafile (everything other than the header) behaves as normal, and data changes are written to it continuously.

Now you may want to ask: "Do you mean the datafiles get updated continuously even while we are copying them to the backup location?" The answer is YES. Never think that the datafile you are copying is "consistent". It is not; the datafile changes continuously!

You might want to ask couple of more questions then.

1) If the backup file is not consistent and changes continuously, how is Oracle able to recover the database when we restore that datafile?

2) If data changes happen continuously on the datafiles anyway, why is there excess redo log generation?

That's it !! Don't ask me more than this. Let me explain the answers to these questions.

Consider a typical case where an Oracle database is installed on Linux. The standard Oracle block size is 8K, and let's say the OS-level block size is 512 bytes. When we put the tablespace in "begin backup" mode, a checkpoint happens and the datafile header is frozen. You find the files belonging to this tablespace and start copying them with an OS command. An OS copy works in OS blocks. Let's say the copy starts by giving you 8 blocks to copy - that means you are copying 4K (512 bytes x 8), half of an Oracle block, to the backup location. This copy process can be preempted by the CPU depending on load. Say that after copying those 8 blocks (4K, half of an Oracle block) your process is preempted and the CPU time is given to some other, more important process. Meanwhile DBWR changes the block you have copied halfway (since only the header is frozen, updates to the datafile continue).

After a while the CPU comes back and gives you the next 8 blocks to copy (the rest of the Oracle block). Here is the problem !!! We copied half of the Oracle block as of time T0 and the other half as of time T1, and in between the block changed. Does this sound consistent? Not to me !! Such a block is called a "fractured block".
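The fractured copy can be sketched in a few lines of Python. The 8K block copied in two 4K halves is a toy model of the scenario above; the sizes and contents are made up for the demo.

```python
# A consistent 8K "Oracle block" at time T0.
block = bytearray(b"A" * 8192)
backup = bytearray(8192)

backup[:4096] = block[:4096]   # OS copy takes the first half at T0

block[:] = b"B" * 8192         # DBWR rewrites the block in between

backup[4096:] = block[4096:]   # OS copy takes the second half at T1

# The backup now mixes two versions of the block: it is fractured.
print(backup[:4096] == backup[4096:])   # False

# Recovery does not try to repair the halves from minimal redo;
# it replaces the whole block with the full block image that was
# logged to redo on the block's first change in backup mode.
backup[:] = block
print(bytes(backup) == bytes(block))    # True
```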

Well, since datafiles get copied like this, Oracle has to do something so that recovery will not face any problem.

Usually, for a normal tablespace (one not in begin backup mode), when a transaction happens Oracle generates redo information and puts it in the redo log file. This is the bare minimum information Oracle needs in order to redo the change; it does not copy the complete block. In begin backup mode, however, if a transaction changes a block FOR THE FIRST TIME, Oracle copies the complete block to the redo log file. This happens only the first time; if subsequent transactions update the same block again, Oracle does not copy the complete block to redo, it generates only the minimum information needed to redo the changes. Because Oracle has to log the complete block the first time it changes in begin backup mode, we say that excess redo gets generated when we put a tablespace in begin backup mode.

The question arises: why does Oracle have to copy the complete block to the redo log files? As you have seen above, during the copy of a datafile there can be many fractured blocks, and during restore and recovery Oracle is going to put those blocks back and try to recover them. Now assume a block is fractured and Oracle has only the minimal redo information it normally generates. Under such conditions it would not be possible to recover the block. So instead, during recovery, Oracle simply copies the entire block image from the redo log files back into the datafile. This makes the datafile consistent. The recovery process is therefore very important: it takes care of all fractured blocks and makes it possible to recover the database.

I hope this explains above 2 questions.

Now you can easily explain why a hot backup is not possible if the database is in NOARCHIVELOG mode.

When you take a backup using RMAN, it does not generate excessive redo. The reason is simple: RMAN is intelligent. It does not copy OS blocks; it reads datafiles in Oracle blocks, so each block it writes to the backup is consistent.

Oracle Architecture

This post covers the basic database architecture of 9i. It is quite late to post this basic material, but I realized the blog would be incomplete without even a brief look at the architecture. So here it is.

(The original post includes a figure, "oracle core", giving an overview of "Inside Oracle".)

An Oracle Server is a combination of oracle Instance and Oracle database.

Oracle Server = Oracle Instance + Oracle Database

Again, an Oracle instance is nothing but the memory structures plus the background processes. Let's start the discussion with the memory architecture.

Memory Architecture

The Oracle database uses memory for its operation. The total memory allocated to the database can be broadly categorized into the SGA (System Global Area) and the PGA (Program Global Area).

The SGA contains the following data structures:

Database buffer cache
Redo log buffer
Shared pool
Java pool
Large pool (optional)
Data dictionary cache
Other miscellaneous information
We can also categorize the SGA into fixed SGA and variable SGA. Asked about the fixed SGA, AskTom says: "fixed SGA is a component of the SGA that varies in size from platform to platform and release to release. It is 'compiled' into the database. The fixed SGA contains a set of variables that point to the other components of the SGA and variables that contain the values of various parameters. The size of the fixed SGA is something over which we have no control and it is generally very small. Think of this area as a 'bootstrap' section of the SGA, something Oracle uses internally to find the other bits and pieces of the SGA."

The variable SGA contains the 4 main components listed above: the database buffer cache, the redo log buffer, the shared pool, and the large pool. We call it variable SGA because we can alter the size of each of these components manually with the ALTER SYSTEM command. The size of each component is determined by an INIT.ORA parameter:

Database Buffer Cache – db_block_buffers
Redo Log Buffer – log_buffer
Shared Pool – shared_pool_size
Large Pool – Large_pool_size
We cannot, however, alter the size of the fixed SGA.

Database Buffer Cache – holds data blocks in memory. Whenever a user accesses data it is fetched into the buffer cache and managed according to an LRU (least recently used) algorithm. The advantage: if a user requests data that was recently fetched into the cache, it is read from the cache and the server process does not have to fetch it from disk again; reading from the buffer cache is much faster. Another advantage is that a user modifying data can modify it in the buffer cache, which is faster than modifying the data directly on disk.
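The LRU policy mentioned above can be sketched with a toy cache. This is only a model of the idea; Oracle's real buffer cache uses a more elaborate touch-count variant of LRU.

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU cache: keeps at most `capacity` blocks in memory."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block id -> block data

    def get(self, block_id, read_from_disk):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # hit: mark most recently used
            return self.blocks[block_id]
        data = read_from_disk(block_id)          # miss: physical read
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)      # evict least recently used
        return data

reads = []                                       # record of physical reads

def read_from_disk(block_id):
    reads.append(block_id)
    return f"block-{block_id}"

cache = BufferCache(capacity=2)
cache.get(1, read_from_disk)
cache.get(2, read_from_disk)
cache.get(1, read_from_disk)   # hit: no disk read
cache.get(3, read_from_disk)   # evicts block 2 (least recently used)
cache.get(2, read_from_disk)   # miss again: read from disk
print(reads)                   # [1, 2, 3, 2]
```

The second read of block 2 after its eviction is exactly the physical I/O the buffer cache tries to avoid.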

Redo Log Buffer – holds the redo records that are going to be written to the redo log files. Why do we need this data? So the changes can be replayed (redone) during recovery if the need arises. Instead of writing redo directly to the redo log files, Oracle first writes it to the log buffer, which improves performance; on the occurrence of certain events the buffer is flushed to the redo log file.

Shared Pool – contains 2 memory sections: 1) the library cache and 2) the dictionary cache. The library cache holds parsed SQL statements, execution plans, and parsed PL/SQL code. The dictionary cache holds information about user privileges, table and column definitions, passwords, and so on. Both sections are included in the size of the shared pool.

Large Pool – if defined, used for heavy operations such as bulk copies during backup or restore operations.

The total size of the SGA is limited by the SGA_MAX_SIZE parameter. Below is a simple check of the memory size arithmetic.

SQL> show sga

Total System Global Area 577574308 bytes
Fixed Size 452004 bytes
Variable Size 402653184 bytes
Database Buffers 163840000 bytes
Redo Buffers 10629120 bytes

This shows the fixed and variable SGA sizes. The fixed SGA, as I said, is not in our control, but we can verify the variable SGA size and the other memory values shown above.

Database Buffers 163840000 bytes

SQL> show parameters db_block_buffer

NAME TYPE VALUE
———————————— ———– ——————————
db_block_buffers integer 20000

This value is in blocks; the size of a block comes from the DB_BLOCK_SIZE parameter.

SQL> show parameters db_block_size

NAME TYPE VALUE
———————————— ———– ——————————
db_block_size integer 8192

So Database Buffers = db_block_buffers X db_block_size = 20000 X 8192 = 163840000 bytes

Also, Variable Size = "Shared Pool Size" + "Large Pool Size" + "Java Pool Size" (when defined):

SQL> SELECT pool, sum(bytes) from v$sgastat group by pool;

POOL         SUM(BYTES)
-----------  ----------
java pool      50331648
shared pool   352321536
               11069860

Variable Size = 352321536 + 50331648 = 402653184 bytes
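The arithmetic above is easy to double-check. The numbers below are the ones from the SHOW SGA and parameter output in this post.

```python
# Values taken from the SHOW SGA output and parameters shown above.
db_block_buffers = 20000
db_block_size = 8192

# Database Buffers = db_block_buffers x db_block_size
database_buffers = db_block_buffers * db_block_size
print(database_buffers)  # 163840000

# Variable Size = shared pool + java pool (from v$sgastat)
shared_pool = 352321536
java_pool = 50331648
variable_size = shared_pool + java_pool
print(variable_size)     # 402653184

# Total SGA = fixed size + variable size + database buffers + redo buffers
total = 452004 + variable_size + database_buffers + 10629120
print(total)             # 577574308
```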

Program Global Area

The PGA contains bind variable values, sort areas, and other aspects of cursor handling. It is not a shared area: every server process has its own PGA. Why is a PGA needed for every user? Because even though the parsed form of a SQL or PL/SQL statement may be shared in the library cache, the values with which each user executes the statement cannot be shared. Those values are stored in the PGA. It is also called the Private Global Area.

Going still deeper into the memory structure…

The database buffer cache is further divided into 3 different pools:

Default Cache
Keep Cache
Recycle Cache
If we define the cache size using DB_CACHE_SIZE (or DB_BLOCK_BUFFERS together with the block size), this is the default cache. The cache has a limited size, so not all the data on disk can fit in it. When the cache is full, subsequent cache misses cause Oracle to write dirty data already in the cache to disk to make room for the new data.

You can configure the database buffer cache with separate buffer pools that either keep data in the buffer cache or make the buffers available for new data immediately after using the data blocks.

The KEEP buffer pool retains the schema object’s data blocks in memory. This is defined using the INIT.ORA parameter DB_KEEP_CACHE_SIZE
The RECYCLE buffer pool eliminates data blocks from memory as soon as they are no longer needed. This is defined using the INIT.ORA parameter DB_RECYCLE_CACHE_SIZE
You can also define multiple DB block sizes using the following parameters. For example, if your standard block size is 4K, the following parameters define caches for 2K, 8K, 16K, and 32K blocks:

DB_2K_CACHE_SIZE
DB_8K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE

Note that you can define the KEEP and RECYCLE caches only for the standard block size, and the total buffer cache size is the sum of the sizes of all these pools.

Shared Pool Reserved Size

The shared pool, as we have seen, contains parsed SQL statements and execution plans. With continuous use of the database, the shared pool becomes fragmented over time: newly parsed SQL and execution plans come in, and old ones age out and are overwritten. Larger packages also get aged out as new entries arrive, so the next access to such a package has to parse it and build the execution plan again. This can cause performance issues.

To avoid this situation you can set the SHARED_POOL_RESERVED_SIZE parameter. This sets aside a reserved area within the shared pool for large allocations, so that large objects such as big packages are less likely to be aged out by many small allocations.

For large allocations, the order in which Oracle attempts to allocate space in the shared pool is the following:

From the unreserved part of the shared pool.
If there is not enough space in the unreserved part of the shared pool, and if the allocation is large, then Oracle checks whether the reserved pool has enough space.
If there is not enough space in the unreserved and reserved parts of the shared pool, then Oracle attempts to free enough memory for the allocation. It then retries the unreserved and reserved parts of the shared pool.
Process Architecture

Oracle has several processes running in the background for the proper functioning of the database. The main categories are:

Server Process
Background Process
Server Process – handles the requests of user processes connected to the instance. Server processes (or the server portion of combined user/server processes) created on behalf of each user's application can perform one or more of the following:

Parse and execute SQL statements issued through the application
Read necessary data blocks from datafiles on disk into the shared database buffers of the SGA, if the blocks are not already present in the SGA
Return results in such a way that the application can process the information
Background Process – An Oracle instance can have many background processes; not all are always present. The background processes in an Oracle instance include the following:

Database Writer (DBW0 or DBWn)
Log Writer (LGWR)
Checkpoint (CKPT)
System Monitor (SMON)
Process Monitor (PMON)
Archiver (ARCn)
Recoverer (RECO)
Lock Manager Server (LMS) – Real Application Clusters only
Queue Monitor (QMNn)
Dispatcher (Dnnn)
Server (Snnn)
On many operating systems, background processes are created automatically when an instance is started.

Database writer (DBWn) – The database writer process (DBWn) writes the contents of buffers to datafiles. The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk. Although one database writer process (DBW0) is adequate for most systems, you can configure additional processes (DBW1 through DBW9) to improve write performance if your system modifies data heavily. These additional DBWn processes are not useful on uniprocessor systems.

Log Writer (LGWR) – The log writer process (LGWR) is responsible for redo log buffer management–writing the redo log buffer to a redo log file on disk. LGWR writes all redo entries that have been copied into the buffer since the last time it wrote.

Checkpoint (CKPT) – When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint. This is done by the CKPT process. The CKPT process does not write blocks to disk; DBWn always performs that work.

System Monitor (SMON) – The system monitor process (SMON) performs crash recovery, if necessary, at instance startup. SMON is also responsible for cleaning up temporary segments that are no longer in use and for coalescing contiguous free extents within dictionary-managed tablespaces. If any dead transactions were skipped during crash and instance recovery because of file-read or offline errors, SMON recovers them when the tablespace or file is brought back online. SMON wakes up regularly to check whether it is needed.

Process Monitor (PMON) –

The process monitor (PMON) performs process recovery when a user process fails. PMON is responsible for cleaning up the database buffer cache and freeing resources that the user process was using. For example, it resets the status of the active transaction table, releases locks, and removes the process ID from the list of active processes.

PMON periodically checks the status of dispatcher and server processes, and restarts any that have died (but not any that Oracle has terminated intentionally). PMON also registers information about the instance and dispatcher processes with the network listener.

Archiver Process (ARCn) –

The archiver process (ARCn) copies online redo log files to a designated storage device after a log switch has occurred. ARCn processes are present only when the database is in ARCHIVELOG mode, and automatic archiving is enabled.

An Oracle instance can have up to 10 ARCn processes (ARC0 to ARC9). The LGWR process starts a new ARCn process whenever the current number of ARCn processes is insufficient to handle the workload. The ALERT file keeps a record of when LGWR starts a new ARCn process.

Recoverer (RECO) – The recoverer process (RECO) is a background process used with the distributed database configuration that automatically resolves failures involving distributed transactions. The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When the RECO process reestablishes a connection between involved database servers, it automatically resolves all in-doubt transactions, removing from each database’s pending transaction table any rows that correspond to the resolved in-doubt transactions.

The RECO process is present only if the instance permits distributed transactions and if the DISTRIBUTED_TRANSACTIONS parameter is greater than zero. If this initialization parameter is zero, RECO is not created during instance startup.

Lock Manager Server (LMS) – In Oracle9i Real Application Clusters, a Lock Manager Server process (LMS) provides inter-instance resource management.

Queue Monitor (QMNn) – The queue monitor process is an optional background process for Oracle Advanced Queuing, which monitors the message queues. You can configure up to 10 queue monitor processes. These processes, like the Jnnn processes, are different from other Oracle background processes in that process failure does not cause the instance to fail.

The ones explained above are the main background processes. Please refer to the Oracle documentation for the detailed Oracle 9i architecture.

Hope this helps !!

References:

Oracle9i Database Online Documentation (Release 9.0.1)