Monday, June 4, 2007

Monitoring and Advisors

In 10g statistics are gathered automatically by the background processes MMON (manageability monitor) and MMNL (manageability monitor light). By default MMON wakes up every 60 minutes and saves a snapshot of the in-memory statistics (the V$ performance views) into the AWR (automatic workload repository).

AWR data is owned by user SYS and stored in the SYSAUX tablespace.

The amount of statistics gathered is defined by the statistics_level parameter.
There are three possible options:
BASIC - practically disables statistics gathering, very little is saved.
TYPICAL - the standard level.
ALL - same as TYPICAL, but additionally keeps timed operating system statistics and SQL execution plan statistics.

The statistics are kept in AWR for 7 days by default.

To change the collection interval and retention period, use the DBMS_WORKLOAD_REPOSITORY package. Like this:

exec dbms_workload_repository.modify_snapshot_settings(interval=>90, retention=>8640);
This sets the interval to 90 minutes and the retention to 8640 minutes (exactly 6 days).
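To verify the current settings, you can query the DBA_HIST_WR_CONTROL view, e.g.:
SELECT snap_interval, retention FROM dba_hist_wr_control;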

After a statistics snapshot is taken, ADDM (automatic database diagnostic monitor) analyzes the newly gathered information, comparing the new snapshot with the previous one. By analyzing the interval between these two snapshots, it can identify potential performance problems and suggest possible remedies. The main goal is to minimize DB time, which is CPU time plus non-idle wait time for database sessions.

To get the list of snapshots:

SELECT * FROM dba_hist_snapshot;
To take snapshot manually:

EXECUTE dbms_workload_repository.create_snapshot('TYPICAL');
To delete snapshots:

exec dbms_workload_repository.drop_snapshot_range(begin_snap_id, end_snap_id);
To get a text ADDM report:

@?/rdbms/admin/addmrpt.sql
Additionally the following views can be queried to get the ADDM findings: DBA_ADVISOR_FINDINGS, DBA_ADVISOR_OBJECTS, DBA_ADVISOR_RECOMMENDATIONS, DBA_ADVISOR_RATIONALE
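For example, a rough query joining findings to their recommendations (a sketch; the exact columns may differ slightly between versions):
SELECT f.task_name, f.type, f.message AS finding, r.rank, r.benefit
FROM dba_advisor_findings f, dba_advisor_recommendations r
WHERE r.task_name = f.task_name AND r.finding_id = f.finding_id
ORDER BY f.task_name, r.rank;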

There are also a number of special-purpose advisors, which include:

  • SQL Tuning Advisor
Analyzes SQL statements (recent ones or those captured in the AWR). Statements can be combined into SQL tuning sets and whole sets submitted for analysis (see the DBMS_SQLTUNE sketch after this list).

  • SQL Access Advisor
Analyzes SQL from the schema point of view. Suggests creating indexes and materialized views.

  • Memory Advisor
Advises on three memory parameters: buffer cache, shared pool, and PGA. For the buffer cache the performance measure is the "relative change in physical reads", for the shared pool it is the "relative change in parse time savings", and for the PGA it is the "cache hit %".

  • Mean Time To Recover Advisor
Here you only set the desired recovery time in seconds and Oracle adjusts a number of internal parameters to keep the instance recovery time as close as possible to the desired value.


  • Segment Advisor
Finds segments which can be shrunk, thus releasing the unused space back to the tablespace. After finding the segments worth shrinking, the following SQL is executed:
alter table "SCOTT"."T2" enable row movement;
alter table "SCOTT"."T2" shrink space;


  • Undo Management Advisor
Analyzes the UNDO generation rates and suggests a correct retention time and undo tablespace size.
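
As an illustration of driving one of these advisors manually, here is a minimal SQL Tuning Advisor sketch using DBMS_SQLTUNE (the sql_id and task name are made-up examples):
DECLARE
  tname VARCHAR2(30);
BEGIN
  -- create a tuning task for a single statement identified by its sql_id
  tname := dbms_sqltune.create_tuning_task(sql_id => 'abc123xyz0001', task_name => 'tune_demo');
  -- run the analysis
  dbms_sqltune.execute_tuning_task(task_name => 'tune_demo');
END;
/
-- display the advisor's findings and recommendations
SELECT dbms_sqltune.report_tuning_task('tune_demo') FROM dual;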

Saturday, June 2, 2007

Locking

Reference: Types of locks.

Developers Guide - Locking Data explicitly

Metalink Doc ID: Note:15476.1 FAQ about Detecting and Resolving Locking Conflicts

SQL*Plus script to query locks information

Locks can be: DML, DDL or internal
There are two basic types of DML locks: row locks and table locks.

Types of table locks:
ROW SHARE (RS) - obtained when SELECT ... FOR UPDATE is issued. At least this is what the Oracle documentation says. Testing shows that when SELECT FOR UPDATE is used, an RX lock is obtained instead. So it seems that... I don't know ;)

Basically this type of lock is needed to show that some data in the table is going to be changed, so that no other transaction can drop or modify the table or obtain an EXCLUSIVE lock.

ROW EXCLUSIVE (RX) - same as the previous one, but it also prohibits other transactions from locking the table in SHARE and SHARE ROW EXCLUSIVE modes. It is set automatically when DML statements (such as UPDATE) are issued against the table.
This seems fair: since a SHARE lock is needed to prohibit any updates, it is only fair that an update prohibits the SHARE lock from being obtained.

SHARE (S) - prevents others from obtaining RS or RX locks, thus preventing the table from being updated while the SHARE lock is active. This is needed if table-level read consistency has to be achieved. Other sessions can also hold SHARE locks on the same table at the same time. If two or more transactions try to update the table while holding SHARE locks, the result is a deadlock even if they update different rows. If you need a read-consistent table plus the exclusive right to update it, use the SHARE ROW EXCLUSIVE lock.
I also noticed that if, after obtaining a SHARE lock, you try to update the table, the lock type is automatically converted into SHARE ROW EXCLUSIVE. This is logical: you cannot update the table while others are holding SHARE locks, so you have to get the SHARE + ROW EXCLUSIVE lock type to do that.
Note: This lock is automatically obtained when a CREATE INDEX statement is run.

SHARE ROW EXCLUSIVE (SRX) - prohibits all other locks from being obtained, except for the RS lock. So this is an almost exclusive lock: you get both read consistency and the exclusive right to update the table. The Oracle docs say that others can still issue SELECT ... FOR UPDATE against the table, but my tests show that this is not true...

EXCLUSIVE (X) - this one has to be obtained before DDL can be executed against the table.

Tuesday, May 29, 2007

SQL*Loader

Reference

SQL*Loader files:
log - contains the loading session log
control - a mandatory file containing all the control information (where the other files are, how to parse the data and where to save it). It can also contain the data itself.
bad - contains the records not loaded due to an error
discard - contains the records that did not satisfy the record selection criteria
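
A minimal control file sketch (the file names, table and columns here are made up; this assumes comma-separated stream-format data):
LOAD DATA
INFILE 'emp.dat'
BADFILE 'emp.bad'
DISCARDFILE 'emp.dsc'
APPEND
INTO TABLE emp
WHEN (deptno = '10')
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, sal, deptno)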

Possible ways of running SQL*Loader:
sqlldr user/password control=some_file.ctl
sqlldr user/password some_file.ctl
You can also specify the direct=TRUE|FALSE parameter to use the direct path load. When left at the default (FALSE), the conventional path is used.

Direct path loading is fast. It generates minimal undo and redo (redo can be skipped entirely for NOLOGGING/UNRECOVERABLE loads), it always writes above the high water mark, it doesn't issue intermediate COMMITs, and it bypasses most of the constraints.

Some restrictions with the direct path loading:
  • Indexes are always rebuilt after the loading. If a unique constraint is violated, the unique index is left in an UNUSABLE state. You need to find the duplicates, eliminate them and rebuild the index.
  • Direct path load cannot occur if there are active transactions against the target table, as it places an exclusive table lock on the whole table for the duration of the load.
  • Triggers do not fire during direct path loads.
  • Direct path loading into clustered tables is not supported. In fact, only loading into heap-organized tables is supported.
  • During direct path loads, foreign key constraints are disabled and then re-enabled.

Possible types of input data formats:
  • Fixed record format. All records have the same length. This format is the quickest to process.
  • Variable record format. Each record starts with a number showing how long the record is.
  • Stream record format. The slowest to process, but no information about the record length is needed; you just need to specify the field separator.

Import/Export, Datapump

Data Pump is new in 10g. It replaces the old exp and imp utilities, which operated as client programs. Data Pump works on the server side, which gives it direct access to the data and dramatically improves export/import performance.

DP is called from the command line (expdp and impdp) or through the DBMS_DATAPUMP package.

Note: You need the EXP_FULL_DATABASE role if you want to export a schema other than your own.
Note: You also need the EXP_FULL_DATABASE role if you want the user definition and grants to be included in the export. Otherwise you will only get the schema objects.

Possible export types using EXPDP
full = y | Exports the full database
schemas = [schema_list] | If omitted, the connected user's schema is exported
tablespaces = [tablespace_list] | Exports all objects belonging to a tablespace, along with dependent objects even if they are from a different tablespace
tables = [table_list]


Other parameters
content = metadata_only | data_only | all (the default is ALL)
directory = dir | An Oracle DIRECTORY object, listed in DBA_DIRECTORIES. The user must have read/write privileges on the directory.
dumpfile = dir:filename, ... | dir is again an Oracle DIRECTORY
logfile = dir:filename
filesize = n [G|M|K] | Sets the maximum size of each dump file.
parallel = degree | Sets the number of worker processes for the job.
network_link = db_link | Makes it possible to use a remote database as the export source.
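
Putting a few of these together, a command-line example (the directory object dp_dir and the file names are assumptions):
expdp scott/tiger schemas=scott directory=dp_dir dumpfile=scott.dmp logfile=scott.log filesize=2G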

To initiate an export using DBMS_DATAPUMP you have to do a little bit of programming:
Reference

The script would look something like this (the job mode, schema, directory object and file names below are just examples):
declare
  h1 number;
begin
  --1st step: create an export job and get its handle
  h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA');
  --2nd step: add the log file
  dbms_datapump.add_file (handle => h1, filename => 'scott_exp.log', directory => 'DP_DIR',
                          filetype => dbms_datapump.ku$_file_type_log_file);
  --3rd step: add the dump file
  dbms_datapump.add_file (handle => h1, filename => 'scott_exp.dmp', directory => 'DP_DIR',
                          filetype => dbms_datapump.ku$_file_type_dump_file);
  --4th step: filter out the stuff that is not needed (here: keep only the SCOTT schema)
  dbms_datapump.metadata_filter (handle => h1, name => 'SCHEMA_EXPR', value => '= ''SCOTT''');
  --5th step: start the job
  dbms_datapump.start_job (handle => h1);
  --6th step: detach from the job (it keeps running in the background)
  dbms_datapump.detach (handle => h1);
end;
/
You can then monitor the job through the DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS views, or by looking inside the log file.

You can also perform these activities from the dbconsole interface. Log in to Database Control, go to the Maintenance section, locate the "Export to files" link and complete the wizard.


Importing using the impdp utility

You have mostly the same options as for expdp.
Parameters that differ:

sqlfile = dir:file | Specifies a file to write the DDL statements to.
network_link = db_link | Import from a live database! A very nice feature.
reuse_datafiles = Y|N | Specifies whether the datafiles should be overwritten.
remap_datafile = source:target | Specifies how the datafiles should be renamed
remap_schema = source:target
remap_tablespace = source:target
include = object_list
exclude = object_list

include parameter usage example:
impdp scott/tiger full=y ... include=PROCEDURE:"LIKE 'SMTHNG%'"
This will import only the procedure objects whose names start with "SMTHNG".


Import using the DBMS_DATAPUMP is done in a similar way to export.

And of course you can run import from EM.

Triggers

Reference

Triggers can be firing on the following types of events:
  • DML events
  • DDL events
  • Database events
DDL events include events such as CREATE, DROP, GRANT, ANALYZE, ASSOCIATE/DISASSOCIATE STATISTICS, REVOKE, AUDIT and COMMENT. Basically any DDL statement can be used as the triggering event. There is also a special event called DDL, which covers all the DDL events, so that the trigger fires for any DDL run in the specified schema or database.

Example of a DDL trigger:
CREATE OR REPLACE TRIGGER some_trigger
BEFORE GRANT ON SCHEMA  -- or ON DATABASE
BEGIN
...
END;

Database triggers are something even more interesting. They fire on the following events:

SERVERERROR Specify SERVERERROR to fire the trigger whenever a server error message is logged.

The following errors do not cause a SERVERERROR trigger to fire:

  • ORA-01403: no data found

  • ORA-01422: exact fetch returns more than requested number of rows

  • ORA-01423: error encountered while checking for extra rows in exact fetch

  • ORA-01034: ORACLE not available

  • ORA-04030: out of process memory when trying to allocate string bytes (string, string)

LOGON Specify LOGON to fire the trigger whenever a client application logs onto the database.

LOGOFF Specify LOGOFF to fire the trigger whenever a client application logs off the database.

STARTUP Specify STARTUP to fire the trigger whenever the database is opened.

SHUTDOWN Specify SHUTDOWN to fire the trigger whenever an instance of the database is shut down.

SUSPEND Specify SUSPEND to fire the trigger whenever a server error causes a transaction to be suspended.

DB_ROLE_CHANGE In a Data Guard configuration, specify DB_ROLE_CHANGE to fire the trigger whenever a role change occurs from standby to primary or from primary to standby.

The SERVERERROR event seemed very useful to me. You can create a trigger which will send each error message to your mailbox, using the UTL_SMTP package.
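A minimal sketch of such a trigger (the SMTP host and addresses are made up, and a real version should add exception handling around the UTL_SMTP calls):
CREATE OR REPLACE TRIGGER notify_on_server_error
AFTER SERVERERROR ON DATABASE
DECLARE
  c utl_smtp.connection;
BEGIN
  c := utl_smtp.open_connection('mailhost.example.com', 25);
  utl_smtp.helo(c, 'mailhost.example.com');
  utl_smtp.mail(c, 'oracle@example.com');
  utl_smtp.rcpt(c, 'dba@example.com');
  -- ora_server_error(1) returns the top error number on the error stack
  utl_smtp.data(c, 'Server error ORA-' || ora_server_error(1) || ' occurred');
  utl_smtp.quit(c);
END;
/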

Tracing session

A great reference for tracing sessions in Oracle at oracle-base.com

Possible ways of doing this:

To trace your own session
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
ALTER SESSION SET EVENTS '10046 trace name context off';
To trace other sessions
EXEC DBMS_SUPPORT.start_trace_in_session(sid=>123, serial=>1234, waits=>TRUE, binds=>FALSE);
EXEC DBMS_SUPPORT.stop_trace_in_session(sid=>123, serial=>1234);
The DBMS_SUPPORT package is not present by default, but can be loaded as the SYS user by executing the @$ORACLE_HOME/rdbms/admin/dbmssupp.sql script

Alternatively use the DBMS_MONITOR package:
DBMS_MONITOR.SESSION_TRACE_ENABLE(
session_id IN BINARY_INTEGER DEFAULT NULL,
serial_num IN BINARY_INTEGER DEFAULT NULL,
waits IN BOOLEAN DEFAULT TRUE,
binds IN BOOLEAN DEFAULT FALSE)

exec dbms_monitor.session_trace_enable(144, 253, TRUE, FALSE);
exec dbms_monitor.session_trace_disable(144, 253);


Using ORADEBUG from SQL*Plus
ORADEBUG SETMYPID; -- Debug current session.
ORADEBUG SETOSPID 1234; -- Debug session with the specified OS process.
ORADEBUG SETORAPID 123456; -- Debug session with the specified Oracle process ID.
ORADEBUG EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12;
ORADEBUG TRACEFILE_NAME; -- Display the current trace file.
ORADEBUG EVENT 10046 TRACE NAME CONTEXT OFF;

The 10046 event trace levels are:
  • 0 - No trace. Like switching sql_trace off.
  • 2 - The equivalent of regular sql_trace.
  • 4 - The same as 2, but with the addition of bind variable values.
  • 8 - The same as 2, but with the addition of wait events.
  • 12 - The same as 2, but with both bind variable values and wait events.

Monday, May 28, 2007

Inserting into multiple tables

A funny way of inserting data. Some kind of "conditional" insert.

INSERT [ALL|FIRST] WHEN {condition} THEN {INTO clause} WHEN... ELSE {INTO clause}

When FIRST is specified, the row is inserted only for the first WHEN clause that evaluates to TRUE.
When ALL is specified, all WHEN conditions are evaluated and the row is inserted for every one that is TRUE.

e.g.

insert first
  when col1 = 'A' then
    into some_table (col1, col2)
    values (col1, col2)
  when col1 = 'B' then
    into some_other_table (col1, col2)
    values (col1, col2)
select col1, col2 from some_source;

Sunday, May 27, 2007

Database Audit

Configuring and Administering Auditing

1. Set the AUDIT_TRAIL parameter

Possible values are:
none - auditing disabled
os - output is sent to the operating system audit trail
db - output is stored in the database in the SYS.AUD$ table
db,extended - additionally the SQLBIND and SQLTEXT columns are populated
xml - output is written as XML files
xml,extended - as xml, but additionally with the SQL text and bind values
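
AUDIT_TRAIL is a static parameter, so it has to be changed in the spfile and the instance restarted, for example:
ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;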

2. Possible types of AUDIT

* statement
* privilege
* object
* fine-grained

A special type of audit is the SYS audit. To enable it, set AUDIT_SYS_OPERATIONS = TRUE.

This will log all the activity of the SYS user or any other user who logs in with the SYSDBA privilege.
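This is also a static parameter, so set it in the spfile and restart:
ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;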

3. Managing audit events

Use the AUDIT and NOAUDIT statements to control the audit events
Possible options:
BY SESSION/BY ACCESS
WHENEVER SUCCESSFUL/WHENEVER NOT SUCCESSFUL

Example.
To enable logging of jeff's and lori's login/logoff attempts, run:
AUDIT session BY jeff, lori;

To disable: NOAUDIT session BY jeff, lori;

To turn off all statement auditing:
NOAUDIT ALL;

All privilege auditing:
NOAUDIT ALL PRIVILEGES;

You must specify the user if the audit was enabled with the BY some_user option. Like this:
NOAUDIT ALL BY SCOTT;

To see the list of currently enabled statement audit events:
select * from DBA_STMT_AUDIT_OPTS
To see the list of privilege audits:
select * from DBA_PRIV_AUDIT_OPTS
And object audits:
select * from DBA_OBJ_AUDIT_OPTS
All audit trail views



4. Checking audit results

Query the DBA_AUDIT_TRAIL or DBA_COMMON_AUDIT_TRAIL view to see the audit logs

Useful tip: the AUDIT SESSION event registers the logon and logoff events, and it also saves the number of logical and physical reads for the disconnected session. This can be used for tracking performance issues. To see this information, query the DBA_AUDIT_SESSION view.

5. Purging AUDIT records

You have to manually delete the no longer needed records from the SYS.AUD$ table.
For example to delete the records which are older than 90 days, execute:
delete from sys.aud$ where timestamp# < sysdate - 90
6. Fine-grained audit (FGA)

You use the DBMS_FGA package to configure and manage FGA.

To add a policy
BEGIN DBMS_FGA.ADD_POLICY
(object_schema=>'SCOTT',
object_name=>'EMP',
policy_name=>'track_the_shit',
audit_condition=>'sal > 2000',
audit_column=>'SAL, COMM',
statement_types=>'UPDATE, DELETE');
END;
To remove policy
exec dbms_fga.drop_policy('SCOTT','EMP','TRACK_THE_SHIT');
To enable or disable use .ENABLE_POLICY and .DISABLE_POLICY respectively.

To get the list of policies, query:
select * from dba_audit_policies;
To get the audit records, query:
select * from dba_fga_audit_trail;
It seems that any activity by a user logged in as SYSDBA is not recorded by FGA. I haven't found Oracle docs saying this, but this was the behavior I noticed while doing some experiments.

Friday, May 25, 2007

Shared Server

1. Set the DISPATCHERS parameter to something like:
alter system set dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)(SERVICE=ORCL)';

This will start 2 dispatchers for the ORCL service. The parameter is dynamic, so the change takes effect immediately.

2. Then set the SHARED_SERVERS parameter. This sets the minimum number of shared server processes to start when the instance starts. This parameter also takes effect immediately; no restart is needed. If set to 0, shared server is disabled.
You specify the maximum number of shared servers with MAX_SHARED_SERVERS. Oracle should handle the required number of shared servers by itself.
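A minimal example (the numbers are just an illustration):
ALTER SYSTEM SET shared_servers = 5;
ALTER SYSTEM SET max_shared_servers = 20;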

3. You can monitor the usage by querying the V$SHARED_SERVER_MONITOR view.
To see the actual circuits, query the V$CIRCUIT view:
select dispatcher, saddr, circuit from v$circuit;
You can join with v$session to see more information:
select c.dispatcher, c.saddr, c.circuit, s.username, s.program, s.osuser, s.machine from v$circuit c, v$session s where c.saddr = s.saddr
Another important view is v$queue. It shows the status of the common request queue and of each dispatcher's response queue. The critical column is QUEUED, which shows how many requests are currently waiting on the queue. If there are items on the common queue, this indicates that there are not enough shared server processes.

Other views: v$shared_server, v$dispatcher, V$DISPATCHER_CONFIG

Also, the SERVER column in the v$session view shows whether a user is connected through a SHARED or DEDICATED server.

4. To specify the maximum number of sessions allowed to connect through the shared server, use the SHARED_SERVER_SESSIONS parameter. You can also limit the number of virtual circuits by setting the CIRCUITS parameter. Usually one session maps to one circuit, but it is also possible to channel several sessions through one physical connection.

5. Configure the Large Pool
When the Large Pool is not configured, Oracle places the UGA of each shared server session into the Shared Pool, and this can impact the Shared Pool's performance. So configure it: set the large_pool_size parameter to an appropriate value. To determine a sufficient size for the large pool, query the v$sesstat view, which holds information on the maximum amount of memory allocated to shared sessions since the instance was started.
SQL> select sum(value) from v$sesstat ss, v$statname st where ss.statistic# = st.statistic# and name = 'session uga memory max';
That is what the book said. But it seems to me that things are a little bit different: the statistic in v$sesstat is a real-time statistic, so it shows the maximum memory usage for every currently connected session. To get this info run the following in SQL*Plus:
column program format a25;
column username format a15;
select s.sid,username,program,st.value from v$session s, v$sesstat st, v$statname ss where ss.statistic# = st.statistic# and st.sid= s.sid and ss.name = 'session uga memory max';
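Once you have an estimate, the large pool can be sized dynamically (the value here is just an illustration):
ALTER SYSTEM SET large_pool_size = 64M;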
You can also limit the amount of memory a session is allowed to use by means of session profiles.

6. To determine whether you have enough dispatchers, run the following query to see how busy your dispatchers have been:
select name, (busy / (busy + idle)) * 100 from v$dispatcher;
If dispatchers are busy for more than 50% of their time, consider adding more dispatchers.

7. To see the average time users have had to wait in the queue, run this query:
select sum(wait)/sum(totalq) from v$queue;
8. To see the average wait time on the common request queue:
select round(Wait/totalq,5) || ' hundredths of seconds' "Average Wait Time per request" from v$queue where type='COMMON';