Tuesday, May 29, 2007

SQL*Loader

Reference

SQL*Loader files:
log - contains the loading session log
control - a mandatory file containing all the control information (where the other files are, how to parse the data and where to save it). It can also contain the data itself
bad - contains the records not loaded due to an error
discard - contains the records that did not satisfy the record selection criteria

Possible ways of running SQL*Loader:
sqlldr user/password control=some_file.ctl
sqlldr user/password some_file.ctl
You can also specify the direct=TRUE|FALSE parameter to use direct path load. When left at the default (FALSE), the conventional path is used.
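For example, a minimal control file and the corresponding call could look like this (the table, column and file names are just an illustration, not from any particular system):
LOAD DATA
INFILE 'emp.dat'
BADFILE 'emp.bad'
DISCARDFILE 'emp.dsc'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)

sqlldr scott/tiger control=emp.ctl log=emp.log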

Direct path loading is fast. It generates no undo for the loaded data and can skip redo generation (with the UNRECOVERABLE option or a NOLOGGING table), it always writes above the high water mark, it uses data saves instead of regular COMMITs, and it bypasses most of the constraints.

Some restrictions with the direct path loading:
  • Indexes are always rebuilt after the loading. If a unique constraint is violated, the unique index is left in an UNUSABLE state. You need to find the duplicates, eliminate them and rebuild the index (a sketch follows this list).
  • A direct path load cannot occur if there are active transactions against the target table, as it places an exclusive lock on the whole table for the duration of the load.
  • Triggers do not fire during direct path loads.
  • Direct path loading into clustered tables is not supported. In fact, only loading into heap-organized tables is supported.
  • During direct path loads, foreign key constraints are disabled and then re-enabled.
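For the unique constraint case, the cleanup would look something like this (emp and emp_pk are made-up names):
-- find the duplicates
select empno, count(*) from emp group by empno having count(*) > 1;
-- delete the duplicate rows, then rebuild the index
alter index emp_pk rebuild;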

Possible types of input data formats:
  • Fixed-record format. All records have the same length and each field occupies a fixed position. This format is the quickest to process.
  • Variable-record format. Each record starts with a field giving the length of the record.
  • Stream-record format. The slowest to process, but no record length information is needed; you only specify the record terminator (and the field delimiters inside the record).

Import/Export, Datapump

Data Pump is new in 10g. It replaces the old exp and imp utilities, which operated as clients. Data Pump works on the server side and thus gains direct access to the data, which dramatically improves export/import performance.

Data Pump is called from the command line (expdp and impdp) or through the DBMS_DATAPUMP package.

Note: You need the EXP_FULL_DATABASE role if you want to export a schema other than your own.
Note: You also need the EXP_FULL_DATABASE role if you want the user definitions and grants to be included in the export. Otherwise you only get the schema objects.

Possible export types using EXPDP
full = y | Exports the full database
schemas = [schema_list] | If omitted, the connected user's schema is exported
tablespaces = [tablespace_list] | Exports all objects belonging to the tablespaces, along with dependent objects, even if they are in a different tablespace
tables = [table_list]


Other parameters
content = metadata_only | data_only | all | Defaults to ALL
directory = dir | An Oracle DIRECTORY object, listed in DBA_DIRECTORIES. The user must have read/write rights on the directory
dumpfile = dir:filename, ... | dir is again an Oracle DIRECTORY object
logfile = dir:filename
filesize = n [G|M|K] | Sets the maximum size of each dump file
parallel = degree | Sets the number of parallel worker processes used by the job
network_link = db_link | Makes it possible to use a remote database as the export source
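Putting some of these parameters together, a schema export might look like this (the directory object and file names are just examples):
expdp system/password schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott_%U.dmp logfile=scott.log filesize=2G parallel=2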

To initiate an export using DBMS_DATAPUMP you have to do a little bit of programming:
Reference

The script would look something like this (the directory object and file names are assumptions, not fixed values):
declare
  h1 number;
begin
  --1st step: create the job and get its handle
  h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA');
  --2nd step: add the log file (DATA_PUMP_DIR is just an example directory)
  dbms_datapump.add_file (handle => h1, filename => 'scott.log', directory => 'DATA_PUMP_DIR',
    filetype => dbms_datapump.ku$_file_type_log_file);
  --3rd step: add the dump file
  dbms_datapump.add_file (handle => h1, filename => 'scott.dmp', directory => 'DATA_PUMP_DIR',
    filetype => dbms_datapump.ku$_file_type_dump_file);
  --4th step: filter out the stuff that is not needed (here: export only the SCOTT schema)
  dbms_datapump.metadata_filter (handle => h1, name => 'SCHEMA_EXPR', value => 'IN (''SCOTT'')');
  --5th step: start the job, using the handle from the open call
  dbms_datapump.start_job (handle => h1);
  --6th step: detach from the job, it keeps running in the background
  dbms_datapump.detach (handle => h1);
end;
/
You can then monitor the job through the DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS views, or by looking at the log file.

You can also perform these activities from the dbconsole interface. Log in to Database Control, go to the Maintenance section, locate the "Export to Files" link and complete the wizard.


Importing using the impdp utility

You have mostly the same options as for expdp.
Parameters that differ:

sqlfile = dir:file | Specifies a file to write the DDL statements to.
network_link = db_link | Importing from a live database! A very nice feature.
reuse_datafiles = Y|N | Specifies whether existing datafiles should be overwritten.
remap_datafile = source:target | Specifies how the datafiles should be renamed
remap_schema = source:target
remap_tablespace = source:target
include = object_list
exclude = object_list

include parameter usage example:
impdp scott/tiger full=y ... include=PROCEDURE:"LIKE 'SMTHNG%'"
This will import only the procedure objects whose names start with "SMTHNG".
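Another typical case is moving one schema's objects into a different schema and tablespace (all names below are made up):
impdp system/password directory=DATA_PUMP_DIR dumpfile=scott.dmp remap_schema=SCOTT:SCOTT_TEST remap_tablespace=USERS:TEST_DATA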


Import using the DBMS_DATAPUMP is done in a similar way to export.

And of course you can run import from EM.

Triggers

Reference

Triggers can fire on the following types of events:
  • DML events
  • DDL events
  • Database events
DDL events include events such as CREATE, DROP, GRANT, ANALYZE, ASSOCIATE/DISASSOCIATE STATISTICS, REVOKE, AUDIT, COMMENT. Basically any DDL statement can be used as the triggering event. There is also a special event called "DDL", which incorporates all the DDL events, so the trigger fires for any DDL statement run against the specified schema or database.

Example of DDL trigger:
CREATE OR REPLACE TRIGGER some_trigger
BEFORE GRANT ON SCHEMA -- or ON some_schema.SCHEMA, or ON DATABASE
BEGIN
...
END;

Database triggers are something even more interesting. They fire on the following events:

SERVERERROR Specify SERVERERROR to fire the trigger whenever a server error message is logged.

The following errors do not cause a SERVERERROR trigger to fire:

  • ORA-01403: no data found

  • ORA-01422: exact fetch returns more than requested number of rows

  • ORA-01423: error encountered while checking for extra rows in exact fetch

  • ORA-01034: ORACLE not available

  • ORA-04030: out of process memory when trying to allocate string bytes (string, string)

LOGON Specify LOGON to fire the trigger whenever a client application logs onto the database.

LOGOFF Specify LOGOFF to fire the trigger whenever a client application logs off the database.

STARTUP Specify STARTUP to fire the trigger whenever the database is opened.

SHUTDOWN Specify SHUTDOWN to fire the trigger whenever an instance of the database is shut down.

SUSPEND Specify SUSPEND to fire the trigger whenever a server error causes a transaction to be suspended.

DB_ROLE_CHANGE In a Data Guard configuration, specify DB_ROLE_CHANGE to fire the trigger whenever a role change occurs from standby to primary or from primary to standby.

The SERVERERROR event seemed very useful to me. You can create a trigger which will send each error message to your mailbox, using the UTL_SMTP package.
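A minimal sketch of such a trigger, here just recording the error code into a (made-up) err_log table instead of mailing it:
CREATE OR REPLACE TRIGGER log_server_errors
AFTER SERVERERROR ON DATABASE
BEGIN
  -- ora_server_error(1) returns the top error code on the error stack
  INSERT INTO err_log (logged_at, err_code)
  VALUES (SYSDATE, ora_server_error(1));
END;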

Tracing session

A great reference for tracing sessions in Oracle at oracle-base.com

Possible ways of doing this:

To trace your own session
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
ALTER SESSION SET EVENTS '10046 trace name context off';
To trace other sessions
EXEC DBMS_SUPPORT.start_trace_in_session(sid=>123, serial=>1234, waits=>TRUE, binds=>FALSE);
EXEC DBMS_SUPPORT.stop_trace_in_session(sid=>123, serial=>1234);
The DBMS_SUPPORT package is not present by default, but can be loaded as the SYS user by executing the @$ORACLE_HOME/rdbms/admin/dbmssupp.sql script

Alternatively use the DBMS_MONITOR package:
DBMS_MONITOR.SESSION_TRACE_ENABLE(
session_id IN BINARY_INTEGER DEFAULT NULL,
serial_num IN BINARY_INTEGER DEFAULT NULL,
waits IN BOOLEAN DEFAULT TRUE,
binds IN BOOLEAN DEFAULT FALSE)

exec dbms_monitor.session_trace_enable(144, 253, TRUE, FALSE);
exec dbms_monitor.session_trace_disable(144, 253);


Using ORADEBUG from SQL*Plus
ORADEBUG SETMYPID; -- Debug current session.
ORADEBUG SETOSPID 1234; -- Debug session with the specified OS process.
ORADEBUG SETORAPID 123456; -- Debug session with the specified Oracle process ID.
ORADEBUG EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12;
ORADEBUG TRACEFILE_NAME; -- Display the current trace file.
ORADEBUG EVENT 10046 TRACE NAME CONTEXT OFF;

Event 10046 trace levels:
  • 0 - No trace. Like switching sql_trace off.
  • 2 - The equivalent of regular sql_trace.
  • 4 - The same as 2, but with the addition of bind variable values.
  • 8 - The same as 2, but with the addition of wait events.
  • 12 - The same as 2, but with both bind variable values and wait events.

Monday, May 28, 2007

Inserting into multiple tables

A funny way of inserting data. Some kind of "conditional" insert.

INSERT [ALL|FIRST] WHEN {condition} THEN {INTO clause} WHEN... ELSE {INTO clause}

When specifying FIRST, a row is inserted only for the first WHEN condition that evaluates to TRUE.
When specifying ALL, all WHEN conditions are evaluated and a row is inserted for each one that is TRUE.

eg

insert first
when col1 = 'A' then
into some_table (col1, col2)
values (col1, col2)
when col1 = 'B' then
into some_other_table (col1, col2)
values (col1, col2)
select col1, col2 from some_source;

Sunday, May 27, 2007

Database Audit

Configuring and Administering Auditing

1. Set the AUDIT_TRAIL parameter

Possible values are:
none - auditing disabled
os - audit records are written to operating system files
db - audit records are stored in the database, in the SYS.AUD$ table
db,extended - additionally the SQLBIND and SQLTEXT columns are populated
xml - audit records are written to OS files in XML format
xml,extended - additionally the SQL text and bind values are included

2. Possible types of AUDIT

* statement
* privilege
* object
* fine-grained

A special type of audit is the SYS audit. To enable it, set AUDIT_SYS_OPERATIONS = TRUE.

This will log all activity of the SYS user or of any other user who logs in with the SYSDBA privilege.

3. Managing audit events

Use the AUDIT and NOAUDIT statements to control the audit events.
Possible options:
BY SESSION/BY ACCESS
WHENEVER SUCCESSFUL/WHENEVER NOT SUCCESSFUL

Example.
To enable logging of jeff's and lori's login/logoff attempts, run:
AUDIT session BY jeff, lori;

To disable: NOAUDIT session BY jeff, lori;

To turn off all statement auditing:
NOAUDIT ALL;

All privilege auditing:
NOAUDIT ALL PRIVILEGES;

You must specify the user if the audit was enabled with the BY some_user option, like this:
NOAUDIT ALL BY SCOTT;
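An object audit example combining the BY ACCESS and WHENEVER options (scott.emp is just an illustration):
AUDIT UPDATE, DELETE ON scott.emp BY ACCESS WHENEVER NOT SUCCESSFUL;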

To see the list of currently enabled statement audit events:
select * from DBA_STMT_AUDIT_OPTS
To see the list of privilege audits:
select * from DBA_PRIV_AUDIT_OPTS
And object audits:
select * from DBA_OBJ_AUDIT_OPTS
All audit trail views



4. Checking audit results

Query the DBA_AUDIT_TRAIL or DBA_COMMON_AUDIT_TRAIL view to see the audit logs

Useful tip: the AUDIT SESSION event registers logon and logoff events, and it also saves the number of logical and physical reads for the disconnected session. This can be used for tracking down performance issues. To see this information, query the DBA_AUDIT_SESSION view.
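For example (a sketch; check the exact column list of DBA_AUDIT_SESSION in your version):
select username, timestamp, logoff_time, logoff_lread, logoff_pread
from dba_audit_session
order by timestamp;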

5. Purging AUDIT records

You have to manually delete records that are no longer needed from the SYS.AUD$ table.
For example, to delete records older than 90 days, execute:
delete from sys.aud$ where timestamp# < sysdate - 90
6. Fine-grained audit (FGA)

You use the DBMS_FGA package to configure and manage FGA.

To add a policy
BEGIN DBMS_FGA.ADD_POLICY
(object_schema=>'SCOTT',
object_name=>'EMP',
policy_name=>'track_the_shit',
audit_condition=>'sal > 2000',
audit_column=>'SAL, COMM',
statement_types=>'UPDATE, DELETE');
END;
To remove policy
exec dbms_fga.drop_policy('SCOTT','EMP','TRACK_THE_SHIT');
To enable or disable use .ENABLE_POLICY and .DISABLE_POLICY respectively.
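For example:
exec dbms_fga.disable_policy('SCOTT','EMP','TRACK_THE_SHIT');
exec dbms_fga.enable_policy('SCOTT','EMP','TRACK_THE_SHIT');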

To get the list of policies, query:
select * from dba_audit_policies;
To get the audit records, query:
select * from dba_fga_audit_trail;
It seems that any activity by a user logged in as SYSDBA is not recorded by FGA. I haven't found Oracle docs saying this, but this was the behavior I noticed while doing some experiments.

Friday, May 25, 2007

Shared Server

1. Set the DISPATCHERS parameter to something like:
alter system set dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)(SERVICE=ORCL)';

This will start 2 dispatchers for the ORCL service. The parameter can be changed dynamically.

2. Then you set the SHARED_SERVERS parameter. This sets the minimum number of shared server processes to start when the instance starts. This parameter also takes effect immediately, no restart is needed. If set to 0, shared server is disabled.
You specify the maximum number of shared servers with MAX_SHARED_SERVERS. Between these two limits Oracle handles the number of shared server processes by itself.
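For example, to keep between 5 and 20 shared server processes (the numbers are just an illustration):
alter system set shared_servers = 5;
alter system set max_shared_servers = 20;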

3. You can monitor the usage by querying the V$SHARED_SERVER_MONITOR view.
To see the actual circuits, query the V$CIRCUIT view:
select dispatcher, saddr, circuit from v$circuit;
You can join it with v$session to see more information:
select c.dispatcher, c.saddr, c.circuit, s.username, s.program, s.osuser, s.machine from v$circuit c, v$session s where c.saddr = s.saddr;
Another important view is v$queue. It shows the status of the common request queue and of each dispatcher's response queue. The critical column is QUEUED, which shows how many items are currently waiting on the queue. If items keep waiting on the common queue, it indicates that there are not enough shared server processes.

Other views: v$shared_server, v$dispatcher, V$DISPATCHER_CONFIG

Also, the SERVER column in the v$session view shows whether the user is connected through a SHARED or DEDICATED server.

4. To specify the maximum number of sessions allowed to connect through the shared server, you use the SHARED_SERVER_SESSIONS parameter. You can also limit the physical connections by setting the CIRCUITS parameter. Usually one session maps to one physical connection but it is also possible to channel several sessions through one physical connection.

5. Configure the Large Pool
When the Large Pool is not configured, Oracle places the UGA of each shared server session into the Shared Pool, and this can hurt Shared Pool performance. To configure it, set the large_pool_size parameter to an appropriate value. To determine a sufficient size for the large pool, query the v$sesstat view, which (according to the book) holds the maximum amount of UGA memory allocated to all shared sessions since the instance was started.
SQL> select sum(value) from v$sesstat ss, v$statname st where ss.statistic# = st.statistic# and name = 'session uga memory max';
That is what the book said, but it seems to me that things are a little different: v$sesstat is a real-time view, so it shows the maximum UGA usage only for the currently connected sessions. To get this information, run the following in SQL*Plus:
column program format a25;
column username format a15;
select s.sid,username,program,st.value from v$session s, v$sesstat st, v$statname ss where ss.statistic# = st.statistic# and st.sid= s.sid and ss.name = 'session uga memory max';
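Having that number, setting the large pool could look like this (the size is just an example; with automatic shared memory management this only sets a lower limit):
alter system set large_pool_size = 64M;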
You can also limit the amount of memory a session is allowed to use by means of session profiles.
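A sketch of such a profile (the name and the limit are made up; RESOURCE_LIMIT must be set to TRUE for it to be enforced):
create profile shared_limit limit private_sga 512K;
alter user scott profile shared_limit;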

6. To determine whether you have enough dispatchers, run the following query to see how busy your dispatchers have been:
select name, (busy / (busy + idle)) * 100 from v$dispatcher;
If dispatchers are busy for more than 50% of their time, then consider adding more dispatchers.

7. To see the average time users have to wait in a queue, run this query:
select sum(wait)/sum(totalq) from v$queue;
8. To see the average wait time on the common request queue:
select round(Wait/totalq,5) || ' hundredths of seconds' "Average Wait Time per request" from v$queue where type='COMMON';

Wednesday, May 23, 2007

sql to prolong tracks at tarantula

update file
set expires = DATE_ADD(expires, INTERVAL 30 DAY)
where type=1 AND deleted = 0

Sunday, May 13, 2007

Time zones

To get time zone of the database:
select DBTIMEZONE from dual;
To get your session time zone:
select SESSIONTIMEZONE from dual;
To change database time zone (instance restart is needed):
alter database set time_zone='Europe/Tallinn'; --or '+02:00'
To change session time zone:
alter session set time_zone='Europe/Tallinn';
To get the list of all time zone names:
select * from v$timezone_names;

Case insensitive operations in 10g

To get linguistic sorts, set the NLS_SORT session parameter to the desired linguistic rule set,
e.g. alter session set nls_sort=ESTONIAN;

add a "_CI" to get case insensitive or "_AI" to get both case and accent insensitive sorting

or use the NLSSORT function

select name from employees order by NLSSORT(name,'NLS_SORT=ESTONIAN_AI');

To perform case-insensitive comparisons, set the NLS_COMP parameter to LINGUISTIC. This makes the WHERE clause conditions use the rules set by the NLS_SORT parameter.

Like this:
alter session set NLS_COMP=LINGUISTIC;
alter session set NLS_SORT=ESTONIAN_CI;
select name from employees where name like 'SomEnAme';

To achieve the same results without setting NLS_SORT, the NLSSORT function can be used like this:
select * from scott.l where nlssort(name,'nls_sort=BINARY_AI') like nlssort('koger', 'nls_sort=BINARY_AI');

Globalization settings

Views holding the NLS parameters.
In ascending order of priority:

select * from nls_database_parameters;
select * from nls_instance_parameters;


To get your currently active settings:
select * from nls_session_parameters;
or a more complete version: v$nls_parameters

Friday, May 4, 2007

UNDO tablespace size

How to find out how large the undo tablespace needs to be to satisfy both transactions and long-running queries:

select
(select max(undoblks)/600 * max(maxquerylen) from v$undostat)
*
(select value from v$parameter where name='db_block_size')
from dual;

I got 14GB as a result :)
But so far it has been OK to have an undo tablespace of just 1GB.

The maxquerylen is in seconds and undoblks is the number of undo blocks used within a 10-minute (600-second) interval, so undoblks is first converted to a per-second rate and then multiplied by the longest query length and the block size.

To query the undo statistics use

select begin_time, end_time, undoblks, maxquerylen, ssolderrcnt, nospaceerrcnt from v$undostat
ssolderrcnt is the number of "snapshot too old" errors, and
nospaceerrcnt is the number of errors from transactions running out of undo space.