Monday, April 23, 2007

Oracle object definitions

To get the DDL of any object in the database, the DBMS_METADATA package can be used.

DBMS_METADATA.GET_DDL
(object_type IN VARCHAR2,
name IN VARCHAR2,
schema IN VARCHAR2 DEFAULT NULL,
version IN VARCHAR2 DEFAULT 'COMPATIBLE',
model IN VARCHAR2 DEFAULT 'ORACLE',
transform IN VARCHAR2 DEFAULT 'DDL')
RETURN CLOB;

Oracle Documentation

usage example:
select DBMS_METADATA.GET_DDL('VIEW','ALL_CONSTRAINTS','SYS') from dual;
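The output can be adjusted with session transform parameters; for example (a sketch using the standard SCOTT.EMP demo table), to drop the storage clause and add a trailing semicolon:

```sql
begin
  -- these transform parameters control the shape of the generated DDL
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', false);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);
end;
/
select dbms_metadata.get_ddl('TABLE', 'EMP', 'SCOTT') from dual;
```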

Thursday, April 5, 2007

notes from Tom Kyte's book

  • SQLPlus environment settings

under [oracle_home]\sqlplus\admin
there is a glogin.sql file, which is run each time you log in to a database.
Here are some commands that set up a few useful things:
--enable the DBMS_OUTPUT and set the buffer size to 100000 bytes
set serveroutput on size 100000
--trim the spaces, set pagesize and line length
set trimspool on
set long 5000
set linesize 100
set pagesize 9999
column plan_plus_exp format a80
--set the prompt to "[user]@[dbname]>"
column global_name new_value gname
set termout off;
select lower(user) || '@' || global_name as global_name from global_name;
set sqlprompt '&gname> '
set termout on
  • set up the PLAN_TABLE and AUTOTRACE
I already had PLAN_TABLE$ in the SYS schema in 10g XE, but if it's not there, create it using the utlxplan.sql script from the rdbms\admin directory.
Then create a public synonym with the same name and issue GRANT ALL ON PLAN_TABLE TO PUBLIC.
The explain plan statement can now be used from any schema.
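With the PLAN_TABLE available, explain plan can be tried right away; a minimal sketch (table T and its ID column are placeholders):

```sql
explain plan for
  select * from t where id = 42;

-- pretty-print the most recent plan (9iR2 and later)
select * from table(dbms_xplan.display);
```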
  • set the AUTOTRACE in SQLPlus
first create the PLUSTRACE role by running /sqlplus/admin/plustrce.sql as SYS
then run GRANT PLUSTRACE TO PUBLIC

to use the autotrace issue one of the following:
set autotrace on
set autotrace off
set autotrace on explain
set autotrace on statistics
set autotrace traceonly
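For example, to see the plan and execution statistics of a query without its actual rows:

```sql
set autotrace traceonly
select count(*) from all_objects;
set autotrace off
```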
  • avoid long-running transactions in an MTS environment
  • using bind variables lets already compiled SQL statements be reused from the shared pool. Without them, statements are hard parsed over and over again.
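In SQL*Plus a bind variable can be tried like this (table T is a placeholder); both selects share one parsed cursor:

```sql
variable id number
exec :id := 100
select * from t where object_id = :id;
exec :id := 200
select * from t where object_id = :id;  -- reuses the cursor, no hard parse
```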
  • a B*Tree index does not store entirely-NULL keys, so WHERE x IS NULL cannot use a plain index on x
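A common workaround, sketched here on a hypothetical table, is to add a constant trailing column so the key is never entirely NULL:

```sql
create table t (x number, y varchar2(10));
-- rows where x IS NULL are not stored in this index:
create index t_x_idx on t (x);
-- constant second column: every row is indexed, so WHERE x IS NULL can use it:
create index t_x0_idx on t (x, 0);
```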

  • Memory
The PGA belongs to a process, the UGA belongs to a session. If a dedicated server is used, the UGA is located in the PGA; in case of MTS the UGA is part of the SGA. sort_area_size is part of the PGA, sort_area_retained_size is part of the UGA. To get the PGA and UGA statistics for the current session, run:

select a.name, b.value
from v$statname a join v$mystat b
on (a.statistic# = b.statistic#) where a.name like '%ga %';
The SGA consists of: the java pool, large pool, shared pool, and "null" pool
the null pool holds the fixed SGA, buffer cache and redo log buffer.
the shared pool keeps all the compiled SQL and PL/SQL objects. To use the shared pool optimally, always use bind variables. Otherwise the shared pool grows too large, and managing an oversized shared pool takes a lot of processor time, which leads to a dramatic system slowdown. Using the dbms_shared_pool utility you can pin certain objects in the shared pool permanently; otherwise unused objects are aged out when it gets full.
the shared_pool_size init parameter is always smaller than the actual shared pool size.
the large pool is meant for memory structures larger than those in the shared pool. It is also not a cache: once memory has been used, it may be overwritten. The large pool holds things like the UGA (which sits in the SGA when MTS is used), parallel execution buffers, and RMAN I/O buffers.
to get sga statistics run in sqlplus

compute sum of bytes on pool
break on pool skip 1
select pool, name, bytes
from v$sgastat
order by pool, name;
a simpler report: show sga

  • Locks
It could be a problem (at least in 8i) that a table gets locked if it has no index on a foreign key and the primary key of the referenced table is changed. To locate tables that have foreign keys without indexes, use this script: no_index_fks.sql
This was fixed in 9i, but creating indexes on foreign keys is still a good idea.
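The script itself isn't reproduced in this post; a rough equivalent (a sketch that only checks whether each FK column appears at the same position in some index) could be:

```sql
select c.constraint_name, cc.table_name, cc.column_name
from   user_constraints c
       join user_cons_columns cc
         on cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'   -- referential (foreign key) constraints
  and  not exists (
         select null
         from   user_ind_columns ic
         where  ic.table_name      = cc.table_name
           and  ic.column_name     = cc.column_name
           and  ic.column_position = cc.position )
order by cc.table_name, cc.position;
```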

TX - transaction lock. Is set when a transaction starts modifying rows. This lock is kept until the transaction issues a commit or rollback. The initial number of transactions that can lock data in a block is set by the INITRANS parameter of the CREATE statement (default 2). The maximum number is set by MAXTRANS (default 255), but it may also be limited by the free space in the block's header.
To get information on TX locks, v$lock can be queried

select username,
v$lock.sid,
trunc(id1/power(2,16)) rbs,
bitand(id1,to_number('ffff','xxxx'))+0 slot,
id2 seq,
lmode,
request
from v$lock, v$session
where v$lock.type = 'TX'
and v$lock.sid = v$session.sid
and v$session.username = USER
/
Here the ID1 and ID2 fields contain the transaction id. Since three numbers are packed into two fields, some math tricks are needed to extract them. You can see that rbs, slot and seq coincide with
select XIDUSN, XIDSLOT, XIDSQN from v$transaction
TM lock - protects an object from being modified by DDL statements while a transaction is modifying data in it. The same query as above can be used; just change the type from 'TX' to 'TM' and query the plain values of ID1 and ID2. Here ID1 contains the id of the object being locked. It coincides with the object id from
column object_name format a20
select object_name, object_id from user_objects;
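A TM variant of the earlier v$lock query could look like this (a sketch):

```sql
select username,
       v$lock.sid,
       id1 object_id,   -- compare with object_id in user_objects
       lmode,
       request
from v$lock, v$session
where v$lock.type = 'TM'
and v$lock.sid = v$session.sid
and v$session.username = USER
/
```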

it is also possible to forbid such DDL entirely by using the DML_LOCKS init parameter, or to do the same for certain tables with ALTER TABLE [table] DISABLE TABLE LOCK

  • DDL Locks
Important! Every DDL statement is wrapped in two commit calls, roughly like this:

begin
  commit;
  DDL statement;
  commit;
exception
  when others then rollback;
end;

This is done so as not to roll back all the previous work in case the DDL fails. So it is important to remember that DDL silently issues a commit.
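The implicit commit is easy to demonstrate (table names here are placeholders):

```sql
create table t (x number);
insert into t values (1);     -- not committed yet
create table t2 (y number);   -- DDL: silently commits the insert
rollback;
select * from t;              -- the row is still there
```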
To get list of all ddl locks held at the moment use the DBA_DDL_LOCKS view
  • Transactions
To perform a partial rollback in case of an error, a SAVEPOINT operator must be used. Like this:
begin
savepoint sp1;
[some pl/sql stuff]
exception
when others then
rollback to sp1;
end;

INSERTs are handled by Oracle in the following way:
savepoint statement1
insert blabla
if error then rollback to statement1
savepoint statement2
insert blabla
if error then rollback to statement2
So when an error occurs in an INSERT statement, only the portion of the transaction related to that statement is rolled back.
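This statement-level atomicity can be seen with a check constraint (a sketch; the table is a placeholder):

```sql
create table t (x number check (x > 0));
insert into t values (1);
insert into t values (-1);  -- fails: only this statement is rolled back
select * from t;            -- the first row is still in the open transaction
```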
  • Tables
A trick to get the create statement for a table:
exp user/pass tables=[tablename,..]
imp user/pass full=y indexfile=[filename]
you will get the full create statements inside the index file.
  • Temp tables
table objects are kept in the data dictionary forever, but the data inside them is visible only to the current session (or transaction)
To generate table statistics, so that the CBO optimizes queries against temp tables correctly, you can create a normal table with the same structure and then export/import its statistics using the dbms_stats package.
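An alternative, sketched here with made-up figures, is to set representative statistics directly with dbms_stats.set_table_stats instead of the export/import round trip:

```sql
create global temporary table gtt (id number, val varchar2(30))
  on commit preserve rows;

begin
  -- tell the CBO roughly how much data to expect in the temp table
  dbms_stats.set_table_stats(user, 'GTT', numrows => 10000, numblks => 100);
end;
/
```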
  • Indexes
B*
The most common index type. Stores the indexed values in a tree structure.
If several columns are used in an index and values in some of them repeat a lot, the index can be compressed using the compress n option. The index structure then takes less space and reading it requires less I/O; on the other hand, index operations require more CPU time.
B* in descending order
Indexes can be stored in ASC or DESC order. This is only needed when you have several columns in an index and need to retrieve them in different orders, like: order by key1 DESC, key2 ASC. Specify the desired order when creating the index.
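For example (table T and its columns are placeholders):

```sql
-- serves ORDER BY key1 DESC, key2 ASC without an extra sort step
create index t_mixed_idx on t (key1 desc, key2 asc);
```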
BITMAP
Use a bitmap index when a column is very non-unique (low cardinality). Drawback: when updating a row, many more rows get locked at the same time.

Other notes concerning indexes.
When using something like select * from t where x = 5, but x is of a character type, the statement is silently rewritten as select * from t where to_number(x) = 5, and of course in this case the index will not be used.
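The fix, sketched on the same hypothetical table, is to compare like with like:

```sql
-- x is VARCHAR2: compare against a string so the index on x stays usable
select * from t where x = '5';
```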

  • EXP and IMP
Indexes with SYS_blabla (automatically generated) names will not be exported!

Constraints with automatically generated names will be imported even if such a constraint already exists: Oracle simply generates a new name for it. This can hurt performance when the import is performed repeatedly. So always give names to your constraints!

Monday, April 2, 2007

Oracle OLEDB connection string

this one's from oracle's own documentation

"Provider=OraOLEDB.Oracle;User ID=user;Password=pwd;Data Source=constr;"

other parameters include

CacheType - specifies the type of cache used to store the rowset data on the client.
ChunkSize - specifies the size of LONG or LONG RAW column data stored in the provider's cache.
DistribTX - enables or disables distributed transaction enlistment capability.
FetchSize - specifies the size of the fetch array in rows.
OSAuthent - specifies whether OS Authentication will be used when connecting to an Oracle database.
PLSQLRSet - enables or disables the return of a rowset from PL/SQL stored procedures.
PwdChgDlg - enables or disables displaying the password change dialog box when the password expires.
OLEDB.NET - enables or disables compatibility with OLE DB .NET Data Provider. See "OLE DB .NET Data Provider Compatibility".