Monday, April 23, 2007

Oracle object definitions

To get the DDL of any object in the database, the DBMS_METADATA package can be used.

DBMS_METADATA.GET_DDL
(object_type IN VARCHAR2,
name IN VARCHAR2,
schema IN VARCHAR2 DEFAULT NULL,
version IN VARCHAR2 DEFAULT 'COMPATIBLE',
model IN VARCHAR2 DEFAULT 'ORACLE',
transform IN VARCHAR2 DEFAULT 'DDL')
RETURN CLOB;

Oracle Documentation

usage example:
select DBMS_METADATA.GET_DDL('VIEW','ALL_CONSTRAINTS','SYS') from dual;
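Note that in SQL*Plus the returned CLOB gets truncated to the current LONG setting, so raise it first if the DDL comes out cut off:

set long 100000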

Thursday, April 5, 2007

notes from Tom Kyte's book

  • SQLPlus environment settings

Under [oracle_home]\sqlplus\admin there is a glogin.sql file, which is run each time you log in to a database.
Here are some commands that set a couple of useful things up:
--enable the DBMS_OUTPUT and set the buffer size to 100000 bytes
set serveroutput on size 100000
--trim the spaces, set pagesize and line length
set trimspool on
set long 5000
set linesize 100
set pagesize 9999
column plan_plus_exp format a80
--set the prompt to "[user]@[dbname]>"
column global_name new_value gname
set termout off;
select lower(user) || '@' || global_name as global_name from global_name;
set sqlprompt '&gname> '
set termout on
  • set up the PLAN_TABLE and AUTOTRACE
I already had PLAN_TABLE$ in the SYS schema in 10g XE, but if it's not there, create it using the utlxplan.sql script from the rdbms\admin directory.
Then create a public synonym with the same name and issue GRANT ALL ON PLAN_TABLE TO PUBLIC;
the explain plan statement can then be used from any schema.
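A minimal sketch of those steps, assuming the 10g layout where SYS owns PLAN_TABLE$ (in 10g the synonym may already exist; adjust the names if you created the table with utlxplan.sql):

connect / as sysdba
create public synonym plan_table for sys.plan_table$;
grant all on plan_table to public;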
  • set the AUTOTRACE in SQLPlus
first create the PLUSTRACE role by running /sqlplus/admin/plustrce.sql with SYS privileges,
then run GRANT PLUSTRACE TO PUBLIC (see the sketch after the command list)

to use the autotrace issue one of the following:
set autotrace on
set autotrace off
set autotrace on explain
set autotrace on statistics
set autotrace traceonly
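Put together, the setup from the steps above looks roughly like this (@? is the SQL*Plus shorthand for ORACLE_HOME):

connect / as sysdba
@?/sqlplus/admin/plustrce.sql
grant plustrace to public;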
  • avoid using long-running transactions in an MTS environment
  • using bind variables lets already-compiled SQL statements be reused from the shared pool. Otherwise they are hard-parsed over and over again.
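A minimal sketch of the difference, assuming the classic SCOTT.EMP demo table:

declare
  v_cnt number;
begin
  -- literal: a new hard parse for every distinct value
  execute immediate 'select count(*) from emp where empno = 7369' into v_cnt;
  -- bind variable: parsed once, then reused from the shared pool
  execute immediate 'select count(*) from emp where empno = :n' into v_cnt using 7369;
end;
/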
  • a B*-tree based index does not index NULL values (entries where all indexed columns are NULL are not stored), so a WHERE x IS NULL predicate cannot use it

  • Memory
The PGA belongs to a process, the UGA to a session. If a dedicated server is used, the UGA is located in the PGA; in case of MTS, the UGA is part of the SGA. sort_area_size is part of the PGA, sort_area_retained_size is part of the UGA. To get the PGA and UGA statistics for the current session, run:

select a.name, b.value
from v$statname a join v$mystat b
on (a.statistic# = b.statistic#) where a.name like '%ga %';
The SGA consists of: the java pool, the large pool, the shared pool and the null pool.
The null pool holds the fixed SGA, the buffer cache and the redo log buffer.
The shared pool keeps all the compiled SQL and PL/SQL objects. To use the shared pool optimally, always use bind variables; otherwise the shared pool grows too large, and handling an oversized shared pool takes a lot of processor time, which leads to a dramatic system slowdown. Using the dbms_shared_pool utility you can make certain objects stay in the shared pool forever; otherwise unused objects are removed from it when it gets full (a sketch follows below).
The shared_pool_size init parameter is always smaller than the actual shared pool size.
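A sketch of the pinning, assuming the dbms_shared_pool package is installed (rdbms\admin\dbmspool.sql):

begin
  -- 'P' means package/procedure/function; the object will not be aged out
  dbms_shared_pool.keep('SYS.STANDARD', 'P');
end;
/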
The large pool is meant for larger memory structures than the shared pool holds. It is also not a cache: once a piece of memory has been used, it may be overwritten. The large pool keeps things like the UGA (which lives in the SGA when MTS is used), parallel execution buffers and RMAN I/O buffers.
to get SGA statistics, run this in sqlplus:

compute sum of bytes on pool
break on pool skip 1
select pool, name, bytes
from v$sgastat
order by pool, name;
a simpler report: show sga

  • Locks
It could be a problem (at least in 8i) that a child table got locked if it had no index on the foreign key and the primary key of the referenced table was changed. To locate tables which have foreign keys without indexes, use this script: no_index_fks.sql
This was improved in 9i, but creating indexes on foreign keys is still a good idea.
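A minimal sketch of such a check, simplified to single-column foreign keys (the full script also handles multi-column keys):

-- foreign key constraints whose column has no index with it as the leading column
select c.table_name, cc.column_name, c.constraint_name
  from user_constraints c
  join user_cons_columns cc on cc.constraint_name = c.constraint_name
 where c.constraint_type = 'R'
   and not exists (select null
                     from user_ind_columns ic
                    where ic.table_name = cc.table_name
                      and ic.column_name = cc.column_name
                      and ic.column_position = 1);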

TX - transaction lock. It is set when a transaction starts modifying rows and is kept until the transaction issues commit or rollback. The initial number of transactions that can concurrently lock data in a block is set by the INITRANS parameter of the CREATE statement (default 2). The maximum number is set by MAXTRANS (default 255), but it may also be limited by the free space in the block header.
To get information on TX locks, v$lock can be queried

select username,
v$lock.sid,
trunc(id1/power(2,16)) rbs,
bitand(id1,to_number('ffff','xxxx'))+0 slot,
id2 seq,
lmode,
request
from v$lock, v$session
where v$lock.type = 'TX'
and v$lock.sid = v$session.sid
and v$session.username = USER
/
Here the ID1 and ID2 fields contain the transaction id. Since three numbers are packed into those two fields, some math tricks are needed to extract them. You can see that rbs, slot and seq match the values from
select XIDUSN, XIDSLOT, XIDSQN from v$transaction;
TM lock - protects an object from being modified by DDL statements while a transaction is modifying data in it. The same query as above can be used; just change the condition to type = 'TM' and select the plain values of ID1 and ID2. Here ID1 contains the id of the object being locked, which matches the object id from
column object_name format a20
select object_name, object_id from user_objects;

it is also possible to forbid such DDL statements altogether by using the DML_LOCKS init parameter, or to do the same for certain tables with ALTER TABLE [table] DISABLE TABLE LOCK (see the sketch below)
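A quick sketch of the per-table variant, assuming a scratch table t:

alter table t disable table lock;
drop table t;                      -- fails with ORA-00069: cannot acquire lock
alter table t enable table lock;   -- DDL is possible again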

  • DDL Locks
Important! Every DDL statement is wrapped between two commit calls, like this:

begin
  commit;
  -- the DDL statement itself goes here
  execute the DDL;
  commit;
exception
  when others then rollback;
end;
This is done so as not to roll back all the previous work in case the DDL fails. So it is important to remember that DDL silently issues a commit.
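The silent commit is easy to demonstrate, assuming a one-column scratch table t:

insert into t values (1);
create table t2 (x int);   -- the DDL commits the insert implicitly
rollback;                  -- too late, there is nothing left to roll back
select count(*) from t;    -- the row is still there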
To get the list of all DDL locks held at the moment, use the DBA_DDL_LOCKS view.
  • Transactions
To perform a partial rollback in case of an error, the SAVEPOINT operator must be used. Like this:
begin
  savepoint sp1;
  -- [some pl/sql stuff]
exception
  when others then
    rollback to sp1;
end;

INSERTS are handled by Oracle in the following way:
savepoint statement1
insert blabla
if error then rollback to statement1
savepoint statement2
insert blabla
if error then rollback to statement2
So when an error occurs in some INSERT statement, only the portion of the transaction related to that statement is rolled back (statement-level atomicity).
  • Tables
A trick to get the create statement for a table:
exp user/pass tables=[tablename,..]
imp user/pass full=y indexfile=[filename]
you will get the full create statements inside the index file (note that indexfile is an imp parameter: export first, then import against the dump with indexfile set).
  • Temp tables
the table object is kept in the data dictionary forever, but the data inside it is visible only to the current session (or transaction)
To generate table statistics, so that the CBO would optimize queries against temp tables correctly, you can create a normal table with the same name and then export/import the statistics using the dbms_stats package (a sketch follows).
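A minimal sketch of the idea; here dbms_stats.set_table_stats hands the CBO representative numbers directly (the export/import route described above works similarly, just with more steps):

create global temporary table gtt (x int) on commit preserve rows;
begin
  -- pretend the table typically holds ~10000 rows in ~100 blocks
  dbms_stats.set_table_stats(user, 'GTT', numrows => 10000, numblks => 100);
end;
/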
  • Indexes
B*
The most common index type. Stores the indexed values in a tree structure.
If several columns are used in an index and values in some of them repeat a lot, the index can be compressed using the compress n option. The index structure then takes less space and index reads require less IO. On the other hand, index operations require more processor time.
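For example, with a two-column index whose leading column repeats a lot (table and column names hypothetical):

create index t_owner_name_idx on t (owner, object_name) compress 1;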
B* in descending order
Indexes can be stored in ASC or DESC order. This is only needed when an index has several columns and you need to read them in different orders, like: order by key1 DESC, key2 ASC. You specify the desired order when creating the index.
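For that order by key1 DESC, key2 ASC example, the index would be created like this (names hypothetical):

create index t_mixed_idx on t (key1 desc, key2 asc);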
BITMAP
Use a bitmap index when you have a very non-unique (low-cardinality) column. Drawback: when a row is updated, many other rows covered by the same bitmap entry get locked at the same time.
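For example, on a low-cardinality column (table and column hypothetical):

create bitmap index emp_gender_idx on emp (gender);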

Other notes concerning indexes.
When using something like select * from t where x = 5, but x is of a character type, the statement will be silently rewritten as select * from t where to_number(x) = 5, and of course in that case the index will not be used.

  • EXP and IMP
Indexes with SYS_blabla (automatically generated) names will not be exported!

Constraints with automatically generated names will be imported even if such a constraint already exists: oracle will simply generate a new name for it. This can hurt performance when the import is performed repeatedly, since duplicate constraints pile up. So always give names to constraints!

Monday, April 2, 2007

Oracle OLEDB connection string

this one's from oracle's own documentation

"Provider=OraOLEDB.Oracle;User ID=user;Password=pwd;Data Source=constr;"

other parameters include

CacheType - specifies the type of cache used to store the rowset data on the client.
ChunkSize - specifies the size of LONG or LONG RAW column data stored in the provider's cache.
DistribTX - enables or disables distributed transaction enlistment capability.
FetchSize - specifies the size of the fetch array in rows.
OSAuthent - specifies whether OS Authentication will be used when connecting to an Oracle database.
PLSQLRSet - enables or disables the return of a rowset from PL/SQL stored procedures.
PwdChgDlg - enables or disables displaying the password change dialog box when the password expires.
OLEDB.NET - enables or disables compatibility with OLE DB .NET Data Provider. See "OLE DB .NET Data Provider Compatibility".

Monday, February 12, 2007

killing sessions and OS processes

been dealing with hung sessions today.
I'm still not sure why they are being left there hanging. They are just ready to wait eternally for the "SQL*Net message from dblink" event. I guess the remote instance is failing or something... I don't have access to that server, so I don't know what's happening there, and the admins are Finns - pretty quiet types, they don't talk much. Anyway. I was facing the problem of having a big number of sessions either waiting for the dblink or for a client message, and I KNOW that neither is coming. So a need for mass killing appears :) You can kill oracle sessions with


ALTER SYSTEM KILL SESSION '[sid],[serial#]' IMMEDIATE;

But after you kill it like this, it still stays in the v$session view with the 'KILLED' status, and you cannot get rid of it other than by killing the process at the OS level. That is because PMON waits for the client to issue its next command (so it can send back the "your session has been killed" message) before it can actually eliminate the process. Quite polite, but sometimes you just know that the client is not coming back... For this matter there is the orakill.exe tool (for Windows). Used like this:

orakill [instance_name] [spid]

This one will kill the thread inside the oracle.exe process. You need to pass the spid of the process, taken from the v$process view.

So there were two problems in front of me:

  1. To kill all the sessions started by a specific program.
  2. To kill the processes so that all the resources and locks would be freed.

To solve the first one I coded a nice PL/SQL procedure:


CREATE OR REPLACE PROCEDURE KILL_SESSIONS_BY_PROGRAM (in_program IN varchar2) IS
  CURSOR sessions_cur IS
    select s.sid, s.serial#
      from v$session s
     where lower(s.program) = lower(in_program)
       and s.status != 'KILLED';
  sessions_rec      sessions_cur%ROWTYPE;
  sess_to_kill      varchar2(50);
  cmd_string        varchar2(100);
  e_marked_for_kill exception;
  -- ORA-00031: session marked for kill
  PRAGMA EXCEPTION_INIT(e_marked_for_kill, -31);
begin
  OPEN sessions_cur;
  LOOP
    FETCH sessions_cur INTO sessions_rec;
    IF sessions_cur%ROWCOUNT = 0 AND sessions_cur%NOTFOUND THEN
      RAISE_APPLICATION_ERROR(-20001, 'NO sessions found!');
    END IF;
    EXIT WHEN sessions_cur%NOTFOUND;
    sess_to_kill := TO_CHAR(sessions_rec.sid) || ',' || TO_CHAR(sessions_rec.serial#);
    cmd_string   := 'ALTER SYSTEM KILL SESSION ''' || sess_to_kill || ''' immediate';
    begin
      DBMS_OUTPUT.PUT_LINE(cmd_string);
      execute immediate cmd_string;
    exception when e_marked_for_kill then
      null;  -- the session is already marked for kill, move on
    end;
  END LOOP;
  CLOSE sessions_cur;
end;
/

And for the second one I made a small SQLPlus script, which generates a .cmd file containing as many orakill commands as needed. Here it is:


CLEAR BUFFER
set heading off;
set feedback off;
variable sess refcursor;
begin
open :sess for select 'orakill &instance ' || p.spid
from v$session s, v$process p
where s.paddr = p.addr
and lower(s.program)=lower('&in_program')
and s.status = 'KILLED';
end;
/
spool h:\sql\kill.cmd;
print sess;
spool off;

Thursday, February 8, 2007

privileges in bulk

have been dealing with oracle privileges today for the first time seriously.

It turned out that granting privileges is quite a pain (as everything else) in oracle. I needed to grant the select (or any other) privilege on all tables in a schema at once, but the standard oracle command can only grant privileges on a single object. So I had to write my own procedure, which grants privileges in bulk. Here it is:


CREATE OR REPLACE PROCEDURE GRANT_BULK_PRIV (inSchema  IN varchar,
                                             inObjType IN varchar,
                                             inPriv    IN varchar,
                                             inUser    IN varchar) IS
/******************************************************************************
   NAME:    GRANT_BULK_PRIV
   PURPOSE: to grant privileges in bulk
   PARAMETERS:
      inSchema:  name of the schema where the objects reside (must be in upper case)
      inObjType: the oracle-defined type of the object (in upper case)
      inPriv:    what privilege to give (oracle-defined privilege name)
      inUser:    a user or a role to whom the privilege will be granted
   Sysdate: 8.02.2007
******************************************************************************/
  cursor allobjects (inOwner IN varchar, inType IN varchar) IS
    select o.object_name, o.status
      from sys.all_objects o
     where o.object_type = inType
       and o.owner = inOwner
       and o.status = 'VALID';
  allobjects_rec allobjects%rowtype;
  cmdtext        varchar(255);
BEGIN
  OPEN allobjects (inSchema, inObjType);
  LOOP
    FETCH allobjects INTO allobjects_rec;
    EXIT WHEN allobjects%NOTFOUND;
    cmdtext := 'grant ' || inPriv || ' on ' || inSchema || '."' ||
               allobjects_rec.object_name || '" to ' || inUser;
    --DBMS_OUTPUT.PUT_LINE(cmdtext);
    execute immediate cmdtext;
  END LOOP;
  CLOSE allobjects;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    NULL;
  WHEN OTHERS THEN
    -- Consider logging the error and then re-raise
    RAISE;
END GRANT_BULK_PRIV;
/
/



usage like this:
execute grant_bulk_priv ('SCHEMA', 'TABLE', 'SELECT', 'SOME_ROLE');

Monday, January 29, 2007

oracle archive log destinations

been recently having a problem with oracle: the disk was getting full with archive logs, and oracle just stopped responding until the archiver could save the log and the redo log could be switched. So I thought I should add an alternate location, to be used if the primary one becomes invalid.

So I read the manual and did like it says:

firstly:


The first method is to use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 10) to specify from one to ten different destinations for archival. Each numerically-suffixed parameter uniquely identifies an individual destination.

secondly:


The LOG_ARCHIVE_DEST_STATE_n (where n is an integer from 1 to 10) initialization parameter allows you to control the availability state of the specified destination (n). The destination state can have three values: ENABLE, DEFER, or ALTERNATE. The value ENABLE indicates that Oracle can use the destination, whereas DEFER indicates that the location is temporarily disabled. The third value, ALTERNATE, means that the destination is an alternate. Its availability state is DEFER, unless there is a failure of its parent destination, in which case its state becomes ENABLE.
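Put together, such settings would look roughly like this in init.ora (paths hypothetical):

# primary destination; if it fails, fall over to destination 2
log_archive_dest_1 = 'LOCATION=C:\oracle\arch1 MANDATORY ALTERNATE=log_archive_dest_2'
log_archive_dest_2 = 'LOCATION=D:\oracle\arch2'
log_archive_dest_state_1 = enable
log_archive_dest_state_2 = alternate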


So after I added parameters to my init.ora file accordingly, the database wouldn't start!

Alert log didn't contain any errors, it would simply end with this line:
Using log_archive_dest parameter default value


But when trying to start the database from sqlplus with startup nomount, it gives this error:
ORA-00439: feature not enabled: Managed Standby
So it turns out that Managed Standby is a feature of Enterprise Edition ONLY! And LOG_ARCHIVE_DEST_n depends on that feature and therefore cannot be used with Standard Edition.
So I have to make do with the LOG_ARCHIVE_DEST parameter and just make the disk larger... Can't think of any other good solution now...

The Administrator's guide doesn't say a word about this. It does have a link to the 9i Reference though, which has a note about this dependency trick:
This parameter is valid only if you have installed Oracle Enterprise
Edition. You may continue to use LOG_ARCHIVE_DEST if you have installed Oracle
Enterprise Edition. However, you cannot use both LOG_ARCHIVE_DEST_n and
LOG_ARCHIVE_DEST, as they are not compatible.

Tuesday, January 23, 2007

rich text editor for web

lately I have been looking for some free and cross-browser rich text editors. Out of all the variety, I picked out two which seemed the best:

1. TinyMCE
http://tinymce.moxiecode.com/
pros:
nice interface, loads faster, has a lot of commands (a huge number of them). It's practically a WinWord editor on the web. *amazed* Has an API. All browsers supported (even opera and safari)
cons:
the image and file browsers are commercial. It is possible to link your own filebrowser as a plugin, but I couldn't find any freeware ones. And who wants to code, if there is something ready and for free? Therefore..

2. FCKeditor
http://www.fckeditor.net/

pros: also very powerful, a lot of commands, API. And has a free file and image browser included.
cons: interface is a little bit slow and loads a little bit slower than Tiny. Neither Opera nor Safari are supported.


I personally have picked the FCKeditor, mostly because of the free filebrowser. And it's not my problem if someone likes to use stupid browsers ;)
Otherwise I would have picked the TinyMCE...

Hello World! :)

here comes the first post.

I have been into some ajax programming lately. Feels like this is the future of application development as a whole. Google's spreadsheet has totally amazed me. There is even some server->client communication. I wonder how that is implemented... Got to learn it someday.

Today I have been into some beginner programming using jQuery (http://jquery.com/)
It's really easy to learn and it saves a lot of time when it comes to DHTML tricks.

The best thing about it is that you don't have to specify events on objects using something like onclick="foo()". With jQuery you set the events from the script area. And what is really nice is that you can select many objects at once (and do it very easily), using XPath and/or CSS.

So for example you set up a click event on a button called "btn1" like this:

$("button[@name=btn1]").click(function(){ alert("button clicked!") });

this will register an "onclick" event with the button whose name attribute equals "btn1"

But what I was fighting with today was: how to reference the parent window from inside an iframe?
Turned out to be like this:

parent.$("element").action()

So it was as easy as putting "parent" in front of the $ sign.
Oh and also that the parent page must include the jquery source too.

In particular, what I built today is a script which uploads pictures "on the fly", without a page reload, and immediately displays the newly uploaded picture on the page. For this I used an iframe with an upload form inside it. After the form submits, the php script saves and registers the picture in the db and outputs some jQuery code, which dynamically adds the new picture thumb to the parent frame (this is why I needed to reference the parent window with jQuery). So it looks as if the picture were uploaded totally dynamically, without a page reload.

Soon my work will be done, I'll post a link to it.