Friday, December 14, 2007

MATVIEWS

first experience with materialized views

1. the refresh needs a hell of a lot of UNDO space and produces tons of archivelog
2. if the mview was created before the MVIEW LOG, then you cannot perform a FAST refresh. You need to perform a COMPLETE refresh first. Otherwise you will get ORA-12034: materialized view log on "%s"."%s" younger than last refresh. Metalink Note:365157.1 discusses this problem.
3. to refresh it manually, use the DBMS_MVIEW package's REFRESH procedure (see the sketch after this list)
DBMS_MVIEW.REFRESH('[MVIEW NAME]','C'); --where C is for Complete
4. the MVIEW must be registered with its MASTER site and be listed in the SYS.SLOG$ table.
5. It is much cheaper to drop and re-create the m.view than to perform a complete refresh!
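A minimal sketch of a manual refresh for point 3, with the method passed explicitly (the MV name SCOTT.MV_EMP is made up for the example):
BEGIN
  -- 'C' = complete, 'F' = fast, '?' = force (fast if possible, otherwise complete)
  DBMS_MVIEW.REFRESH(list => 'SCOTT.MV_EMP', method => 'C');
END;
/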

How to monitor the progress of a materialized view refresh (MVIEW)
MATERIALIZED VIEW REFRESH: Locking, Performance, Monitoring

Tuesday, December 11, 2007

export/import considerations

some things to consider when performing export/import operations:

- turn off the archive logging for the time of IMPORT if possible
- JOBS are created in the schema of the user who is running the import
- user must have the IMP_FULL_DATABASE privilege to be able to import JOBS
**I'm not sure exactly which privilege is necessary here. But tests show that if there's no IMP_FULL_DATABASE role, then Data Pump simply ignores the JOB objects in the dump**
- when performing a SCHEMA import, it might be reasonable to import the USER, ROLE_GRANTS, SYSTEM_GRANTS objects in a first step logged in as a DBA, and all the rest in a second step as the schema user himself (so that JOBS are created in the correct schema); see the sketch below
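A rough sketch of that two-step approach with Data Pump (directory, file and schema names are made up; the exact INCLUDE/EXCLUDE object paths may need adjusting for your version):
$ impdp system/manager schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp include=USER,ROLE_GRANT,SYSTEM_GRANT
$ impdp scott/tiger schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp exclude=USER,ROLE_GRANT,SYSTEM_GRANT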

Wednesday, November 7, 2007

create pfile from spfile without a running instance

when you've messed up an instance and need to change some parameters in your PFILE, but all you have is an SPFILE, which is not editable, you can still get yourself a pfile if you're running on Unix. There's a
strings
command, which will extract the printable strings from the spfile.
$ strings spfile$ORACLE_SID.ora > pfile$ORACLE_SID.ora

Wednesday, October 24, 2007

cursor cache

An article at orafaq on cursors

Main point is that the
OPEN_CURSORS
parameter limits the number of currently open cursors for each session. If this number is exceeded, the session gets an error. And
session_cached_cursors
is something totally different and independent from open_cursors. The idea behind the cursor cache is to hold "popular" cursors open. If a cursor is parsed every time it is opened, it causes latch contention and concurrency starts to suffer. So to protect the database from badly written code, you can cache the cursors. The number of cached cursors is set by session_cached_cursors. The value is per-session. It can be set system-wide (needs a restart) or by ALTER SESSION. As I have understood, the cursor cache is part of the PGA, so increasing this parameter will not affect the shared pool.
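A quick way to watch both numbers per session (a sketch joining v$sesstat on the statistic names 'opened cursors current' and 'session cursor cache count'):
select s.sid, s.username, oc.value open_cursors, cc.value cached_cursors
from v$session s,
     (select ss.sid, ss.value from v$sesstat ss, v$statname sn
      where ss.statistic# = sn.statistic# and sn.name = 'opened cursors current') oc,
     (select ss.sid, ss.value from v$sesstat ss, v$statname sn
      where ss.statistic# = sn.statistic# and sn.name = 'session cursor cache count') cc
where s.sid = oc.sid and s.sid = cc.sid;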

Wednesday, October 17, 2007

export/import through gzip

1. create a pipe:
$ mknod gzpipe p
2. start gzip
$ gzip -c < gzpipe > exp.gz &
3. run exp
$ exp scott/tiger file=gzpipe ... 
import would then be:
$ gunzip -c < exp.gz > gzpipe &
$ imp scott/tiger file=gzpipe ...
Unfortunately piping cannot be used with 10g Data Pump. Here's what an Oracle whitepaper says:
Can I use gzip with Data Pump?
Because Data Pump uses parallel operations to achieve its high performance, you
cannot pipe the output of Data Pump export through gzip. Starting in Oracle
Database 11g, the COMPRESSION parameter can be used to compress a Data Pump
dump file as it is being created. The COMPRESSION parameter is available as part
of the Advanced Compression Option for Oracle Database 11g.

Tuesday, October 16, 2007

How to change database SID

dbnewid (nid) is an Oracle utility which enables you to change the DBNAME and/or DBID of a database.

You can change either the DBNAME or DBID or both.

dbnewid does not change the global name of the database. to do that the following command must be run:
ALTER DATABASE RENAME GLOBAL_NAME TO [name].[domain];

Also you must manually change the DB_NAME instance parameter and recreate the password file. And if you changed the DBID, the database must be opened with the RESETLOGS option (see the sketch after the examples below).

SHUTDOWN IMMEDIATE
STARTUP MOUNT


1. changing only database name:
% nid TARGET=SYS/oracle@test_db DBNAME=test_db SETNAME=YES

2. changing only dbid:
% nid TARGET=SYS/oracle@test_db
3. changing both:
% nid TARGET=SYS/oracle@test DBNAME=test_db
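A sketch of the follow-up steps described above, for the case where the DBNAME (and possibly the DBID) was changed (names are examples):
STARTUP NOMOUNT
ALTER SYSTEM SET db_name = 'test_db' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
-- recreate the password file from the OS shell, e.g.: orapwd file=orapwtest_db password=oracle
STARTUP MOUNT
ALTER DATABASE OPEN RESETLOGS;   -- RESETLOGS is needed only if the DBID was changed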

Wednesday, September 26, 2007

EM Grid Control target discovery. How to add more targets after the agent is already there?

The article which explains it all: Metalink Note:239224.1

In short, the general points are:
1. There are a number of Perl scripts in the sysman/admin/discover directory, which are run by the agent configuration assistant at the end of the agent installation.
2. The discovery scripts search for all kinds of targets in all the known places (TNS_ADMIN, oratab, etc.) and create the targets.xml file, which contains the list of discovered targets in XML format. The file is saved into the /sysman/emd directory.
3. If you need to add more targets after the agent installation, you initiate the discovery process from within Grid Control itself, by clicking "Add" in the targets view.
4. As a last resort you could also try running the Perl scripts manually, but this is not recommended.
5. You can even edit the targets.xml file, add the targets manually and then run >emctl reload

Friday, September 21, 2007

Unsetting the instance parameters

Following syntax is used to unset a parameter:
SQL> alter system reset [parameter_name] scope=spfile sid='*';

where "sid" is the sid of the instance which needs to be affected in a RAC environment. In case of a single instance use "*".

Tuesday, September 4, 2007

What does it mean if SERVER = 'NONE' in v$session?

Definition:
SERVER VARCHAR2(9) Server type (DEDICATED| SHARED| PSEUDO| NONE)

From metalink:
In an MTS/Shared Server configuration, when you see the value 'NONE' (as shown below), it means there is no task being processed by a shared server for that session. The SERVER column will in fact show 'SHARED' if there is some task being processed at that particular time by a shared server process for that session.

This is also supported by the fact that the STATUS of a session is always 'ACTIVE' for sessions with SERVER='SHARED' and 'INACTIVE' for those with SERVER='NONE'.
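A quick way to check this on a shared server instance:
select sid, username, server, status from v$session where username is not null;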

Wednesday, August 15, 2007

... LOBs

Initial problem was: "LOB segment is wasting disk space, need to shrink it". So how can one investigate if there is some "wasted" space inside a LOB and how to reclaim it?

Note:386341.1 is the Metalink article that helped me quite a bit with this

To get the LOB segment size, use the dba_lobs and dba_segments views.
First find out the name of the LOB segment:
select segment_name,table_name,column_name,tablespace_name from dba_lobs where table_name = '[table_name]';
Then the size on disk:
select blocks, bytes from dba_segments where segment_name = '[lob_system_segment_name]';
To get to know how much data is stored in this LOB:
select sum(dbms_lob.getlength([lob_column])) from [table];
If the last number is much smaller than the previous one, then apparently some space is wasted and could be reclaimed. If you have been deleting rows from the LOB's parent table, then the LOB segment blocks (or chunks) are left in a "deleted" state. Theoretically this space will be reused by subsequent inserts [upd: tested and it really does work like that]. But if you need to shrink the LOB and reclaim the space for other segments, then you have the following options.

First, you will have to rebuild the freepools, using the following command:
alter table [table name] modify lob([lob column name])
(freepools [number of free pools]);
In a RAC configuration the number of freepools = the number of RAC instances, otherwise it is 1 (or it can be looked up in the dba_lobs view; if you see NULL there, then 1 is your choice).

:(((( Although this did not work with my LOB...

There is also a special keyword for this in ALTER TABLE syntax in 10.2:
alter table [table_name] modify lob [lob_column] (freepools rebuild);
After this you should see the free blocks either using DBMS_SPACE.SPACE_USAGE (in the case of an ASSM tablespace) or with DBMS_SPACE.FREE_BLOCKS in the case of manual segment space management. You can also use Tom Kyte's very convenient and useful show_space() procedure, which actually uses the same DBMS_SPACE package but detects automatically which kind of segment space management you have and acts appropriately.
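For the ASSM case, a minimal sketch of calling DBMS_SPACE.SPACE_USAGE against the LOB segment (the segment name below is made up; take the real one from dba_lobs):
set serveroutput on
DECLARE
  l_unf NUMBER;  l_unfb NUMBER;
  l_fs1 NUMBER;  l_fs1b NUMBER;
  l_fs2 NUMBER;  l_fs2b NUMBER;
  l_fs3 NUMBER;  l_fs3b NUMBER;
  l_fs4 NUMBER;  l_fs4b NUMBER;
  l_full NUMBER; l_fullb NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE(
    segment_owner => 'SCOTT', segment_name => 'SYS_LOB0000012345C00002$$', segment_type => 'LOB',
    unformatted_blocks => l_unf, unformatted_bytes => l_unfb,
    fs1_blocks => l_fs1, fs1_bytes => l_fs1b,
    fs2_blocks => l_fs2, fs2_bytes => l_fs2b,
    fs3_blocks => l_fs3, fs3_bytes => l_fs3b,
    fs4_blocks => l_fs4, fs4_bytes => l_fs4b,
    full_blocks => l_full, full_bytes => l_fullb);
  -- for a LOB segment "full" means used chunks, the FSx counters show the free ones
  DBMS_OUTPUT.PUT_LINE('used blocks: ' || l_full || ', free blocks: ' || (l_fs1 + l_fs2 + l_fs3 + l_fs4));
END;
/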

Now, to actually reclaim the space from a LOB back into the tablespace, starting with 10.2 you have the wonderful shrink space command:
alter table [table name] modify lob([lob column name]) (shrink space [cascade]);
If lower than 10.2 then the only option is to rebuild the LOB:
alter table [table name] move lob ([lob column name]) store as (tablespace [tablespace name]);
This is done online, so no need to bring anything off-line.

Now, why did this not work with my LOB, even though the difference between the "real" data size and the size of the segment itself was more than 2 times?
The truth was that there were actually no bytes wasted. It was just normal, since the field was of NCLOB type and Oracle stores NCLOBs in a UCS2-compatible internal encoding, which has a fixed width of 2 bytes per character. What the dbms_lob.getlength() function was returning was the number of characters. So this is where the difference was coming from, and this is why you have to multiply the result by 2 if you are using NCLOBs.
But what got me totally distracted was that when I performed a test with CLOBs, which had 2K chunks, inserting 2K into each, I was also getting a double-sized LOB segment. The reason was revealed by Jonathan Lewis himself on comp.databases.oracle.server
Here's what he wrote:
[A 2KB block allows 1,988 bytes of space for LOB
storage - the remaining 60 bytes is the standard block
overhead plus a few bytes of data specific to the LOB,
such as the object number, LOB ID, and SCN. So if
your character set is single byte then you can get 1,988
characters per block.]
It must also be kept in mind that a LOB segment doesn't use the PCTFREE/PCTUSED parameters; it always allocates a full chunk. So (roughly) if you have 8K chunks and you store 2K of data in each row, then you will have 6K of disk space wasted per row.

A very useful document by Tanel Põder, which explains a lot about LOB internals, can also be found here.

Some sqls needed to perform test with LOBs.

To create a table with a CLOB stored in a segment named 't1_lob', with a 2K chunk size (you need a 2K buffer cache and a tablespace with a 2K block size called 'smallblock'). Make the LOB out-of-line and leave no space for concurrency:
create table t1 (i int, b clob)
lob (b) store as t1_lob (
chunk 2K
disable storage in row
tablespace smallblock
pctversion 0);
To fill the table:
begin
for i in 1..1000 loop
insert into t1 values (mod(i,5), rpad('*',1988,'*'));
end loop;
end;
/

To load CLOB from a file:
declare
nc clob;
begin
select b into nc from t1 where i = 1 for update;
loadCLOB1_proc(nc,'ROOT_DIR','test.txt');
update t1 set b = nc where i = 1;
end;
/
The loadCLOB1_proc source

Other LOB-related examples can be found in $ORACLE_HOME/rdbms/demo/lobs/plsql

Wednesday, August 1, 2007

Parallel query execution

helpful article: metalink docid: Note:237287.1
oracle docs

first check if the parallel query option is enabled:
At SQLPLUS
==========
- SQL>select * from v$option;

PARAMETER VALUE
-------------------------
Parallel execution TRUE

General stats:
 select *  from  V$PX_PROCESS_SYSSTAT;
your own:
 select * from v$pq_sesstat;
To make a query use parallelism, use one of these three options (a quick verification follows the list):
  • set the degree of parallelism for the table: ALTER TABLE t PARALLEL 2;
  • use ALTER SESSION FORCE PARALLEL QUERY
  • give a hint within the sql itself: SELECT /*+ PARALLEL(t,2) */ * from t;
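A quick check that parallelism actually kicked in for your session (the table name t is a placeholder):
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4;
SELECT /*+ PARALLEL(t,4) */ COUNT(*) FROM t;
SELECT * FROM v$pq_sesstat;   -- "Queries Parallelized" should now be non-zero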

Wednesday, July 18, 2007

Temporary tablespaces

Restriction on Taking Tablespaces Offline: You cannot take a temporary tablespace offline.

To get an overview of temp tablespace usage:
select TABLESPACE_NAME,CURRENT_USERS,SEGMENT_FILE,USED_EXTENTS,FREE_EXTENTS from v$sort_segment;

select username, user, contents, segtype, tablespace from v$tempseg_usage;
If you want to drop a non-default temporary tablespace, you issue something like:
> drop tablespace tempt including contents and datafiles;
At least in 10.2 the previous statement hung until the temporary segments (temp tables in my case) were cleared.
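To see which sessions are still holding segments in that tablespace (and thus blocking the drop), something like this can be used ('TEMPT' is the tablespace from the example):
select s.sid, s.serial#, s.username, u.segtype, u.extents
from v$session s, v$tempseg_usage u
where s.saddr = u.session_addr
and u.tablespace = 'TEMPT';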

Also, after dropping the non-default temp tablespace 'tempt', the temp tables created in there became invalid:
SCOTT@andrkydb> insert into te values (6);
insert into te values (6)
*
ERROR at line 1:
ORA-00959: tablespace 'TEMPT' does not exist

It is also funny that I now can neither use nor drop the temp table 'te':
SCOTT@andrkydb> drop table te;
drop table te
*
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table already in use

The Metalink article (Doc ID: Note:270645.1) says that the table needs to be truncated before the drop. But it still doesn't help. I cannot drop the table.

If I recreate the tablespace 'tempt' I can use the table but still cannot drop it!

Instance restart helped...

This problem doesn't appear if we operate only with default temporary tablespaces.

Wednesday, July 11, 2007

Archive logs FORMAT

You use the LOG_ARCHIVE_FORMAT parameter to specify what the archived log file names will look like. You have the following possible variables:
%s log sequence number
%S log sequence number, zero filled
%t thread number
%T thread number, zero filled
%a activation ID
%d database ID
%r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
There is not a single word in the documentation that some of the variables are actually REQUIRED.
So you set it to something like
ALTER SYSTEM SET log_archive_format='%d_%s_%r.arc' SCOPE=SPFILE;

After restarting the instance you will get an error like this:
ORA-19905: log_archive_format must contain %s, %t and %r
Why not do the check when the parameter is actually set? Instead Oracle chooses to check it on startup and you end up with an instance that won't start. You also cannot restore the spfile from autobackup, because RMAN needs the instance running at least in NOMOUNT mode. The only solution Oracle offers you is to boot with a pfile (no matter where you get it from).
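A format that passes the startup check simply keeps all three mandatory variables, for example:
ALTER SYSTEM SET log_archive_format='%d_%t_%s_%r.arc' SCOPE=SPFILE;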

It also seems that you cannot affect the format of the archive logs saved to DESTINATION 10, which is set to the USE_DB_RECOVERY_FILE_DEST value. It must be that Oracle manages the filenames on its own inside the recovery area.

Tuesday, July 10, 2007

RMAN Point-in-time recovery

Metalink Docid Note:370500.1

RMAN> run {
set until time "to_date('10.07.07 17:23:36','dd.mm.yy hh24:mi:ss')";
restore database;
recover database;
}
This shit is not working if you don't set the NLS_LANG environment variable. It fails with an error like:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0

Also note how the date parameter is passed to SET UNTIL TIME: it's in double quotes for some reason.

Friday, July 6, 2007

Compile invalid objects

Article at orace-base

SQL*Plus script to list invalid objects:
COLUMN object_name FORMAT A30
SELECT owner,
object_type,
object_name,
status
FROM dba_objects
WHERE status != 'VALID'
ORDER BY owner, object_type, object_name;

To compile automatically use one of these:
EXEC UTL_RECOMP.RECOMP_SERIAL(schema=>'SCOTT');

-- if schema is null, then all objects will be compiled

EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');

Wednesday, June 13, 2007

Creating a multiname SSL cert

1. install openSSL
2. generate a private key
openssl genrsa -out key.pem 1024
3. make a conf file, the main idea of which is to have multiple commonName entries and a subjectAltName = DNS:domain entry in the req_extensions.
contents:
[ req ]
default_bits = 1024
default_keyfile = key.pem
distinguished_name = req_distinguished_name
req_extensions = v3_ca
x509_extensions = v3_ca

[ req_distinguished_name ]
countryName = Your Country
countryName_default = EE
stateOrProvinceName = State/Province
stateOrProvinceName_default = Harjumaa
localityName = Locality
localityName_default = Tallinn
organizationName = Organization
organizationName_default = OÜ JOT Eesti
organizationalUnitName = Organizational Unit
organizationalUnitName_default = OÜ JOT Eesti
emailAddress = e-mail
emailAddress_default = syse@elektrobit.com
0.commonName = Common Name (eg, YOUR name)
0.commonName_default = intranet
0.commonName_max = 64
1.commonName = Common Name (eg, YOUR name)
1.commonName_default = intranet.ee.ebgroup.elektrobit.com
1.commonName_max = 64

[ v3_ca ]
subjectAltName = DNS:intranet, DNS:intranet.ee.ebgroup.elektrobit.com



4. generate a request for the CA
openssl req -new -config cert.conf -out cert.req
4.1. feed the cert.req to the CA
5. retrieve the base64 encoded cert
6. generate a p12 file for importing it to the server
openssl pkcs12 -export -in intranetb64.cer -inkey key.pem -out intra.p12 -name "JOT Eesti Intranet"
7. at the webserver open the certificate mmc snapin for Computer and import the p12 file into the Personal storage
8. apply the installed cert to the website using the IIS snapin.

Tuesday, June 5, 2007

Gathering statistics

If database was created using DBCA, an automatic statistics gathering procedure should be already set up. It can be observed under the Scheduler jobs. You need to be logged in as "sys as sysdba" to see it.

To gather statistics manually you can use the ANALYZE SQL command or the DBMS_STATS package. You can also use EM, but that's just an interface to DBMS_STATS.

When gathering statistics you can use the COMPUTE or ESTIMATE option. COMPUTE gathers full statistics, while ESTIMATE only takes a representative sample of the rows in the table, thus performing faster but not guaranteeing the accuracy that COMPUTE gives.

Gathering stats using the DBMS_STATS
Summary of DBMS_STATS subprograms

Most frequently needed are:
GATHER_DATABASE_STATS
GATHER_FIXED_OBJECTS_STATS
GATHER_INDEX_STATS
GATHER_SCHEMA_STATS
GATHER_TABLE_STATS
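For example, a minimal gather of a single table's statistics (SCOTT.EMP is just a placeholder):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);   -- also gathers statistics on the table's indexes
END;
/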

There is also a "DELETE_%" procedure corresponding to each "GATHER_%' procedure.

It is also possible to EXPORT and IMPORT statistics. You could have several different sets of statistics for different purposes.

Monday, June 4, 2007

Monitoring and Advisors

In 10g statistics are gathered automatically by the background processes MMON (manageability monitor) and MMNL (manageability monitor light). By default, every 60 minutes MMON wakes up and saves a snapshot of the statistics from the in-memory views into the AWR (automatic workload repository).

AWR data is owned by SYS and stored in the SYSAUX tablespace.

Amount of statistics gathered is defined by parameter statistics_level
There are three possible options:
BASIC - practically disables statistics gathering, very little is saved.
TYPICAL - standard level
ALL - same as TYPICAL but also keeps the execution plans and timing information from the operating system

The statistics are kept in AWR for 7 days by default.

To change the collection interval and retention period DBMS_WORKLOAD_REPOSITORY package can be used. Like this:

exec dbms_workload_repository.modify_snapshot_settings(interval=>90,retention=>8640);
This sets the interval to 90 minutes and the retention to 8640 minutes (that is, 6 days)

After a statistics snapshot is taken, ADDM (automatic database diagnostic monitoring) analyzes the newly gathered information and compares it with the previous snapshot. By comparing them, it can identify potential performance problems and suggest possible remedies. The main goal is to minimize DB time, which is CPU time and non-idle wait time put together for database users.

To get the list of snapshots:

SELECT * FROM dba_hist_snapshot;
To take snapshot manually:

EXECUTE dbms_workload_repository.create_snapshot('TYPICAL');
To delete snapshots:

exec dbms_workload_repository.drop_snapshot_range(begin_snap_id, end_snap_id);
To get a text ADDM report:

@?/rdbms/admin/addmrpt.sql
Additionally the following views can be queried to get the ADDM findings: DBA_ADVISOR_FINDINGS, DBA_ADVISOR_OBJECTS, DBA_ADVISOR_RECOMMENDATIONS, DBA_ADVISOR_RATIONALE
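To pull the ADDM findings with plain SQL, something like this works (columns as in the 10g DBA_ADVISOR_FINDINGS view):
select task_name, type, impact, message
from dba_advisor_findings
order by impact desc;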

There is also a number of special-purpose advisors which include:

  • SQL Tuning Advisor
Analyzes SQL (recent or from the AWR). SQLs can be combined into SETs and whole SETs sent for analysis.

  • SQL Access Advisor
Analyzes SQL from the schema point of view. Suggests creating indexes and materialized views.

  • Memory Advisor
Advises on 3 memory parameters: buffer cache, shared pool, and PGA. In case of buffer cache performance measure is "relative change in physical reads". For shared pool "relative change in parse time savings" is used and for PGA it is the "cache hit %".

  • Mean Time To Recover Advisor
Here you only set the desired recovery time in seconds and oracle sets a number of internal parameters to make the recovery time as close as possible to your desired value.


  • Segment Advisor
Finds segments which can be shrunk, thus releasing the unused space back to the tablespace. After finding the segments worth shrinking, the following SQLs are executed:
alter table "SCOTT"."T2" enable row movement;
alter table "SCOTT"."T2" shrink space;


  • Undo Management Advisor
Analyzes the UNDO generation rates and suggests a correct retention time and undo tablespace size.

Saturday, June 2, 2007

Locking

Reference: Types of locks.

Developers Guide - Locking Data explicitly

Metalink Doc ID: Note:15476.1 FAQ about Detecting and Resolving Locking Conflicts

SQL*Plus script to query locks information

Locks can be: DML, DDL or internal
There are two basic types of DML locks: row locks and table locks.

Types of tables locks:
ROW SHARE (RS) - obtained when SELECT ... FOR UPDATE is issued. At least that is what the Oracle documentation says. Testing shows that when SELECT FOR UPDATE is used, an RX lock is obtained. So it seems that... I don't know ;)

Basically this type of lock is needed to show that some data is going to be changed in this table, so that no other transaction can drop or modify the table or obtain an EXCLUSIVE lock.

ROW EXCLUSIVE (RX) - same as the previous one, but it also prohibits other transactions from locking the table in SHARE and SHARE ROW EXCLUSIVE modes. It is set automatically when DML statements (such as UPDATE) are used against the table.
This seems reasonable: since a SHARE lock is needed to prohibit any updates, it follows that an update prohibits the SHARE lock from being obtained.

SHARE (S) - prohibits others from obtaining RS and RX locks, thus preventing the table from being updated while the SHARE lock is active. This is needed if table-level read consistency has to be reached. Other sessions can also obtain SHARE locks at the same time. If two or more transactions try to update the table while holding the SHARE lock, this results in a deadlock even if they update different rows. If you need a read-consistent table plus the exclusive right to update it, use the SHARE ROW EXCLUSIVE lock.
I also noticed that if, after obtaining a SHARE lock, you try to update the table, the lock type is automatically converted into SHARE ROW EXCLUSIVE. This is logical: you cannot update the table while others are holding SHARE locks, so you have to get the SHARE + ROW EXCLUSIVE lock type to do that.
Note: This lock is automatically obtained when a CREATE INDEX statement is run.

SHARE ROW EXCLUSIVE (SRX) - prohibits all other locks from being obtained, except for the RS lock. So this is an almost exclusive lock. You get both read consistency and the exclusive right to update the table. The Oracle docs say that others can still issue SELECT ... FOR UPDATE statements on this table, but my tests show that this is not true...

EXCLUSIVE - this one has to be obtained before all of the DDL stuff can be executed.
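All of these modes can also be requested explicitly with LOCK TABLE (one at a time, in separate transactions), which is a handy way to experiment with the compatibility rules above (SCOTT.EMP is a placeholder):
LOCK TABLE scott.emp IN ROW SHARE MODE;            -- RS
LOCK TABLE scott.emp IN ROW EXCLUSIVE MODE;        -- RX
LOCK TABLE scott.emp IN SHARE MODE;                -- S
LOCK TABLE scott.emp IN SHARE ROW EXCLUSIVE MODE;  -- SRX
LOCK TABLE scott.emp IN EXCLUSIVE MODE;            -- X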

Tuesday, May 29, 2007

SQL*Loader

Reference

SQL*Loader files:
log - contains the loading session log
control - a mandatory file containing all the control information (where the other files are, how to parse the data and where to save it). It can also contain the data itself
bad - contains the records not loaded due to an error
discard - contains the records not satisfying the record select criteria

Possible ways of running the sql*loader:
sqlldr user/password control=some_file.ctl
sqlldr user/password some_file.ctl
You can also specify the direct = TRUE|FALSE parameter to use the direct path load. When left at the default (FALSE), the conventional path is used.

Direct path loading is fast. It doesn't generate undo for the loaded data (and can skip redo with UNRECOVERABLE/NOLOGGING), it always writes above the high water mark, it doesn't issue COMMITs, and it bypasses most of the constraints.

Some restrictions with the direct path loading:
  • Indexes are always rebuilt after the load. If a unique constraint is violated, the unique index is left in an UNUSABLE state. You need to find the duplicates, eliminate them and rebuild the index.
  • Direct path load cannot occur if there are active transactions against the target table, as it places an exclusive lock on the whole table for the duration of the load.
  • Triggers do not fire during direct path loads
  • Direct path loading into clustered tables is not supported. In fact, only loading into heap-organized tables is supported.
  • During the direct path loads, foreign key constraints are disabled and then re-enabled

Possible types of input data formats:
  • Fixed length. All records have the same length, and the fields are typically defined by fixed positions. This format is the quickest to process.
  • Variable length. Each record starts with a number giving the length of the record.
  • Stream format. The slowest to process, but no information on the record length is needed; you just specify the field separator.
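A minimal sketch of a control file tying this together (file and table names are made up; stream format with comma-separated fields):
LOAD DATA
INFILE 'emp.dat'
BADFILE 'emp.bad'
DISCARDFILE 'emp.dsc'
APPEND
INTO TABLE scott.emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, job, sal)
and then something like: sqlldr scott/tiger control=emp.ctl direct=true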

Import/Export, Datapump

Data Pump is new in 10g. It replaces the old exp and imp utilities, which operated as clients. Data Pump works on the server side, thus gaining direct access to the data, which dramatically increases export/import performance.

DP is called from the command line (expdp and impdp) or through the DBMS_DATAPUMP package.

Note: You need to have the EXP_FULL_DATABASE role if you want to export a schema other than your own.
Note: You also need the EXP_FULL_DATABASE role if you want the user definition and grants to be included in the export. Otherwise you will only have the schema objects.

Possible export types using EXPDP
full = y | Exports the full database
schemas = [schema_list] | If omitted, the connected user's schema is exported
tablespaces = [tablespace_list] | Exports all objects belonging to the listed tablespaces, along with dependent objects, even if those live in a different tablespace
tables = [table_list]


Other parameters
content = metadata_only | data_only | default ALL
directory = dir | Oracle DIRECTORY object, found in DBA_DIRECTORIES. User must have the read/write rights for the DIR
dumpfile = dir:filename, ... | dir is again Oracle DIRECTORY
logfile = dir:filename
filesize = n [G|M|K] | Sets the maximum size of each dump file.
parallel = degree | Sets the maximum number of active worker processes for the job
network_link = db_link | Makes possible to use remote database as export source

To initiate export using the DBMS_DATAPUMP you have to do a little bit of programming:
Reference

The script would look something like this:
declare
  h1 number;
begin
  --1st step: open an export job (schema mode)
  h1 := DBMS_DATAPUMP.OPEN (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'EXP_JOB1');
  --2nd step: add the log file (DATA_PUMP_DIR is a directory object)
  DBMS_DATAPUMP.ADD_FILE (handle => h1, filename => 'exp_job1.log', directory => 'DATA_PUMP_DIR',
                          filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  --3rd step: add the dump file
  DBMS_DATAPUMP.ADD_FILE (handle => h1, filename => 'exp_job1.dmp', directory => 'DATA_PUMP_DIR',
                          filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  --4th step: filter out the not needed stuff (here: export only the SCOTT schema)
  DBMS_DATAPUMP.METADATA_FILTER (handle => h1, name => 'SCHEMA_EXPR', value => 'IN (''SCOTT'')');
  --5th step: start the job
  DBMS_DATAPUMP.START_JOB (handle => h1);
  --6th step: detach from the job
  DBMS_DATAPUMP.DETACH (handle => h1);
end;
/
You can then monitor the job through DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS view or by looking inside the log file.

You can also perform these activities from the dbconsole interface. Log in to Database Control and go to the Maintenance section, locate the "Export to files" link and complete the wizard.


Importing using the impdp utility

You have mostly the same options as for the expdp.
Different parameters:

sqlfile = dir:file | Specifies a file to write the DDL statements to.
network_link = db_link | Import from a live database! A very nice feature.
reuse_datafiles = Y|N | Specifies whether the datafiles should be overwritten.
remap_datafile = source:target | Specifies how the datafiles should be renamed
remap_schema = source:target
remap_tablespace = source:target
include = object_list
exclude = object_list

include parameter usage example:
impdp scott/tiger full=y ... include=PROCEDURE:"LIKE 'SMTHNG%'"
This will import only the procedure objects whose names start with "SMTHNG"


Import using the DBMS_DATAPUMP is done in a similar way to export.

And of course you can run import from EM.

Triggers

Reference

Triggers can be firing on the following types of events:
  • DML events
  • DDL events
  • Database events
DDL events include events such as CREATE, DROP, GRANT, ANALYZE, ASSOCIATE/DISASSOCIATE STATISTICS, REVOKE, AUDIT, COMMENT. Basically each DDL statement can be used as the triggering event. There is also a special event called "DDL", which incorporates all the DDL events so that the trigger will fire for any DDL run in the specified schema (or on the whole database).

Example of DDL trigger:
CREATE OR REPLACE TRIGGER some_trigger
BEFORE GRANT ON SCHEMA
BEGIN
...
END;

Database triggers are something even more interesting. They fire on the following events:

SERVERERROR Specify SERVERERROR to fire the trigger whenever a server error message is logged.

The following errors do not cause a SERVERERROR trigger to fire:

  • ORA-01403: no data found

  • ORA-01422: exact fetch returns more than requested number of rows

  • ORA-01423: error encountered while checking for extra rows in exact fetch

  • ORA-01034: ORACLE not available

  • ORA-04030: out of process memory when trying to allocate string bytes (string, string)

LOGON Specify LOGON to fire the trigger whenever a client application logs onto the database.

LOGOFF Specify LOGOFF to fire the trigger whenever a client application logs off the database.

STARTUP Specify STARTUP to fire the trigger whenever the database is opened.

SHUTDOWN Specify SHUTDOWN to fire the trigger whenever an instance of the database is shut down.

SUSPEND Specify SUSPEND to fire the trigger whenever a server error causes a transaction to be suspended.

DB_ROLE_CHANGE In a Data Guard configuration, specify DB_ROLE_CHANGE to fire the trigger whenever a role change occurs from standby to primary or from primary to standby.

SERVERERROR event seemed very useful to me. You can create a trigger which will send each error message to your mail box, using the UTL_SMTP package.
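A minimal sketch of such a trigger; for simplicity it logs into a table instead of mailing through UTL_SMTP (the error_log table and trigger names are made up):
create table error_log (logged_at date, username varchar2(30), err_stack varchar2(2000));

CREATE OR REPLACE TRIGGER log_server_errors
AFTER SERVERERROR ON DATABASE
BEGIN
  INSERT INTO error_log
  VALUES (SYSDATE, SYS_CONTEXT('USERENV','SESSION_USER'), DBMS_UTILITY.FORMAT_ERROR_STACK);
END;
/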

Tracing session

A great reference for tracing sessions in Oracle at oracle-base.com

Possible ways of doing this:

To trace your own session
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
ALTER SESSION SET EVENTS '10046 trace name context off';
To trace other sessions
EXEC DBMS_SUPPORT.start_trace_in_session(sid=>123, serial=>1234, waits=>TRUE, binds=>FALSE);
EXEC DBMS_SUPPORT.stop_trace_in_session(sid=>123, serial=>1234);
The DBMS_SUPPORT package is not present by default, but can be loaded as the SYS user by executing the @$ORACLE_HOME/rdbms/admin/dbmssupp.sql script

Alternatively use the DBMS_MONITOR package:
DBMS_MONITOR.SESSION_TRACE_ENABLE(
session_id IN BINARY_INTEGER DEFAULT NULL,
serial_num IN BINARY_INTEGER DEFAULT NULL,
waits IN BOOLEAN DEFAULT TRUE,
binds IN BOOLEAN DEFAULT FALSE)

exec dbms_monitor.session_trace_enable(144, 253, TRUE, FALSE);
exec dbms_monitor.session_trace_disable(144, 253);


Using ORADEBUG from SQL*Plus
ORADEBUG SETMYPID; -- Debug current session.
ORADEBUG SETOSPID 1234; -- Debug session with the specified OS process.
ORADEBUG SETORAPID 123456; -- Debug session with the specified Oracle process ID.
ORADEBUG EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12;
ORADEBUG TRACEFILE_NAME; -- Display the current trace file.
ORADEBUG EVENT 10046 TRACE NAME CONTEXT OFF;

  • 0 - No trace. Like switching sql_trace off.
  • 2 - The equivalent of regular sql_trace.
  • 4 - The same as 2, but with the addition of bind variable values.
  • 8 - The same as 2, but with the addition of wait events.
  • 12 - The same as 2, but with both bind variable values and wait events.

Monday, May 28, 2007

Inserting into multiple tables

A funny way of inserting data. Some kind of "conditional" insert.

INSERT [ALL|FIRST] WHEN {condition} THEN {INTO clause} WHEN... ELSE {INTO clause}

When specifying FIRST, data is inserted for the first WHEN condition that evaluates to TRUE.
When specifying ALL, all WHEN conditions are evaluated and every matching INTO clause is executed.

eg

insert first
when col1 = 'A' then
into some_table (col1, col2)
values (col1, col2)
when col1 = 'B' then
into some_other_table (col1, col2)
values (col1, col2)
select col1, col2 from some_source;

Sunday, May 27, 2007

Database Audit

Configuring and Administering Auditing

1. Set the AUDIT_TRAIL parameter

Possible values are:
none - auditing disabled
os - output is sent to the operating system audit trail
db - output is stored in the database in the sys.aud$ table
db,extended - additionally the SQLBIND and SQLTEXT columns are populated
xml - output is written to XML files in the OS
xml,extended - additionally the SQL text and bind values are included in the XML files

2. Possible types of AUDIT

* statement
* privilege
* object
* fine-grained

A special type of audit is SYS AUDIT. To enable it, set the AUDIT_SYS_OPERATIONS = TRUE

This will log all the activity of SYS user or any other user who logs in with SYSDBA privilege.

3. Managing audit events

Use the AUDIT and NOAUDIT statements to control the audit events
Possible options:
BY SESSION/BY ACCESS
WHENEVER SUCCESSFUL/WHENEVER NOT SUCCESSFUL

Example.
To enable logging of jeff's and lori's login/logoff attempts, run:
AUDIT session BY jeff, lori;

To disable: NOAUDIT session BY jeff, lori;

To turn off all statement auditing:
NOAUDIT ALL;

All privilege auditing:
NOAUDIT ALL PRIVILEGES;

You must specify the USER if the audit was created with BY SOME_USER option. Like this:
NOAUDIT ALL BY SCOTT;

To see the list of currently enabled statement audit events:
select * from DBA_STMT_AUDIT_OPTS
To see the list of privilege audits:
select * from DBA_PRIV_AUDIT_OPTS
And object audits:
select * from DBA_OBJ_AUDIT_OPTS
All audit trail views



4. Checking audit results

Query the DBA_AUDIT_TRAIL or DBA_COMMON_AUDIT_TRAIL view to see the audit logs

Useful tip. The AUDIT SESSION event registers logon and logoff events, and it also saves the number of logical and physical reads for the disconnected session. This can be used for tracking performance issues. To see this information, query the DBA_AUDIT_SESSION view.
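For example (columns as in the 10g DBA_AUDIT_SESSION view):
select username, timestamp, logoff_time, logoff_lread, logoff_pread
from dba_audit_session
order by timestamp;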

5. Purging AUDIT records

You have to manually delete the no longer needed records from the SYS.AUD$ table.
For example to delete the records which are older than 90 days, execute:
delete from sys.aud$ where timestamp# < sysdate - 90
6. Fine-grained audit (FGA)

You use the DBMS_FGA package to configure and manage FGA.

To add a policy
BEGIN DBMS_FGA.ADD_POLICY
(object_schema=>'SCOTT',
object_name=>'EMP',
policy_name=>'track_the_shit',
audit_condition=>'sal > 2000',
audit_column=>'SAL, COMM',
statement_types=>'UPDATE, DELETE');
END;
To remove policy
exec dbms_fga.drop_policy('SCOTT','EMP','TRACK_THE_SHIT');
To enable or disable use .ENABLE_POLICY and .DISABLE_POLICY respectively.

To get the list of policies, query:
select * from dba_audit_policies;
To get the audit records, query:
select * from dba_fga_audit_trail;
It seems that any activity by a user logged in as sysdba is not recorded by FGA. I haven't found oracle docs saying this but this was the behavior I noticed while doing some experiments.

Friday, May 25, 2007

Shared Server

1. Set the DISPATCHERS parameter to something like:
alter system set dispatchers = "(PROTOCOL=TCP)(DISPATCHERS=2)(SERVICE=ORCL)"

this will start 2 dispatchers for the ORCL service. This works dynamically.

2. Then you set the SHARED_SERVERS parameter. This sets the minimum shared server processes to start when the instance is started. This parameter also takes effect immediately, no restart is needed. If set to 0, shared server is disabled.
You specify the maximum shared servers with MAX_SHARED_SERVERS. Oracle should handle the needed shared server quantity by itself.

3. You can monitor the usage by querying the V$SHARED_SERVER_MONITOR view.
To see the actual circuits, query the V$CIRCUIT view
select dispatcher, saddr, circuit from v$circuit;
You can join with v$session to see more information
select c.dispatcher, c.saddr, c.circuit, s.username, s.program, s.osuser, s.machine from v$circuit c, v$session s where c.saddr = s.saddr
Another important view is v$queue. It shows the status of the common request queue and each dispatcher's response queue. The critical column is QUEUED, which shows how many requests are currently waiting on the queue. If there are items waiting on the common queue, this indicates that there are not enough shared server processes.

Other views: v$shared_server, v$dispatcher, V$DISPATCHER_CONFIG

Also the SERVER column in the v$session view shows whether user is being connected through SHARED or DEDICATED server.

4. To specify the maximum number of sessions allowed to connect through the shared server, you use the SHARED_SERVER_SESSIONS parameter. You can also limit the physical connections by setting the CIRCUITS parameter. Usually one session maps to one physical connection but it is also possible to channel several sessions through one physical connection.

5. Configure the Large Pool
When the Large Pool is not configured, Oracle places the UGA of each session into the Shared Pool and this can hurt the Shared Pool's performance. So configure it: set the large_pool_size parameter to an appropriate value. To determine a sufficient size for the large pool, query the v$sesstat view, which holds information on the maximum amount of memory allocated to the shared sessions since the instance was started.
SQL> select sum(value) from v$sesstat ss, v$statname st where ss.statistic# = st.statistic# and name = 'session uga memory max';
This is what the book said. But it seems to me that things are a little bit different. The statistic in v$sesstat is a real-time statistic, so it shows the maximum memory usage of every currently connected session. To get this info run the following in SQL*Plus:
column program format a25;
column username format a15;
select s.sid,username,program,st.value from v$session s, v$sesstat st, v$statname ss where ss.statistic# = st.statistic# and st.sid= s.sid and ss.name = 'session uga memory max';
You can also limit the amount of memory allowed for a session to use by the means of session profiles.

6. To determine whether you have enough dispatchers.
You can run the following query to see how busy have your dispatchers been:
select name, (busy / (busy + idle)) * 100 from v$dispatcher;
If dispatchers are busy for more than 50% of their time, then consider adding more dispatchers.

7. To see what is the average time users have to wait in a queue, run this query:
select sum(wait)/sum(totalq) from v$queue;
8. To see what is the average wait time on the requests queue:
select round(Wait/totalq,5) || ' hundredths of seconds' "Average Wait Time per request" from v$queue where type='COMMON';

Wednesday, May 23, 2007

sql to prolong tracks at tarantula

update file
set expires = DATE_ADD(expires, INTERVAL 30 DAY)
where type=1 AND deleted = 0

Sunday, May 13, 2007

Time zones

To get time zone of the database:
select DBTIMEZONE from dual;
To get your session time zone:
select SESSIONTIMEZONE from dual;
To change database time zone (instance restart is needed):
alter database set time_zone='Europe/Tallinn'; --or '+02:00'
To change session time zone:
alter session set time_zone='Europe/Tallinn';
To get the list of all time zone names:
select * from v$timezone_names;

Case insensitive operations in 10g

to get a linguistic sorts, set the NLS_SORT session parameter to the desired linguistic rule set.
eg. alter session set nls_sort=ESTONIAN

add a "_CI" to get case insensitive or "_AI" to get both case and accent insensitive sorting

or use the NLSSORT function

select name from employees order by NLSSORT(name,'NLS_SORT=ESTONIAN_AI');

to perform case insensitive comparisons set the NLS_COMP parameter to 'LINGUISTIC'. This will make the WHERE clause conditions use the rules set by the NLS_SORT parameter

Like this:
alter session set NLS_COMP=LINGUISTIC;
alter session set NLS_SORT=ESTONIAN_CI;
select name from employees where name like 'SomEnAme';

To achieve the same results without setting NLS_SORT, the NLSSORT function can be used like this:
select * from scott.l where nlssort(name,'nls_sort=BINARY_AI') like nlssort('koger', 'nls_sort=BINARY_AI');

Globalization settings

Views holding the NLS parameters.
In ascending order of priority:

select * from nls_database_parameters;
select * from nls_instance_parameters;


To get your currently active settings:
select * from nls_session_parameters;
or a more complete version: v$nls_parameters

Friday, May 4, 2007

UNDO tablespace size

How to find out what the size of the undo tablespace needs to be to satisfy both transactions and long-running queries:

select
(select max(undoblks)/600 * max(maxquerylen) from v$undostat)
*
(select value from v$parameter where name='db_block_size')
from dual;

I got 14GB as a result :)
But so far it was ok to have an undo tbs of just 1GB

The maxquerylen is in seconds and undoblks is the number of blocks used within a 10-minute interval. So everything has to be converted to seconds and then multiplied.

To query the undo statistics use

select begin_time, end_time, undoblks, maxquerylen, ssolderrcnt, nospaceerrcnt from v$undostat
ssolderrcnt is the number of "snapshot too old" errors and
nospaceerrcnt is the number of errors from transactions running out of undo space

Monday, April 23, 2007

Oracle object definitions

To get the DDL of any object in the database, the DBMS_METADATA package can be used.

DBMS_METADATA.GET_DDL
(object_type IN VARCHAR2,
name IN VARCHAR2,
schema IN VARCHAR2 DEFAULT NULL,
version IN VARCHAR2 DEFAULT 'COMPATIBLE',
model IN VARCHAR2 DEFAULT 'ORACLE',
transform IN VARCHAR2 DEFAULT 'DDL')
RETURN CLOB;

Oracle Documentation

usage example:
select DBMS_METADATA.GET_DDL('VIEW','ALL_CONSTRAINTS','SYS') from dual;
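The output can be tuned with session-level transform parameters, e.g. to drop the storage clauses and append a terminating ';' to the generated DDL:
begin
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
end;
/
select DBMS_METADATA.GET_DDL('TABLE','EMP','SCOTT') from dual;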

Thursday, April 5, 2007

notes from Tom Kyte's book

  • SQLPlus environment settings

under [oracle_home]\sqlplus\admin
there is a glogin.sql file. This one is run each time you log in to a database.
Here are some commands which will set a couple of useful things up:
--enable the DBMS_OUTPUT and set the buffer size to 100000 bytes
set serveroutput on size 100000
--trim the spaces, set pagesize and line length
set trimspool on
set long 5000
set linesize 100
set pagesize 9999
column plan_plus_exp format a80
--set the prompt to "[user]@[dbname]>"
column global_name new_value gname
set termout off;
select lower(user) || '@' || global_name as global_name from global_name;
set sqlprompt '&gname> '
set termout on
  • setup the PLAN_TABLE and AUTOTRACE
I already had the PLAN_TABLE$ in SYS schema in 10g XE, but if it's not there, create it using the utlxplan.sql script out of the rdbms\admin dir.
Then create a public synonym with same name and issue GRANT ALL ON PLAN_TABLE TO PUBLIC
You can now use the explain plan statement in any schema.
  • set the AUTOTRACE in SQLPlus
first create the PLUSTRACE role by running /sqlplus/admin/plustrce.sql with SYS privileges
then run GRANT PLUSTRACE TO PUBLIC

to use the autotrace issue one of the following:
set autotrace on
set autotrace off
set autotrace on explain
set autotrace on statistics
set autotrace traceonly
  • avoid using long running transactions in MTS environment
  • using bind variables causes already compiled SQL statements to be reused from shared pool. Otherwise they are being compiled over and over again.
  • B* tree based index, does not index NULL values

  • Memory
PGA belongs to the process, UGA belongs to the session. If a dedicated server is used, the UGA is located in the PGA; in case of MTS the UGA is part of the SGA. sort_area_size is part of the PGA, sort_area_retained_size is part of the UGA. To get the PGA and UGA statistics for the current session, run:

select a.name, b.value
from v$statname a join v$mystat b
on (a.statistic# = b.statistic#) where a.name like '%ga %';
SGA consists of: java pool, large pool, shared pool, null pool
null pool has fixed sga, buffer cache and redo log buffer inside.
the shared pool keeps all the compiled SQL and PL/SQL objects. To make use of the shared pool optimal, always use bind variables. Otherwise the shared pool grows too large, and handling an oversized shared pool takes a lot of processor time, which leads to a dramatic system slowdown. Using the dbms_shared_pool utility you can make certain objects stay in the shared pool forever; otherwise unused objects are removed from the shared pool when it gets full.
the shared_pool_size init parameter is always smaller than the actual shared pool size.
the large pool is meant for larger memory structures than the shared pool holds. It also doesn't cache things: after memory has been used, it may be overwritten. The large pool keeps things like: UGAs (which are in the SGA when MTS is used), parallel execution message buffers, RMAN I/O buffers.
to get sga statistics run in sqlplus

compute sum of bytes on pool
break on pool skip 1
select pool, name, bytes
from v$sgastat
order by pool, name;
more simple report: show sga

  • Locks
It could be a problem (at least in 8i) that a table got locked if it had no index on a foreign key and the primary key of the referenced table was changed. To locate tables which have foreign keys without indexes, use this script: no_index_fks.sql
This was fixed in 9i, but creating indexes on FKs is still a good idea.

TX - transaction lock. It is set when a transaction starts modifying rows. This lock is kept until the transaction issues a commit or rollback. The initial number of transactions capable of locking rows in a block is set by the INITRANS parameter of the CREATE statement (default 2). The maximum number of transactions is set by MAXTRANS (default 255), but it may also be limited by the free space in the block's header.
To get information on TX locks, v$lock can be queried

select username,
v$lock.sid,
trunc(id1/power(2,16)) rbs,
bitand(id1,to_number('ffff','xxxx'))+0 slot,
id2 seq,
lmode,
request
from v$lock, v$session
where v$lock.type = 'TX'
and v$lock.sid = v$session.sid
and v$session.username = USER
/
here the ID1 and ID2 fields contain the transaction id. Since three numbers are stored in two fields, some math tricks are needed to extract them. You can see that rbs, slot and seq coincide with
select XIDUSN, XIDSLOT, XIDSQN from v$transaction
TM lock - locks an object from being modified by DDL statements while a transaction is modifying data in it. You can use the same query as above, only drop the type = 'TX' predicate and query the plain values of ID1 and ID2. Here ID1 contains the object id being locked. It coincides with the object id from
column object_name format a20
select object_name, object_id from user_objects;

it is also possible to forbid DDL statements altogether by using the DML_LOCKS init parameter. Or do the same for certain tables by using ALTER TABLE [table] DISABLE TABLE LOCK

  • DDL Locks
Important! All DDL statements are wrapped into two commit; calls. Like this:

begin
commit;
DDL statement
commit;
exception
when others then rollback
end;
This is done so as not to roll back all the previous steps in case the DDL fails. So it is important to remember that DDL silently issues a commit.
To get list of all ddl locks held at the moment use the DBA_DDL_LOCKS view
  • Transactions
To perform a partial rollback in case of an error, a SAVEPOINT operator must be used. Like this:
begin
savepoint sp1;
[some pl/sql stuff]
exception
when others then
rollback to sp1;
end;

INSERTS are handled by Oracle in the following way:
savepoint statement1
insert blabla
if error then rollback to statement1
savepoint statement2
insert blabla
if error then rollback to statement2
So when an error occurs in some INSERT statement, only the portion of the transaction related to that statement is rolled back.
  • Tables
A trick to get the create statement for a table:
exp user/pass tables=[tablename,..]
imp user/pass full=y indexfile=[filename]
you will get the full create statements inside the index file.
  • Temp tables
the table object is kept in the data dictionary forever, but the data inside it is only visible to the current session (or transaction)
To generate table statistics, so that the CBO would optimize queries against temp tables correctly, you can create a normal table with the same name and then export/import the statistics using the dbms_stats package.
  • Indexes
B*
The most common index type. Stores the indexed values in a tree structure.
If several columns are used in an index and the values in some of them are repeated a lot, the index can be compressed using the compress n option. In this case the index structure takes less space and reading the index requires less IO. On the other hand, index operations require more processor time.
B* in descending order
Indexes can be stored in ASC or DESC order. This is only needed when you have several columns in an index and you need to select each of them with a different order, like: order by key1 DESC, key2 ASC. You need to specify the desired order when creating the index.
BITMAP
Use a bitmap index when you have a very non-unique column. Drawback: when updating a row, a lot more rows get locked at the same time.

Other notices concerning index.
When using something like select * from t where x = 5, where x is of a character type, the statement will be silently rewritten as select * from t where to_number(x) = 5, and of course in that case the index will not be used.

  • EXP and IMP
Indexes with SYS_blabla (automatically generated) names will not be exported!

Constraints with automatically generated names will be imported even if such a constraint already exists: oracle will simply generate a new name for it. This can affect performance if the import is performed repeatedly. So always give names to constraints!

Monday, April 2, 2007

Oracle OLEDB connection string

this one's from oracle's own documentation

"Provider=OraOLEDB.Oracle;User ID=user;Password=pwd;Data Source=constr;"

other paramters include

CacheType - specifies the type of cache used to store the rowset data on the client.
ChunkSize - specifies the size of LONG or LONG RAW column data stored in the provider's cache.
DistribTX - enables or disables distributed transaction enlistment capability.
FetchSize - specifies the size of the fetch array in rows.
OSAuthent - specifies whether OS Authentication will be used when connecting to an Oracle database.
PLSQLRSet - enables or disables the return of a rowset from PL/SQL stored procedures.
PwdChgDlg - enables or disables displaying the password change dialog box when the password expires.
OLEDB.NET - enables or disables compatibility with OLE DB .NET Data Provider. See "OLE DB .NET Data Provider Compatibility".

Monday, February 12, 2007

killing sessions and OS processes

been dealing with hung sessions today.
I'm still not sure of why are they being left there hanging. They are just ready to wait eternally for the "SQL*Net message from dblink" event. I guess that remote instance is failing or something.. I don't have access to that server, so I don't know what's happening there and admins are finns - pretty quiet types, they don't talk much. Anyway. I have faced a problem of having a big number of sessions either waiting for dblink or for client message and I KNOW that neither is coming. So a need in mass killing appears :) You can kill oracle sessions with


ALTER SYSTEM KILL SESSION '[sid],[serial#]' IMMEDIATE;

But after you kill it like this, it still stays in the v$session view with the 'KILLED' status. And you cannot get rid of it other than by killing the process at the OS level, because PMON waits for the client to issue its next command (so that it can send the "your session has been killed" message) before it can actually eliminate the process. Quite polite, but sometimes you just know that the client is not coming back... For this matter there is the orakill.exe tool (for Windows). Used like this:

orakill [instance_name] [spid]

This one will kill the thread inside the oracle.exe process. You need to pass the spid of the process, taken from the v$process view.

So there were two problems in front of me:

  1. To kill all the sessions started by a specific program.
  2. To kill the processes so that all the resources and locks would be freed.

To solve the first one I coded a nice PL/SQL procedure:


CREATE OR REPLACE PROCEDURE KILL_SESSIONS_BY_PROGRAM (in_program IN varchar2) IS
CURSOR sessions_cur IS
select s.sid
,s.serial#
from v$session s
where lower(s.program)=lower(in_program)
and s.status != 'KILLED';
sessions_rec sessions_cur%ROWTYPE;
sess_to_kill varchar2(50);
cmd_string varchar2(100);
e_marked_for_kill exception;
PRAGMA EXCEPTION_INIT(e_marked_for_kill, -31);
begin
OPEN sessions_cur;
LOOP
FETCH sessions_cur INTO sessions_rec;
IF sessions_cur%ROWCOUNT = 0 AND sessions_cur%NOTFOUND THEN
RAISE_APPLICATION_ERROR (-20001,'NO sessions found!');
END IF;
EXIT WHEN sessions_cur%NOTFOUND;
sess_to_kill := TO_CHAR(sessions_rec.sid)||','||TO_CHAR(sessions_rec.serial#);
cmd_string:='ALTER SYSTEM KILL SESSION '''||sess_to_kill||''' immediate';
begin
DBMS_OUTPUT.PUT_LINE(cmd_string);
execute immediate cmd_string;
exception when e_marked_for_kill then
null;
end;
END LOOP;
end;
/

And for the second one I made a small SQLPlus script, which generates me a .cmd file containg as much orakill commands as it is needed. Here it is:


CLEAR BUFFER
set heading off;
set feedback off;
variable sess refcursor;
begin
open :sess for select 'orakill &instance ' || p.spid
from v$session s, v$process p
where s.paddr = p.addr
and lower(s.program)=lower('&in_program')
and s.status = 'KILLED';
end;
/
spool h:\sql\kill.cmd;
print sess;
spool off;

Thursday, February 8, 2007

privileges in bulk

have been dealing with oracle privileges seriously today for the first time.

It turned out that granting privileges is quite a pain (as everything else) in oracle. I needed to grant the select (or any other) privilege on all tables in a schema at once. But the standard oracle command only makes it possible to grant privileges on a single object. So I had to write my own procedure, which grants privileges in bulk. Here it is:


CREATE OR REPLACE PROCEDURE GRANT_BULK_PRIV (inSchema IN varchar,
inObjType IN varchar,
inPriv IN varchar,
inUser IN varchar) IS
/******************************************************************************
NAME: GRANT_BULK_PRIV
PURPOSE: to grant priviledges in bulk
NOTES:
PARAMETERS:
inObjType must specify the oracle defined type of the object (in upper case)
inSchema: Name of the schema where the objects reside (must be in upper case)
inPriv: what priviledge to give (oracle defined privilege name)
inUser: a user or a role to whom the privilege will be granted
Sysdate: 8.02.2007
******************************************************************************/
cursor allobjects (inOwner IN varchar, inType IN varchar) IS
Select o.object_name, o.status
from sys.all_objects o
where o.object_type = inType
and o.owner = inOwner
and o.status = 'VALID' ;
allobjects_rec allobjects%rowtype;
cmdtext varchar(255);
BEGIN
OPEN allobjects (inSchema, inObjType);
LOOP
FETCH allobjects INTO allobjects_rec;
EXIT WHEN allobjects%NOTFOUND;
cmdtext:= 'grant ' ||inPriv|| ' on ' ||inSchema|| '."' || allobjects_rec.object_name ||'" to '|| inUser;
--DBMS_OUTPUT.PUT_LINE(cmdtext);
execute immediate cmdtext;
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
WHEN OTHERS THEN
-- Consider logging the error and then re-raise
RAISE;
END GRANT_BULK_PRIV;
/



usage like this:
execute grant_bulk_priv ('SHEMA', 'TABLE', 'SELECT', 'SOME_ROLE');

Monday, January 29, 2007

oracle archive log destinations

been recently having a problem with oracle where the disk was getting full of archive logs and oracle just stopped responding until the archiver could save the log and the redo log could be switched. So I thought I should add an alternate location, which would be used if the primary one becomes invalid.

So I read the manual and did like it says:

firstly:


The first method is to use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 10) to specify from one to ten different destinations for archival. Each numerically-suffixed parameter uniquely identifies an individual destination.

secondly:


The LOG_ARCHIVE_DEST_STATE_n (where n is an integer from 1 to 10) initialization parameter allows you to control the availability state of the specified destination (n). The destination state can have three values: ENABLE,DEFER, or ALTERNATE. The value ENABLE indicates that Oracle can use the destination, whereas DEFER indicates that the location is temporarily disabled. The third value, ALTERNATE, means that the destination is an alternate. It's availability state is DEFER, unless there is a failure of its parent destination, in which case its state becomes ENABLE.


So after I added parameters to my init.ora file accordingly, the database wouldn't start!

The alert log didn't contain any errors, it would simply end with this line:
Using log_archive_dest parameter default value


But if you try starting the database from sqlplus with startup nomount, it gives this error:
ORA-00439: feature not enabled: Managed Standby
So it turns out that Managed Standby is a feature of the Enterprise Edition ONLY! And LOG_ARCHIVE_DEST_n depends on that feature and therefore cannot be used with Standard Edition.
So I have to make do with the LOG_ARCHIVE_DEST parameter and just make the disk larger... Can't think of any other good solution now...

The Administrator's guide doesn't say a word about this. It does have a link to the 9i Reference though, which has a note about this dependency:
This parameter is valid only if you have installed Oracle Enterprise
Edition. You may continue to use LOG_ARCHIVE_DEST if you have installed Oracle
Enterprise Edition. However, you cannot use both LOG_ARCHIVE_DEST_n and
LOG_ARCHIVE_DEST, as they are not compatible.

Tuesday, January 23, 2007

rich text editor for web

lately I have been looking for some free and cross-browser rich text editors. Out of all the variety, I picked out two which seemed the best:

1. TinyMCE
http://tinymce.moxiecode.com/
pros:
nice interface, loads fast, has a lot of commands (a huge number of them). It's practically a WinWord editor on the web. *amazed*. Has an API. All browsers supported (even Opera and Safari)
cons:
the image and file browsers are commercial. It is possible to hook up your own file browser as a plugin, but I couldn't find any freeware ones. And who wants to code if there is something ready and for free? Therefore..

2. FCKeditor
http://www.fckeditor.net/

pros: also very powerful, a lot of commands, an API. And it has a free file and image browser included.
cons: the interface is a little bit slow and loads a little bit slower than Tiny. Neither Opera nor Safari is supported.


I personally have picked FCKeditor, mostly because of the free file browser. And it's not my problem if someone likes to use stupid browsers ;)
Otherwise I would have picked TinyMCE...

Hello World! :)

here comes the first record.

I have been into some Ajax programming lately. It feels like this is the future of application development as a whole. Google's spreadsheet has totally amazed me. There is even some server->client communication. How is that implemented, I wonder... Got to learn it someday.

Today I have been into some beginner programming using jQuery (http://jquery.com/)
It's really easy to learn and it saves a lot of time when it comes to DHTML tricks.

The best thing about it is that you don't have to specify events on objects using something like onclick="foo()". With jQuery you set the events from the script area. And what is really nice is that you can select many objects at once (and do it very easily), using XPath and/or CSS.

So for example you set up a click event on a button called "btn1" like this:

$("button[@name=btn1]").click(function(){ alert("button clicked!") });

this will register an "onclick" event with the button whose name attribute equals "btn1"

But what I was fighting with today was: how do you reference the parent window from an iframe?
It turned out to be like this:

parent.$("element").action()

So it was as easy as putting "parent" in front of the $ sign.
Oh, and the parent page must include the jQuery source too.

In particular, what I have built today is a script which uploads pictures "on the fly", without a page reload, and immediately displays the newly uploaded picture on the page. For this I used an iframe with an upload form inside it. After the form submits, the PHP script saves and registers the picture in the db and outputs some jQuery code, which dynamically adds the new picture thumb to the parent frame (this is why I needed to reference the parent window with jQuery). So this way it looks as if the picture is uploaded completely dynamically, without a page reload.

Soon my work will be done, I'll post a link to it.