Thursday, September 24, 2009

to enable the CBO trace

alter session set tracefile_identifier='_10053';
alter session set events '10053 trace name context forever, level 1';
alter session set events '10053 trace name context off';
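
To find the resulting trace file (a sketch; assumes 11g+ where v$diag_info exists, on 9i/10g just look in user_dump_dest):

-- current session's trace file (11g+)
select value from v$diag_info where name = 'Default Trace File';
-- trace directory on older versions
select value from v$parameter where name = 'user_dump_dest';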


http://www.psoug.org/reference/trace_tkprof.html

Tuesday, August 11, 2009

my old topic...

Found my old thread on the user groups.

The main point, which I had forgotten and was looking for:

-- the key point: the remote v$session.process holds the local v$process.spid
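
A sketch of how to use that to find the remote session behind a dblink call (the spid value is just copied over by hand):

-- on the local database: OS pid of my shadow process
select p.spid
  from v$process p, v$session s
 where p.addr = s.paddr
   and s.sid = (select sid from v$mystat where rownum = 1);

-- on the remote database: v$session.process holds that local spid
select sid, serial#, username, status, process
  from v$session
 where process = '&local_spid';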

Monday, July 27, 2009

incomplete recovery

Even though all of the datafiles might have been restored to the same SCN, some of them might still be left in a 'fuzzy' state.
select substr(name,1,40),recover, fuzzy, to_char(checkpoint_change#) checkpoint_scn from v$datafile_header;
select file#,STATUS, FUZZY from v$datafile_header;

to ignore those files:
ALTER DATABASE DATAFILE [datafile name] OFFLINE DROP;
or, if you need to open the database anyway, use the hidden parameter:
_allow_resetlogs_corruption=TRUE
this will open the database even though it is inconsistent. The database must then be rebuilt using export/import.
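
A rough sketch of that last resort (assumes an spfile; do this only with Oracle Support involved):

alter system set "_allow_resetlogs_corruption"=true scope=spfile;
shutdown immediate
startup mount
alter database open resetlogs;
-- then rebuild the database via export/import and remove the parameter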

Refs:
435201.1

Monday, March 23, 2009

ORA-01114 and tempfile id

the error looks like this:
ORA-01114: IO error writing block to file 1023 (block # 455946)
Cause: The device on which the file resides is probably offline. If the file is a temporary file, then it is also possible that the device has run out of space. This could happen because disk space of temporary files is not necessarily allocated at file creation time.

but there's no file with id 1023 in the database.

This is actually a tempfile. The reported file id is computed as db_files + dba_temp_files.file_id.
So in this case the init.ora parameter "db_files" is 1022, which makes the first tempfile's reported id 1023.
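
A quick sanity check based on that formula (a sketch):

-- map each tempfile to the file id that ORA-01114 would report for it
select f.file_id,
       f.file_name,
       p.db_files + f.file_id as reported_file_id
  from dba_temp_files f,
       (select to_number(value) db_files from v$parameter where name = 'db_files') p;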

fuck all those who say to read the documentation. There is supposed to be a Metalink note about this, but it's hard to find due to Metalink's screwed-up search engine.
I fucking hate Oracle.

Thursday, February 19, 2009

investigating block corruption

step 1. Populate v$database_block_corruption (run in RMAN):
backup validate check logical database;
step 2. Check whether there are any corrupt blocks:
select count(1) from V$DATABASE_BLOCK_CORRUPTION;
step 3. investigate which objects these blocks belong to
create table block_corr as select file#,block# from V$DATABASE_BLOCK_CORRUPTION where 1=0;
create unique index bcorr_pk on block_corr(file#,block#);
insert into block_corr select file#,block# from V$DATABASE_BLOCK_CORRUPTION order by file#,block#;

step 4. create a table to store faulted segments:
create table corr_objects as
SELECT de.segment_type, de.owner, de.segment_name, de.PARTITION_NAME
FROM dba_extents de, block_corr bc
WHERE de.file_id = bc.file#
and bc.block# between block_id AND block_id + blocks - 1
group by de.segment_type, de.owner, de.segment_name, de.PARTITION_NAME;

set lines 300 pages 999
col segment_name for a30
col PARTITION_NAME for a30
select * from corr_objects;

step 5. a corrupt segment header block can cause dba_extents to return no rows, so check dba_segments directly:
select ds.owner, ds.segment_name, ds.segment_type, bc.file#, bc.block#
from dba_segments ds, block_corr bc
where ds.header_file=bc.file# and ds.header_block = bc.block#;

step 6. If it doesn't belong to an object, double check whether it exists in dba_free_space; if it is in neither, the block may belong to the file space usage bitmap.
select * from dba_free_space where file_id= &AFN
and &CBN between block_id and block_id + blocks -1;
list all corrupt blocks, showing whether each belongs to a segment or to free space:
col is_used_block for a4
col is_free_space for a4
col file_name for a60
col file# for 999

select bc.file#, bc.block#
, DECODE(
(select de.block_id from dba_extents de where de.file_id = bc.file# and bc.block# between de.block_id and de.block_id + de.blocks -1 )
,null,'NO','YES') is_used_block
, DECODE(
(select dfs.block_id from dba_free_space dfs where dfs.file_id = bc.file# and bc.block# between dfs.block_id and dfs.block_id + dfs.blocks -1)
,null,'NO','YES') is_free_space
,ddf.file_name
from block_corr bc, dba_data_files ddf
where bc.file# = ddf.file_id
order by bc.file#, bc.block#;

step 7. if the blocks belong to indexes or index partitions, rebuild them using the ONLINE option (Enterprise Edition only):

--to rebuild indexes and index partitions:
select 'alter index '||t.owner||'.'||t.segment_name||' REBUILD ' ||decode(t.segment_type,'INDEX PARTITION','PARTITION '||t.partition_name,'') || ' ONLINE;' from (
SELECT de.segment_type, de.owner, de.segment_name, de.partition_name
FROM dba_extents de, block_corr bc
WHERE de.file_id = bc.file#
and bc.block# between block_id AND block_id + blocks - 1
) t where t.segment_type IN ('INDEX','INDEX PARTITION')
group by t.owner,t.segment_type,t.segment_name,t.partition_name
/


step 8. Note 782137.1: if the block belongs to a datafile of a tablespace which has been dropped, then V$DATABASE_BLOCK_CORRUPTION is not going to be cleared by a subsequent RMAN backup. This is fixed in 11g. For 10g the workaround is:
execute dbms_backup_restore.resetCfileSection(17); -- clears v$backup_corruption
execute dbms_backup_restore.resetCfileSection(18); -- clears v$copy_corruption


relevant metalink notes:
How to Find All the Corrupted Objects in Your Database.
V$Database_block_corruption Shows Entries for Non-existent Datafile
RMAN Does not Report a Corrupt Block if it is not Part of Any Segment

Tuesday, February 3, 2009

DDL locks

There is no way to specify a wait option for DDL; it always fails immediately with ORA-00054 "resource busy". In 11g the DDL_LOCK_TIMEOUT parameter solves this (see the snippet after the block below), but before that you have to use brute force:

declare
  success boolean := false;
begin
  while not success
  loop
    begin
      execute immediate 'alter table t1 drop column n5';
      success := true;
    exception
      when others then
        -- most likely ORA-00054: wait a bit and retry
        dbms_lock.sleep(10);
    end;
  end loop;
end;
/
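
And the 11g way mentioned above (11g+ only):

-- let the DDL wait up to 30 seconds for the lock instead of failing at once
alter session set ddl_lock_timeout = 30;
alter table t1 drop column n5;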


ref:

Friday, January 23, 2009

Store a good exec plan with OUTLINES

Main article: http://www.oracle-base.com/articles/misc/Outlines.php
Metalink Doc ID: 463288.1

basics:
ALTER SESSION SET create_stored_outlines=TRUE; -- every SQL executed in the session will be saved as an outline

CREATE ANY OUTLINE -- privilege is needed for storing the outlines

Use CREATE OUTLINE statement or DBMS_OUTLN package.

Note: in my case DBMS_OUTLN.CREATE_OUTLINE only worked when run as SYS
select sql_text, sql_id, plan_hash_value, HASH_VALUE, CHILD_NUMBER from v$sql where sql_text like '&sqltext';
alter session set create_stored_outlines = true; -- <- workaround for a bug fixed in 10.2.0.4
exec dbms_outln.CREATE_OUTLINE(1850253776,0,'XXX_OUTLINES');

to check your outlines:
select name,owner,category,sql_text,used,enabled from dba_outlines;
if the USED column shows 'UNUSED', then check that (example below):
query_rewrite_enabled=TRUE;
use_stored_outlines=[TRUE|CATEGORY];
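e.g. for the current session (using the category from above):

alter session set query_rewrite_enabled = true;
alter session set use_stored_outlines = XXX_OUTLINES; -- or TRUE for the DEFAULT category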

Note: a huge drawback is that use_stored_outlines is not an spfile parameter. It can only be set with an ALTER SESSION or ALTER SYSTEM command.

To set the parameter permanently, a startup trigger must be created:
create or replace trigger enable_outlines_trig
after startup on database
begin
execute immediate('alter system set use_stored_outlines=XXX_OUTLINES');
end;

How to Transfer Stored Outlines from One Database to Another (9i and above):
exp system/ file=myoutln.dmp tables=(outln.ol$,outln.ol$hints,outln.ol$nodes) query="where category='MYCAT'" statistics=none
imp system/ file=myoutln.dmp full=y ignore=y

Thursday, January 8, 2009

datapump filter data

To filter out certain data from exported tables you can use the QUERY parameter of datapump.

It is basically the WHERE clause of the export query, so you can filter out rows using the table's columns in your filter. But if you need a JOIN or a SUBQUERY in the WHERE clause, a plain reference won't work, because datapump assigns an alias to the main table, so you cannot reference it by its own name. In 10.2.0.3 the alias for the table is KU$. Use that when referencing your original table and it will work just fine.

eg parameters might look like this:
include=TABLE:"LIKE 'table1'"
query="WHERE exists (select * from table2 where KU$.fk=table2.pk and table2.foo='xyz')"