Wednesday, August 15, 2007

... LOBs

The initial problem was: "The LOB segment is wasting disk space, we need to shrink it". So how can one investigate whether there is some "wasted" space inside a LOB, and how can it be reclaimed?

Metalink article Note:386341.1 helped me quite a lot with this.

To get the LOB segment size, use the dba_lobs and dba_segments views.
First find out the name of the LOB segment:
select segment_name,table_name,column_name,tablespace_name from dba_lobs where table_name = '[table_name]'
Then the size on disk:
select blocks, bytes from dba_segments where segment_name = '[lob_system_segment_name]';
To get to know how much data is stored in this LOB:
select sum(dbms_lob.getlength([lob_column])) from [table];
If the last number is much smaller than the previous one, then apparently some space is wasted and could be reclaimed. If you have been deleting rows from the LOB's parent table, then LOB segment blocks (or chunks) are left in a "deleted" state. Theoretically this space will be reused by subsequent inserts [upd: tested, and it really does work like that]. But if you need to shrink the LOB and reclaim the space for other segments, you have the following options.
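Putting the queries above side by side gives a quick first check. A sketch, where MY_TABLE, MY_LOB_COL and MY_LOB_SEGMENT are placeholders for your own objects:

```sql
-- Compare the allocated LOB segment size with the actual data stored in it.
-- If segment_bytes is much larger than data_bytes, space may be reclaimable.
select s.bytes as segment_bytes,
       (select sum(dbms_lob.getlength(my_lob_col)) from my_table) as data_bytes
  from dba_segments s
 where s.segment_name = 'MY_LOB_SEGMENT';
```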

First, you will have to rebuild the freepools, using the following command:
alter table [table name] modify lob([lob column name])
(freepools [number of free pools]);
In a RAC configuration the number of freepools equals the number of RAC instances; otherwise it is 1 (or it can be looked up in the dba_lobs view; if you see NULL there, then 1 is your choice).

:(((( Although this did not work with my LOB...

There is also a special keyword for this in ALTER TABLE syntax in 10.2:
alter table [table_name] modify lob [lob_column] (freepools rebuild);
After this you should see the free blocks either with DBMS_SPACE.SPACE_USAGE (in case of an ASSM tablespace) or with DBMS_SPACE.FREE_BLOCKS in case of manual segment management. You can also use Tom Kyte's very convenient and useful show_space() procedure, which actually uses the same DBMS_SPACE package but automatically detects which kind of segment management you have and acts appropriately.
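A minimal sketch of calling DBMS_SPACE.SPACE_USAGE directly for an ASSM tablespace; the SCOTT owner and T1_LOB segment name are assumptions (the segment type for a LOB segment is 'LOB'):

```sql
declare
  -- OUT parameters: unformatted blocks/bytes, four freeness buckets, full blocks/bytes
  v_unf   number; v_unfb  number;
  v_fs1   number; v_fs1b  number;
  v_fs2   number; v_fs2b  number;
  v_fs3   number; v_fs3b  number;
  v_fs4   number; v_fs4b  number;
  v_full  number; v_fullb number;
begin
  dbms_space.space_usage('SCOTT', 'T1_LOB', 'LOB',
    v_unf, v_unfb, v_fs1, v_fs1b, v_fs2, v_fs2b,
    v_fs3, v_fs3b, v_fs4, v_fs4b, v_full, v_fullb);
  dbms_output.put_line('full blocks: ' || v_full || ', unformatted: ' || v_unf);
end;
/
```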

Now to actually reclaim the space from a LOB into a tablespace, starting 10.2 you have the wonderful shrink space command:
alter table [table name] modify lob([lob column name]) (shrink space [cascade]);
If lower than 10.2 then the only option is to rebuild the LOB:
alter table [table name] move lob ([lob column name]) store as (tablespace [tablespace name]);
Note that unlike SHRINK SPACE, which is an online operation, MOVE rebuilds the segment and blocks DML on the table while it runs.

Now, why did this not work with my LOB, even though the segment was more than twice the size of the "real" data?
The truth was that there were actually no bytes wasted. It was just normal, since the column was of NCLOB type and Oracle stores NCLOBs in a UCS2-compatible internal encoding, which has a fixed width of 2 bytes per character. What the dbms_lob.getlength() function returns is the number of characters. So this is where the difference was coming from, and this is why you have to multiply the result by 2 if you are using NCLOBs.
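For an NCLOB column the byte-accurate version of the size query is therefore (MY_TABLE and MY_NCLOB_COL are placeholders):

```sql
-- getlength() returns characters for NCLOBs; internal storage is
-- 2 bytes per character, so multiply by 2 to get the real byte count.
select sum(dbms_lob.getlength(my_nclob_col)) * 2 as stored_bytes
  from my_table;
```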
But what got me totally distracted was that when I performed a test with CLOBs, which had 2K chunks, inserting 2K into each, I was also getting double-sized LOB segments. The reason was revealed by Jonathan Lewis himself on comp.databases.oracle.server.
Here's what he wrote:
[A 2KB block allows 1,988 bytes of space for LOB
storage - the remaining 60 bytes is the standard block
overhead plus a few bytes of data specific to the LOB,
such as the object number, LOB ID, and SCN. So if
your character set is single byte then you can get 1,988
characters per block.]
It must also be kept in mind that a LOB segment doesn't use the PCTFREE/PCTUSED parameters; it always allocates a full chunk. So (roughly) if you have 8K chunks and you store 2K of data in each row, then you will have 6K of disk space wasted per row.
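A rough sketch of estimating that per-row waste for an 8K chunk, ignoring block overhead and in-row storage (MY_TABLE and MY_LOB_COL are placeholders):

```sql
-- Each LOB value is rounded up to a whole number of 8K chunks; the gap
-- between the rounded-up allocation and the real length is the waste.
select sum(ceil(dbms_lob.getlength(my_lob_col) / 8192) * 8192
           - dbms_lob.getlength(my_lob_col)) as approx_wasted_bytes
  from my_table;
```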

A very useful document by Tanel Põder, which explains a lot about LOB internals, can also be found here.

Some SQL needed to perform tests with LOBs.

To create a table with a CLOB named 't1_lob' with a 2K chunk size (you need to have a 2K buffer cache and a 2K block tablespace called 'smallblock'). Make the LOB out-of-line and leave no space for concurrency:
create table t1 (i int, b clob)
lob (b) store as t1_lob (
chunk 2K
disable storage in row
tablespace smallblock
pctversion 0);
To fill the table:
begin
for i in 1..1000 loop
insert into t1 values (mod(i,5), rpad('*',1988,'*'));
end loop;
end;
/

To load CLOB from a file:
declare
nc clob;
begin
select b into nc from t1 where i = 1 for update;
loadCLOB1_proc(nc,'ROOT_DIR','test.txt');
update t1 set b = nc where i = 1;
end;
/
The loadCLOB1_proc source

Other LOB-related examples can be found in $ORACLE_HOME/rdbms/demo/lobs/plsql

Wednesday, August 1, 2007

Parallel query execution

helpful article: metalink docid: Note:237287.1
oracle docs

first check if the parallel query option is enabled:
At SQLPLUS
==========
- SQL>select * from v$option;

PARAMETER VALUE
-------------------------
Parallel execution TRUE

General stats:
 select *  from  V$PX_PROCESS_SYSSTAT;
your own:
 select * from v$pq_sesstat;
To make the query use parallelism, use one of three options:
  • set the degree of parallelism for the table: ALTER TABLE t PARALLEL 2;
  • use ALTER SESSION FORCE PARALLEL QUERY
  • give a hint within the sql itself: SELECT /*+ PARALLEL(t,2) */ * from t;

Wednesday, July 18, 2007

Temporary tablespaces

Restriction on Taking Tablespaces Offline: You cannot take a temporary tablespace offline.

To get an overview of temp tablespace usage:
select TABLESPACE_NAME,CURRENT_USERS,SEGMENT_FILE,USED_EXTENTS,FREE_EXTENTS from v$sort_segment;

select username, user, contents, segtype, tablespace from v$tempseg_usage;
If you want to drop a non-default temporary tablespace, you issue something like:
> drop tablespace tempt including contents and datafiles;
At least in 10.2 the previous statement hung until the temporary segments (temp tables in my case) were cleared.

Also, after dropping the non-default temp tablespace 'tempt', the temp tables created in it became invalid:
SCOTT@andrkydb> insert into te values (6);
insert into te values (6)
*
ERROR at line 1:
ORA-00959: tablespace 'TEMPT' does not exist

It is also funny that I can now neither use nor drop the temp table 'te':
SCOTT@andrkydb> drop table te;
drop table te
*
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table already in use

A Metalink article (Doc ID: Note:270645.1) says that the table needs to be truncated before dropping. But it still doesn't help; I cannot drop the table.

If I recreate the tablespace 'tempt' I can use the table but still cannot drop it!

Instance restart helped...

This problem doesn't appear if we operate only with default temporary tablespaces.

Wednesday, July 11, 2007

Archive logs FORMAT

You use the LOG_ARCHIVE_FORMAT parameter to specify what the archive log file names will look like. The following variables are available:
%s log sequence number
%S log sequence number, zero filled
%t thread number
%T thread number, zero filled
%a activation ID
%d database ID
%r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
There is not a single word in the documentation saying that some of the variables are actually REQUIRED.
So after setting it to something like
ALTER SYSTEM SET log_archive_format='%d_%s_%r.arc' SCOPE=SPFILE;

After restarting the instance you will get an error like this:
ORA-19905: log_archive_format must contain %s, %t and %r
Why not do the check when the parameter is actually set? Instead, Oracle chooses to check it on startup, and you end up with an instance that won't start. You also cannot restore the spfile from autobackup, because RMAN needs the instance running at least in NOMOUNT mode. The only option is to start with a pfile (no matter where you get it from). But this is the only solution Oracle offers you.

It also seems that you cannot affect the format of the archive logs saved to DESTINATION 10, which is set to the USE_DB_RECOVERY_FILE_DEST value. It must be that Oracle manages the file names on its own inside the recovery area.

Tuesday, July 10, 2007

RMAN Point-in-time recovery

Metalink Docid Note:370500.1

RMAN> run {
set until time "to_date('10.07.07 17:23:36','dd.mm.yy hh24:mi:ss')";
restore database;
recover database;
}
This shit is not working if you don't set the NLS_LANG environment variable. It fails with an error like:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0

Also note how the date parameter is passed to SET UNTIL TIME: it's in double quotes for some reason.

Friday, July 6, 2007

Compile invalid objects

Article at oracle-base

SQL*Plus script to list invalid objects:
COLUMN object_name FORMAT A30
SELECT owner,
object_type,
object_name,
status
FROM dba_objects
WHERE status != 'VALID'
ORDER BY owner, object_type, object_name;

To compile automatically use one of these:
EXEC UTL_RECOMP.RECOMP_SERIAL(schema=>'SCOTT');

-- if schema is null, then all objects will be compiled

EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');

Wednesday, June 13, 2007

Creating a multiname SSL cert

1. install openSSL
2. generate a private key
openssl genrsa -out key.pem 1024
3. make a conf file, the main idea of which is to have multiple commonName entries and to have subjectAltName = DNS:domain in the req_extensions.
contents:
[ req ]
default_bits = 1024
default_keyfile = key.pem
distinguished_name = req_distinguished_name
req_extensions = v3_ca
x509_extensions = v3_ca

[ req_distinguished_name ]
countryName = Your Country
countryName_default = EE
stateOrProvinceName = State/Province
stateOrProvinceName_default = Harjumaa
localityName = Locality
localityName_default = Tallinn
organizationName = Organization
organizationName_default = OÜ JOT Eesti
organizationalUnitName = Organizational Unit
organizationalUnitName_default = OÜ JOT Eesti
emailAddress = e-mail
emailAddress_default = syse@elektrobit.com
0.commonName = Common Name (eg, YOUR name)
0.commonName_default = intranet
0.commonName_max = 64
1.commonName = Common Name (eg, YOUR name)
1.commonName_default = intranet.ee.ebgroup.elektrobit.com
1.commonName_max = 64

[ v3_ca ]
subjectAltName = DNS:intranet, DNS:intranet.ee.ebgroup.elektrobit.com



4. generate a request for the CA
openssl req -new -config cert.conf -out cert.req
4.1. feed the cert.req to the CA
5. retrieve the base64-encoded cert
6. generate a p12 file for importing it to the server
openssl pkcs12 -export -in intranetb64.cer -inkey key.pem -out intra.p12 -name "JOT Eesti Intranet"
7. at the webserver open the certificate mmc snapin for Computer and import the p12 file into the Personal storage
8. apply the installed cert to the website using the IIS snapin.

Tuesday, June 5, 2007

Gathering statistics

If database was created using DBCA, an automatic statistics gathering procedure should be already set up. It can be observed under the Scheduler jobs. You need to be logged in as "sys as sysdba" to see it.

To gather statistics manually you can use the ANALYZE SQL command, or DBMS_STATS package. You can also use EM, but that's just an interface for the DBMS_STATS.

When gathering statistics you can use the COMPUTE or ESTIMATE option. COMPUTE gathers full statistics, while ESTIMATE only takes a representative sample of the rows in the table, thus performing faster but not guaranteeing the accuracy that COMPUTE does.

Gathering stats using the DBMS_STATS
Summary of DBMS_STATS subprograms

Most frequently needed are:
GATHER_DATABASE_STATS
GATHER_FIXED_OBJECTS_STATS
GATHER_INDEX_STATS
GATHER_SCHEMA_STATS
GATHER_TABLE_STATS
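As an example, a typical table-level call might look like this (the SCOTT/T1 names are just placeholders):

```sql
begin
  -- Gather statistics for one table and its indexes (cascade),
  -- sampling 10 percent of the rows (the ESTIMATE-style option).
  dbms_stats.gather_table_stats(ownname          => 'SCOTT',
                                tabname          => 'T1',
                                estimate_percent => 10,
                                cascade          => true);
end;
/
```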

There is also a "DELETE_%" procedure corresponding to each "GATHER_%" procedure.

It is also possible to EXPORT and IMPORT statistics. You could have several different sets of statistics for different purposes.
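A sketch of how such a set could be kept, using a user statistics table (the SCOTT/T1/MY_STATS names are assumptions):

```sql
-- Create a holding table, then export the current stats into it.
exec dbms_stats.create_stat_table('SCOTT', 'MY_STATS');
exec dbms_stats.export_table_stats('SCOTT', 'T1', stattab => 'MY_STATS');
-- Later, after the statistics have changed, bring the old set back:
exec dbms_stats.import_table_stats('SCOTT', 'T1', stattab => 'MY_STATS');
```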

Monday, June 4, 2007

Monitoring and Advisors

In 10g statistics are gathered automatically by background processes MMON (memory monitor) and MMNL (memory monitor light). By default every 60 minutes MMON wakes up and saves the statistics out of the dictionary views into AWR (automatic workload repository).

AWR data is owned by user SYS and stored in the SYSAUX tablespace.

Amount of statistics gathered is defined by parameter statistics_level
There are three possible options:
BASIC - practically disables statistics gathering; very little is saved.
TYPICAL - standard level
ALL - same as TYPICAL but also keeps the execution plans and timing information from the operating system

The statistics are kept in AWR for 7 days by default.

To change the collection interval and retention period DBMS_WORKLOAD_REPOSITORY package can be used. Like this:

exec dbms_workload_repository.modify_snapshot_settings(interval=>90,retention=>8640);
This sets the interval to 90 minutes and the retention to 8640 minutes (which makes 6 days).

After a statistics snapshot is taken, a process named ADDM (automatic database diagnostic monitoring) analyzes the newly gathered information and compares it with the two previously taken snapshots. By comparing these three, it can identify potential performance problems and suggest possible remedies. The main goal is to minimize DB Time, which is CPU time and wait time put together for non-idle database users.

To get the list of snapshots:

SELECT * FROM dba_hist_snapshot;
To take snapshot manually:

EXECUTE dbms_workload_repository.create_snapshot('TYPICAL');
To delete snapshots:

exec dbms_workload_repository.drop_snapshot_range(begin_snap_id, end_snap_id);
To get a text ADDM report:

@?/rdbms/admin/addmrpt.sql
Additionally the following views can be queried to get the ADDM findings: DBA_ADVISOR_FINDINGS, DBA_ADVISOR_OBJECTS, DBA_ADVISOR_RECOMMENDATIONS, DBA_ADVISOR_RATIONALE

There is also a number of special-purpose advisors which include:

  • SQL Tuning Advisor
Analyzes SQL (recent or from the AWR). SQLs can be combined into SETs and whole SETs sent for analysis.

  • SQL Access Advisor
Analyzes SQL from the schema point of view. Suggests creating indexes and materialized views.

  • Memory Advisor
Advises on 3 memory parameters: buffer cache, shared pool, and PGA. In case of buffer cache performance measure is "relative change in physical reads". For shared pool "relative change in parse time savings" is used and for PGA it is the "cache hit %".

  • Mean Time To Recover Advisor
Here you only set the desired recovery time in seconds and oracle sets a number of internal parameters to make the recovery time as close as possible to your desired value.


  • Segment Advisor
Finds segments which can be shrunk, thus releasing the unused space back to the tablespace. After finding the segments worth shrinking, the following SQL is executed:
alter table "SCOTT"."T2" enable row movement;
alter table "SCOTT"."T2" shrink space;


  • Undo Management Advisor
Analyzes the UNDO generation rates and suggests a correct retention time and undo tablespace size.
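For the Mean Time To Recover Advisor above, the target itself is a single parameter; for example, to aim for roughly a one-minute crash recovery:

```sql
-- FAST_START_MTTR_TARGET is in seconds; Oracle tunes checkpointing
-- internally to keep the estimated recovery time near this value.
alter system set fast_start_mttr_target = 60;
```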

Saturday, June 2, 2007

Locking

Reference: Types of locks.

Developers Guide - Locking Data explicitly

Metalink Doc ID: Note:15476.1 FAQ about Detecting and Resolving Locking Conflicts

SQL*Plus script to query locks information

Locks can be: DML, DDL or internal
There are two basic types of DML locks: row locks and table locks.

Types of tables locks:
ROW SHARE (RS) - obtained when SELECT ... FOR UPDATE is issued. At least this is what the Oracle documentation says. Testing shows that when SELECT FOR UPDATE is used, an RX lock is obtained. So it seems that... I don't know ;)

Basically this type of lock is needed to show that some data in the table is going to be changed, so that no other transaction can drop or modify the table or obtain an EXCLUSIVE lock.

ROW EXCLUSIVE (RX) - same as previous but it also prohibits other transactions to lock the table in SHARE and SHARE ROW EXCLUSIVE modes. It is set automatically when DML statements (such as update) are used against the table.
This seems fair: since a SHARE lock is needed to prohibit any updates, it is only fair that an update in progress prohibits the SHARE lock from being obtained.

SHARE (S) - prohibits others from obtaining RS or RX locks, thus preventing the table from being updated while the SHARE lock is active. This is needed if table-level read consistency has to be reached. Other sessions can also obtain SHARE locks at the same time. If two or more transactions try to update the table while holding the SHARE lock, this will result in a deadlock even if they update different rows. If you need a read-consistent table plus the exclusive right to update it, use the SHARE ROW EXCLUSIVE lock.
I also noticed that if, after obtaining a SHARE lock, you try to update the table, the lock is automatically converted into the SHARE ROW EXCLUSIVE type. This is logical: you cannot update the table while others are holding a SHARE lock, so you have to get the SHARE + ROW EXCLUSIVE lock type to do that.
Note:This lock is automatically obtained when CREATE INDEX statement is run.

SHARE ROW EXCLUSIVE (SRX) - prohibits all other locks from being obtained, except for the RS lock. So this is an almost exclusive lock. You get both read consistency and the exclusive right to update the table. The Oracle docs say that others can still issue SELECT ... FOR UPDATE on this table, but my tests show that this is not true...

EXCLUSIVE - this one has to be obtained before all of the DDL stuff can be executed.
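All of the table lock modes above can also be requested explicitly with LOCK TABLE, which makes it easy to reproduce these tests from separate sessions (t1 is a placeholder table):

```sql
lock table t1 in row share mode;           -- RS
lock table t1 in row exclusive mode;       -- RX
lock table t1 in share mode;               -- S
lock table t1 in share row exclusive mode; -- SRX
lock table t1 in exclusive mode;           -- X
```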