Wednesday, July 18, 2007

Temporary tablespaces

Restriction on Taking Tablespaces Offline: You cannot take a temporary tablespace offline.

To get an overview of temp tablespace usage:
select TABLESPACE_NAME,CURRENT_USERS,SEGMENT_FILE,USED_EXTENTS,FREE_EXTENTS from v$sort_segment;

select username, user, contents, segtype, tablespace from v$tempseg_usage;
If you want to drop a non-default temporary tablespace, you issue something like:
> drop tablespace tempt including contents and datafiles;
At least in 10.2 the previous statement hung until the temporary segments (temporary tables in my case) were cleared.

Also, after dropping the non-default temporary tablespace 'tempt', the temporary tables created in it became invalid:
SCOTT@andrkydb> insert into te values (6);
insert into te values (6)
*
ERROR at line 1:
ORA-00959: tablespace 'TEMPT' does not exist

It is also funny that I can now neither use nor drop the temporary table 'te':
SCOTT@andrkydb> drop table te;
drop table te
*
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table already in use

The Metalink article (Doc ID: Note:270645.1) says the table needs to be truncated before the drop, but that doesn't help either: I still cannot drop the table.

If I recreate the tablespace 'tempt' I can use the table but still cannot drop it!

Instance restart helped...

This problem doesn't appear if you operate only with the default temporary tablespace.
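
A cleaner sequence (just a sketch; the tablespace name and file path are examples) is to create a new temporary tablespace, make it the database default, and only then drop the old one:
create temporary tablespace temp2 tempfile '/u01/oradata/orcl/temp2_01.dbf' size 100m;
alter database default temporary tablespace temp2;
drop tablespace tempt including contents and datafiles;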

Wednesday, July 11, 2007

Archive logs FORMAT

You use the LOG_ARCHIVE_FORMAT parameter to specify what the archive log file names will look like. The following variables are available:
%s log sequence number
%S log sequence number, zero filled
%t thread number
%T thread number, zero filled
%a activation ID
%d database ID
%r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
There is not a single word in the documentation saying that some of these variables are actually REQUIRED.
So after setting it to something like:
ALTER SYSTEM SET log_archive_format='%d_%s_%r.arc' SCOPE=SPFILE;

After restarting the instance you will get an error like this:
ORA-19905: log_archive_format must contain %s, %t and %r
Why not do the check when the parameter is actually set? Instead Oracle chooses to check it at startup, and you end up with an instance that won't start. You also cannot restore the spfile from autobackup, because RMAN needs the instance running at least in NOMOUNT mode. The only option is to start with a pfile (wherever you get it from), and this is the only solution Oracle offers you.
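
A format that passes the check, and one possible way out of the unstartable-instance situation (a sketch; the paths are examples and assume the spfile is still readable):
ALTER SYSTEM SET log_archive_format='%d_%t_%s_%r.arc' SCOPE=SPFILE;
-- if the instance already refuses to start:
CREATE PFILE='/tmp/initorcl.ora' FROM SPFILE;
-- edit log_archive_format in the pfile, then
STARTUP PFILE='/tmp/initorcl.ora';
-- and recreate the spfile once the instance is up
CREATE SPFILE FROM PFILE='/tmp/initorcl.ora';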

It also seems that you cannot affect the format of the archive logs saved to destination 10, which is set to USE_DB_RECOVERY_FILE_DEST. It must be that Oracle manages the file names on its own inside the recovery area.

Tuesday, July 10, 2007

RMAN Point-in-time recovery

Metalink Docid Note:370500.1

RMAN> run {
set until time "to_date('10.07.07 17:23:36','dd.mm.yy hh24:mi:ss')";
restore database;
recover database;
}
This simply does not work if you don't set the NLS_LANG environment variable. It fails with an error like:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0

Also note how the date parameter is passed to SET UNTIL TIME: it is enclosed in double quotes for some reason.
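
Something like this before starting RMAN sorts it out (the exact NLS values are just examples):
$ export NLS_LANG=american_america.we8iso8859p1
$ export NLS_DATE_FORMAT='dd.mm.yy hh24:mi:ss'
$ rman target /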

Friday, July 6, 2007

Compile invalid objects

Article at oracle-base

SQL*Plus script to list invalid objects:
COLUMN object_name FORMAT A30
SELECT owner,
object_type,
object_name,
status
FROM dba_objects
WHERE status != 'VALID'
ORDER BY owner, object_type, object_name;

To compile automatically use one of these:
EXEC UTL_RECOMP.RECOMP_SERIAL(schema => 'SCOTT');

-- if schema is null, then all objects will be compiled

EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');

Wednesday, June 13, 2007

Creating a multiname SSL cert

1. install openSSL
2. generate a private key
openssl genrsa -out key.pem 1024
3. make a conf file, the main idea of which is to have multiple commonName entries and a subjectAltName = DNS:domain entry in the req_extensions.
contents:
[ req ]
default_bits = 1024
default_keyfile = key.pem
distinguished_name = req_distinguished_name
req_extensions = v3_ca
x509_extensions = v3_ca

[ req_distinguished_name ]
countryName = Your Country
countryName_default = EE
stateOrProvinceName = State/Province
stateOrProvinceName_default = Harjumaa
localityName = Locality
localityName_default = Tallinn
organizationName = Organization
organizationName_default = OÜ JOT Eesti
organizationalUnitName = Organizational Unit
organizationalUnitName_default = OÜ JOT Eesti
emailAddress = e-mail
emailAddress_default = syse@elektrobit.com
0.commonName = Common Name (eg, YOUR name)
0.commonName_default = intranet
0.commonName_max = 64
1.commonName = Common Name (eg, YOUR name)
1.commonName_default = intranet.ee.ebgroup.elektrobit.com
1.commonName_max = 64

[ v3_ca ]
subjectAltName = DNS:intranet, DNS:intranet.ee.ebgroup.elektrobit.com



4. generate a request for the CA
openssl req -new -config cert.conf -out cert.req
4.1. feed the cert.req to the CA
5. retrieve the base64 encoded cert
6. generate a p12 file for importing it to the server
openssl pkcs12 -export -in intranetb64.cer -inkey key.pem -out intra.p12 -name "JOT Eesti Intranet"
7. on the web server, open the Certificates MMC snap-in for the computer account and import the p12 file into the Personal store
8. apply the installed certificate to the website using the IIS snap-in.

Tuesday, June 5, 2007

Gathering statistics

If the database was created using DBCA, an automatic statistics gathering job should already be set up. It can be observed under the Scheduler jobs. You need to be logged in as "sys as sysdba" to see it.

To gather statistics manually you can use the ANALYZE SQL command or the DBMS_STATS package. You can also use EM, but that is just an interface to DBMS_STATS.

When gathering statistics you can use the COMPUTE or ESTIMATE option. COMPUTE gathers full statistics, while ESTIMATE only takes a representative sample of the rows in the table, thus running faster but not guaranteeing the accuracy that COMPUTE provides.
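
The same choice exists in DBMS_STATS through the estimate_percent parameter (a sketch; the schema and table are just examples): NULL means compute, a number means a sample:
-- full computation
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', estimate_percent => NULL);
-- 10% sample
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', estimate_percent => 10);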

Gathering stats using the DBMS_STATS
Summary of DBMS_STATS subprograms

Most frequently needed are:
GATHER_DATABASE_STATS
GATHER_FIXED_OBJECTS_STATS
GATHER_INDEX_STATS
GATHER_SCHEMA_STATS
GATHER_TABLE_STATS

There is also a "DELETE_%" procedure corresponding to each "GATHER_%" procedure.

It is also possible to EXPORT and IMPORT statistics. You could have several different sets of statistics for different purposes.
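
A minimal sketch of moving statistics around via a user statistics table (the table name and schema are examples):
-- create a table to hold the statistics
exec dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'MY_STATS');
-- export the schema statistics into it
exec dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'MY_STATS');
-- later (or on another database after moving the table) import them back
exec dbms_stats.import_schema_stats(ownname => 'SCOTT', stattab => 'MY_STATS');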

Monday, June 4, 2007

Monitoring and Advisors

In 10g statistics are gathered automatically by the background processes MMON (manageability monitor) and MMNL (manageability monitor light). By default MMON wakes up every 60 minutes and saves a snapshot of the statistics from the in-memory views into the AWR (Automatic Workload Repository).

AWR data is owned by user SYS and stored in the SYSAUX tablespace.

The amount of statistics gathered is defined by the statistics_level parameter.
There are three possible options:
BASIC - practically disables statistics gathering, very little is saved
TYPICAL - the standard level
ALL - same as TYPICAL but also keeps the execution plans and timing information from the operating system

The statistics are kept in AWR for 7 days by default.

To change the collection interval and retention period, the DBMS_WORKLOAD_REPOSITORY package can be used, like this:

exec dbms_workload_repository.modify_snapshot_settings(interval=>90,retention=>8640);
This sets the interval to 90 minutes and the retention to 8640 minutes (which makes 6 days).

After a statistics snapshot is taken, ADDM (Automatic Database Diagnostic Monitor) analyzes the newly gathered information and compares it with the previously taken snapshots. By comparing them it can identify potential performance problems and suggest possible remedies. The main goal is to minimize DB Time, which is CPU time plus non-idle wait time accumulated by database sessions.

To get the list of snapshots:

SELECT * FROM dba_hist_snapshot;
To take snapshot manually:

EXECUTE dbms_workload_repository.create_snapshot('TYPICAL');
To delete snapshots:

exec dbms_workload_repository.drop_snapshot_range(begin_snap_id, end_snap_id);
To get a text ADDM report:

@?/rdbms/admin/addmrpt.sql
Additionally the following views can be queried to get the ADDM findings: DBA_ADVISOR_FINDINGS, DBA_ADVISOR_OBJECTS, DBA_ADVISOR_RECOMMENDATIONS, DBA_ADVISOR_RATIONALE
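
For example, a quick ad-hoc look at the findings:
SELECT task_name, type, impact, message
FROM dba_advisor_findings
ORDER BY impact DESC;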

There is also a number of special-purpose advisors which include:

  • SQL Tuning Advisor
Analyzes SQL (recent or from the AWR). SQL statements can be combined into SETs and whole SETs sent for analysis; see the DBMS_SQLTUNE sketch after this list.

  • SQL Access Advisor
Analyzes SQL from the schema point of view. Suggests creating indexes and materialized views.

  • Memory Advisor
Advises on 3 memory parameters: buffer cache, shared pool, and PGA. For the buffer cache the performance measure is the "relative change in physical reads", for the shared pool the "relative change in parse time savings" is used, and for the PGA it is the "cache hit %".

  • Mean Time To Recover Advisor
Here you only set the desired recovery time in seconds, and Oracle sets a number of internal parameters to make the recovery time as close as possible to your desired value.


  • Segment Advisor
Finds segments that can be shrunk to release unused space back to the tablespace. After finding the segments worth shrinking, SQL like the following is executed:
alter table "SCOTT"."T2" enable row movement;
alter table "SCOTT"."T2" shrink space;


  • Undo Management Advisor
Analyzes the UNDO generation rates and suggests a correct retention time and undo tablespace size.
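
The DBMS_SQLTUNE sketch mentioned above, for driving the SQL Tuning Advisor manually (the task name and the SQL statement are made up):
DECLARE
  tname VARCHAR2(30);
BEGIN
  tname := dbms_sqltune.create_tuning_task(
             sql_text   => 'select * from scott.emp where empno = 7839',
             user_name  => 'SCOTT',
             scope      => 'COMPREHENSIVE',
             time_limit => 60,
             task_name  => 'demo_tuning_task');
  dbms_sqltune.execute_tuning_task(task_name => tname);
END;
/
-- then read the findings and recommendations
SELECT dbms_sqltune.report_tuning_task('demo_tuning_task') FROM dual;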

Saturday, June 2, 2007

Locking

Reference: Types of locks.

Developers Guide - Locking Data explicitly

Metalink Doc ID: Note:15476.1 FAQ about Detecting and Resolving Locking Conflicts

SQL*Plus script to query locks information

Locks can be: DML, DDL or internal
There are two basic types of DML locks: row locks and table locks.

Types of tables locks:
ROW SHARE (RS) - obtained when SELECT ... FOR UPDATE is issued. At least this is what the Oracle documentation says. Testing shows that when SELECT FOR UPDATE is used, an RX lock is obtained. So it seems that... I don't know ;)

Basically this type of lock is needed to show that some data in this table is going to be changed, so that no other transaction can drop or modify the table or obtain an EXCLUSIVE lock.

ROW EXCLUSIVE (RX) - same as the previous one, but it also prohibits other transactions from locking the table in SHARE and SHARE ROW EXCLUSIVE modes. It is set automatically when DML statements (such as UPDATE) are used against the table.
This seems fair: since the SHARE lock is needed to prohibit any updates, it is only fair that an update prohibits the SHARE lock from being obtained.

SHARE (S) - prohibits others from obtaining RS and RX locks, thus preventing the table from being updated while the SHARE lock is active. This is needed if table-level read consistency has to be reached. Other sessions can also obtain SHARE locks at the same time. If two or more transactions try to update the table while holding the SHARE lock, this will result in a deadlock even if they update different rows. If you need a read-consistent table plus an exclusive right to update it, use the SHARE ROW EXCLUSIVE lock.
I also noticed that if, after obtaining a SHARE lock, I try to update the table, the lock type is automatically converted into SHARE ROW EXCLUSIVE. This is logical: because you cannot update the table while others are holding the SHARE lock, you have to get the SHARE + ROW EXCLUSIVE lock type to do that.
Note: this lock is automatically obtained when a CREATE INDEX statement is run.

SHARE ROW EXCLUSIVE (SRX) - prohibits all other locks from being obtained, except for the RS lock. So this is an almost exclusive lock. You get both read consistency and the exclusive right to update the table. Oracle docs say that others can still issue SELECT ... FOR UPDATE statements on this table, but my tests show that this is not true...

EXCLUSIVE - this one has to be obtained before all of the DDL stuff can be executed.
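
For experimenting with these modes, the explicit lock syntax plus a quick look at who holds or waits for what (just a sketch; table name is an example):
lock table scott.emp in row share mode;
lock table scott.emp in share mode nowait;
-- TM = table locks, TX = transaction (row) locks
select sid, type, lmode, request, block from v$lock where type in ('TM','TX');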

Tuesday, May 29, 2007

SQL*Loader

Reference

SQL*Loader files:
log - contains the loading session log
control - a mandatory file containing all the control information (where the other files are, how to parse the data and where to save it). It can also contain the data itself
bad - contains the records not loaded due to an error
discard - contains the records not satisfying the record select criteria
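
To make this concrete, a minimal control file sketch (table, column, and file names are invented):
LOAD DATA
INFILE 'emp.dat'
BADFILE 'emp.bad'
DISCARDFILE 'emp.dsc'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, job, sal)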

Possible ways of running the sql*loader:
sqlldr user/password control=some_file.ctl
sqlldr user/password some_file.ctl
You can also specify the direct = TRUE|FALSE parameter to use the direct path load. When left at the default (FALSE), the conventional path is used.

Direct path loading is fast. It doesn't generate undo or redo, it always writes above the high water mark, it doesn't issue COMMIT, it bypasses most of the constraints.

Some restrictions with the direct path loading:
  • Indexes are always rebuilt after the loading. If a unique constraint is violated, the unique index is left in an unusable state. You need to find the duplicates, eliminate them and rebuild the index.
  • Direct path load cannot occur if there are active transactions against the target table as it places an exclusive table lock on the whole table for the time of load.
  • Triggers do not fire during direct path loads
  • Direct path loading into cluster tables is not supported. In fact only loading to heap organized tables is supported.
  • During the direct path loads, foreign key constraints are disabled and then re-enabled

Possible types of input data formats:
  • Fixed record format. All records have the same length. This format is the quickest to process.
  • Variable record format. Each record starts with a number showing how long the record is.
  • Stream record format. The slowest to process, but no information on the record length is needed; you just need to specify the record terminator.

Import/Export, Datapump

Data Pump is new in 10g. It replaces the old exp and imp utilities, which operated as clients. Data Pump works on the server side, gaining access to the data directly, which dramatically improves export/import performance.

DP is called from the command line (expdp and impdp) or through the DBMS_DATAPUMP package.

Note: You need to have the EXP_FULL_DATABASE role if you want to export a schema other than your own.
Note: You also need the EXP_FULL_DATABASE role if you want the user definition and grants to be included in the export. Otherwise you will only have the schema objects.

Possible export types using EXPDP
full = y | Exports the full database
schemas = [schema_list] | If omitted, the connected user's schema is exported
tablespaces = [tablespace_list] | Exports all objects belonging to the listed tablespaces, along with dependent objects even if they are from a different tablespace
tables = [table_list]


Other parameters
content = metadata_only | data_only | ALL (default)
directory = dir | An Oracle DIRECTORY object, found in DBA_DIRECTORIES. The user must have read/write rights on the directory
dumpfile = dir:filename, ... | dir is again an Oracle DIRECTORY
logfile = dir:filename
filesize = n [G|M|K] | Sets the maximum size of each dump file
parallel = degree | Sets the number of parallel worker processes for the job
network_link = db_link | Makes it possible to use a remote database as the export source
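
For example, a plain schema export from the command line (the directory and file names are examples; the DIRECTORY object must exist and be writable by the user):
expdp scott/tiger schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log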

To initiate export using the DBMS_DATAPUMP you have to do a little bit of programming:
Reference

The script would look something like this (a minimal schema-export sketch; the directory, file and job names are just examples):
declare
h1 number;
begin
--1st step: create the job and get a handle to it
h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'EXP_SCOTT');
--2nd step: add the log file
dbms_datapump.add_file (handle => h1, filename => 'exp_scott.log', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
--3rd step: add the dump file
dbms_datapump.add_file (handle => h1, filename => 'exp_scott.dmp', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_dump_file);
--4th step: filter out the not needed stuff (here: keep only the SCOTT schema)
dbms_datapump.metadata_filter (handle => h1, name => 'SCHEMA_EXPR', value => 'IN (''SCOTT'')');
--5th step: start the job using the handle from the open call
dbms_datapump.start_job (handle => h1);
--6th step: detach from the job; it keeps running in the background
dbms_datapump.detach (handle => h1);
end;
/
You can then monitor the job through the DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS views, or by looking inside the log file.

You can also perform these activities from the dbconsole interface. Log in to Database Control and go to the Maintenance section, locate the "Export to files" link and complete the wizard.


Importing using the impdp utility

You have mostly the same options as for the expdp.
Different parameters:

sqlfile = dir:file | Specifies a file to write the DDL statements to.
network_link = db_link | Importing from a live database! A very nice feature.
reuse_datafiles = Y|N | Specifies whether the datafiles should be overwritten.
remap_datafile = source:target | Specifies how the datafiles should be renamed
remap_schema = source:target
remap_tablespace = source:target
include = object_list
exclude = object_list
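
For example, restoring the same dump into another schema (names are, again, just examples):
impdp system/password directory=DATA_PUMP_DIR dumpfile=scott.dmp remap_schema=SCOTT:SCOTT_TEST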

include parameter usage example:
impdp scott/tiger full=y ... include=PROCEDURE:"LIKE 'SMTHNG%'"
This will import only the procedure objects whose names start with "SMTHNG"


Import using the DBMS_DATAPUMP is done in a similar way to export.

And of course you can run import from EM.