
Thursday, March 10, 2011

STATS Locked after import

Metalink: ORA-38029 "Object Statistics Are Locked" - Possible Causes [ID 433240.1]

When you import a table without its rows, the import utility locks the table's statistics. To prevent the original import (imp) from doing this when rows=n is used, add statistics=none. To prevent Data Pump import (impdp) from doing this when content=metadata_only is used, add exclude=(table_statistics,index_statistics).
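A sketch of both invocations (credentials, directory, and file names here are only placeholders):

```shell
# classic import: skip the rows and the statistics, so nothing gets locked
imp system/password file=exp.dmp fromuser=scott touser=scott rows=n statistics=none

# Data Pump import: metadata only, with the statistics objects excluded
impdp system/password directory=DATA_PUMP_DIR dumpfile=exp.dmp \
      content=metadata_only exclude=table_statistics exclude=index_statistics
```

Should a table end up with locked statistics anyway, exec dbms_stats.unlock_table_stats('SCOTT','EMP') releases them.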

Monday, June 4, 2007

Monitoring and Advisors

In 10g statistics are gathered automatically by the background processes MMON (manageability monitor) and MMNL (manageability monitor light). By default MMON wakes up every 60 minutes and saves the statistics from the in-memory dynamic performance views into the AWR (automatic workload repository).

AWR data is owned by user SYS and stored in the SYSAUX tablespace.

The amount of statistics gathered is defined by the statistics_level parameter.
There are three possible options:
BASIC - practically disables statistics gathering, very little is saved.
TYPICAL - the standard level
ALL - same as TYPICAL but also keeps execution plan details and timing information from the operating system

The statistics are kept in AWR for 7 days by default.

To change the collection interval and retention period DBMS_WORKLOAD_REPOSITORY package can be used. Like this:

exec dbms_workload_repository.modify_snapshot_settings(interval=>90,retention=>8640);
This sets the interval to 90 minutes and the retention to 8640 minutes (which makes exactly 6 days).
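The current settings can be verified afterwards in the DBA_HIST_WR_CONTROL view:

```sql
SELECT snap_interval, retention FROM dba_hist_wr_control;
```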

After a statistics snapshot is taken, a process named ADDM (automatic database diagnostic monitor) analyzes the newly gathered information, comparing the latest snapshot with the previous one. From this comparison it can identify potential performance problems and suggest possible remedies. The main goal is to minimize DB Time, which is the CPU time and non-idle wait time of database sessions put together.

To get the list of snapshots:

SELECT * FROM dba_hist_snapshot;
To take snapshot manually:

EXECUTE dbms_workload_repository.create_snapshot('TYPICAL');
To delete snapshots:

exec dbms_workload_repository.drop_snapshot_range(begin_snap_id, end_snap_id);
To get a text ADDM report:

@?/rdbms/admin/addmrpt.sql
Additionally the following views can be queried to get the ADDM findings: DBA_ADVISOR_FINDINGS, DBA_ADVISOR_OBJECTS, DBA_ADVISOR_RECOMMENDATIONS, DBA_ADVISOR_RATIONALE
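For example, the ADDM findings can be listed with a query along these lines:

```sql
SELECT task_name, type, message
  FROM dba_advisor_findings
 ORDER BY task_name;
```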

There is also a number of special-purpose advisors which include:

  • SQL Tuning Advisor
Analyzes SQL (recent or from the AWR). SQLs can be combined into SETs and whole SETs sent for analysis.

  • SQL Access Advisor
Analyzes SQL from the schema point of view. Suggests creating indexes and materialized views.

  • Memory Advisor
Advises on 3 memory areas: the buffer cache, the shared pool, and the PGA. For the buffer cache the performance measure is the "relative change in physical reads", for the shared pool it is the "relative change in parse time savings", and for the PGA it is the "cache hit %".
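The underlying predictions are also exposed through views; for the buffer cache, for instance (assuming the default pool and an 8K block size):

```sql
SELECT size_for_estimate, estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT' AND block_size = 8192;
```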

  • Mean Time To Recover Advisor
Here you only set the desired recovery time in seconds and Oracle adjusts a number of internal parameters to keep the recovery time as close as possible to your desired value.
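In practice this means setting FAST_START_MTTR_TARGET and then watching how close the estimate comes (60 seconds here is just an example value):

```sql
ALTER SYSTEM SET fast_start_mttr_target = 60;

SELECT target_mttr, estimated_mttr FROM v$instance_recovery;
```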


  • Segment Advisor
Finds segments which can be shrunk, thus releasing the unused space back to the tablespace. After finding the segments worth shrinking, the following SQLs are executed:
alter table "SCOTT"."T2" enable row movement;
alter table "SCOTT"."T2" shrink space;


  • Undo Management Advisor
Analyzes the UNDO generation rates and suggests a correct retention time and undo tablespace size.
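The raw material for this advisor is visible in V$UNDOSTAT, which records undo usage in 10-minute intervals:

```sql
SELECT begin_time, undoblks, maxquerylen FROM v$undostat;
```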

Tuesday, May 29, 2007

Import/Export, Datapump

Data Pump is new to 10g. It replaces the old exp and imp utilities, which operated as clients. Data Pump works on the server side, thus gaining direct access to the data, which dramatically improves export/import performance.

DP is called from the command line (expdp and impdp) or through the DBMS_DATAPUMP package.

Note: You need the EXP_FULL_DATABASE role if you want to export a schema other than your own.
Note: You also need the EXP_FULL_DATABASE role if you want the user definition and grants to be included in the export. Otherwise you will only have the schema objects.

Possible export types using EXPDP
full=y | Exports the full database
schemas=[schema_list] | If omitted, the connected user's schema will be exported
tablespaces=[tablespace_list] | Exports all objects belonging to a tablespace, along with dependent objects even if they are from a different tablespace
tables=[table_list]
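For example (credentials and directory are placeholders):

```shell
# full database export
expdp system/password full=y directory=DATA_PUMP_DIR dumpfile=full.dmp

# export of the SCOTT and HR schemas
expdp system/password schemas=scott,hr directory=DATA_PUMP_DIR dumpfile=sch.dmp
```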


Other parameters
content = metadata_only | data_only | default ALL
directory = dir | An Oracle DIRECTORY object, found in DBA_DIRECTORIES. The user must have read/write rights on the DIR
dumpfile = dir:filename, ... | dir is again an Oracle DIRECTORY
logfile = dir:filename
filesize = n [G|M|K] | Sets the maximum size of each dump file.
parallel = degree | Sets the number of parallel worker processes; usually combined with a %U template in the dumpfile name so each worker writes its own file
network_link = db_link | Makes it possible to use a remote database as the export source
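A parallel export combining these parameters could look like this (all values are illustrative):

```shell
# four workers, each writing its own file (%u expands to a unique number),
# with every dump file capped at 2 GB
expdp system/password schemas=scott directory=DATA_PUMP_DIR \
      dumpfile=scott_%u.dmp parallel=4 filesize=2G logfile=scott.log
```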

To initiate export using the DBMS_DATAPUMP you have to do a little bit of programming:
Reference

The script would look something like this (directory, file, and schema names are illustrative):
declare
  h1 number;
begin
  --1st step: create the job and get a handle to it
  h1 := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  --2nd step: add the log file
  dbms_datapump.add_file(handle => h1, filename => 'scott.log',
      directory => 'DATA_PUMP_DIR',
      filetype => dbms_datapump.ku$_file_type_log_file);
  --3rd step: add the dump file
  dbms_datapump.add_file(handle => h1, filename => 'scott.dmp',
      directory => 'DATA_PUMP_DIR',
      filetype => dbms_datapump.ku$_file_type_dump_file);
  --4th step: filter out the not needed stuff
  dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR',
      value => 'IN (''SCOTT'')');
  --5th step: start the job using the handle from the open call
  dbms_datapump.start_job(handle => h1);
  --6th step: detach from the job
  dbms_datapump.detach(handle => h1);
end;
/
You can then monitor the job through DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS view or by looking inside the log file.
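A quick way to check on running jobs:

```sql
SELECT job_name, operation, job_mode, state FROM user_datapump_jobs;
```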

You can also perform these activities from the dbconsole interface. Log in into database control and go to Maintenance section. Locate the "Export to files" link, complete the wizard.


Importing using the impdp utility

You have mostly the same options as for the expdp.
Different parameters:

sqlfile = dir:file | Specifies a file to write the DDL statements to.
network_link = db_link | Importing from a live database! A very nice feature.
reuse_datafiles = Y|N | Specifies if the datafiles should be overwritten.
remap_datafile = source:target | Specifies how the datafiles should be renamed
remap_schema = source:target
remap_tablespace = source:target
include = object_list
exclude = object_list
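A remap example (all names are placeholders):

```shell
# move SCOTT's objects into SCOTT2, and everything from the USERS
# tablespace into USERS2
impdp system/password directory=DATA_PUMP_DIR dumpfile=scott.dmp \
      remap_schema=scott:scott2 remap_tablespace=users:users2
```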

include parameter usage example:
impdp scott/tiger full=y ... include=PROCEDURE:"LIKE 'SMTHNG%'"
This will import only the procedure objects whose names start with "SMTHNG"


Import using the DBMS_DATAPUMP is done in a similar way to export.

And of course you can run import from EM.

Sunday, May 13, 2007

Case insensitive operations in 10g

To get linguistic sorting, set the NLS_SORT session parameter to the desired linguistic rule set.
e.g. alter session set nls_sort=ESTONIAN;

Add a "_CI" suffix to get case-insensitive or "_AI" to get both case- and accent-insensitive sorting.

Or use the NLSSORT function:

select name from employees order by NLSSORT(name,'NLS_SORT=ESTONIAN_AI');

To perform case-insensitive comparisons, set the NLS_COMP parameter to LINGUISTIC. This will make the WHERE clause conditions use the rules set by the NLS_SORT parameter.

Like this:
alter session set NLS_COMP=LINGUISTIC;
alter session set NLS_SORT=ESTONIAN_CI;
select name from employees where name like 'SomEnAme';

To achieve the same result without setting NLS_SORT, the NLSSORT function can be used like this:
select * from scott.l where nlssort(name,'nls_sort=BINARY_AI') like nlssort('koger', 'nls_sort=BINARY_AI');