Some things to consider when performing export/import operations:
- turn off archive logging for the duration of the import, if possible
- JOBS are created in the schema of the user who is running the import
- the user must have the IMP_FULL_DATABASE privilege to be able to import JOBS
**I'm not sure which privilege exactly is necessary here. But tests show that if there's no IMP_FULL_DATABASE role, then Data Pump simply ignores the JOB objects in the dump**
- when performing a SCHEMA import, it might be reasonable to import the USER, ROLE_GRANTS and SYSTEM_GRANTS objects in a first step logged in as a DBA, and all the rest in a second step as the schema user himself, so that JOBS will be created in the correct schema (see the sketch below)
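As an illustration of the last point, a two-step Data Pump schema import might look roughly like this. This is only a sketch: the directory object, dump file and schema names are assumptions, and the object-path names USER, ROLE_GRANT and SYSTEM_GRANT are the Data Pump names for the objects mentioned above.
$ # step 1, as a DBA: bring in only the user account and its grants
$ impdp system DIRECTORY=dpump_dir DUMPFILE=scott.dmp SCHEMAS=scott INCLUDE=USER,ROLE_GRANT,SYSTEM_GRANT
$ # step 2, as the schema owner: everything else, so jobs land in his schema
$ impdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott.dmp SCHEMAS=scott EXCLUDE=USER,ROLE_GRANT,SYSTEM_GRANT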
Tuesday, December 11, 2007
Wednesday, November 7, 2007
create pfile from spfile without a running instance
When you have messed up an instance and need to change some parameters in your PFILE, but all you have is an SPFILE, which is not editable, you can still get yourself a PFILE if you are running on Unix. There is a strings command, which will extract the printable strings from the SPFILE.
$ strings spfile$ORACLE_SID.ora > pfile$ORACLE_SID.ora
Wednesday, October 24, 2007
cursor cache
An article at orafaq on cursors
The main point is that the OPEN_CURSORS parameter limits the number of concurrently open cursors per session. If this number is exceeded, the session will get an error.
SESSION_CACHED_CURSORS, on the other hand, is something totally different and independent from OPEN_CURSORS. The idea behind the cursor cache is to hold "popular" cursors open. If a cursor is being parsed every time it is opened, it causes latch contention and concurrency starts to suffer. So, to protect the database from badly written code, you can cache the cursors. The number of cached cursors is set by SESSION_CACHED_CURSORS; the value is per session. It can be set system-wide (needs a restart) or by ALTER SESSION. As I have understood, the cursor cache is part of the PGA, so increasing this parameter will not affect the shared pool.
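A quick way to see whether the cache is actually being used (just a sketch; the statistic names are the standard v$statname entries, and the value 100 is arbitrary):
alter session set session_cached_cursors = 100;
select sn.name, ss.value
  from v$statname sn, v$sesstat ss
 where ss.statistic# = sn.statistic#
   and ss.sid = sys_context('userenv','sid')
   and sn.name in ('session cursor cache count', 'session cursor cache hits');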
Wednesday, October 17, 2007
export/import through gzip
1. create a pipe:
$ mknod gzpipe p
2. start gzip:
$ gzip -c < gzpipe > exp.gz &
3. run exp:
$ exp scott/tiger file=gzpipe ...
The import would then be:
$ gunzip -c < exp.gz > gzpipe &
$ imp scott/tiger file=gzpipe ...
Unfortunately, piping cannot be used with 10g Data Pump. Here's what the Oracle whitepaper says:
Can I use gzip with Data Pump?
Because Data Pump uses parallel operations to achieve its high performance, you cannot pipe the output of Data Pump export through gzip. Starting in Oracle Database 11g, the COMPRESSION parameter can be used to compress a Data Pump dump file as it is being created. The COMPRESSION parameter is available as part of the Advanced Compression Option for Oracle Database 11g.
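For reference, an 11g export using that parameter would look something like this (the directory object and file name are made up for the sketch; COMPRESSION=ALL needs the Advanced Compression Option mentioned above):
$ expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott_compressed.dmp SCHEMAS=scott COMPRESSION=ALL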
Tuesday, October 16, 2007
How to change database SID
dbnewid is an Oracle utility which enables you to change the SID of a database.
You can change either the DBNAME or DBID or both.
dbnewid does not change the global name of the database. To do that, the following command must be run:
ALTER DATABASE RENAME GLOBAL_NAME TO [name].[domain];
Also, you must manually change the DB_NAME instance parameter and recreate the password file. And if you changed the DBID, the database must be opened with the RESETLOGS option.
Before running nid, the database must be cleanly shut down and then started in MOUNT mode:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
1. changing only database name:
% nid TARGET=SYS/oracle@test_db DBNAME=test_db SETNAME=YES
2. changing only dbid:
% nid TARGET=SYS/oracle@test_db
3. changing both:
% nid TARGET=SYS/oracle@test DBNAME=test_db
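Putting the pieces above together, a rough sketch of the whole sequence for case 3 (changing both DBNAME and DBID) could look like this; the connection string, password and new name are just the placeholders from the examples:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
-- from the OS shell: % nid TARGET=SYS/oracle@test DBNAME=test_db
-- nid leaves the database shut down when it completes
-- set db_name=test_db in the pfile/spfile and recreate the password file with orapwd, then:
STARTUP MOUNT
ALTER DATABASE OPEN RESETLOGS;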
Wednesday, September 26, 2007
EM Grid Control target discovery. How to add more targets after the agent is already there?
The article which explains it all: Metalink Note:239224.1
In short, the general points are:
1. There are a number of Perl scripts in the sysman/admin/discover directory, which are run by the agent configuration assistant at the end of the agent installation.
2. The discovery scripts search for all kinds of targets in all the known places (TNS_ADMIN, oratab, etc.) and create the targets.xml file, which contains the list of discovered targets in XML format. The file is saved into the [agent home]/sysman/emd directory.
3. If you need to add more targets after agent installation, you need to initiate the discovery process from within the grid control itself, by clicking "add" from the targets view.
4. As a last resort you could also try running the perl scripts manually, but this is not recommended.
5. You can even edit the targets.xml file manually to add targets and then run emctl reload (a quick sketch follows).
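For illustration, the agent's current view of its targets can be checked and the targets.xml re-read from the command line; the exact flags may differ slightly between agent versions, so treat this as a sketch:
$ emctl config agent listtargets    # list the targets the agent currently knows about
$ emctl reload                      # re-read targets.xml after a manual edit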
Friday, September 21, 2007
Unsetting the instance parameters
The following syntax is used to unset a parameter:
where "sid" is the sid of the instance which needs to be affected in a RAC environment. In case of a single instance use "*".
SQL> alter system reset [parameter_name] scope=spfile sid='*';
where "sid" is the sid of the instance which needs to be affected in a RAC environment. In case of a single instance use "*".
Tuesday, September 4, 2007
What does it mean if SERVER = 'NONE' in v$session?
Definition:
SERVER VARCHAR2(9) Server type (DEDICATED| SHARED| PSEUDO| NONE)
From metalink:
In an MTS/Shared Server configuration, when you see the value 'NONE' (as shown below), it means there is no task currently being processed by a shared server for that session. The SERVER column will in fact show 'SHARED' if some task is being processed at that particular moment by a shared server process for that session.
This is also supported by the fact that the STATUS of a session is always 'ACTIVE' for sessions with SERVER='SHARED' and 'INACTIVE' for those with SERVER='NONE'.
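A quick way to see this for yourself (just a sketch using the documented v$session columns):
select sid, serial#, status, server from v$session where server in ('SHARED', 'NONE');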
Wednesday, August 15, 2007
... LOBs
The initial problem was: "the LOB segment is wasting disk space, need to shrink it". So how can one investigate whether there is some "wasted" space inside a LOB, and how can it be reclaimed?
Metalink Note:386341.1 is the article that helped me quite a bit on this.
To get the LOB segment size, use the dba_lobs and dba_segments views.
First find out the name of the LOB segment:
select segment_name, table_name, column_name, tablespace_name from dba_lobs where table_name = '[table_name]';
Then the size on disk:
select blocks, bytes from dba_segments where segment_name = '[lob_system_segment_name]';
To get to know how much data is stored in this LOB:
select sum(dbms_lob.getlength([lob_column])) from [table];
If the last number is much smaller than the previous one, then apparently some space is wasted and could be reclaimed. If you have been deleting rows from the LOB's parent table, then the LOB segment blocks (or chunks) are left in a "deleted" state. Theoretically this space will be reused by subsequent inserts [upd: tested, and it really does work like that]. But if you need to shrink the LOB and reclaim some space for other segments, then you have the following options.
First, you will have to rebuild the freepools, using the following command:
alter table [table name] modify lob([lob column name]) (freepools [number of free pools]);
In a RAC configuration, the number of freepools equals the number of RAC instances; otherwise it is 1 (or it can be looked up in the dba_lobs view; in case NULL is what you see there, then 1 is your choice).
:(((( Although this did not work with my LOB...
There is also a special keyword for this in ALTER TABLE syntax in 10.2:
alter table [table_name] modify lob ([lob_column]) (freepools rebuild);
After this you should see the free blocks either with DBMS_SPACE.SPACE_USAGE (in case of an ASSM tablespace) or with DBMS_SPACE.FREE_BLOCKS in case of manual segment space management. You can also use Tom Kyte's very convenient and useful show_space() procedure, which actually uses the same DBMS_SPACE package but also detects automatically what kind of segment space management you have and acts accordingly.
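For the ASSM case, a minimal DBMS_SPACE.SPACE_USAGE sketch could look like the block below; the owner is an assumption and the segment name is the one found in dba_lobs earlier. For LOB segments, the full and unformatted figures are usually the interesting ones.
-- minimal sketch; set serveroutput on to see the result
declare
  unf number;  unfb number;
  fs1 number;  fs1b number;
  fs2 number;  fs2b number;
  fs3 number;  fs3b number;
  fs4 number;  fs4b number;
  fullbl number; fullby number;
begin
  dbms_space.space_usage(
    segment_owner      => 'SCOTT',                      -- assumed owner
    segment_name       => '[lob_system_segment_name]',  -- from dba_lobs
    segment_type       => 'LOB',
    unformatted_blocks => unf,  unformatted_bytes => unfb,
    fs1_blocks => fs1, fs1_bytes => fs1b,
    fs2_blocks => fs2, fs2_bytes => fs2b,
    fs3_blocks => fs3, fs3_bytes => fs3b,
    fs4_blocks => fs4, fs4_bytes => fs4b,
    full_blocks => fullbl, full_bytes => fullby);
  dbms_output.put_line('full blocks: ' || fullbl || ', unformatted blocks: ' || unf);
end;
/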
Now to actually reclaim the space from a LOB into a tablespace, starting 10.2 you have the wonderful shrink space command:
alter table [table name] modify lob([lob column name]) (shrink space [cascade]);
If you are lower than 10.2, then the only option is to rebuild the LOB:
alter table [table name] move lob ([lob column name]) store as (tablespace [tablespace name]);
This is done online, so no need to bring anything off-line.
Now, why did this not work with my LOB, even though the difference between the "real" data size and the size of the segment itself was more than twofold?
The truth was that there were actually no bytes wasted. It was just normal, since the column was of NCLOB type, and Oracle stores NCLOBs in a UCS2-compatible internal encoding, which has a fixed width of 2 bytes per character. What the dbms_lob.getlength() function returns is the number of characters. So this is where I was getting the difference, and this is why you have to multiply the result by 2 if you are using NCLOBs.
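In other words, for the NCLOB case the byte estimate from the earlier query would become (same placeholders as above):
select sum(dbms_lob.getlength([lob_column])) * 2 as approx_bytes from [table];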
But what got me totally distracted on this was that when I performed a test with CLOBs with 2K chunks, inserting 2K of data into each, I was also getting double-sized LOB segments. The reason was revealed by Jonathan Lewis himself on comp.databases.oracle.server.
Here's what he wrote:
"A 2KB block allows 1,988 bytes of space for LOB storage - the remaining 60 bytes is the standard block overhead plus a few bytes of data specific to the LOB, such as the object number, LOB ID, and SCN. So if your character set is single byte then you can get 1,988 characters per block."
It also must be kept in mind that a LOB segment doesn't use the PCTFREE/PCTUSED parameters; it always allocates a full chunk. So (roughly) if you have 8K chunks and you store 2K of data in each row, then you will have 6K of disk space wasted per row.
Also, a very useful document by Tanel Põder, which explains a lot about LOB internals, can be found here.
Some SQL needed to perform tests with LOBs.
To create a table with a CLOB stored in a segment named 't1_lob', with a 2K chunk size (you need a 2K buffer cache and a 2K-block tablespace called 'smallblock'); the LOB is made out-of-line and no space is left for concurrency:
create table t1 (i int, b clob)
lob (b) store as t1_lob (
chunk 2K
disable storage in row
tablespace smallblock
pctversion 0);
To fill the table:
begin
for i in 1..1000 loop
insert into t1 values (mod(i,5), rpad('*',1988,'*'));
end loop;
end;
/
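After loading the test rows, the same comparison as above can be run against this table; the segment name T1_LOB comes from the create statement (dba_segments stores it in upper case):
select sum(dbms_lob.getlength(b)) from t1;
select blocks, bytes from dba_segments where segment_name = 'T1_LOB';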
To load a CLOB from a file, using loadCLOB1_proc (its source is among the demo scripts mentioned below):
declare
nc clob;
begin
select b into nc from t1 where i = 1 for update;
loadCLOB1_proc(nc,'ROOT_DIR','test.txt');
update t1 set b = nc where i = 1;
end;
/
Other LOB related examples could be found in $ORACLE_HOME/rdbms/demo/lobs/plsql
Wednesday, August 1, 2007
Parallel query execution
Helpful article: Metalink Note:237287.1
oracle docs
First, check if the parallel query option is enabled:
At SQLPLUS
==========
SQL> select * from v$option;
PARAMETER VALUE
-------------------------
Parallel execution TRUE
General stats:
select * from V$PX_PROCESS_SYSSTAT;
Your own session's stats:
select * from v$pq_sesstat;
To make the query use parallelism, use one of three options:
- set the degree of parallelism for the table: ALTER TABLE t PARALLEL 2;
- use ALTER SESSION FORCE PARALLEL QUERY (see the sketch after this list)
- give a hint within the sql itself: SELECT /*+ PARALLEL(t,2) */ * from t;
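For the second option, the statement would be something like this (degree 4 chosen purely as an example):
SQL> alter session force parallel query parallel 4;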