
Wednesday, January 30, 2008
How To List All The Named Events Set For A Database
Metalink
SQL> oradebug setmypid //attach oradebug to the current session's process
SQL> oradebug dump events 1 //dump the events set at the given level into the trace file (1 = session, 2 = process, 4 = system)
SQL> oradebug tracefile_name //show the name of the trace file generated for this dump
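For example, a minimal sketch of the whole sequence (the 10046 event and its level here are just placeholders, any named event set in the session would show up the same way):
SQL> alter session set events '10046 trace name context forever, level 8';
SQL> oradebug setmypid
SQL> oradebug dump events 1
SQL> oradebug tracefile_name
The trace file named by the last command should now contain a line for the 10046 event.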
Friday, January 25, 2008
ASM and Cluster Manager (CSSD)
One interesting thing that the Oracle installation docs keep quiet about is the Cluster Manager service, which is needed to run an ASM instance.
In 10.1 it is installed by default (in 10.2 it is installed along with ASM instance creation, I believe).
How do you setup / remove / modify Cluster Synchronization Services Daemon (CSSD) ?
to uninstall: $ORACLE_HOME/bin/localconfig delete
to install: $ORACLE_HOME/bin/localconfig add
to start: /etc/init.d/init.cssd start
to stop: /etc/init.d/init.cssd stop
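To check whether the daemon is actually up (a quick sketch; the exact process name and inittab entry may differ slightly between versions and platforms):
$ ps -ef | grep -i [o]cssd
$ grep cssd /etc/inittab
localconfig add is also what registers init.cssd in /etc/inittab, so the second command shows whether that registration is in place.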
Monday, January 21, 2008
CONTROL_FILES
The control_files parameter is a list of comma-separated strings! It is not a single string of comma-separated values, which would seem more logical to my mind...
to change the control_files parameter:
alter system set control_files="/path/control01.ctl", "/path/control02.ctl", "/path/control03.ctl" scope=spfile;
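To verify the value before and after the change (a minimal sketch):
SQL> show parameter control_files
SQL> select name from v$controlfile;
Since the change is made with scope=spfile, it only takes effect after the instance is restarted, and the control files must already exist at the new paths when it comes back up.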
Friday, December 14, 2007
MATVIEWS
first experience with materialized views
1. the refresh needs a hell of a lot of UNDO space and produces tons of archivelog
2. if the mview was created before the MVIEW LOG, you cannot perform a FAST refresh. You need to perform a COMPLETE refresh first, otherwise you will get ORA-12034: materialized view log on "%s"."%s" younger than last refresh. Metalink Note:365157.1 discusses this problem.
3. to refresh it manually, use the DBMS_MVIEW package's REFRESH procedure:
DBMS_MVIEW.REFRESH('[MVIEW NAME]','C'); --where C is for Complete
4. the MVIEW must be registered with its MASTER site and be listed in the SYS.SLOG$ table.
5. it is much cheaper to drop and re-create the mview than to perform a complete refresh!
How to monitor the progress of a materialized view refresh (MVIEW)
MATERIALIZED VIEW REFRESH: Locking, Performance, Monitoring
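A minimal sketch for keeping an eye on a running refresh (owner and mview names are placeholders; a COMPLETE refresh usually shows up in v$session_longops, a FAST refresh may not):
SQL> select sid, opname, sofar, totalwork, time_remaining
  from v$session_longops
  where totalwork > 0 and sofar < totalwork;
SQL> select mview_name, last_refresh_type, last_refresh_date
  from dba_mviews
  where owner = 'SCOTT' and mview_name = 'MY_MVIEW';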
Tuesday, December 11, 2007
export/import considerations
Some things to consider when performing export/import operations:
- turn off archive logging for the duration of the IMPORT if possible
- JOBS are created in the schema of the user who is running the import
- the user must have the IMP_FULL_DATABASE privilege to be able to import JOBS
**I'm not sure which privilege exactly is necessary here. But tests show that if there's no IMP_FULL_DATABASE role, then Data Pump simply ignores the JOB objects in the dump**
- when performing a SCHEMA import, it might be reasonable to import the USER, ROLE_GRANTS and SYSTEM_GRANTS objects in a first step logged in as a DBA, and everything else in a second step as the schema user himself, so that JOBS are created in the correct schema (a sketch of this two-step approach follows below)
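A rough sketch of the two-step approach with Data Pump (directory, dumpfile and schema names are placeholders, and the INCLUDE/EXCLUDE object paths may need adjusting for your version):
# step 1: as a DBA, bring in only the user definition and its grants
$ impdp system directory=DP_DIR dumpfile=scott.dmp schemas=SCOTT include=USER include=SYSTEM_GRANT include=ROLE_GRANT
# step 2: as the schema owner, import everything else, so JOBS end up in the right schema
$ impdp scott directory=DP_DIR dumpfile=scott.dmp schemas=SCOTT exclude=USER exclude=SYSTEM_GRANT exclude=ROLE_GRANT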
Wednesday, November 7, 2007
create pfile from spfile without a running instance
When you have messed up an instance and need to change some parameters in your PFILE, but all you have is an SPFILE, which is not editable, you can still get yourself a pfile if you're running on Unix. There's a strings command, which will extract the printable strings from the spfile.
$ strings spfile$ORACLE_SID.ora > pfile$ORACLE_SID.ora
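Once the extracted file is cleaned up (long parameter values may come out split across lines and need rejoining by hand), a minimal sketch of putting it back, with placeholder file names:
SQL> startup pfile='/tmp/initMYDB.ora'
SQL> create spfile from pfile='/tmp/initMYDB.ora';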
Wednesday, October 24, 2007
cursor cache
An article at orafaq on cursors
The main point is that the OPEN_CURSORS parameter limits the number of cursors each session can hold open at the same time. If this number is exceeded, the session gets an error (ORA-01000). And session_cached_cursors is something totally different and independent from open_cursors. The idea behind the cursor cache is to keep "popular" cursors open. If a cursor is parsed every time it is opened, it causes latch contention and concurrency starts to suffer. So to protect the database from badly written code, you can cache the cursors. The number of cached cursors is set by session_cached_cursors. The value is per-session. It can be set system-wide (needs a restart) or by ALTER SESSION. As I understand it, the cursor cache is part of the PGA, so increasing this parameter will not affect the shared pool.
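To check whether the cache is actually being used in your own session (a minimal sketch; statistic names as of 10g):
SQL> select n.name, s.value
  from v$statname n, v$mystat s
  where n.statistic# = s.statistic#
  and n.name in ('session cursor cache hits', 'session cursor cache count');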
Wednesday, October 17, 2007
export/import through gzip
1. create a pipe:
$ mknod gzpipe p
2. start gzip
$ gzip -c < gzpipe > exp.gz &
3. run exp
$ exp scott/tiger file=gzpipe ...
import would then be:
$ gunzip -c < exp.gz > gzpipe &
$ imp scott/tiger file=gzpipe ...
Unfortunately piping cannot be used with 10g datapump. Here's what the Oracle whitepaper says:
Can I use gzip with Data Pump?
Because Data Pump uses parallel operations to achieve its high performance, you cannot pipe the output of Data Pump export through gzip. Starting in Oracle Database 11g, the COMPRESSION parameter can be used to compress a Data Pump dump file as it is being created. The COMPRESSION parameter is available as part of the Advanced Compression Option for Oracle Database 11g.
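For 11g the compressed-on-the-fly alternative would look roughly like this (a sketch; COMPRESSION=ALL requires the Advanced Compression Option, directory and dumpfile names are placeholders):
$ expdp scott/tiger directory=DP_DIR dumpfile=scott.dmp compression=ALL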
Tuesday, October 16, 2007
How to change database SID
dbnewid (the nid executable) is an Oracle utility which enables you to change the name and/or DBID of a database.
You can change either the DBNAME or DBID or both.
dbnewid does not change the global name of the database; to do that, the following command must be run:
ALTER DATABASE RENAME GLOBAL_NAME TO [name].[domain];
Also, you must manually change the DB_NAME instance parameter and recreate the password file. And if you changed the DBID, the database must be opened with the RESETLOGS option.
Before running nid, the database must be cleanly shut down and then mounted:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
1. changing only database name:
% nid TARGET=SYS/oracle@test_db DBNAME=test_db SETNAME=YES
2. changing only dbid:
% nid TARGET=SYS/oracle@test_db
3. changing both:
% nid TARGET=SYS/oracle@test DBNAME=test_db
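A rough sketch of the follow-up steps after nid has finished, assuming both the name and the DBID were changed (the password-file name and password are placeholders):
SQL> startup nomount
SQL> alter system set db_name=test_db scope=spfile;
SQL> shutdown immediate
$ orapwd file=$ORACLE_HOME/dbs/orapwtest_db password=oracle
SQL> startup mount
SQL> alter database open resetlogs;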