Replication Error "QUIESCED" Group: how to use DBMS_DEFER_SYS.EXECUTE_ERROR and DBMS_DEFER_SYS.DELETE_ERROR
Delete all Errors from replication sites
Check if there are any errors "select * from deferror;"
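A hedged sketch of the two calls (the transaction ID and destination below are hypothetical placeholders; take the real values from the DEFERRED_TRAN_ID and DESTINATION columns of DEFERROR):

```sql
-- Re-execute a deferred transaction that errored at a destination:
BEGIN
  DBMS_DEFER_SYS.EXECUTE_ERROR(
    deferred_tran_id => '1.12.2904',    -- hypothetical DEFERRED_TRAN_ID
    destination      => 'REPA.WORLD');  -- hypothetical destination db link
END;
/

-- Or delete error entries without re-executing them
-- (NULL means "all" for either argument):
BEGIN
  DBMS_DEFER_SYS.DELETE_ERROR(
    deferred_tran_id => NULL,   -- all errored transactions
    destination      => NULL);  -- at all destinations
END;
/
```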
I have a live 4-node, 4-way replication cluster across the repA, repAA, repB, and repBB instances. The node repA is the master, which I want to move to repAA.
Prerequisites for Advanced Replication
1. Must have the distributed and replication options installed
2. Must run $ORACLE_HOME/rdbms/admin/catrep.sql as INTERNAL
Task: We have a working 4-node, 4-way replication cluster built on Advanced (multi-master) replication. Two of the four nodes must be moved to a different physical location without downtime.
Optimizer statistics play a key part in the execution plan the optimizer chooses. Here is an example:
Table "RSODSACTREQ" has 313,783 rows in total.
select A.snap_id AWR_SNAP_ID, A.INSTANCE_NUMBER,
       to_char(B.BEGIN_INTERVAL_TIME,'DD-MON-YYYY HH24:MI:SS') AWR_START_TIME,
       to_char(B.END_INTERVAL_TIME,'DD-MON-YYYY HH24:MI:SS') AWR_END_TIME,
       A.RESOURCE_NAME, MAX_UTILIZATION
from   sys.wrh$_resource_limit A, sys.wrm$_snapshot B
where  A.resource_name like '%processes%'
and    A.snap_id = B.snap_id
and    A.INSTANCE_NUMBER = B.INSTANCE_NUMBER
and    A.INSTANCE_NUMBER = 1
and    B.BEGIN_INTERVAL_TIME > sysdate - 12/24;
select
sid,
username,
round(100 * total_user_io/total_io,2) tot_io_pct
from
(select
b.sid sid,
nvl(b.username,p.name) username,
sum(value) total_user_io
from
sys.v_$statname c,
sys.v_$sesstat a,
sys.v_$session b,
sys.v_$bgprocess p
where
a.statistic#=c.statistic# and
p.paddr (+) = b.paddr and
b.sid=a.sid and
c.name in ('physical reads',
'physical writes',
'physical writes direct',
'physical reads direct',
'physical writes direct (lob)',
'physical reads direct (lob)')
group by
b.sid, nvl(b.username,p.name)),
(select
sum(value) total_io
from
sys.v_$statname c,
sys.v_$sesstat a
where
a.statistic#=c.statistic# and
c.name in ('physical reads',
'physical writes',
'physical writes direct',
'physical reads direct',
'physical writes direct (lob)',
'physical reads direct (lob)'))
order by
3 desc;
1. Grep the ocssd log file
[oracle @ tadrac1]: /u1/app/../cssd >grep -i "master node" ocssd.log | tail -1
[CSSD]CLSS-3001: local node number 1, master node number 1
Step A: Sequence of events to pull the cluster database down:
1. Bring down load balanced/TAF service
srvctl stop service -d orcl -s RAC
CRS DAEMON FUNCTIONALITY
CRSD: Performs high availability recovery and management operations such as maintaining the OCR and managing application resources.
- Engine for HA operation
- Manages 'application resources'
- Starts, stops, and fails over 'application resources'
- Spawns separate 'actions' to start/stop/check application resources
- Maintains configuration profiles in the OCR (Oracle Cluster Registry)
- Stores the current known state in the OCR
- Runs as root
- Is restarted automatically on failure
Why does the standby periodically lag during the day?
The script reports the time it took to apply the log, the size of the log, and the redo apply rate for that log.
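As a cross-check of what such a script reports, the current lag can also be read directly on the standby (a sketch; the view and statistic names below exist from 10g onward, but availability depends on version and configuration):

```sql
-- On the standby instance: current apply and transport lag
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('apply lag', 'transport lag');
```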
- Data Pump runs only on the server side.
- You may initiate the export from a client, but the jobs themselves run inside an Oracle server.
- No dump files (expdat.dmp) or log files are created on your local machine.
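A minimal illustration of the point above: even when expdp is invoked from a client, DIRECTORY must name a server-side directory object, so the dump and log files land on the server. Schema, connect string, and file names here are hypothetical:

```
$ expdp scott/tiger@orcl schemas=SCOTT directory=DATA_PUMP_DIR \
      dumpfile=scott.dmp logfile=scott.log
```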
PURPOSE : Provide a summary report of all disk groups.
SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off
-- +----------------------------------------------------------------------------+
-- | Jeffrey M. Hunter                                                          |
-- |----------------------------------------------------------------------------|
-- | PURPOSE  : Provide a summary report of all disks contained within all ASM  |
-- |            disk groups along with their performance metrics.               |
-- | NOTE     : As with any code, ensure to test this script in a development   |
-- |            environment before attempting to run it in production.          |
-- +----------------------------------------------------------------------------+
cd Changes the current directory to the specified directory.
column tsname format a30 heading 'Tablespace Name'
column tbs_size_mb format 99999,999 heading 'Size|(MB)'
column used format 99999,999 heading 'Used|(MB)'
column avail format 99999,999 heading 'Free|(MB)'
column used_visual format a11 heading 'Used'
column pct_used format 999 heading '% Used'
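One possible query matching the COLUMN definitions above; a sketch built on DBA_DATA_FILES and DBA_FREE_SPACE (it ignores autoextend headroom, and the used_visual bar column is omitted for brevity):

```sql
SELECT df.tablespace_name                                  tsname,
       ROUND(df.bytes/1024/1024)                           tbs_size_mb,
       ROUND((df.bytes - NVL(fs.bytes,0))/1024/1024)       used,
       ROUND(NVL(fs.bytes,0)/1024/1024)                    avail,
       ROUND((df.bytes - NVL(fs.bytes,0))*100/df.bytes)    pct_used
FROM  (SELECT tablespace_name, SUM(bytes) bytes
       FROM dba_data_files GROUP BY tablespace_name) df,
      (SELECT tablespace_name, SUM(bytes) bytes
       FROM dba_free_space GROUP BY tablespace_name) fs
WHERE df.tablespace_name = fs.tablespace_name(+)
ORDER BY pct_used DESC;
```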
Log in to one of the nodes (e.g. linux1) and disable the cluster instance parameter by setting cluster_database to FALSE from the current instance:
$ sqlplus "/ as sysdba"
SQL> alter system set cluster_database=false scope=spfile sid='orcl1';
Sequence of events to bring the cluster database back up:
1. Start all node applications using "srvctl start nodeapps -n (node)"
2. Start the ASM instance using "srvctl start asm -n (node)"
3. Start the RAC instances using "srvctl start instance -d (database) -i (instance)"
4. Finish by bringing the load-balanced/TAF service online: "srvctl start service -d orcl -s RAC"
Different DBAs have different views on what percentage to sample when analyzing. The Oracle documentation recommends a full analyze of the entire database, which is not practical for most live systems running into terabytes.
In the past I had a performance issue on a database of just over 300 GB. One table, ORDER_DATA, had 400 million rows. That one table pulled the entire system down a number of times simply because it was not properly analyzed and Oracle did not know the data distribution in the table.
I was struggling to understand where things were going wrong: we were analyzing the entire table every night, but without histograms (hash buckets), and, surprisingly, the SQL was already using the right index.
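For that situation, histograms can be gathered explicitly with DBMS_STATS; a minimal sketch (the schema name, sampling percentage, and bucket count are assumptions to adapt):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP',                 -- hypothetical owning schema
    tabname          => 'ORDER_DATA',
    estimate_percent => 10,                    -- sample 10%; tune for table size
    method_opt       => 'FOR ALL INDEXED COLUMNS SIZE 254',  -- build histograms
    cascade          => TRUE);                 -- also gather index statistics
END;
/
```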
Exporting and Importing Statistics
Caveat: Always export and import statistics with the exp/imp utility as the schema user who owns the tables. I wasted a week exporting as DBA for user XYZ and then importing into a different system under a different username.
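Where DBMS_STATS is available, the move can be done through a statistics table that is then transported with exp/imp; a sketch, with SCOTT as a hypothetical owning schema:

```sql
-- On the source: stage the schema's statistics into a stat table
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'STATS_TAB');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'STATS_TAB');
END;
/
-- Move SCOTT.STATS_TAB to the target system (e.g. via exp/imp as SCOTT), then:
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'STATS_TAB');
END;
/
```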
1. Log in as the PERFSTAT user on the database. It won't work unless you log in as PERFSTAT.
Change a.statistic# to the respective value.
Stats for Working Hours
select to_char(trunc(b.snap_time),'DD-MM-YYYY'), statistic#, name, sum(value)
from   STATS$SYSSTAT A, STATS$SNAPSHOT B
where  a.snap_id = b.snap_id
and    trunc(b.snap_time) > trunc(sysdate - 30)
and    to_char(b.SNAP_TIME,'HH24') > 8
and    to_char(b.SNAP_TIME,'HH24') < 18
and    a.statistic# = 54
group by trunc(b.snap_time), statistic#, name
order by trunc(b.snap_time);
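Statistic numbers differ between releases, so it is safer to look the value up than to hard-code it (54 above); for example:

```sql
-- Find the STATISTIC# for a statistic by name (numbers vary by release)
select statistic#, name from v$statname where name like '%physical reads%';
```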
Create RMAN Catalogue steps
• Select database for catalogue.
• Create catalogue owner RMANC, default tablespace TOOLS.
• Grant RECOVERY_CATALOG_OWNER role to RMANC.
• Run the RMAN executable (the same version as the catalogue database) and connect to the catalogue as the RMANC user.
• Issue the CREATE CATALOG command.
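The last two steps, sketched as commands (the password and TNS alias are placeholders):

```
$ rman catalog rmanc/password@catdb
RMAN> CREATE CATALOG TABLESPACE tools;
```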
Datafiles Backed up during past 24 Hours
SELECT dbfiles||' from '||numfiles "Datafiles backed up",
       cfiles  "Control Files backed up",
       spfiles "SPFiles backed up"
FROM  (select count(*) numfiles from sys.v_$datafile),
      (select count(*) dbfiles from sys.v_$backup_datafile a, sys.v_$datafile b
       where a.file# = b.file# and a.completion_time > sysdate - 1),
      (select count(*) cfiles from sys.v_$backup_datafile
       where file# = 0 and completion_time > sysdate - 1),
      (select count(*) spfiles from sys.v_$backup_spfile
       where completion_time > sysdate - 1);
Some cheap and easy ways to install RAC on inexpensive hardware