AWR Warehouse and Data Guard (Part Two)

We continue with the setup configured in the previous blog entry.
This post includes details about updating some internal AWRW tables. These steps are provided for educational purposes only; if you try this on a real system, make sure you know exactly what you are doing and understand the risks. If you break anything, Oracle Support (or I) may not be able or willing to help you.

First, I remove the two instances DGA and DGB previously configured from AWR Warehouse, with a view to re-adding them in a way that resolves some of the problems we encountered last time.

I re-add DGA, perform “Upload Snapshots Now”, and verify that an entry has been created in the DBID mapping table.

SQL> SELECT target_name, old_dbid, new_dbid FROM DBSNMP.CAW_DBID_MAPPING order by mapping_id;

TARGET_NAME   OLD_DBID   NEW_DBID
----------- ---------- ----------
cawr        1585153544 1585153544
DGA         1852644603 1852644603

I switch over to instance DGB. Before re-adding DGB to AWR Warehouse, I back up the extract metadata table.

SQL> create table dbsnmp.caw_extract_metadata_backup as select * from dbsnmp.caw_extract_metadata;

Table created.

I then re-add DGB to AWRW, but before uploading snapshots I temporarily update the mapping for DGA, setting the NEW_DBID column to a dummy value of 1.

SQL> update DBSNMP.CAW_DBID_MAPPING set new_dbid=1 where target_name='DGA';
1 row updated.
SQL> COMMIT;
Commit complete.

I then verify that caw_extract_metadata is empty (it was recreated when DGB was added to AWRW) and restore the record from my backup table. This should let the AWR extract process recognize which snapshots have already been uploaded from DGA. I also advance the relevant sequence so that creating a new extract metadata record does not raise an error. Finally, I create a couple of snapshots for testing.

SQL> select count(*) from dbsnmp.caw_extract_metadata;
  COUNT(*)
----------
	 0
SQL> insert into dbsnmp.caw_extract_metadata select * from dbsnmp.caw_extract_metadata_backup;
1 row created.
SQL> commit;
Commit complete.
SQL> select dbsnmp.caw_extract_id_seq.nextval from dual;
   NEXTVAL
----------
        21
SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
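The single NEXTVAL fetch above was enough here because only one metadata record was restored. In the general case the sequence has to be advanced past the highest restored extract ID. A sketch of how that might look — this assumes the key column of caw_extract_metadata is EXTRACT_ID, which you should verify on your own version before relying on it:

```sql
-- Advance caw_extract_id_seq until it exceeds the highest restored
-- extract ID, so that a new extract metadata record cannot collide
-- with the restored rows. EXTRACT_ID as key column is an assumption.
DECLARE
  l_max  NUMBER;
  l_next NUMBER;
BEGIN
  SELECT MAX(extract_id) INTO l_max FROM dbsnmp.caw_extract_metadata;
  LOOP
    SELECT dbsnmp.caw_extract_id_seq.NEXTVAL INTO l_next FROM dual;
    EXIT WHEN l_next > l_max;
  END LOOP;
END;
/
```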

Finally I perform “Upload Snapshots Now” on DGB.  The process completes successfully.  Let’s check the contents of the dbid mapping table:

SQL> select mapping_id, target_name, old_dbid, new_dbid from DBSNMP.CAW_DBID_MAPPING;
MAPPING_ID TARGET_NAME   OLD_DBID   NEW_DBID
---------- ----------- ---------- ----------
         1 cawr        1585153544 1585153544
         4 DGA         1852644603          1
         5 DGB         1852644603 1852644603

Because we had temporarily re-mapped DGA, OEM did not have to remap the DBID for DGB. I now revert the change we made to the DGA mapping and review the mappings:

SQL> update DBSNMP.CAW_DBID_MAPPING set new_dbid = 1852644603 where target_name = 'DGA';
1 row updated.
SQL> commit;
Commit complete.
SQL> select mapping_id, target_name, old_dbid, new_dbid from DBSNMP.CAW_DBID_MAPPING;
MAPPING_ID TARGET_NAME   OLD_DBID   NEW_DBID
---------- ----------- ---------- ----------
         1 cawr        1585153544 1585153544
         4 DGA         1852644603 1852644603
         5 DGB         1852644603 1852644603

To double-check that everything is OK after this change, I create some new snapshots and perform “Upload Snapshots Now” again on DGB. I then switch back to DGA, create some more snapshots, and again perform “Upload Snapshots Now”.
Both tasks complete successfully, and I can confirm everything is as expected by querying the load metadata table:

SQL> select target_name, dbid, begin_snap_id, end_snap_id from dbsnmp.caw_load_metadata order by dump_id;

TARGET_NAME       DBID BEGIN_SNAP_ID END_SNAP_ID
----------- ---------- ------------- -----------
DGA         1852644603             1          10
DGB         1852644603            11          13
DGB         1852644603            14          15
DGA         1852644603            16          18

One thing to note with this ‘hack’ is that it is best to perform “Upload Snapshots Now” on the active database before a switchover is performed, to make sure there are no extracts ‘left behind’ when the database becomes the standby. Obviously this may not be possible in fail-over situations. I will try to cover work-arounds for this in a follow-up blog post.
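One way to spot extracts ‘left behind’ before a switchover is to compare the latest AWR snapshot with the last snapshot covered by an extract. This is only a sketch: it assumes caw_extract_metadata carries an END_SNAP_ID column per extract, which you should verify on your own version:

```sql
-- Compare the newest AWR snapshot against the last extracted snapshot.
-- If latest_snap > last_extracted_snap, run "Upload Snapshots Now"
-- before switching over. The END_SNAP_ID column name is an assumption.
SELECT (SELECT MAX(snap_id)     FROM dba_hist_snapshot)           AS latest_snap,
       (SELECT MAX(end_snap_id) FROM dbsnmp.caw_extract_metadata) AS last_extracted_snap
  FROM dual;
```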
