Note: if you are not interested in the nitty-gritty of my testing and results, you may just want to skip ahead to the blog entry I have written that summarizes my findings and makes some recommendations.
One limitation of AWR Warehouse (AWRW) is that it doesn’t play nicely with Data Guard. I have been spending some time recently reproducing issues that we have seen on our production systems. With luck, some of the workarounds we have implemented to minimize these problems will be discussed in a follow-up post.
First here are some details about my testing environment; note in particular that it does not follow the best practice of separating the OEM and AWR Warehouse repositories:
- Host dga holds database instance DGA
- Host dgb holds database instance DGB
- DGA and DGB are configured as primary and standby via Oracle Data Guard
- Host oem hosts Enterprise Manager (OEM) and database instance OMR
- OMR holds both OEM and AWRW repositories
- All hosts are running Oracle Linux 7.3
- All databases are Oracle 18.104.22.168
- All hosts and database instances are configured as targets in OEM
- Preferred credentials have been defined for all databases
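Before adding any targets, it is worth confirming the Data Guard roles and noting that a physical standby is a block-for-block copy of the primary, so both instances report the same DBID. A quick sanity check, run on each instance:

```sql
-- Run on both DGA and DGB: confirm the current Data Guard role and
-- note that both instances report the same DBID.
SELECT name, db_unique_name, database_role, dbid
FROM   v$database;
```

This shared DBID is at the heart of the behaviour described below.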
Let’s add DGA as a target in AWRW. From the AWR Warehouse homepage in OEM, we click the Add button and select DGA. After a few seconds we can see DGA in our list of targets:
I then highlight this target and select “Upload Snapshots Now” from the Actions drop-down list.
Success. We can see more information in the CAW_DBID_MAPPING table in the AWRW repository:
SQL> SELECT mapping_id, target_name, old_dbid, new_dbid
  2  FROM DBSNMP.CAW_DBID_MAPPING;

MAPPING_ID TARG   OLD_DBID   NEW_DBID
---------- ---- ---------- ----------
         1 cawr 1585153544 1585153544
         2 DGA  1852644603 1852644603

SQL>
Let’s try to add DGB (currently running as standby) as a target:
As you can see, adding the standby instance fails.
Next, a switchover is performed so that DGB is primary and DGA is standby.
One thing to note is that the AWR tables now exist in DGB (they were created on DGA while it was primary and have been copied across as part of normal standby redo apply).
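We can confirm this on DGB with a query like the following (illustrative; the snapshot range will vary):

```sql
-- On DGB after switchover: the AWR snapshots taken while DGA was
-- primary are visible here too, stored under the shared DBID.
SELECT dbid,
       MIN(snap_id) AS min_snap,
       MAX(snap_id) AS max_snap,
       COUNT(*)     AS snap_count
FROM   dba_hist_snapshot
GROUP  BY dbid;
```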
Addition of DGB to AWRW now completes successfully:
“Upload Snapshots Now” also runs successfully. However, looking closer reveals some problems. The CAW_DBID_MAPPING table shows that during the addition, OEM detected it already had an instance in AWRW with the same DBID, so it re-mapped DGB to a new DBID, 1.
SQL> select mapping_id, target_name, old_dbid, new_dbid
  2  from dbsnmp.caw_dbid_mapping order by mapping_id;

MAPPING_ID TARG   OLD_DBID   NEW_DBID
---------- ---- ---------- ----------
         1 cawr 1585153544 1585153544
         2 DGA  1852644603 1852644603
         3 DGB  1852644603          1

SQL>
When performing analysis of AWRW data, this mapping would need to be considered.
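For example, any query against AWRW data should join through the mapping table so that rows stored under the remapped DBID are still attributed to the correct source target. A hypothetical sketch:

```sql
-- Illustrative only: attribute AWRW snapshot rows back to their source
-- target via the DBID mapping, so rows stored under the remapped
-- DBID (here, 1) are not orphaned in reports.
SELECT m.target_name, s.snap_id, s.begin_interval_time
FROM   dba_hist_snapshot      s
JOIN   dbsnmp.caw_dbid_mapping m
  ON   s.dbid = m.new_dbid
ORDER  BY m.target_name, s.snap_id;
```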
Analysis of table CAW_LOAD_METADATA reveals further problems:
SQL> select target_name, begin_snap_id, end_snap_id
  2  from dbsnmp.caw_load_metadata;

TARG BEGIN_SNAP_ID END_SNAP_ID
---- ------------- -----------
DGA              1           6
DGB              1           7
AWR history for snapshot ids 1 through 6 has been uploaded twice: once for DGA and once for DGB. In a real environment where the volume of AWR data on the source database is large, this could be a nasty surprise. Also, any analysis of AWRW data will likely be invalid, with some snapshots counted twice under different DBIDs.
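A rough check for this situation is to look for overlapping snapshot-id ranges loaded under different target names in the load metadata (a sketch, assuming overlapping ranges always indicate the same underlying snapshots, which holds here because both targets share one source database):

```sql
-- Flag pairs of loads from different targets whose snapshot-id ranges
-- overlap; in this Data Guard setup that indicates double-loaded data.
SELECT a.target_name  AS target_a,
       b.target_name  AS target_b,
       a.begin_snap_id, a.end_snap_id,
       b.begin_snap_id, b.end_snap_id
FROM   dbsnmp.caw_load_metadata a
JOIN   dbsnmp.caw_load_metadata b
  ON   a.target_name   < b.target_name
 AND   a.begin_snap_id <= b.end_snap_id
 AND   b.begin_snap_id <= a.end_snap_id;
```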
A switchover returning DGA to the primary role is then performed, and “Upload Snapshots Now” is run again.
The task completes successfully, and this time, checking CAW_LOAD_METADATA, we can see that there are no additional duplicates:
SQL> select target_name, begin_snap_id, end_snap_id
  2  from dbsnmp.caw_load_metadata order by dump_id;

TARG BEGIN_SNAP_ID END_SNAP_ID
---- ------------- -----------
DGA              1           6
DGB              1           7
DGA              8          10

SQL>
It seems the main problem occurs during the initial configuration of the target. Once that has completed, uploads generally take place without problems after a switchover, with the caveat that uploads from DGB will be mapped to the new DBID, 1.
Note in particular that the mapping is determined by the instance that performed the upload, which may or may not be the instance where the snapshot was taken.
There are exceptions, however. In particular, during normal operation the Extract, Transfer, and Load operations are independent tasks, so if the extract completes successfully on one instance and a switchover occurs before the transfer runs, the transfer tasks (on both the primary and standby instances) will fail to upload the relevant snapshots.
More details about this case and, as promised, some of the workarounds we have implemented will follow in a later post.