I've always been a fan of simplicity, and using RMAN with just the controlfile has proven, in my experience, to be suitable for 99% of standalone databases of all sizes. But I have now come to the conclusion that there is a serious flaw in this approach, and since I experienced it firsthand I feel obliged to share it with you all.
I have a 180GB, 200 tx/sec database that is backed up by an EM-scheduled RMAN job, with very limited space for the Flash Recovery Area. We run incremental backups every day, alternating between level 0 and level 1. Last time we ran out of space because a level 0 backup failed and the next day was a level 1 day: with a retention policy of REDUNDANCY 1, RMAN kept the old level 0, the old level 1, and the new level 1, since the level 0 in between had failed.
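To make the setup concrete, here is a minimal sketch of the kind of alternating job we run. This is illustrative, not our exact EM job: the tags are made up, and the retention setting is the REDUNDANCY 1 policy mentioned above.

```sql
-- One-time configuration: keep only one redundant copy of each datafile backup
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 1;

-- Level 0 day: the full incremental baseline
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'LVL0';
RMAN> DELETE NOPROMPT OBSOLETE;

-- Level 1 day: only the changes since the last level 0/1
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'LVL1';
RMAN> DELETE NOPROMPT OBSOLETE;
```

The failure mode above falls out of this: when a level 0 fails, the next successful level 1 does not make the older backups obsolete under REDUNDANCY 1, so the FRA suddenly has to hold an extra generation of backups.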
Someone at the site then tried to bounce the DB, and that operation alone kept us from ever knowing what went wrong with the backup, because the RMAN output is kept only in memory and wiped at each restart. Since this information is not stored in the controlfile: poof! We lost the RMAN output, and with it the opportunity to find out what had gone wrong.
Here’s how you check this:
SQL> select count(*) from v$rman_output;

  COUNT(*)
----------
        34

SQL> startup force
ORACLE instance started.

Total System Global Area  426864640 bytes
Fixed Size                  1300352 bytes
Variable Size             247466112 bytes
Database Buffers          171966464 bytes
Redo Buffers                6131712 bytes
Database mounted.
Database opened.
SQL> select count(*) from v$rman_output;

  COUNT(*)
----------
         0
Where’s my backup log info?? Gone baby gone.
Possible solutions for those who don't want to use a recovery catalog:
1) Keep a copy of V$RMAN_OUTPUT in a table of your own after every backup, so the output survives a restart (you can't put a DML trigger on a fixed view like V_$RMAN_OUTPUT, so a scheduled copy is the practical way to track all backup info)
2) Create a new schema in the same database, create a catalog there, and resync to it daily
3) Be a man and create a new DB for the catalog!
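Option 1 can be sketched like this. The table name is hypothetical, and you'd need SELECT granted on V_$RMAN_OUTPUT plus some way to run the insert right after the backup (a final step in the backup script, or a DBMS_SCHEDULER job):

```sql
-- Hypothetical history table, cloned from the fixed view's shape
CREATE TABLE rman_output_hist AS
  SELECT SYSDATE AS captured_at, o.*
  FROM   v$rman_output o
  WHERE  1 = 0;

-- Run right after each backup, before anyone gets a chance to bounce the instance
INSERT INTO rman_output_hist
  SELECT SYSDATE, o.* FROM v$rman_output o;
COMMIT;
```

For options 2 and 3 the flow is the standard catalog setup: create the catalog owner schema, run CREATE CATALOG from rman connected to it, then REGISTER DATABASE once and let the backups (or a RESYNC CATALOG) keep it current.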