

Configuring Data Guard to manage the standby database

It’s possible to configure Data Guard through Grid Control, but today I will explain how to configure it from the command line.

To anyone using this how-to, I strongly suggest reading this post first.
It lists parameters that have a big impact on how smoothly the configuration goes with the Data Guard broker client (DGMGRL).

The prerequisites are:

1) You have a standby database synchronizing with your primary DB without any problem.
2) The parameters listed in the link above are specified.

Create dataguard configuration

The command can be run on either the primary or the standby server, because the broker configuration files will then be transferred to the standby database’s host.
I chose to perform it on the primary server.

C:\>set oracle_sid=<SID>
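DGMGRL is started from the command line and needs a SYSDBA connection before any broker command; a minimal sketch (the SYS password is a placeholder):

C:\>dgmgrl
DGMGRL>connect sys/&lt;password&gt;
Connected.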

DGMGRL>create configuration 'somename' as
primary database is 'primary'
connect identifier is <primary_connect_identifier>;

In this command, ‘primary’ is the db_unique_name value specified in your parameter file, and the connect identifier is the TNS alias that points to the primary.
If you are not sure of the db_unique_name, you can connect to SQL*Plus as sysdba and type:

SQL>show parameter db_unique_name;
note: If you are not sure of the exact parameter name, type only part of it; show parameter displays every parameter whose name contains the string you typed.
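For example, typing only a fragment of the name works, because show parameter matches on substrings:

SQL>show parameter uniq

This lists db_unique_name along with its current value.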

DGMGRL>show configuration;
This command displays the Data Guard configuration for the specified primary DB. At this point the status is, obviously, “DISABLED”.

Adding the standby to the configuration

DGMGRL>add database 'standby' as
connect identifier is <standby_connect_identifier>
maintained as physical;

Database “standby” added.

DGMGRL>show configuration;
The configuration will now display both the primary and standby databases, but the status is still disabled.
It’s now time to enable the configuration.
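For illustration, the output at this stage looks roughly like this on 10g (the exact layout varies by version; the names match the examples above):

Configuration
  Name:                somename
  Enabled:             NO
  Protection Mode:     MaxPerformance
  Databases:
    primary - Primary database
    standby - Physical standby database

Current status for "somename":
DISABLED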

DGMGRL>enable configuration;

DGMGRL> Show configuration;
The status has now turned to SUCCESS.

But this is a perfect world…

How to configure dataguard again if the first try fails

In a world where your first try fails for some reason, you need to return to the initial state (a primary, a standby, no Data Guard configuration), and that is not easy.
If you try to enable the configuration and the command hangs, you have a problem.

None of the previous steps, including the modification of parameters in the SPFILE, require a shutdown of the DB.
But resetting the configuration does.
If you are working on a critical system, the shutdown will have to be scheduled off peak hours, etc.

What the “enable configuration” command did was generate configuration files in ORACLE_HOME\database, the same folder as the SPFILE.
It also transferred those files to the standby host. The reason is simple: when the production server fails, I want to be able to keep using what is left.
And what is left is my standby database and server.

The files are named as follows:

dr1<db_unique_name>.dat
dr2<db_unique_name>.dat
hc_<SID>.dat

Deleting those files will, at worst, mess up the Data Guard broker configuration, but it will not harm the database itself.
As it turns out, though, they are not easy to delete because they are held open by the Oracle services.
The DR1 and DR2 files are easily removed.

To remove the hc_ file, first stop the broker. Connect to SQL*Plus:

C:\>sqlplus / as sysdba
SQL>alter system set dg_broker_start=false scope=both;

When this is done, shut down the database and delete all the .dat files listed above.
The same process must be performed on the standby server and database.

When the files are deleted, connect to SQL*Plus again and set the parameter back to TRUE:
SQL>alter system set dg_broker_start=true scope=both;
Do the same on the standby.
It is now possible to try enabling the configuration again.
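To recap, the whole reset sequence on each server (primary first, then standby) can be sketched as follows, assuming the .dat file names mentioned above:

C:\>sqlplus / as sysdba
SQL>alter system set dg_broker_start=false scope=both;
SQL>shutdown immediate
(from another window, delete the dr1*.dat, dr2*.dat and hc_*.dat files in ORACLE_HOME\database)
SQL>startup
SQL>alter system set dg_broker_start=true scope=both;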

To complete this explanation of the DGMGRL configuration, here are the parameters that were very important to the success of the configuration:

*.LOG_ARCHIVE_DEST_1='LOCATION=g:\oradata\TEST10\arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TESTDG'
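For context, a typical companion set of primary-side parameters looks like the sketch below. The TNS alias <standby_tns> and the standby unique name TESTDG2 are placeholders for illustration, not values from this setup:

*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(TESTDG,TESTDG2)'
*.LOG_ARCHIVE_DEST_2='SERVICE=<standby_tns> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TESTDG2'
*.FAL_SERVER=<standby_tns>
*.STANDBY_FILE_MANAGEMENT=AUTO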


On my first configuration, the VALID_FOR and DB_UNIQUE_NAME attributes were absent from my parameter files; that was all the how-to I used provided.
The synchronization of the primary and the standby worked that way.
But when it came to configuring Data Guard, it was a complete failure.
As it turns out, the primary cause of all my problems was those missing parameters and options.

Adding the databases to the Grid controller

If later on, for more control, Grid Control is installed (makes sense, doesn’t it? ^^), all that is needed is to install the management agent on the primary and the standby server.
Grid Control will detect right away that there is a standby database managed by Data Guard, and it will be possible to view it through the console.

Enjoy! It has been a great project, and I enjoyed going through it a lot.
There was a lot of frustration along the way due to the difficulty of finding information on how to do this, particularly after it failed the first time.
I sincerely hope this document will be useful to a lot of people.
Succeeding gave a great feeling of achievement.

If there’s anything wrong or unclear, just let me know in the comments, and let me know how it went for you too.


Stuck with 2 different networks

Recently, while working on Data Guard, we had a problem configuring it across networks between our production site and our DR site.
After a few tests, it seemed that the configuration worked fine locally but not through the leased line.
So, to bypass this, we wanted to bring the standby server back to the production site and configure Data Guard locally.

This raises a few problems.

1) We can’t modify the IP address of the primary or standby server.
2) The network of the production site is different from the network of the DR site.

Basically, when we bring it to the production site, it’s not possible to ping or connect to the standby server.

The solution is very simple:

The production site network is 172.30.XX.0/24
The DR site network is 172.30.XY.0/24

Prerequisite: Both servers need 2 NICs

What we are going to do is the following:

On primary:
Configure a 2nd NIC to use an IP address of the DR site: 172.30.XY.34

On standby:
Configure a 2nd NIC to use an IP address of the production site: 172.30.XX.12

On primary:
Type the following command at the command line (assuming a 255.255.255.0 network mask):
C:\>route add 172.30.XY.0 MASK 255.255.255.0 172.30.XX.12

On standby:
C:\>route add 172.30.XX.0 MASK 255.255.255.0 172.30.XY.34

If we now try to ping the standby’s IP address 172.30.XY.XXX from the production site, we will get an answer.
What we are telling the production server is: if you are sending a packet to the DR site network, send it to the address 172.30.XX.12 instead. The same applies to the standby server.

With this route added, the production server won’t be visible from the DR site anymore, since every packet is routed to a different location. So use it with caution.
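To check that the detour works, you can display the routing table and then confirm that the listener on the standby is reachable through the new path; <standby_tns> stands for whatever TNS alias points at the standby:

C:\>route print
C:\>tnsping <standby_tns>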

Finally, when the configuration is done, all that is left is to delete the routes:

On primary:
C:\>route delete 172.30.XY.0

On standby:
C:\>route delete 172.30.XX.0

Disable the secondary NICs. All is back to normal.