Archive for the 'Maintenance' Category


How to perform and recover from a cold backup

Back to basics.

The cold backup is the simplest backup that can be performed.

It can't really be used as the backup strategy for businesses with critical systems, because it carries a risk of losing up to 24 hours of data.

It is still useful to know how to perform one, because it makes duplicating a database easy and can save lives sometimes.

Perform a cold backup

A cold backup requires a clean shutdown of the database.

shutdown immediate and shutdown normal are the only shutdown modes allowed.

A cold backup taken after a shutdown abort will result in fuzzy datafiles which can't be recovered.

The reason is that the datafiles must have had a checkpoint performed (which shutdown immediate does).

When the clean shutdown is done, copy the oradata folders to a new location.

After the files are copied, the database can be started up.
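The whole cycle can be sketched like this (the SID and paths are examples, not values from this post):

C:\>set ORACLE_SID=MYSID
C:\>sqlplus / as sysdba
SQL> shutdown immediate;
SQL> exit

REM With the database down, copy the datafiles, controlfiles and redo logs:
C:\>xcopy C:\oracle\oradata\MYSID D:\coldbackup\MYSID /E /I

C:\>sqlplus / as sysdba
SQL> startup;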

Recover from a cold backup

Depending on the disk structure of the server being recovered to, the recovery can involve renaming datafiles and controlfiles.

1) If the service doesn't exist yet, issue the oradim command:

oradim -NEW -SID SAME_AS_BACKED_UP_DB -PFILE FULLPATH\init.ora -SYSPWD somepassword

If you have taken care to copy the password file and initialization parameter file from your primary server into ORACLE_HOME\database on the server to recover, then the -PFILE and -SYSPWD options are not required.

2) Copy the oradata folders to the exact same location from primary to the server to recover.

If the oradata files are located in the same drives, same folder structure, the recovery stops here and a startup can be issued.

3) If there are hard disk constraints (more drives on the primary than on the recovered server, or not enough disk space), it may be necessary to move some datafiles from one drive to another.

In the case where a datafile has been moved to a new location, the database must be started in mount mode:

C:\>set ORACLE_SID=mySID

C:\>sqlplus / as sysdba

SQL> startup mount; (if the database doesn't mount, the location of the parameter file can be specified by adding pfile='LOCATION\initSID.ora')

and a rename command must be issued for each file relocated:

SQL> alter database rename file 'OLD_LOCATION\DATAFILE_1.ora' to 'NEW_LOCATION\DATAFILE_1.ora';

SQL> alter database datafile 'NEW_LOCATION\DATAFILE_1.ora' online;

SQL> alter database open;
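Once the database is open, a quick sanity check confirms the controlfile now points to the new locations:

SQL> select name from v$datafile;
SQL> select name from v$controlfile;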


The Oracle patching best practice

On Windows, patching an Oracle database can be tricky.

Very often, for example, a DLL file will still be held by some process which is a bit… undefined. So to make sure everything runs smoothly, the following process should be used.

It should in fact be adopted as a best practice on Windows.

Stopping the Oracle services alone might not be enough. Stop the following Oracle components:

– the instances,
– the Intelligent Agent,
– the OLAP agent,
– the Oracle Management Server,
– the TNSListener,
– HTTP server.

Also stop the following third-party services, which can keep Oracle DLLs loaded:

– SNMP service,
– MSDTC (Microsoft Distributed Transaction Coordinator) service,
– Microsoft IIS (admin) service,
– Microsoft COM+ Event System,
– MTSrecovery service,
– BackupExec agent,
– ArcServe Agent,
– IBM Director Support Program.


The list above comes from Oracle Support.

  1. Stop OracleService, the listener, DBConsole and anything else related to Oracle, plus the additional services listed above.
    1. Change the services' start mode to Manual
  2. Go to System > Environment Variables.
    1. Back up the PATH environment variable to Notepad
    2. Remove all Oracle paths from PATH (mine was c:\oracle\product\102\bin)
  3. Reboot the server
  4. Install the patch:
    1. Make sure to select the correct ORACLE_HOME. By default, the installer will put the patch in a new home.
  5. Put the Oracle path back in the PATH environment variable.
  6. Start the Oracle service using the following command: C:\>oradim -startup -sid [SID] -starttype srvc
  7. Start the listener via the Windows service manager
  8. In SQL*Plus, log in as sysdba:
    1. C:\>set oracle_sid=[SID]
    2. C:\>sqlplus / as sysdba
    3. SQL>startup upgrade
    4. spool d:\somewhere\upgrade.log
    5. @[oracle_home]\RDBMS\admin\catupgrd.sql (depending on the machine, it can take up to 40 minutes)
    6. spool off
    7. Check the log for any ORA- errors (this takes a while, because many ORA- strings also appear in harmless comments)
    8. shutdown
    9. startup
    10. @[oracle_home]\RDBMS\admin\utlrp.sql (takes up to 10 minutes)
    11. @[oracle_home]\RDBMS\admin\utlrp.sql again (the second pass takes about 1 minute)

ORA-12154: TNS:could not resolve the connect identifier specified

The ORA-12154 error is fairly common.

There are a lot of reasons why this can happen. Very often, it's due to multiple Oracle homes on the server conflicting with each other.

I will address the issue of multiple homes first, and after that explain a bit about the use of SERVICE_NAME and SID in tnsnames.ora and how this affects connectivity.

Configuration for a multiple home environment

The situation is as follows:

We have installed Oracle 10g on a server and created a database on it. Everything connects fine. We can use the following commands:

C:\>set oracle_sid=SID
C:\>sqlplus / as sysdba

We can also use tns methods to connect to the database.

Then, we install an application that uses Oracle Forms, and therefore install Oracle Forms.

Weird stuff starts happening.

When running a sqlplus command, we can see that the SQL*Plus version is 8.1.7. Same for tnsping.
Clearly, when typing the command, we are directed to the wrong Oracle home. I would prefer my latest version.

This problem comes from a wrong configuration of the PATH environment variable.
Go to Control Panel > System > Advanced > Environment Variables.

Locate the PATH variable.

Normally, c:\[oracle_home]\bin should be at the very beginning of the PATH variable.
On my environment it's c:\oracle\product\102\bin.
However, what I get first is c:\orant\bin;

The solution is simple:

Move the string c:\oracle\product\102\bin to the very beginning of the PATH environment variable.

Run sqlplus:

SQL*Plus: Release

Great. Now it's using Oracle 10g again.

However, when I try to connect to my database using tnsnames.ora regardless of the location (client or server), I get an ORA-12154 error.

The problem is that, with multiple Oracle homes installed, the network configuration files conflict: each home carries its own network\admin folder with its own tnsnames.ora and sqlnet.ora.
So, what we are going to do is create one to rule them all!

Create a new folder outside of the Oracle binaries. We are going to make it independent of the binaries.
In this example, the folder is c:\oracle\network\admin.
Go back to the environment variables screen in Control Panel > System > Advanced.

Create a new environment variable called TNS_ADMIN;
in the 'Value' field, enter the new folder: c:\oracle\network\admin

Copy tnsnames.ora, sqlnet.ora and listener.ora from c:\oracle\product\102\network\admin
and paste them into c:\oracle\network\admin.

Stop and start the listener:

C:\>lsnrctl stop
C:\>lsnrctl start

From this point, you should be able to call your database using tnsnames.ora without conflict.
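A quick way to check which configuration files are actually being picked up is tnsping (the entry name MYDB.world is just an example):

C:\>tnsping MYDB.world

The "Used parameter files" section of its output should now point to the files under c:\oracle\network\admin.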

However, there is still some way to go before everything works properly.

My sqlnet.ora looks like this:

sqlnet.authentication_services = (NTS)
names.default_domain = world

The interesting part is the names.default_domain = world value: it appends a default domain to every unqualified net service name.

The second interesting part is in the tnsnames.ora: the entry names carry the same domain suffix (MYDB.world rather than just MYDB).


When the domain is specified in sqlnet.ora, and you are connecting through a SERVICE_NAME, it is required to add the domain name after your DB name. A SERVICE_NAME is different from the SID parameter.

When using a SID, you have to call the actual DB SID (MYDB) without the .world suffix.
But a SERVICE_NAME is not really a SID: it's usually used when multiple applications run on the same DB, to make each one think it has its own database (when in fact they are sharing it).

When SERVICE_NAME is used, make sure to append .world to your DB name, and to the TNS entry name as well.
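As a sketch (the host, port and names are invented for the example), a matching tnsnames.ora entry would look like this:

MYDB.world =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myserver)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = MYDB.world)
    )
  )

Both the entry name and the SERVICE_NAME carry the .world suffix.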

Finally, when calling sqlplus make sure to use the following syntax:

sqlplus user/password@MYDB.world

If the .world suffix is not specified, the client will not find the TNS entry, generating an ORA-12154.

I hope this post has cleared up connectivity with tnsnames.ora and some of its murkier areas.



expdp and impdp are great

Before Oracle 10g, there were the exp and imp tools. They are good export and import tools, but I had only one problem with them.

In Thailand, people tend to put all tables in the same schema even if they belong to different applications. That is simply because they don't know any better.

For better administration and security, I want my tables split per application into different schemas, with limited access to other schemas where required. In addition, I would like to have those tables in a separate tablespace (and datafile).

The exp/imp tools allow moving tables from one schema to another, and the option is logically called fromuser/touser.

The problem is that moving the tables to another tablespace was a little more tricky. It involved generating the table creation scripts with the exp tool, modifying them manually to change the target tablespace, and then running them in SQL*Plus to create the tables in the new schema and new tablespace.

With expdp and impdp, I can first select the tables I want to move (the same as before, using the tables option).

The greatness comes with impdp, where I can use the following options: REMAP_SCHEMA (a bit like imp's fromuser/touser) and REMAP_TABLESPACE. That is really cool.

I had a bit of trouble getting expdp and impdp to work. The thing is that before using them, the DBA must create a directory, as follows:

C:\>sqlplus / as sysdba

SQL> create directory data_pump_dir as 'D:\dumps';
SQL> commit;

In my opinion, the commit is not required because CREATE DIRECTORY is a DDL command, and DDL commits implicitly. But I was struggling, and I saw an example doing it that way. I'm not too sure what actually fixed it, because I also had to drop the directory first (just like we drop a table, but saying directory instead) and recreate it. So it might be that, or might not. If anyone has any idea, I'm happy to know, and the comments would do marvels for this : )

Let's get back to work.

It is now possible to call expdp

C:\>expdp system/acomplexpassword@tnsname directory=data_pump_dir dumpfile=Mydump%U.dmp tables=my,big,list,of,tables

And here you go. Your data is exported.

Then use the impdp command as follow:

C:\>impdp system/acomplexpassword@tnsname directory=data_pump_dir dumpfile=Mydump01.dmp remap_schema=source_schema:dest_schema remap_tablespace=source_tablespace:dest_tablespace

And this is it.

In two command lines, what would otherwise have been a complex situation becomes very easy.


1) If you don't have the data_pump_dir directory created, then you'll get error messages:

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name DATA_PUMP_DIR is invalid

2) The dumpfile in impdp is called Mydump01.dmp because I specified the %U variable in the name at export time. It's an increment: you can keep the same name, and if the file already exists in the directory then Mydump02.dmp will be created, and so on.

3) A valid directory in the example above is a directory that exists on your server (such as D:\dumps).
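As a side note, %U pairs naturally with parallel exports. PARALLEL is a real expdp option (it requires Enterprise Edition), though the degree here is just an example:

C:\>expdp system/acomplexpassword@tnsname directory=data_pump_dir dumpfile=Mydump%U.dmp parallel=2 tables=my,big,list,of,tables

Each worker writes to its own file: Mydump01.dmp, Mydump02.dmp, and so on.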


Oracle sucks on windows

OK. I hate to say that, because I prefer Oracle over the others.

But this release has a lot of memory leaks which won't show until you put it on your live server. The problem is that to start observing them, the database needs some workload, so they won't show on a test or training server.

The reason is that there are bugs in this particular release that will threaten your critical systems.

The symptoms where we installed it are as follows:

1) Inability for new users to log in to Oracle after a while, because old connections are not properly closed.

This problem can be diagnosed with a tool such as CurrPorts. You should be able to see a lot of connections stuck without a session ID; Oracle is not even able to kill them. CurrPorts fortunately allows us to look at the connections at OS level and kill the ones stuck in CLOSE_WAIT.
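Without extra tools, the stuck CLOSE_WAIT connections can also be listed from the command line (1521 assumes the default listener port):

C:\>netstat -ano | findstr :1521 | findstr CLOSE_WAIT

The last column of netstat -ano output is the owning process ID.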

There are also a few actions to take on the server, which involve adding new keys in the registry.

2) Inability to perform a proper shutdown without killing the Oracle service.

3) Oracle shutting down without warning.

After looking into the problem and searching on Metalink, I found a long list of memory-leak bugs.

These issues appear only in this particular release, which means you should skip it and upgrade to the next patch set.

That one has a problem with bind variables, but it's minor compared to these bugs.


Oracle 10g instant client – Hard to make it work

Oracle 10g works fine with Oracle 8i clients. SQL*Plus, Toad: they connect fine using the Oracle 8i client.

But there are some reasons why it's a good idea to move to the Oracle 10g client.

1) Oracle 10g has an instant client which is very light: 254 KB, compared to 259 MB for the runtime client.

2) Oracle 8i is getting a bit old and won't be supported for much longer.

3) It's good to have some homogeneity in the IT estate (at least for Oracle; not for printers, but that's another story).

So, I've been testing the instant client a bit, because I found it cool.

It installs just the basics to connect using TNS or ODBC. It doesn't install any tools of its own, which means that with this client you are expected to bring your own tools or applications.

Things to know when installing the 10g instant client

1) After installing the Oracle 10g instant client, a new environment variable must be created.

Go to Control Panel > System > Advanced tab > Environment Variables.

Create a new system variable called TNS_ADMIN.

In the Value field, put the path where your tnsnames.ora and sqlnet.ora will be located.

Note: If you already have such a folder, from Oracle Forms for example, just point to it. That makes for less work in future maintenance.

2) Very often, if not all the time: after installing the client, I tried to create my first Oracle 10g ODBC data source. But I got the following error message:

“The setup routines for the Oracle in instantclient10_2 ODBC Driver could not be loaded due to system error code 126”

followed by:

“Could not load the setup or translator library”

Well, it's an error; let's check Metalink to see what they say about it.

Brilliant! They describe the error exactly as I have it.

Here is the article. You will need a metalink login to get it.

What Oracle says


This error may occur for the following two reasons:

1. mfc71.dll and msvcrt.dll are not present on the PC on which the Instant Client is installed.

2. The directory containing mfc71.dll and msvcrt.dll is not included in the PATH environment variable.

To implement the solution, please execute the following steps:

1. Find mfc71.dll and msvcrt.dll on your PC and include the directory they reside in in the PATH variable. They are normally found in the C:\WINDOWS\system32 directory. Alternatively, copy mfc71.dll and msvcrt.dll to the directory where the Instant Client is installed.

2. If you have installed the full Oracle Client and ODBC Drivers, both mfc71.dll and msvcrt.dll are located in the ORACLE_HOME\bin directory.

It couldn’t be that easy

OK. First of all, I check whether the two DLLs are on my test computer, starting with ORACLE_HOME\bin.

Neither of them is there. So it seems the instant client doesn't install a couple of required DLLs. But apparently Windows should have them.

If Windows does have them, there are two solutions:

a) Modify the PATH variable in Control Panel > System > Advanced tab > Environment Variables > System variables, and add the path where the DLLs should be. Logically it's C:\windows\system32.

b) Copy the DLLs from C:\windows\system32 to a location already listed in the PATH variable described in point a).

OK, let's copy them: I found msvcrt.dll, but there was no mfc71.dll in system32. I'm using Windows XP; I've tried with Windows 2000 too.

I had a runtime client installed on my own computer to manage Oracle. I decided to copy the DLLs from its bin folder to the location specified in PATH on the test computer.

I have to admit that at that point I thought it would work. Well, it didn't.

I finally cleaned up the instant client installation (254 KB) and installed the runtime client (259 MB).

Then I wanted to create my finally-working ODBC connection.

Oops: there's no ODBC driver installed with the runtime client.

But this time it's easy.

On top of that runtime client, run the installer again, but this time choose a Custom installation and make sure to use the same home. Locate the Oracle 10g ODBC driver and install it.

Finally, it’s possible to enjoy the new ODBC driver.


To summarize: to get the Oracle 10g ODBC driver installed and working on your computer:

1) Install Oracle runtime client

2) Install Oracle 10g ODBC driver on top of that using a customized installation.


The recovery manager – RMAN

Here is one tool that DBAs will be happy to use.

On Oracle 9i it still had some problems; on Oracle 10.2 it works just fine.

Using RMAN, it's now possible to build a backup strategy with simplicity.

Let's take the simplest example:

RMAN> backup database;

Frankly, if you didn't know better, you might think it's too easy to be true.

RMAN>backup archivelog all;

Personally, I would have let RMAN understand "backup all archivelog", which is more human.

Oops! I've just lost a tablespace.

RMAN>restore tablespace SYSTEM01;

Oops! I'm not too sure what's wrong after all.

RMAN>restore database;

RMAN>recover database;
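For a full restore, the database must be mounted but not open. A complete session goes roughly like this (a sketch of the standard sequence, not a transcript from a live run):

RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open;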

The Recovery Manager also features a direct tape channel, provided you can configure it with your vendor's driver. It's a bit tricky.

It will enforce a retention time that you set up. I use 15 days.

However, you must make sure you have enough disk space to allow 15 backups to live together on the same disk.

That was just an introduction to RMAN. I will come back with advanced commands later, when I have a little time.

I will also show what RMAN looks like in OEM, and how it is possible to hand the task to someone less knowledgeable in Oracle thanks to the nice graphical interface.

I mean, it does the backups, but it still needs to be checked on.
