
Thursday, September 28, 2023

SYS username and password invalid


First, check the password file users:

SQL> select * from v$pwfile_users;

This should return a row for the SYS user. If it does not, create a password file in $ORACLE_HOME/dbs.


cd  <ORACLE_HOME>/dbs

CREATE PASSWORD FILE

$ORACLE_HOME/bin/orapwd file=<ORACLE_HOME>/dbs/orapw<sid> password=<password> entries=<max entries>
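As an illustration, for an instance with SID ORCL (the SID, password, and entry count here are placeholder values; substitute your own):

$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/orapwORCL password=MySecretPw entries=5

After this, the v$pwfile_users query above should return a row for SYS.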


Use the SYS user with the password set in the password file when logging in to the DB console.

Make sure the DB parameter REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

SQL> show parameter REMOTE_LOGIN_PASSWORDFILE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile            string      EXCLUSIVE


Check that sqlnet.ora contains the following entries:

# sqlnet.ora Network Configuration File: /data/app/oracle/product/12.1.0/dbhome_1/network/admin/sqlnet.ora

# Generated by Oracle configuration tools.


NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)


SQLNET.EXPIRE_TIME=10



SQLNET.ALLOWED_LOGON_VERSION_SERVER=8

SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8



Now log in as the SYS user; the login should succeed.


Monday, September 4, 2023

What if I add data files or create tablespace in snapshot standby database Oracle 19c

 



Note: any new tablespace or data file will be dropped when you convert the snapshot standby back to a physical standby database.


Check the database role. Execute below command in snapshot standby database.

SQL>  select name,open_mode,database_role,controlfile_type from v$database;

NAME OPEN_MODE    DATABASE_ROLE    CONTROL
---- ------------ ---------------- -------
PR   READ WRITE   SNAPSHOT STANDBY CURRENT

Capture the data file details with tablespace names before performing any testing.

SQL> col name for a60
SQL> select ts#,file#,name File_Name,bytes/1024/1024 File_Size_MB from v$datafile order by 1;

TS# FILE# FILE_NAME                                      FILE_SIZE_MB
--- ----- ---------------------------------------------  ------------
  0     1 D:\TESTDB\SETUPS\STANDBY\DB\DR\SYSTEM01.DBF            1180
  1     3 D:\TESTDB\SETUPS\STANDBY\DB\DR\SYSAUX01.DBF             990
  2     4 D:\TESTDB\SETUPS\STANDBY\DB\DR\UNDOTBS01.DBF            440
  4     7 D:\TESTDB\SETUPS\STANDBY\DB\DR\USERS01.DBF                5

SQL> select ts#,name from v$tablespace order by 1;

TS# NAME
--- ---------
  0 SYSTEM
  1 SYSAUX
  2 UNDOTBS1
  3 TEMP
  4 USERS

Now create new tablespace TEST_FLB in snapshot standby database and add some data files in this tablespace or any existing tablespace. You can also resize existing data files or create any table.

SQL> create tablespace TEST_FLB datafile 'D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB01.dbf' size 1g;

Tablespace created.

SQL> alter tablespace TEST_FLB add datafile 'D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB02.dbf' size 1g;

Tablespace altered.

SQL> alter database datafile 'D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB02.dbf' resize 3g;

Database altered.

SQL> create table customer (cust_id number(10), cust_name varchar2(14)) tablespace TEST_FLB;

Table created.

SQL> insert into customer values(1,'A');

1 row created.

SQL>  insert into customer values(2,'B');

1 row created.

SQL>  insert into customer values(3,'C');

1 row created.

SQL>  insert into customer values(4,'D');

1 row created.

SQL>  insert into customer values(5,'E');

1 row created.

SQL>  commit;

Commit complete.

SQL> select * from customer;

   CUST_ID CUST_NAME
---------- --------------
         1 A
         2 B
         3 C
         4 D
         5 E

Snapshot Standby Alert Log File:
2023-09-02T20:53:49.598851+05:30
create tablespace TEST_FLB datafile 'D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB01.dbf' size 1g
2023-09-02T20:53:51.052190+05:30
Completed: create tablespace TEST_FLB datafile 'D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB01.dbf' size 1g
2023-09-02T20:53:53.860701+05:30
alter tablespace TEST_FLB add datafile 'D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB02.dbf' size 1g
2023-09-02T20:53:55.967650+05:30
Completed: alter tablespace TEST_FLB add datafile 'D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB02.dbf' size 1g
2023-09-02T20:53:57.391041+05:30
alter database datafile 'D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB02.dbf' resize 3g
2023-09-02T20:53:59.654427+05:30
Resize operation completed for file# 2, fname D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB02.DBF, old size 1048576K, new size 3145728K
Completed: alter database datafile 'D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB02.dbf' resize 3g
2023-09-02T20:54:26.785953+05:30

You can check the data file and tablespace details by querying below command:

SQL> col name for a60
SQL> select ts#,file#,name File_Name,bytes/1024/1024 File_Size_MB from v$datafile order by 1;

TS# FILE# FILE_NAME                                     FILE_SIZE_MB
--- ----- --------------------------------------------- ------------
  0     1 D:\TESTDB\SETUPS\STANDBY\DB\DR\SYSTEM01.DBF           1180
  1     3 D:\TESTDB\SETUPS\STANDBY\DB\DR\SYSAUX01.DBF            990
  2     4 D:\TESTDB\SETUPS\STANDBY\DB\DR\UNDOTBS01.DBF           440
  4     7 D:\TESTDB\SETUPS\STANDBY\DB\DR\USERS01.DBF               5
  6     5 D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB01.DBF         1024
  6     2 D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB02.DBF         3072

6 rows selected.

SQL> select ts#,name from v$tablespace order by 1;

TS# NAME
--- -----------
  0 SYSTEM
  1 SYSAUX
  2 UNDOTBS1
  3 TEMP
  4 USERS
  6 TEST_FLB

6 rows selected.

Now suppose your testing is done and you plan to convert the snapshot standby database back into a physical standby database. Perform the steps below.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area 2147482432 bytes
Fixed Size                  9030464 bytes
Variable Size             452984832 bytes
Database Buffers         1677721600 bytes
Redo Buffers                7745536 bytes
Database mounted.

SQL> select name,db_unique_name,open_mode,database_role,protection_mode,controlfile_type from v$database;

NAME DB_UNIQUE_NAME OPEN_MODE DATABASE_ROLE    PROTECTION_MODE     CONTROL
---- -------------- --------- ---------------- ------------------- -------
PR   DR             MOUNTED   SNAPSHOT STANDBY MAXIMUM PERFORMANCE CURRENT


SQL> alter database convert to physical standby;

Database altered.

Once you execute the "convert to physical standby" command, Oracle deletes the data files and drops the tablespaces that were created in the snapshot standby database. In other words, all changes made while the database was a snapshot standby are discarded.

Snapshot Standby Database Alert Log File:
2023-09-02T20:58:39.261800+05:30
alter database convert to physical standby
2023-09-02T20:58:39.482013+05:30
Flashback Restore Start
Deleted file D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB02.DBF
Flashback: deleted datafile #2 in tablespace #6 of pdb #0 from control file.
Deleted file D:\TESTDB\SETUPS\STANDBY\DB\DR\TEST_FLB01.DBF
Flashback: deleted datafile #5 in tablespace #6 of pdb #0 from control file.
Flashback: dropped tablespace #6: 'TEST_FLB' of pdb #0 from the control file.
Flashback Restore Complete
Drop guaranteed restore point 
Stopping background process RVWR
2023-09-02T20:58:41.228496+05:30
Deleted Oracle managed file D:\TESTDB\SETUPS\STANDBY\FLB\DR\FLASHBACK\O1_MF_LH6NTGC1_.FLB
Deleted Oracle managed file D:\TESTDB\SETUPS\STANDBY\FLB\DR\FLASHBACK\O1_MF_LH6NTKCV_.FLB
Guaranteed restore point  dropped
2023-09-02T20:58:41.275867+05:30
.... (PID:18848): Database role cleared from SNAPSHOT STANDBY [kcvs.c:8969]
Clearing standby activation ID 2841025717 (0xa9569cb5)
The primary database controlfile was created using the
'MAXLOGFILES 16' clause.
There is space for up to 13 standby redo logfiles
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 209715200;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 209715200;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 209715200;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 209715200;
.... (PID:18848): Database role changed from PRIMARY to PHYSICAL STANDBY [kcvs.c:8974]
.... (PID:18848): RT: Role transition work is not done
.... (PID:18848): Redo network throttle feature is disabled at mount time
Physical Standby Database mounted.
2023-09-02T20:58:41.386798+05:30
In-memory operation on ADG is currently only supported on Engineered systems and PaaS.
inmemory_adg_enabled is turned off automatically.
Please contact our support team for EXADATA solutions
CONVERT TO PHYSICAL STANDBY: Complete - Database mounted as physical standby
Completed: alter database convert to physical standby

You can also verify, by executing the queries below, that the newly created tablespace and added data files are no longer part of the database.

SQL> set lines 300 pages 3000
SQL> select name,db_unique_name,open_mode,database_role,protection_mode,controlfile_type from v$database;

NAME DB_UNIQUE_NAME OPEN_MODE DATABASE_ROLE    PROTECTION_MODE     CONTROL
---- -------------- --------- ---------------- ------------------- -------
PR   DR             MOUNTED   PHYSICAL STANDBY MAXIMUM PERFORMANCE STANDBY


SQL> col name for a60
SQL> select ts#,file#,name File_Name,bytes/1024/1024 File_Size_MB from v$datafile order by 1;

TS# FILE# FILE_NAME                                    FILE_SIZE_MB
--- ----- -------------------------------------------- ------------
  0     1 D:\TESTDB\SETUPS\STANDBY\DB\DR\SYSTEM01.DBF          1180
  1     3 D:\TESTDB\SETUPS\STANDBY\DB\DR\SYSAUX01.DBF           990
  2     4 D:\TESTDB\SETUPS\STANDBY\DB\DR\UNDOTBS01.DBF          440
  4     7 D:\TESTDB\SETUPS\STANDBY\DB\DR\USERS01.DBF              5

SQL> select ts#,name from v$tablespace order by 1;

TS# NAME
--- ----------
  0 SYSTEM
  1 SYSAUX
  2 UNDOTBS1
  3 TEMP
  4 USERS

Sunday, July 9, 2023

Install Oracle APEX on Linux


The following software is installed, in the given order, for APEX.


1- openjdk-11.0.2_linux-x64_bin.tar

2- Oracle Database.

3- apex_20.2.zip

4- apache-tomcat-9.0.41.tar.gz

5- ords-20.4.1.013.1644.zip (ORDS contains a .war file which will be deployed on Tomcat to access APEX).


OS: Linux_x64


1- Install the database. (I installed 12.1.0.2.0.)


2- ------------ Installing Apex ------------------------


a) create dedicated tablespace for apex.


create tablespace apex datafile '/opt/oracle/oradata/orcl/apex01.dbf' SIZE 100M AUTOEXTEND ON NEXT 1M;


b) Create the directory in which to install APEX, then unzip the APEX setup file (apex_20.2.zip).


mkdir /opt/apex

unzip apex_20.2.zip


c) Connect to SQL*Plus as the SYS user and run the "apexins.sql" script, specifying the relevant tablespace names and image URL.

cd /opt/apex

sqlplus / as sysdba

@apexins.sql APEX APEX TEMP /i/


d) change the admin password:

cd /opt/apex

sqlplus / as sysdba

@apxchpwd.sql

-----------------------------

e) Create the APEX_LISTENER and APEX_REST_PUBLIC_USER users by running the "apex_rest_config.sql" script.

sqlplus / as sysdba

@apex_rest_config.sql    (I set the password to "apex")



3- --------------Installing Tomcat -------------------


a) Create a directory for Java and unpack the JDK archive.


mkdir /opt/java

tar -xzvf openjdk-11.0.2_linux-x64_bin.tar.gz


b) Create a soft link so this JDK is used as the latest Java (the link target matches the unpacked JDK directory, jdk-11.0.2):


ln -s jdk-11.0.2 latest


c) Create a directory for Tomcat and unpack the archive.


mkdir /opt/tomcat

tar -xzvf apache-tomcat-9.0.41.tar.gz


ln -s apache-tomcat-9.0.41 latest

d) create .env file for java.

vi /home/oracle/java.env

export JAVA_HOME=/opt/java/jdk-11.0.2

export PATH=$JAVA_HOME/bin:$PATH

e) start the tomcat services.

. /home/oracle/java.env

cd /opt/tomcat/apache-tomcat-9.0.41/bin

./startup.sh

f) check the status of the tomcat:

ps -ef|grep tomcat

http://192.168.0.100:8080/


4- --------------Installing ORDS ------------------



a) Unlock the following users and change their passwords:


ALTER USER SYS IDENTIFIED BY nosys ACCOUNT UNLOCK;

ALTER USER APEX_LISTENER IDENTIFIED BY apex ACCOUNT UNLOCK;

ALTER USER APEX_PUBLIC_USER IDENTIFIED BY apex ACCOUNT UNLOCK;

ALTER USER APEX_REST_PUBLIC_USER IDENTIFIED BY apex ACCOUNT UNLOCK;


-- The next one will fail if you've never installed ORDS before. Ignore errors.

ALTER USER ORDS_PUBLIC_USER IDENTIFIED BY apex ACCOUNT UNLOCK;



b) Unzip the ORDS.

mkdir /opt/ords

unzip ords-20.4.1.013.1644.zip


c) Make a directory to hold the configuration.


mkdir -p /opt/ords/conf


Edit the following file so that it contains only the entries shown below.


vi /opt/ords/params/ords_params.properties


db.hostname=localhost

db.port=1521

db.servicename=orcl

db.sid=orcl


d) Set the configuration directory:

cd /opt/ords

. /home/oracle/java.env


$JAVA_HOME/bin/java -jar ords.war configdir /opt/ords/conf


e) Start the configuration:


$JAVA_HOME/bin/java -jar ords.war


--------

[oracle@apex1 ords]$ $JAVA_HOME/bin/java -jar ords.war

Requires to login with administrator privileges to verify Oracle REST Data Services schema.


Enter the administrator username:sys

Enter the database password for SYS AS SYSDBA:

Confirm password:

Connecting to database user: SYS AS SYSDBA url: jdbc:oracle:thin:@//localhost:1521/orcl


Retrieving information.

Enter 1 if you want to use PL/SQL Gateway or 2 to skip this step.

If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]:

Enter a number to select a feature to enable:

   [1] SQL Developer Web  (Enables all features)

   [2] REST Enabled SQL

   [3] Database API

   [4] REST Enabled SQL and Database API

   [5] None

Choose [1]:

2021-02-01T19:51:51.705Z INFO        reloaded pools: []

Installing Oracle REST Data Services version 20.4.1.r0131644

... Log file written to /home/oracle/ords_install_core_2021-02-01_145151_00860.log

... Verified database prerequisites

... Created Oracle REST Data Services proxy user

Warning: Nashorn engine is planned to be removed from a future JDK release

... Created Oracle REST Data Services schema

... Granted privileges to Oracle REST Data Services

... Created Oracle REST Data Services database objects

... Log file written to /home/oracle/ords_install_datamodel_2021-02-01_145218_00571.log

... Log file written to /home/oracle/ords_install_apex_2021-02-01_145222_00690.log

Completed installation for Oracle REST Data Services version 20.4.1.r0131644. Elapsed time: 00:00:34.176 


Enter 1 if you wish to start in standalone mode or 2 to exit [1]:2

--------

f) Copy the APEX images and ords.war into the Tomcat webapps directory.

mkdir -p /opt/tomcat/apache-tomcat-9.0.41/webapps/i

cp -R /opt/apex/images/* /opt/tomcat/apache-tomcat-9.0.41/webapps/i


cp /opt/ords/ords.war /opt/tomcat/apache-tomcat-9.0.41/webapps



g) Change the Tomcat default listen port from 8080 to 8085:


vi /opt/tomcat/apache-tomcat-9.0.41/conf/server.xml


<Connector port="8085" protocol="HTTP/1.1"

               connectionTimeout="20000"

               redirectPort="8443" />
-----------------------------------------------------------------------------

In case of the error "WARNING: The pool named: |apex|| is invalid and will be ignored", update the passwords for the following users, then edit the files below and enter the new password.


alter user ORDS_PUBLIC_USER identified by  381;

alter user APEX_PUBLIC_USER identified by 381;

alter user APEX_LISTENER identified by 381;

alter user APEX_REST_PUBLIC_USER identified by 381;


cd /opt/ords/conf/ords/conf/

vi apex_rt.xml

vi apex.xml 

vi apex_al.xml

vi apex_pu.xml

Monday, June 5, 2023

DUPLICATE ERROR

During the RMAN duplicate process I got this error:

RMAN-06402: Oracle instance shut down
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 14/11/2022 14:08:33
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '+RECO/DUTS/ARCHIVELOG/2022_11_14/thread_2_seq_45.4262.1023037434'
ORA-10877: error signaled in parallel recovery slave

In alert.log

2022-11-14T14:07:52.556223-06:00
kcrf_decrypt_redokey: wallet is not opened..(err 28365) <<<< wallet is not open
Errors with log +RECO/DUTS/ARCHIVELOG/2022_11_14/thread_2_seq_45.4262.1023037434
2022-11-14T14:07:52.673668-06:00
Media Recovery failed with error 28365
Recovery interrupted!
2022-11-14T14:07:52.851222-06:00
Errors in file /u01/app/oracle/diag/rdbms/duts/DUTS/trace/DUTS_pr00_24026.trc:
ORA-00283: recovery session canceled due to errors
ORA-28365: wallet is not open

Solution

  • Copy the wallet file (ewallet.p12) from the source database server to the clone database server. You can check the wallet file location on the source database in the sqlnet.ora file of the source database ORACLE_HOME:
    scp oracle@duts1:/u01/app/oracle/admin/DUTS/wallet/ewallet.p12 .
  • Modify sqlnet.ora file in clone database ORACLE_HOME to reflect the location of the wallet file:
ENCRYPTION_WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/admin/DUTS/wallet)
)
)
  • Invoke orapki utility on the clone database server to make the wallet auto-login:
orapki wallet create -wallet /u01/app/oracle/admin/DUTS/wallet -pwd "oracle" -auto_login
  • If above step is successful, you should find a new file is created in the same directory: cwallet.sso
[oracle@duts2 wallet]$ ll *.sso
-rw-rw-r-- 10 oracle oinstall 2048 Nov 14 16:11 cwallet.sso
  • STARTUP NOMOUNT the auxiliary instance and try the duplicate command.

After a few minutes the problem was fixed.

starting media recovery
media recovery complete, elapsed time: 00:50:00
Finished recover at 14-NOV-22
RMAN>

Tuesday, May 30, 2023

ORA-28040: No matching authentication protocol



Issue:


Unable to connect to database, getting below error:


Exception occurred: java.sql.SQLException: ORA-28040: No matching authentication protocol


Cause:


DB Connectivity Issue after upgrading to Oracle Database 12c.


The issue is caused by the default setting for allowed logon version in the Oracle 12c database.


Note that the SQLNET.ALLOWED_LOGON_VERSION parameter has been deprecated in 12c.


The parameter has been replaced with:


SQLNET.ALLOWED_LOGON_VERSION_SERVER



Solution:


To resolve the database connectivity issue, perform the following steps:


1. Review the sqlnet_ifile.ora or sqlnet.ora file and confirm the following entries are present:

  SQLNET.ALLOWED_LOGON_VERSION_SERVER


2. Update the sqlnet_ifile.ora or sqlnet.ora setting for the above parameter to the lowest version level that is required in your environment.


For example:


a) If the initialization parameter SEC_CASE_SENSITIVE_LOGON is set to FALSE:


  SQLNET.ALLOWED_LOGON_VERSION_SERVER = 8


b) If SEC_CASE_SENSITIVE_LOGON is set to TRUE


  SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10


After this, the database connectivity issue should be resolved.

MRP Stuck and No RFS Process on Standby

The primary node no longer transmits archive log files to the physical standby database; MRP is stuck and there is no RFS process on the standby.


Issue:


Recently I faced a strange issue where one of the standby nodes was not showing the RFS process when I checked V$MANAGED_STANDBY.

V$MANAGED_STANDBY showed only the ARCH and MRP processes.


Archive logs were not shipping from the primary to the standby node, and there were no errors in the primary or standby alert logs.


I tried to copy archive logs manually from the primary to the standby, but the MRP recovery process was not applying the subsequent archive logs on the standby.



Cause:


This was caused by a network outage affecting the standby server.



Observations:


On Primary:


The ARCH process was stuck on the primary instance:

SELECT PROCESS, PID, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;


 




The database mode showed UNKNOWN:

select dest_id id, database_mode db_mode, recovery_mode,
       protection_mode, standby_logfile_count "SRLs",
       standby_logfile_active ACTIVE,
       archived_seq#
from v$archive_dest_status
where dest_id = 2;


 


On Standby:

No RFS process appeared in the query below:

SELECT PROCESS, PID, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;


Solution:


1. On Primary:


- Set log transport state to DEFER status:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;



2. On standby server:


- Shutdown Database Listener


- Cancel Managed Recovery

SQL> alter database recover managed standby database cancel;


- Shutdown the Standby Database

SQL> shutdown immediate




3. On the Primary


Kill the ARCn processes; the database will respawn them automatically and immediately, without harm.


ps -ef | grep -i arc


kill -9 <ospid of ARC process> <another ospid of ARC process> ...




4. On standby server


- Startup Standby Database and resume Managed Recovery

SQL> startup mount;

SQL> alter database recover managed standby database disconnect from session;



- Start Database Listener



5. Set log transport state to ENABLE status:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;



6. Monitor the alert logs at each site and ensure log shipping and apply are occurring again.


Also, verify the DB Mode on Primary -

select dest_id id, database_mode db_mode, recovery_mode,
       protection_mode, standby_logfile_count "SRLs",
       standby_logfile_active ACTIVE,
       archived_seq#
from v$archive_dest_status
where dest_id = 2;



For information: if, after terminating the ARCH processes on the primary node, they do not restart automatically, you can do the following.


On Primary:

SQL> alter system set log_archive_max_processes=4;

(Increase log_archive_max_processes as appropriate.)


Tuesday, May 23, 2023

RMAN backup

RMAN can create two kinds of backup:

1) Backup sets

2) Image copies


1) Backup sets

Backup sets can cover:

Whole database backup

Full database backup

Partial database backup




Whole database backup:


Whole backups of a database include the complete contents of all datafiles of the database, plus the control file, archived redo log files, and server parameter file.




RMAN>backup database plus archivelog;




Full database backup:


Full backups of a database include the complete contents of all datafiles of the database, plus the control file and server parameter file.




RMAN>backup database;




Partial database backup:


Backup of all datafiles for an individual tablespace 


Backup a single datafile.


 RMAN>backup tablespace users;


RMAN>backup datafile 6;




Image Copies:


An image copy backup is the exact copy of the data files including free space.


RMAN>backup as copy database;




Cold or consistent backup (offline backup):


offline backups are taken while the database is not open.




Hot or inconsistent backup (online backup):


Online backups are taken while the database is open.




Common RMAN backup commands:




RMAN> backup database plus archivelog;


RMAN> backup database;


RMAN> backup tablespace users;


RMAN> backup datafile 6;


RMAN> backup archivelog all;




RMAN List Backup Commands:


RMAN>list backup;


RMAN>list backup summary ;


RMAN>list backup by file;


RMAN>list backup of database;


RMAN>list backup of tablespace system;


RMAN>list backup of tablespace users;


RMAN>list backup of datafile 1;


RMAN>list backup of datafile 2;


RMAN>list backup of controlfile;


RMAN>list backup of archivelog all;

Sunday, May 14, 2023

Check Why Shutdown Immediate hangs

 


Ref. Doc ID 164504.1

To check the reason why shutdown immediate hangs:
SQL>connect / as SYSDBA
SQL>Select * from x$ktuxe where ktuxecfl = 'DEAD';
This shows dead transactions that SMON is looking to rollback.
Now plan to shut down again and gather some information. Before issuing the shutdown immediate command, set some events as follows:
SQL>alter session set events '10046 trace name context forever, level 12';
SQL>alter session set events '10400 trace name context forever, level 1';
SQL>shutdown immediate;
10046 turns on extended SQL_TRACE for the shutdown process.
10400 dumps a system state every 5 minutes.
The trace files should show where the time is going. Checking the progress of SMON is very important in this case; you can find it with the query below.
SELECT r.NAME "RB Segment Name", dba_seg.size_mb,
DECODE(TRUNC(SYSDATE - LOGON_TIME), 0, NULL, TRUNC(SYSDATE - LOGON_TIME) || ' Days' || ' + ') || TO_CHAR(TO_DATE(TRUNC(MOD(SYSDATE-LOGON_TIME,1) * 86400), 'SSSSS'), 'HH24:MI:SS') LOGON, v$session.SID, v$session.SERIAL#, p.SPID, v$session.process,
v$session.USERNAME, v$session.STATUS, v$session.OSUSER, v$session.MACHINE,
v$session.PROGRAM, v$session.module, action
FROM v$lock l, v$process p, v$rollname r, v$session,
(SELECT segment_name, ROUND(bytes/(1024*1024),2) size_mb FROM dba_segments
WHERE segment_type = 'TYPE2 UNDO' ORDER BY bytes DESC) dba_seg
WHERE l.SID = p.pid(+) AND v$session.SID = l.SID AND
TRUNC (l.id1(+)/65536)=r.usn
-- AND l.TYPE(+) = 'TX' AND
-- l.lmode(+) = 6
AND r.NAME = dba_seg.segment_name
--AND v$session.username = 'SYSTEM'
--AND status = 'INACTIVE'
ORDER BY size_mb DESC;
Reason: shutdown immediate may hang for various reasons:
- Processes still continue to be connected to the database and do not terminate.
- SMON is cleaning temp segments or performing delayed block cleanouts.
- Uncommitted transactions are being rolled back.
Debugging a hung database 
Back in Oracle 10g a hung database was a real problem, especially when you could not connect via SQL*Plus to diagnose the source of the hang. Oracle 11g SQL*Plus introduced the "prelim" option, which is very useful for running oradebug and other utilities that do not require a real connection to the database.
sqlplus -prelim
-or- in SQL you can set
SQL>Set _prelim on
SQL>connect / as sysdba
Now you are able to run oradebug commands to diagnose a hung database issue:
SQL> oradebug hanganalyze 3
Wait at least 2 minutes to give time to identify process state changes.
SQL>oradebug hanganalyze 3
Open a separate SQL session and immediately generate a system state dump.
SQL>alter session set events 'immediate trace name SYSTEMSTATE level 10';
How to check why shutdown immediate is taking a long time to shut down?
Ref. 1076161.6: Shutdown immediate or shutdown Normal hangs. SMON disabling TX recovery
Ref. Note 375935.1: What to do and not to do when shutdown immediate hangs.
Ref. Note 428688.1: Shutdown immediate very slow to close database.
When shutdown immediate takes longer than usual, check the following before performing the actual shutdown immediate:
  1. Active sessions.
  2. Temporary tablespace recovery.
  3. Long-running queries in the database.
  4. Large transactions.
  5. Progress of the transactions Oracle is recovering.
  6. Parallel transaction recovery.
SQL> Select sid, serial#, username, status, schemaname, logon_time from v$session where status='ACTIVE' and username is not null;
If active sessions exist, try to find out what they are doing in the database; active sessions make shutdown slower.
SQL> Select f.R "Recovered", u.nr "Need Recovered" from (select count(block#) R, 1 ch from sys.fet$) f, (select count(block#) NR, 1 ch from sys.uet$) u where f.ch=u.ch;
Check to see any long query is running into the database while you are trying to shutdown the database.
SQL> Select * from v$session_longops where time_remaining>0 order by username;
Check to ensure large transaction is not going on while you are trying to shutdown the database.
SQL>Select sum(used_ublk) from v$transaction;
Check the progress of the transaction that oracle is recovering.
SQL>Select * from v$fast_start_transactions;
Check to ensure that any parallel transaction recovery is going on before performing shutdown immediate.
SQL>Select * from v$fast_start_servers;
Finally, if you cannot determine why the shutdown is hanging or taking a long time, shut the database down with the 'abort' option, start it up with the 'restrict' option, and then try shutdown with the 'immediate' option.
Check the alert.log; if you find any error like 'Thread 1 cannot allocate new log, sequence' then you need to enable your archiver process, which has been disabled for some reason.
Process:
1. In the command prompt, set the ORACLE_SID first:
ORACLE_SID=<your db_name>
2. Now start the SQL*plus:
sqlplus /nolog
SQL>connect sys/***@instance_name
SQL>Select instance_name from v$instance;
3. Try to checkpoint before shutdown abort
SQL>alter system checkpoint;
SQL> shutdown abort;
4. Start the database with the 'restrict' option so that no other user is able to connect in the meantime.
SQL>startup restrict;
SQL>select logins from v$instance;
RESTRICTED
SQL>shutdown immediate;
5. Mount the database and check whether the archiver process is enabled using the 'archive log list' command. If it is disabled, enable it.
SQL>startup mount;
SQL> archive log list;  --if disable then enable it
SQL>Alter database archivelog;
SQL> Alter system archive log start;
Note: if your archivelog destination and format are already set, there is no need to set them again. After setting them, check with the 'archive log list' command whether archiving is enabled.
SQL> alter database open;
Now check if your database is still in restricted mode then remove the restriction.
SQL>select logins from v$instance;
SQL>alter system disable restricted session;
Note: Now try to generate archivelog with any command
SQL>alter system archive log current;
SQL>alter system switch logfile;
Now perform a normal shutdown and startup of the database.

Wednesday, April 12, 2023

Options to recover deleted data from a table

 

Recovering Data Using Flashback


Recovering the data using flashback can be done in a number of ways, using either the last DROP statement, an SCN, or an actual timestamp.

FLASHBACK TABLE SQUIRRELPATCH TO BEFORE DROP;
FLASHBACK TABLE SQUIRRELPATCH TO SCN 2202666520;
FLASHBACK TABLE SQUIRRELPATCH TO TIMESTAMP to_timestamp('21/03/2006 18:51:06', 'dd/mm/yyyy hh24:mi:ss');

Using a flashback "AS OF" query it is quite easy to select a version of the data at a certain time, as long as it was not too long ago. This is very powerful for a forensics investigator who needs to see a version of the data as of a specific time. The most convenient way to recover data to a recent previous state is using Flashback and the Oracle recycle bin.
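As a minimal sketch of such a query (reusing the table name from the examples above; the timestamp is illustrative):

SQL> select * from SQUIRRELPATCH as of timestamp to_timestamp('21/03/2006 18:00:00', 'dd/mm/yyyy hh24:mi:ss');

An AS OF SCN form also exists, e.g. select * from SQUIRRELPATCH as of scn 2202666520;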


SQL> select OLDEST_FLASHBACK_TIME from  V$FLASHBACK_DATABASE_LOG;

For recovery to a previous state further back than this, we need to use LogMiner (see later). Relational schemas tend toward keeping a single row for each instance of a thing, e.g. a single row for an employee in an employees table. This is good for organizing sets of data but not as useful for tracking information about each tuple over time. For instance, if an employee left the company and then returned, this data might cause problems.

Data may be truly deleted by a user with the keyword PURGE as below.

DROP TABLE test PURGE; --this will really delete table test.
PURGE RECYCLEBIN; --this will purge the users recyclebin
PURGE TABLE TEST; --this will delete table test from recyclebin
PURGE TABLE "BIN$04LhcpndanfgMAAAAAANPw==$0"; -- purge by new name.
purge index in_test3_03;--you can purge indexes
PURGE TABLESPACE USERS; --purge by tablespace
PURGE TABLESPACE USERS USER SCOTT;--user within tablespace
PURGE DBA_RECYCLEBIN;--purge all objects in recyclebins

Flashback and LogMiner are dependent on the online redo logs and archived redo logs, so attention should be paid to securing these resources, and they should be backed up as part of an incident-handling process.
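As a sketch, the archived redo logs can be preserved with an RMAN backup (the tag name here is illustrative):

RMAN> backup archivelog all tag 'incident_logs';

Keeping such a backup preserves the evidence that Flashback and LogMiner depend on.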

LogMiner used on redo logs can be used to view and recover deleted historical data from the archived redo logs quite effectively.

Using LogMiner to query archived redo logs:

The concise order of events to run LogMiner is as follows:

1. Switch on supplemental logging (optional)

2. Specify the redo log file(s) and the path to them

3. Allocate a dictionary

4. Start LogMiner

5. Read the data about past state and recover the database

6. Stop LogMiner

In more detail the above order of events are implemented as follows.

1. Supplemental logging should be enabled in order to use LogMiner, which can be done with the following command.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

And then check it has worked with the following query.

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

SUPPLEMENTAL_LOG_DATA_MIN
-------------------------
YES

2.  Specify the location of the online redo logs.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>
'/export/home/u01/app/oracle/oradata/sales/redo01.log', OPTIONS =>
DBMS_LOGMNR.NEW);
PL/SQL procedure successfully completed.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>
'/export/home/u01/app/oracle/oradata/sales/redo02.log', OPTIONS =>
DBMS_LOGMNR.ADDFILE);
PL/SQL procedure successfully completed.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>
'/export/home/u01/app/oracle/oradata/sales/redo03.log', OPTIONS =>
DBMS_LOGMNR.ADDFILE);
PL/SQL procedure successfully completed.

I issued each of these three commands on a single line, but the SQL*Plus continuation character, a hyphen ('-') at the end of a line, allows a command to extend over multiple lines. Next we tell LogMiner to take its dictionary directly from the online database.

3.  Start LogMiner with the online data dictionary catalogue.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS =>
DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed

This means that in this case LogMiner will only work when the database is started and open, since we are using the source database's online dictionary. The problem with using the online catalogue is that only the current version of the schema can be queried; old schema versions are lost. It is therefore advisable, when using LogMiner in production circumstances, to back up the schema versions either in an accompanying flat text file or in the redo logs themselves. LogMiner is now started and ready to query.
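The "back up the schema versions in the redo logs themselves" option can be sketched with the DBMS_LOGMNR_D package. This assumes the database is in archivelog mode; the dictionary snapshot then travels with the redo stream, so a later LogMiner session (even on another database) can resolve schemas as they existed at capture time:

```sql
-- Capture the current data dictionary into the redo logs.
EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

-- When mining later, point LogMiner at that dictionary
-- instead of the online catalogue:
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
```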

4.  Example query run upon the LogMiner view - v$logmnr_contents

This is an example query on the v$logmnr_contents view which represents all the data LogMiner is able to extract from the redo logs.

SQL> select scn,timestamp,username,table_name,operation from v$logmnr_contents;

509304 04-JAN-2005 14:00:57 WRH$_SQLBIND INSERT
509304 04-JAN-2005 14:00:57 WRH$_SQLBIND UPDATE
509304 04-JAN-2005 14:00:57 INTERNAL
509304 04-JAN-2005 14:00:57 WRH$_SQLBIND INSERT
509304 04-JAN-2005 14:00:57 WRH$_SQLBIND UPDATE
509304 04-JAN-2005 14:00:57 INTERNAL
509304 04-JAN-2005 14:00:57 WRH$_SQLBIND INSERT
509304 04-JAN-2005 14:00:57 WRH$_SQLBIND UPDATE

5.  End the LogMiner session

      SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;

LogMiner is very useful for querying previous versions of data, but the query above does not show the actions the user took to reach those states. That information can be gained from DB,EXTENDED auditing recorded in the redo logs, as will be shown later.
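That said, V$LOGMNR_CONTENTS itself carries the reconstructed statements in its SQL_REDO and SQL_UNDO columns, which is often enough to see what was run and how to reverse it. A sketch, with an illustrative table-name filter:

```sql
-- SQL_REDO shows the statement LogMiner reconstructed from the redo;
-- SQL_UNDO shows a statement that would reverse it.
SELECT scn, timestamp, username, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  table_name = 'REGIONS_HIST'
AND    operation IN ('INSERT', 'UPDATE', 'DELETE');
```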

Recover a Dropped Table Using Oracle Flashback Drop

 

Create a New Table

In this section you create a new table.

  1. Invoke SQL*Plus and connect as the SYSTEM user.
  2. Alter the session to the pluggable database container orclpdb.

    SQL> alter session set container=orclpdb;

    Session altered.

  3. Create a new table named HR.REGIONS_HIST. Use the following statement to create a table that has the same structure and content as the HR.REGIONS table.

    SQL> create table hr.regions_hist as select * from hr.regions;

    Table created.

  4. Query the new HR.REGIONS_HIST table.

    SQL> select * from hr.regions_hist;

     REGION_ID REGION_NAME
    ---------- -------------------------
             1 Europe
             2 Americas
             3 Asia
             4 Middle East and Africa

Drop a Table

In this section you drop your new HR.REGIONS_HIST table.

  1. Execute the DROP TABLE command to drop the table.

    SQL> drop table hr.regions_hist;

    Table dropped.

  2. Query the HR.REGIONS_HIST table.

    SQL> select * from hr.regions_hist;
    select * from hr.regions_hist
                     *
    ERROR at line 1:
    ORA-00942: table or view does not exist

    Notice that you get an error because the table is dropped.


Recover the Dropped Table

In this section you recover the HR.REGIONS_HIST table from the recycle bin.

  1. Execute the FLASHBACK TABLE command to recover the dropped table.

    SQL> flashback table hr.regions_hist to before drop;

    Flashback complete.

  2. Once again query the HR.REGIONS_HIST table to verify the data has been restored.

    SQL> select * from hr.regions_hist;

     REGION_ID REGION_NAME
    ---------- -------------------------
             1 Europe
             2 Americas
             3 Asia
             4 Middle East and Africa

  3. Exit from SQL*Plus.
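One variation worth knowing: if a new table with the same name has been created since the drop, the plain FLASHBACK TABLE will fail on the name clash. The dropped copy can instead be restored under a different name (the new name below is illustrative):

```sql
-- Restore the dropped table under a new name to avoid colliding
-- with an existing HR.REGIONS_HIST.
FLASHBACK TABLE hr.regions_hist TO BEFORE DROP
  RENAME TO regions_hist_restored;
```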

Monday, April 10, 2023

Restoring Archivelogs

 

Restoring Archivelogs From RMAN Tape Backup

If you want to restore a particular sequence of archivelogs from an RMAN tape backup, follow the steps below.

 

connect target sys/******@CRM_DB


connect catalog RMAN_tst/*****@catdb


run

{

allocate channel t1 type SBT_TAPE parms 'ENV=(NSR_SERVER=nwwerpw,NSR_CLIENT=tsc_test01,NSR_DATA_VOLUME_POOL=DD086A1)' connect sys/****@CRM_DB;

set archivelog destination to '/dumparea/';

restore archivelog from sequence 7630 until sequence 7640;

release channel t1;

}


The script above restores archive log sequences 7630 to 7640 to the /dumparea location.
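Before running the restore it is worth confirming that the tape backups actually cover the requested range; a sketch using RMAN's LIST command against the same catalog connection:

```sql
-- Confirm backup pieces exist for the sequences we are about to restore:
RMAN> list backup of archivelog from sequence 7630 until sequence 7640;
```

If this returns no backup pieces, the restore will fail regardless of the channel settings.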


Point In Time Restoring RMAN Backup From Tape:

1. Start the database in mount mode:

SQL> SHUTDOWN IMMEDIATE;

SQL> startup mount
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 8754618368 bytes
Fixed Size 4646288 bytes
Variable Size 4160756336 bytes
Database Buffers 4429185024 bytes
Redo Buffers 160030720 bytes
Database mounted.
SQL> exit

2. Connect to the catalog database:
rman target sys/oracle@TESTDBA catalog RMAN_cat/rman@catalogdb

Recovery Manager: Release 12.1.0.1.0 - Production on Thu Jan 21 09:31:19 2016

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

connected to target database: TESTDB(DBID=1258953048, not open)
connected to recovery catalog database
recovery catalog schema release 12.01.00.02. is newer than RMAN release

 

3. Execute the RMAN run block below:
run {
allocate channel t1 type SBT_TAPE;
allocate channel t2 type SBT_TAPE;
allocate channel t3 type SBT_TAPE;
send 'ENV=(NSR_SERVER=nriya02,NSR_CLIENT=bsst-dev,NSR_DATA_VOLUME_POOL=DD001B3)';
set until time "TO_DATE('2016-01-20 16:00:00','yyyy-mm-dd hh24:mi:ss')";
restore database;
recover database;
release channel t1;
release channel t2;
release channel t3;
}

 

4. Open the database with RESETLOGS:
RMAN> alter database open resetlogs;

Statement processed
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
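Since OPEN RESETLOGS creates a new incarnation (as the catalog output above shows), it can be verified afterwards from the same RMAN session:

```sql
-- The current incarnation is flagged CURRENT in the output:
RMAN> list incarnation of database;
```

Any later restore that needs to cross the RESETLOGS point will require resetting the database to the appropriate incarnation first.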