ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016
ORACLE CLUSTER INSTALLATION WITH GRID, KEEPALIVED
& NFS HIGH AVAILABILITY – 12C RAC
SETTING UP PRE-REQUIREMENTS
Set the system date on all servers (example):
date -s "9 AUG 2013 11:32:08"
SETTING UP EPEL REPOSITORY ON ALL THE SERVERS
yum install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y
INSTALLING ORACLE ASMLIB PACKAGE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
cd /etc/yum.repos.d ; wget https://public-yum.oracle.com/public-yum-ol6.repo --no-check-certificate
wget http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
yum install kernel-uek-devel* kernel-devel oracleasm oracleasm-support elfutils-libelf-devel kmod-oracleasm oracleasmlib tcpdump htop -y
yum install oracleasmlib-2.0.12-1.el6.x86_64.rpm
INSTALLING ORACLE GRID AND DATABASE PRE-REQUIREMENTS ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
yum install binutils-2.* elfutils-libelf-0.* glibc-2.* glibc-common-2.* ksh-2* libaio-0.* libgcc-4.* libstdc++-4.* make-3.* elfutils-libelf-devel-* gcc-4.* gcc-c++-4.* glibc-devel-2.* glibc-headers-2.* libstdc++-devel-4.* unixODBC-2.* compat-libstdc++-33* libaio-devel-0.* unixODBC-devel-2.* sysstat-7.* -y
INSTALLING BIND PRE-REQUIREMENTS ON DNS SERVER - (192.168.0.138)
yum -y install bind bind-utils
INSTALLING NFS SERVER PRE-REQUIREMENTS (10.75.40.31 & 10.75.40.32)
yum -y install nfs-utils
TO OVERCOME ORA-00845: MEMORY_TARGET NOT SUPPORTED ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
SQL> startup nomount;
ORA-00845: MEMORY_TARGET not supported on this system
This error comes up when you use the Automatic Memory Management (AMM) feature of Oracle 12c and the
shared memory filesystem (/dev/shm) is too small. Enlarge the shared memory filesystem to avoid the error.
First of all, login as root and have a look at the filesystem:
df -hT
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_oracleem-lv_root
93G 19G 69G 22% /
tmpfs 5.9G 112K 5.9G 1% /dev/shm
/dev/sda1 485M 99M 362M 22% /boot
We can see that tmpfs has a size of 6GB. We can change the size of that filesystem by issuing the following
command (where “12g” is the size I want for my MEMORY_TARGET):
mount -o remount,size=12g /dev/shm
The shared memory file system should be big enough to accommodate the MEMORY_TARGET and
MEMORY_MAX_TARGET values, or Oracle will throw the ORA-00845 error. Note that when changing
something with the mount command, the changes are not permanent.
To make the change persistent, edit your /etc/fstab file to include the option you specified above:
tmpfs /dev/shm tmpfs size=12g 0 0
SQL> startup nomount
ORACLE instance started.
Total System Global Area 1.1758E+10 bytes
Fixed Size 2239056 bytes
Variable Size 5939135920 bytes
Database Buffers 5804916736 bytes
Redo Buffers 12128256 bytes
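As a rule of thumb, /dev/shm should be at least as large as MEMORY_MAX_TARGET plus some headroom. A small helper to compute a size to pass to mount; the 10% headroom figure is my assumption, not a documented Oracle requirement:

```shell
# Print a /dev/shm size (in MB) large enough for a given MEMORY_TARGET (MB),
# with ~10% headroom for MEMORY_MAX_TARGET growth (headroom is an assumption).
required_shm_mb() {
  target_mb=$1
  echo $(( target_mb + target_mb / 10 ))
}

required_shm_mb 12288   # for a 12 GB MEMORY_TARGET -> 13516
```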
ADDING SWAP SPACE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
dd if=/dev/zero of=/root/newswapfile bs=1M count=8198
chmod 600 /root/newswapfile
mkswap /root/newswapfile
swapon /root/newswapfile
To make the change persistent, edit your /etc/fstab file to include the option you specified above:
vim /etc/fstab
/root/newswapfile swap swap defaults 0 0
Verify:
swapon -s
free -k
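Oracle's installation guide sizes swap relative to RAM; the commonly cited 12c guideline is swap equal to RAM between 2 GB and 16 GB of RAM, capped at 16 GB above that. A sketch of that rule (check the install guide for your exact release):

```shell
# Recommended swap (GB) for a given amount of RAM (GB), per the common
# Oracle 12c guideline: swap = RAM up to 16 GB of RAM, 16 GB beyond that.
recommended_swap_gb() {
  ram_gb=$1
  if [ "$ram_gb" -le 16 ]; then
    echo "$ram_gb"
  else
    echo 16
  fi
}

recommended_swap_gb 8    # -> 8
recommended_swap_gb 64   # -> 16
```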
EDIT “/ETC/SYSCONFIG/NETWORK” AS ROOT USER ON BOTH NODES/RACS - (192.168.0.139 &
192.168.0.140)
On kkcodb01:
NETWORKING=yes
HOSTNAME=kkcodb01
# Recommended value for NOZEROCONF
NOZEROCONF=yes
hostname kkcodb01
On kkcodb02:
NETWORKING=yes
HOSTNAME=kkcodb02
# Recommended value for NOZEROCONF
NOZEROCONF=yes
hostname kkcodb02
UPDATE /ETC/HOSTS FILE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
Make sure the hosts file has the right entries (remove or comment out IPv6 lines) and that each IP maps to the
correct hostname. Edit /etc/hosts as root:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#public
192.168.0.139 kkcodb01 kkcodb01.example.com
192.168.0.140 kkcodb02 kkcodb02.example.com
#vip
192.168.0.143 kkcodb01-vip kkcodb01-vip.example.com
192.168.0.144 kkcodb02-vip kkcodb02-vip.example.com
#scan vip
#192.168.0.145 kkcodb-scan kkcodb-scan.example.com
#192.168.0.146 kkcodb-scan kkcodb-scan.example.com
#192.168.0.147 kkcodb-scan kkcodb-scan.example.com
#192.168.0.148 kkcodb-scan kkcodb-scan.example.com
#priv
10.75.40.143 kkcodb01-priv1 kkcodb01-priv1.example.com
10.75.40.144 kkcodb02-priv1 kkcodb02-priv1.example.com
BOND / TEAM MULTIPLE NETWORK INTERFACES (NIC) INTO A SINGLE INTERFACE ON BOTH
NODES/RACS - (192.168.0.139 & 192.168.0.140)
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical
“bonded” interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes
provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
Modify the eth0, eth1, eth2, eth3 ... ethX config files to enslave them to bond0 and bond1.
Create a bond0 Configuration File
vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.0.0
NETMASK=255.255.255.0
IPADDR=192.168.0.139
USERCTL=no
PEERDNS=no
BONDING_OPTS="mode=1 miimon=100"
vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
Create a bond1 Configuration File
vim /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.75.40.0
NETMASK=255.255.255.0
IPADDR=10.75.40.143
USERCTL=no
PEERDNS=no
BONDING_OPTS="mode=1 miimon=100"
vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
HWADDR=
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
HWADDR=
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/modprobe.d/bonding.conf (on EL6 the module aliases go here rather than the obsolete /etc/modprobe.conf)
alias bond0 bonding
alias bond1 bonding
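After bringing the bonds up (service network restart), the kernel reports their state under /proc/net/bonding/. A sketch that pulls the active slave out of that report; the sample text mimics the real file format, and the function name is mine:

```shell
# Extract the currently active slave NIC from a /proc/net/bonding/bondX report.
active_slave() {
  awk -F': ' '/Currently Active Slave/ {print $2}'
}

# On a live system: active_slave < /proc/net/bonding/bond0
sample='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up'
printf '%s\n' "$sample" | active_slave   # -> eth0
```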
CREATE USER AND GROUPS FOR ORACLE DATABASE AND GRID ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g dba -G oinstall grid
useradd -u 1300 -g dba -G oinstall oracle
passwd grid
passwd oracle
mkdir -p /app/oracle
mkdir -p /app/12.1.0/grid
chown grid:dba /app
chown grid:dba /app/oracle
chown grid:dba /app/12.1.0
chown grid:dba /app/12.1.0/grid
chmod -R 775 /app
mkdir -p /u01 ; mkdir -p /u02 ; mkdir -p /u03
(Giving R/W/E permission for the grid user in the dba group)
chown grid:dba /u01
chown grid:dba /u02
chown grid:dba /u03
chmod +x /u01
chmod +x /u02
chmod +x /u03
or
(Giving R/W/E permission for grid/oracle - all users in the dba group)
chgrp dba /u01
chgrp dba /u02
chgrp dba /u03
chmod g+swr /u01
chmod g+swr /u02
chmod g+swr /u03
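The setgid bit (the s in g+swr) is what makes this second approach work: new files created under the directory inherit the directory's group rather than the creator's primary group, so everything under /u01-/u03 stays owned by dba. A self-contained local demonstration:

```shell
# Show that the setgid bit on a directory propagates its group to new files.
d=$(mktemp -d)
chmod g+s "$d"
touch "$d/newfile"
# On Linux filesystems, the new file's group matches the directory's group:
[ "$(stat -c %G "$d/newfile")" = "$(stat -c %G "$d")" ] && echo inherited
rm -rf "$d"
```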
SETTING UP ENVIRONMENT VARIABLES FOR OS ACCOUNTS: GRID AND ORACLE ON BOTH
NODES/RACS - (192.168.0.139 & 192.168.0.140)
@ the kkcodb01 as the grid user
su - grid
vim /home/grid/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
@ the kkcodb02 as the grid user
su - grid
vim /home/grid/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
ORACLE_SID=RAC2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
@ the kkcodb01 as the oracle user
su - oracle
vim /home/oracle/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=oradb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
@ the kkcodb02 as the oracle user
su - oracle
vim /home/oracle/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=oradb2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
KERNEL PARAMETERS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
MEMTOTAL=$(free -b | sed -n '2p' | awk '{print $2}')
SHMMAX=$(expr $MEMTOTAL / 2)
SHMMNI=4096
PAGESIZE=$(getconf PAGE_SIZE)
cat >> /etc/sysctl.conf << EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = $SHMMAX
kernel.shmall = $(( ($SHMMAX / $PAGESIZE) * ($SHMMNI / 16) ))
kernel.shmmni = $SHMMNI
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF
sysctl -p
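The kernel.shmall value is derived from SHMMAX, the page size, and SHMMNI. The arithmetic in isolation, useful for sanity-checking what lands in /etc/sysctl.conf (the helper name is mine):

```shell
# shmall (in pages) = (shmmax / pagesize) * (shmmni / 16), per this guide.
calc_shmall() {
  shmmax=$1; pagesize=$2; shmmni=$3
  echo $(( (shmmax / pagesize) * (shmmni / 16) ))
}

# Example: 8 GB of RAM -> SHMMAX = 4 GB, 4 KiB pages, SHMMNI = 4096:
calc_shmall 4294967296 4096 4096   # -> 268435456
```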
cat >> /etc/security/limits.conf <<EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle hard memlock 5437300
EOF
cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF
cat >> /etc/profile <<'EOF'
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
cat >> /etc/csh.login <<'EOF'
if ( $USER == "oracle" || $USER == "grid" )
then
limit maxproc 16384
limit descriptors 65536
endif
EOF
Reboot both nodes: shutdown -r now
DOWNLOADING ORACLE DATABASE AND GRID INFRASTRUCTURE SOFTWARE
Download Oracle Database 12c Release 1 Grid Infrastructure (12.1.0.2.0) for Linux x86-64 from the Oracle
Technology Network downloads page:
Download – linuxamd64_12102_grid_1of2.zip
Download – linuxamd64_12102_grid_2of2.zip
Downloading and installing Oracle Database software
Download Oracle Database 12c Release 1 (12.1.0.2.0) for Linux x86-64 from the Oracle Technology Network downloads page:
Download – linuxamd64_12102_database_1of2.zip
Download – linuxamd64_12102_database_2of2.zip
Copy zip files to kkcodb01 server to /tmp directory using WinSCP
As a root user,
cd /tmp
for i in /tmp/linuxamd64_12102_grid_*.zip; do unzip $i -d /home/grid/stage; done
for i in /tmp/linuxamd64_12102_database_*.zip; do unzip $i -d /home/oracle/stage; done
INSTALL BIND TO CONFIGURE A DNS SERVER ON 192.168.0.138, WHICH RESOLVES HOSTNAMES
AND IP ADDRESSES.
yum -y install bind bind-utils
Configure BIND.
vim /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
acl "trusted" {
192.168.0.0/24;
10.75.40.0/24;
};
options {
listen-on port 53 { 127.0.0.1; 192.168.0.0/24; 10.75.40.0/24;};
#listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-transfer { any; };
allow-query { localhost; trusted; };
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
include "/etc/named/named.conf.local";
vim /etc/named/named.conf.local
zone "example.com" {
type master;
file "/etc/named/zones/db.example.com"; # forward zone file path
};
zone "0.168.192.in-addr.arpa" {
type master;
file "/etc/named/zones/db.192.0"; # reverse zone for 192.168.0.0/24
};
zone "40.75.10.in-addr.arpa" {
type master;
file "/etc/named/zones/db.10.75.40"; # reverse zone for 10.75.40.0/24
};
vim /etc/named/zones/db.example.com
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; name servers - A records
ns1.example.com. IN A 192.168.0.138
; A records
kkcodb-scan IN A 192.168.0.145
kkcodb-scan IN A 192.168.0.146
kkcodb-scan IN A 192.168.0.147
kkcodb-scan IN A 192.168.0.148
;
kkcodb01-priv1 IN A 10.75.40.143
kkcodb02-priv1 IN A 10.75.40.144
;
kkcodb01 IN A 192.168.0.139
kkcodb02 IN A 192.168.0.140
;
nfs IN A 192.168.0.30
nfs-active IN A 10.75.40.31
nfs-pasive IN A 10.75.40.32
vim /etc/named/zones/db.192.0
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; PTR Records
138 IN PTR ns1.example.com. ; 192.168.0.138
;
145 IN PTR kkcodb-scan.example.com. ; 192.168.0.145
146 IN PTR kkcodb-scan.example.com. ; 192.168.0.146
147 IN PTR kkcodb-scan.example.com. ; 192.168.0.147
148 IN PTR kkcodb-scan.example.com. ; 192.168.0.148
;
139 IN PTR kkcodb01.example.com. ; 192.168.0.139
140 IN PTR kkcodb02.example.com. ; 192.168.0.140
;
30 IN PTR nfs.example.com. ; 192.168.0.30
vim /etc/named/zones/db.10.75.40
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; PTR Records
143 IN PTR kkcodb01-priv1.example.com. ; 10.75.40.143
144 IN PTR kkcodb02-priv1.example.com. ; 10.75.40.144
;
31 IN PTR nfs-active.example.com. ; 10.75.40.31
32 IN PTR nfs-pasive.example.com. ; 10.75.40.32
chkconfig named on
service named restart
named-checkzone 0.168.192.in-addr.arpa /etc/named/zones/db.192.0
named-checkzone 40.75.10.in-addr.arpa /etc/named/zones/db.10.75.40
@ CONFIGURE THE DNS CLIENT SETTINGS AS FOLLOWS ON ALL SERVERS (INCLUDING THE
BIND SERVER ITSELF),
vim /etc/sysconfig/networking/profiles/default/resolv.conf
nameserver 192.168.0.138
search example.com
vim /etc/resolv.conf
nameserver 192.168.0.138
search example.com
service network restart
chkconfig NetworkManager off
service network restart
cat /etc/resolv.conf
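With the zone loaded and resolv.conf in place, the SCAN name should resolve to all four addresses. A sketch that compares a resolver's answer (e.g. the output of dig +short kkcodb-scan.example.com) against the expected set, order-independently; the helper names are mine:

```shell
# The four SCAN addresses defined in the example.com zone.
expected_scan_ips() {
  printf '%s\n' 192.168.0.145 192.168.0.146 192.168.0.147 192.168.0.148
}

# Read resolved addresses on stdin and compare against the expected set.
check_scan() {
  got=$(sort)
  want=$(expected_scan_ips | sort)
  [ "$got" = "$want" ] && echo OK || echo MISMATCH
}

# On a live system: dig +short kkcodb-scan.example.com | check_scan
printf '192.168.0.147\n192.168.0.145\n192.168.0.148\n192.168.0.146\n' | check_scan   # -> OK
```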
FIX FOR "DEVICE eth0, eth1...ethX DOES NOT SEEM TO BE PRESENT":
I was able to fix the problem by deleting the /etc/udev/rules.d/70-persistent-net.rules file
and restarting the virtual machine, which generated a new file and got everything set up correctly.
Remove the .ssh directory from the individual users and restart the servers.
LOGICAL VOLUME MANAGEMENT – LVM ON NFS & NFS BACKUP KEEPER SERVERS
LVM is a logical volume manager for the Linux kernel that manages disk drives and similar mass-storage devices.
Heinz Mauelshagen wrote the original code in 1998, taking its primary design guidelines from the HP-UX's volume
manager.
The installers for the CrunchBang, CentOS, Debian, Fedora, Gentoo, Mandriva, MontaVista Linux, openSUSE,
Pardus, Red Hat Enterprise Linux, Slackware, SLED, SLES, Linux Mint, Kali Linux, and Ubuntu distributions are
LVM-aware and can install a bootable system with a root filesystem on a logical volume.
LVM IS COMMONLY USED FOR THE FOLLOWING PURPOSES:
1. Managing large hard disk farms by allowing disks to be added and replaced without downtime or service
disruption, in combination with hot swapping.
2. On small systems (like a desktop at home), instead of having to estimate at installation time how big a partition
might need to be in the future, LVM allows file systems to be easily resized later as needed.
3. Performing consistent backups by taking snapshots of the logical volumes.
4. Creating single logical volumes of multiple physical volumes or entire hard disks (somewhat similar to RAID
0, but more similar to JBOD), allowing for dynamic volume resizing.
5. The Ganeti solution stack relies on the Linux Logical Volume Manager.
6. LVM can be considered as a thin software layer on top of the hard disks and partitions, which creates an
abstraction of continuity and ease-of-use for managing hard drive replacement, re-partitioning, and backup.
THE LVM CAN:
1. Resize volume groups online by absorbing new physical volumes (PV) or ejecting existing ones.
2. Resize logical volumes (LV) online by concatenating extents onto them or truncating extents from them.
3. Create read-only snapshots of logical volumes (LVM1).
4. Create read-write snapshots of logical volumes (LVM2).
5. Create RAID logical volumes (available in newer LVM implementations): RAID 1, RAID 5, RAID 6, etc.
6. Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
7. Configure a RAID 1 backend device (a PV) as write-mostly, resulting in reads being avoided to such devices.
8. Allocate thin-provisioned logical volumes from a pool.
9. Move online logical volumes between PVs.
10. Split or merge volume groups in situ (as long as no logical volumes span the split).
This can be useful when migrating whole logical volumes to or from offline storage.
11. Create hybrid volumes by using the dm-cache target, which allows one or more fast storage devices, such as
flash-based solid-state drives (SSDs), to act as a cache for one or more slower hard disk drives (HDDs).
CREATE A PHYSICAL VOLUME
Input Command
pvcreate -ff /dev/sdb
Output
Physical volume "/dev/sdb" successfully created
DISPLAY A STATUS OF PHYSICAL VOLUMES
Input Command
pvdisplay /dev/sdb
Output
"/dev/sdb" is a new physical volume of "150.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 150.00 GiB
CREATE A VOLUME GROUP
Input Command
vgcreate volg1 /dev/sdb
Output
Volume group "volg1" successfully created
DISPLAY VOLUME GROUPS
Input Command
vgdisplay
Output
--- Volume group ---
VG Name volg1
System ID
Format lvm2
VG Access read/write
VG Status resizable
VG Size 150.00 GiB
CREATE A LOGICAL VOLUME
Input Command
lvcreate -L 149G -n lv_data volg1
NOTE: creates a 149 GB logical volume 'lv_data' in volume group 'volg1' (just under the 150 GiB VG size so the request is guaranteed to fit)
Output
Logical volume "lv_data" created
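Requesting 149G instead of the full 150 GiB leaves slack so the allocation is guaranteed to fit. In terms of the default 4 MiB extents LVM allocates from (the helper is mine, for illustration):

```shell
# Number of default 4 MiB LVM extents in a given number of GiB.
extents() {
  gib=$1
  echo $(( gib * 1024 / 4 ))
}

extents 150   # extents available in the 150 GiB VG -> 38400
extents 149   # extents the 149G request consumes  -> 38144
```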
DISPLAY STATUS OF LOGICAL VOLUMES
Input Command
lvdisplay
Output
--- Logical volume ---
LV Path /dev/volg1/lv_data
LV Name lv_data
VG Name volg1
LV Write Access read/write
LV Status available
FORMATTING THE LOGICAL VOLUME BEFORE MOUNTING IT
Input Command
mkfs.ext4 /dev/volg1/lv_data
MOUNTING LOGICAL VOLUME INTO A SPECIFIC USER’S FOLDER
Input Command
mkdir -p /u01/VM/nfs_shares
mount /dev/volg1/lv_data /u01/VM/nfs_shares
vim /etc/fstab
/dev/volg1/lv_data /u01/VM/nfs_shares ext4 defaults 0 0
CONFIGURING LSYNCD BACKUP SERVER AS A BACKUP KEEPER FOR THE NFS SERVER
(CONFIGURE A SEPARATE NETWORK ADDRESS FOR THE BACKUP REPLICATION)
Lsyncd is a tool I discovered a few weeks ago; it is a synchronization daemon based primarily on rsync. It runs
on the "master" server and mirrors any file or directory change to one or more "slave" servers within seconds;
you can have as many slave servers as you want. Lsyncd constantly watches a local directory tree and monitors
filesystem changes using inotify / fsevents.
By default, lsyncd uses rsync to send the data to the slave machines, although other transports are possible.
It does not require you to build new filesystems or block devices, and it does not harm your server's I/O performance.
yum -y install lua lua-devel pkgconfig gcc asciidoc lsyncd rsync
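Since lsyncd ultimately delegates to rsync, it is worth confirming rsync behaves as expected before wiring up the daemon. A self-contained local demonstration of the mirroring semantics lsyncd automates:

```shell
# Demonstrate the rsync mirroring that lsyncd automates, using two temp dirs.
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "$src/file1"
rsync -a "$src/" "$dst/"     # trailing slash: copy the contents, not the dir itself
cat "$dst/file1"             # -> hello
rm -rf "$src" "$dst"
```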
@ NFS SERVER (10.75.40.30/192.168.0.30)
vim /etc/lsyncd.conf
settings = {
logfile = "/var/log/lsyncd.log",
statusFile = "/tmp/lsyncd.stat",
statusInterval = 1,
}
sync {
default.rsync,
source = "/u01/VM/nfs_shares",
target = "192.168.0.31:/u01/VM/nfs_shares",
rsync = {
compress = true,
acls = true,
verbose = true,
owner = true,
group = true,
perms = true,
rsh = "/usr/bin/ssh -l root -i /root/.ssh/id_rsa -p 22 -o StrictHostKeyChecking=no",
},
}
service lsyncd start
chkconfig lsyncd on
mkdir -p /var/log/lsyncd
GENERATE THE SSH PUBLIC KEYS BETWEEN NFS & NFS BACKUP KEEPER SERVERS
#!/bin/sh
echo "Can both servers reach each other over the network? (y/n)"
read sslkeygen
case $sslkeygen in
y)
echo "Please enter the IP address of the Source Linux Server node."
read ipaddr1
echo "Please enter the IP address of the Destination Linux Server."
read ipaddr2
echo ""
echo "Generating SSH key..."
ssh-keygen -t rsa
echo ""
echo "Copying SSH key to the Destination Linux Server..."
echo "Please enter the root password for the Remote Linux Server."
ssh root@$ipaddr2 mkdir -p .ssh
cat /root/.ssh/id_rsa.pub | ssh root@$ipaddr2 'cat >> .ssh/authorized_keys'
ssh root@$ipaddr2 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
echo ""
echo "SSH key authentication successfully set up ... continuing with RSA key installation
from the remote server to the source server..."
echo "Generating SSH key on Destination Server..."
ssh root@$ipaddr2 ssh-keygen -t rsa
echo ""
echo "Copying the Destination server's SSH key back to the Source Linux Server..."
echo "Please enter the root password for the Remote Linux Server."
mkdir -p .ssh
ssh root@$ipaddr2 cat /root/.ssh/id_rsa.pub | ssh root@$ipaddr1 'cat >> .ssh/authorized_keys'
chmod 700 .ssh; chmod 640 .ssh/authorized_keys
echo ""
;;
n)
echo "Root access must be enabled on the second machine...exiting!"
exit 1
;;
*)
echo "Unknown choice ... exiting!"
exit 2
esac
CONFIGURING NFS SERVER (10.75.40.30 / 192.168.0.30)
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g dba -G oinstall grid
useradd -u 1300 -g dba -G oinstall oracle
passwd grid
passwd oracle
mkdir -p /u01/VM/nfs_shares/shared_1
mkdir -p /u01/VM/nfs_shares/shared_2
mkdir -p /u01/VM/nfs_shares/shared_3
chown grid:dba /u01/VM/nfs_shares/shared_1
chown grid:dba /u01/VM/nfs_shares/shared_2
chown grid:dba /u01/VM/nfs_shares/shared_3
chmod +x /u01/VM/nfs_shares/shared_1
chmod +x /u01/VM/nfs_shares/shared_2
chmod +x /u01/VM/nfs_shares/shared_3
vim /etc/exports
WRITE PERFORMANCE OPTIMISED BY ROUGHLY 50% (ASYNC, at the cost of durability if the server crashes):
/u01/VM/nfs_shares/shared_1 *(rw,async,no_wdelay,insecure_locks,no_root_squash)
/u01/VM/nfs_shares/shared_2 *(rw,async,no_wdelay,insecure_locks,no_root_squash)
/u01/VM/nfs_shares/shared_3 *(rw,async,no_wdelay,insecure_locks,no_root_squash)
chkconfig nfs on
service nfs restart
showmount -e 10.75.40.31
NFS MASTER RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *
showmount -e 10.75.40.32
NFS SLAVE RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *
showmount -e 10.75.40.30
NFS VIRTUAL IP RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *
CONFIGURING NFS MOUNT POINTS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
vim /etc/fstab
10.75.40.30:/u01/VM/nfs_shares/shared_1 /u01 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
10.75.40.30:/u01/VM/nfs_shares/shared_2 /u02 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
10.75.40.30:/u01/VM/nfs_shares/shared_3 /u03 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
mount /u01
mount /u02
mount /u03
df -hT
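/proc/mounts shows the options the kernel actually applied, which can differ from what /etc/fstab requested. A sketch that checks a mount line for the options Oracle needs on NFS (hard, noac, actimeo=0); the function name is mine:

```shell
# Verify that an NFS mount line (as in /proc/mounts) carries required options.
check_opts() {
  opts=$(echo "$1" | awk '{print $4}')
  for opt in hard noac actimeo=0; do
    case ",$opts," in
      *",$opt,"*) ;;                       # option present, keep checking
      *) echo "MISSING $opt"; return 1 ;;  # report the first missing option
    esac
  done
  echo OK
}

# On a live system: check_opts "$(grep ' /u01 ' /proc/mounts)"
sample='10.75.40.30:/u01/VM/nfs_shares/shared_1 /u01 nfs rw,hard,noac,tcp,vers=3,actimeo=0 0 0'
check_opts "$sample"   # -> OK
```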
INSTALLING ORACLE GRID ON BOTH NODES FROM 192.168.0.139
(The original document showed installer screenshots for the Grid Infrastructure installation wizard here.)
INSTALLING ORACLE DATABASE ON BOTH NODES FROM 192.168.0.139
(The original document showed installer screenshots for the Oracle Database installation wizard here.)
ADMINISTERING THE GRID INFRASTRUCTURE ON BOTH RAC NODES (as the grid user)
su - grid
crsctl status resource -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE kkcodb01 STABLE
ONLINE ONLINE kkcodb02 STABLE
ora.asm
OFFLINE OFFLINE kkcodb01 Instance Shutdown,STABLE
OFFLINE OFFLINE kkcodb02 STABLE
ora.net1.network
ONLINE ONLINE kkcodb01 STABLE
ONLINE ONLINE kkcodb02 STABLE
ora.ons
ONLINE ONLINE kkcodb01 STABLE
ONLINE ONLINE kkcodb02 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE kkcodb01 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE kkcodb01 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE kkcodb01 STABLE
ora.LISTENER_SCAN4.lsnr
1 ONLINE ONLINE kkcodb01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE kkcodb01 169.254.225.48 10.75.40.143,STABLE
ora.cvu
1 ONLINE ONLINE kkcodb01 STABLE
ora.kkcodb01.vip
1 ONLINE ONLINE kkcodb01 STABLE
ora.kkcodb02.vip
1 ONLINE ONLINE kkcodb02 STABLE
ora.mgmtdb
1 ONLINE ONLINE kkcodb01 Open,STABLE
ora.oc4j
1 ONLINE ONLINE kkcodb01 STABLE
ora.oradb.db
1 ONLINE ONLINE kkcodb01 Open,STABLE
2 ONLINE ONLINE kkcodb02 Open,STABLE
ora.scan1.vip
1 ONLINE ONLINE kkcodb01 STABLE
ora.scan2.vip
1 ONLINE ONLINE kkcodb01 STABLE
ora.scan3.vip
1 ONLINE ONLINE kkcodb01 STABLE
ora.scan4.vip
1 ONLINE ONLINE kkcodb01 STABLE
srvctl status instance -db oradb -node kkcodb01
srvctl status instance -db oradb -node kkcodb02
Instance oradb1 is running on node kkcodb01
Instance oradb2 is running on node kkcodb02
srvctl status instance -d oradb -i oradb1
srvctl status instance -d oradb -i oradb2
Instance oradb1 is running on node kkcodb01
Instance oradb2 is running on node kkcodb02
SUMMARY OF THE MOST IMPORTANT COMMANDS TO START / STOP / CHECK CLUSTER
RESOURCES
crsctl check crs
crsctl check cluster -n kkcodb01
crsctl check ctss
crsctl config crs (requires root)
cat /etc/oracle/scls_scr/rac1/root/ohasdstr
crsctl stat res -t
crsctl stat res ora.rac.db -p
crsctl stat res ora.rac.db -f
crsctl query css votedisk
olsnodes -n -i -s -t
oifcfg getif
ocrcheck
ocrcheck -local (requires root)
ocrconfig -showbackup
ocrconfig -add +TEST
cluvfy comp crs -n rac1
srvctl status database -d oradb
srvctl status instance -d oradb -i kkcodb01
srvctl status service -d oradb
srvctl status nodeapps
srvctl status vip -n kkcodb01
srvctl status listener -l LISTENER
srvctl status asm -n kkcodb01
srvctl status scan
srvctl status scan_listener
srvctl status server -n kkcodb01
srvctl status diskgroup -g DGRAC
srvctl config database -d oradb
srvctl config service -d oradb
srvctl config nodeapps
srvctl config vip -n kkcodb01
srvctl config asm -a
srvctl config listener -l LISTENER
srvctl config scan
srvctl config scan_listener
crsctl stop cluster
crsctl start cluster
crsctl stop crs
crsctl start crs
crsctl disable crs
crsctl enable crs
srvctl stop database -d oradb -o immediate
srvctl start database -d oradb
srvctl stop instance -d oradb -i kkcodb01 -o immediate
srvctl start instance -d oradb -i kkcodb01
srvctl stop service -d oradb -s OLTP -n kkcodb01
srvctl start service -d oradb -s OLTP
srvctl stop nodeapps -n kkcodb01
srvctl start nodeapps
srvctl stop vip -n rac1
srvctl start vip -n rac1
srvctl stop asm -n rac1 -o abort -f
srvctl start asm -n rac1
srvctl stop listener -l LISTENER
srvctl start listener -l LISTENER
srvctl stop scan -i 1
srvctl start scan -i 1
srvctl stop scan_listener -i 1
srvctl start scan_listener -i 1
srvctl stop diskgroup -g TEST -n kkcodb01,kkcodb02
srvctl start diskgroup -g TEST -n kkcodb01,kkcodb02
srvctl relocate service -d RAC -s OLTP -i kkcodb01 -t kkcodb02
srvctl relocate scan_listener -i 1 -n rac1
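The check commands in the summary above can be bundled into a small per-node routine. A minimal sketch, assuming the `crsctl`/`srvctl` invocations listed in this guide; it prints the commands rather than executing them, so it is safe to run anywhere (on a cluster node you could pipe the output to `sh`):

```shell
#!/bin/sh
# Sketch: emit the routine health-check commands from the summary above for a
# given node and database name. Printing keeps the sketch side-effect free.
rac_checks() {
  node=$1; db=$2
  printf 'crsctl check crs\n'
  printf 'crsctl check cluster -n %s\n' "$node"
  printf 'srvctl status database -d %s\n' "$db"
  printf 'srvctl status nodeapps\n'
  printf 'srvctl status vip -n %s\n' "$node"
  printf 'srvctl status asm -n %s\n' "$node"
  printf 'srvctl status scan_listener\n'
}

rac_checks kkcodb01 oradb
```

Running `rac_checks kkcodb02 oradb` produces the same checklist for the second node.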
DELETING THE GRID INFRASTRUCTURE ON BOTH RAC NODES
Login as the grid user:
Removing GRID:
$ORACLE_HOME/deinstall/deinstall
Removing a broken Grid installation:
On all cluster nodes except the last, run the following command as the "root" user (adjust the path to your Grid home):
perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
Cleanup steps for cluster node removal:
rm -rf /app/*
#rm -rf /u01/* ; rm -rf /u02/* ; rm -rf /u03/*
rm -rf /etc/oracle/*
rm -rf /etc/oraInst.loc
rm -rf /etc/oratab
rm -rf /var/tmp/.oracle/*
rm -rf /home/oracle/.ssh/
rm -rf /home/grid/.ssh/
shutdown -r now
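The `rm -rf` cleanup list above is destructive, so it helps to review it before running. A hedged sketch that wraps those same paths (taken from this guide) behind a DRY_RUN flag; run as root on each node with `DRY_RUN=0` only once the printed list looks right, then reboot:

```shell
#!/bin/sh
# Sketch: the post-deconfig cleanup steps above as one script, guarded by a
# DRY_RUN flag (default: only print). Paths are the ones this guide removes.
DRY_RUN=${DRY_RUN:-1}

cleanup_list='/app/*
/etc/oracle/*
/etc/oraInst.loc
/etc/oratab
/var/tmp/.oracle/*
/home/oracle/.ssh
/home/grid/.ssh'

printf '%s\n' "$cleanup_list" | while IFS= read -r p; do
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would remove: $p"
  else
    rm -rf $p   # intentionally unquoted so globs like /app/* expand
  fi
done
```

After a real run (`DRY_RUN=0`), finish with `shutdown -r now` as above.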
CONNECTING WITH ORACLE SQL DEVELOPER
ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016
GRID RAC Page 45

More Related Content

What's hot (20)

PPTX
High Availability Server with DRBD in linux
Ali Rachman
 
PDF
Open Source Backup Conference 2014: Workshop bareos introduction, by Philipp ...
NETWAYS
 
PDF
Docker and friends at Linux Days 2014 in Prague
tomasbart
 
PDF
My sql fabric ha and sharding solutions
Louis liu
 
PPTX
Postgres-BDR with Google Cloud Platform
SungJae Yun
 
PDF
Как понять, что происходит на сервере? / Александр Крижановский (NatSys Lab.,...
Ontico
 
PPTX
RHCE Training
ajeet yadav
 
PDF
Advanced RAC troubleshooting: Network
Riyaj Shamsudeen
 
PDF
Introduction to eBPF and XDP
lcplcp1
 
PPTX
HADOOP 실제 구성 사례, Multi-Node 구성
Young Pyo
 
PDF
Red Hat Enterprise Linux OpenStack Platform on Inktank Ceph Enterprise
Red_Hat_Storage
 
PDF
Introduction to Stacki at Atlanta Meetup February 2016
StackIQ
 
PDF
1 m+ qps on mysql galera cluster
OlinData
 
PPTX
Cpu高效编程技术
Feng Yu
 
PDF
Debugging Ruby
Aman Gupta
 
PDF
Мастер-класс "Логическая репликация и Avito" / Константин Евтеев, Михаил Тюр...
Ontico
 
PDF
Debugging Ruby Systems
Engine Yard
 
PDF
MySQL async message subscription platform
Louis liu
 
PDF
Troubleshooting PostgreSQL with pgCenter
Alexey Lesovsky
 
PDF
High Availability With DRBD & Heartbeat
Chris Barber
 
High Availability Server with DRBD in linux
Ali Rachman
 
Open Source Backup Conference 2014: Workshop bareos introduction, by Philipp ...
NETWAYS
 
Docker and friends at Linux Days 2014 in Prague
tomasbart
 
My sql fabric ha and sharding solutions
Louis liu
 
Postgres-BDR with Google Cloud Platform
SungJae Yun
 
Как понять, что происходит на сервере? / Александр Крижановский (NatSys Lab.,...
Ontico
 
RHCE Training
ajeet yadav
 
Advanced RAC troubleshooting: Network
Riyaj Shamsudeen
 
Introduction to eBPF and XDP
lcplcp1
 
HADOOP 실제 구성 사례, Multi-Node 구성
Young Pyo
 
Red Hat Enterprise Linux OpenStack Platform on Inktank Ceph Enterprise
Red_Hat_Storage
 
Introduction to Stacki at Atlanta Meetup February 2016
StackIQ
 
1 m+ qps on mysql galera cluster
OlinData
 
Cpu高效编程技术
Feng Yu
 
Debugging Ruby
Aman Gupta
 
Мастер-класс "Логическая репликация и Avito" / Константин Евтеев, Михаил Тюр...
Ontico
 
Debugging Ruby Systems
Engine Yard
 
MySQL async message subscription platform
Louis liu
 
Troubleshooting PostgreSQL with pgCenter
Alexey Lesovsky
 
High Availability With DRBD & Heartbeat
Chris Barber
 

Similar to Oracle cluster installation with grid and nfs (20)

PPTX
RAC-Installing your First Cluster and Database
Nikhil Kumar
 
PDF
Installing oracle grid infrastructure and database 12c r1
Voeurng Sovann
 
PPTX
tow nodes Oracle 12c RAC on virtualbox
justinit
 
PDF
Oracle RAC 12c Best Practices Sanger OOW13 [CON8805]
Markus Michalewicz
 
PDF
Oracle RAC 12c Collaborate Best Practices - IOUG 2014 version
Markus Michalewicz
 
PDF
Oracle RAC 12c Best Practices with Appendices DOAG2013
Markus Michalewicz
 
PDF
real-application-clusters-installation-guide-linux-and-unix.pdf
MitJiu
 
PDF
Oracle 11g R2 RAC setup on rhel 5.0
Santosh Kangane
 
PDF
Rac on NFS
mengjiagou
 
PDF
Step by Step to Install oracle grid 11.2.0.3 on solaris 11.1
Osama Mustafa
 
PPT
01_Architecture_JFV14_01_Architecture_JFV14.ppt
MahmoudGad93
 
PPTX
Managing Oracle Enterprise Manager Cloud Control 12c with Oracle Clusterware
Leighton Nelson
 
PDF
Install oracle grid infrastructure on linux 6.6
Osama Mustafa
 
PDF
Presentation Template - NCOAUG Conference Presentation - 16 9
Mohamed Sadek
 
PDF
UKOUG Tech15 - Deploying Oracle 12c Cloud Control in Maximum Availability Arc...
Zahid Anwar (OCM)
 
PDF
les01.pdf
VAMSICHOWDARY61
 
PPT
les_02.ppt of the Oracle course train_2 file
YulinLiu27
 
PDF
Oracle Real Application Clusters 19c- Best Practices and Internals- EMEA Tour...
Sandesh Rao
 
DOCX
2nodesoracle12craconyourlaptopvirtualboxstepbystepguide1 0-130627143310-phpapp02
shaikyunus1980
 
DOCX
RAC 12c
Waseem Shahzad
 
RAC-Installing your First Cluster and Database
Nikhil Kumar
 
Installing oracle grid infrastructure and database 12c r1
Voeurng Sovann
 
tow nodes Oracle 12c RAC on virtualbox
justinit
 
Oracle RAC 12c Best Practices Sanger OOW13 [CON8805]
Markus Michalewicz
 
Oracle RAC 12c Collaborate Best Practices - IOUG 2014 version
Markus Michalewicz
 
Oracle RAC 12c Best Practices with Appendices DOAG2013
Markus Michalewicz
 
real-application-clusters-installation-guide-linux-and-unix.pdf
MitJiu
 
Oracle 11g R2 RAC setup on rhel 5.0
Santosh Kangane
 
Rac on NFS
mengjiagou
 
Step by Step to Install oracle grid 11.2.0.3 on solaris 11.1
Osama Mustafa
 
01_Architecture_JFV14_01_Architecture_JFV14.ppt
MahmoudGad93
 
Managing Oracle Enterprise Manager Cloud Control 12c with Oracle Clusterware
Leighton Nelson
 
Install oracle grid infrastructure on linux 6.6
Osama Mustafa
 
Presentation Template - NCOAUG Conference Presentation - 16 9
Mohamed Sadek
 
UKOUG Tech15 - Deploying Oracle 12c Cloud Control in Maximum Availability Arc...
Zahid Anwar (OCM)
 
les01.pdf
VAMSICHOWDARY61
 
les_02.ppt of the Oracle course train_2 file
YulinLiu27
 
Oracle Real Application Clusters 19c- Best Practices and Internals- EMEA Tour...
Sandesh Rao
 
2nodesoracle12craconyourlaptopvirtualboxstepbystepguide1 0-130627143310-phpapp02
shaikyunus1980
 
Ad

More from Chanaka Lasantha (20)

PDF
Storing, Managing, and Deploying Docker Container Images with Amazon ECR
Chanaka Lasantha
 
PDF
Building A Kubernetes App With Amazon EKS
Chanaka Lasantha
 
PDF
ERP System Implementation Kubernetes Cluster with Sticky Sessions
Chanaka Lasantha
 
PDF
Free radius for wpa2 enterprise with active directory integration
Chanaka Lasantha
 
PDF
Distributed replicated block device
Chanaka Lasantha
 
PDF
Configuring apache, php, my sql, ftp, ssl, ip tables phpmyadmin and server mo...
Chanaka Lasantha
 
PDF
Complete squid &amp; firewall configuration. plus easy mac binding
Chanaka Lasantha
 
PDF
Athenticated smaba server config with open vpn
Chanaka Lasantha
 
PDF
Ask by linux kernel add or delete a hdd
Chanaka Lasantha
 
PDF
Free radius billing server with practical vpn exmaple
Chanaka Lasantha
 
PDF
One key sheard site to site open vpn
Chanaka Lasantha
 
PDF
Usrt to ethernet connectivity over the wolrd cubieboard bords
Chanaka Lasantha
 
PDF
Site to-multi site open vpn solution with mysql db
Chanaka Lasantha
 
PDF
Site to-multi site open vpn solution. with active directory auth
Chanaka Lasantha
 
DOCX
Site to-multi site open vpn solution-latest
Chanaka Lasantha
 
DOCX
Install elasticsearch, logstash and kibana
Chanaka Lasantha
 
PDF
AUTOMATIC JBOSS CLUSTER MANAGEMENT SYSTEM (PYTHON)
Chanaka Lasantha
 
PDF
ully Automatic WSO2 Enterprise Service Bus(ESB) Cluster Management System
Chanaka Lasantha
 
PPTX
Docker framework
Chanaka Lasantha
 
PPTX
CYBER SECURITY WORKSHOP (Only For Educational Purpose)
Chanaka Lasantha
 
Storing, Managing, and Deploying Docker Container Images with Amazon ECR
Chanaka Lasantha
 
Building A Kubernetes App With Amazon EKS
Chanaka Lasantha
 
ERP System Implementation Kubernetes Cluster with Sticky Sessions
Chanaka Lasantha
 
Free radius for wpa2 enterprise with active directory integration
Chanaka Lasantha
 
Distributed replicated block device
Chanaka Lasantha
 
Configuring apache, php, my sql, ftp, ssl, ip tables phpmyadmin and server mo...
Chanaka Lasantha
 
Complete squid &amp; firewall configuration. plus easy mac binding
Chanaka Lasantha
 
Athenticated smaba server config with open vpn
Chanaka Lasantha
 
Ask by linux kernel add or delete a hdd
Chanaka Lasantha
 
Free radius billing server with practical vpn exmaple
Chanaka Lasantha
 
One key sheard site to site open vpn
Chanaka Lasantha
 
Usrt to ethernet connectivity over the wolrd cubieboard bords
Chanaka Lasantha
 
Site to-multi site open vpn solution with mysql db
Chanaka Lasantha
 
Site to-multi site open vpn solution. with active directory auth
Chanaka Lasantha
 
Site to-multi site open vpn solution-latest
Chanaka Lasantha
 
Install elasticsearch, logstash and kibana
Chanaka Lasantha
 
AUTOMATIC JBOSS CLUSTER MANAGEMENT SYSTEM (PYTHON)
Chanaka Lasantha
 
ully Automatic WSO2 Enterprise Service Bus(ESB) Cluster Management System
Chanaka Lasantha
 
Docker framework
Chanaka Lasantha
 
CYBER SECURITY WORKSHOP (Only For Educational Purpose)
Chanaka Lasantha
 
Ad

Recently uploaded (20)

PDF
Newgen 2022-Forrester Newgen TEI_13 05 2022-The-Total-Economic-Impact-Newgen-...
darshakparmar
 
PPTX
MuleSoft MCP Support (Model Context Protocol) and Use Case Demo
shyamraj55
 
PDF
Kit-Works Team Study_20250627_한달만에만든사내서비스키링(양다윗).pdf
Wonjun Hwang
 
PDF
AI Agents in the Cloud: The Rise of Agentic Cloud Architecture
Lilly Gracia
 
PPTX
Future Tech Innovations 2025 – A TechLists Insight
TechLists
 
PDF
NLJUG Speaker academy 2025 - first session
Bert Jan Schrijver
 
PDF
Bitcoin for Millennials podcast with Bram, Power Laws of Bitcoin
Stephen Perrenod
 
PDF
Mastering Financial Management in Direct Selling
Epixel MLM Software
 
PDF
Staying Human in a Machine- Accelerated World
Catalin Jora
 
PDF
“Computer Vision at Sea: Automated Fish Tracking for Sustainable Fishing,” a ...
Edge AI and Vision Alliance
 
PDF
🚀 Let’s Build Our First Slack Workflow! 🔧.pdf
SanjeetMishra29
 
PDF
POV_ Why Enterprises Need to Find Value in ZERO.pdf
darshakparmar
 
PPTX
Mastering ODC + Okta Configuration - Chennai OSUG
HathiMaryA
 
PDF
[Newgen] NewgenONE Marvin Brochure 1.pdf
darshakparmar
 
PDF
How do you fast track Agentic automation use cases discovery?
DianaGray10
 
PDF
“Squinting Vision Pipelines: Detecting and Correcting Errors in Vision Models...
Edge AI and Vision Alliance
 
PDF
Future-Proof or Fall Behind? 10 Tech Trends You Can’t Afford to Ignore in 2025
DIGITALCONFEX
 
DOCX
Cryptography Quiz: test your knowledge of this important security concept.
Rajni Bhardwaj Grover
 
PDF
SIZING YOUR AIR CONDITIONER---A PRACTICAL GUIDE.pdf
Muhammad Rizwan Akram
 
PPT
Ericsson LTE presentation SEMINAR 2010.ppt
npat3
 
Newgen 2022-Forrester Newgen TEI_13 05 2022-The-Total-Economic-Impact-Newgen-...
darshakparmar
 
MuleSoft MCP Support (Model Context Protocol) and Use Case Demo
shyamraj55
 
Kit-Works Team Study_20250627_한달만에만든사내서비스키링(양다윗).pdf
Wonjun Hwang
 
AI Agents in the Cloud: The Rise of Agentic Cloud Architecture
Lilly Gracia
 
Future Tech Innovations 2025 – A TechLists Insight
TechLists
 
NLJUG Speaker academy 2025 - first session
Bert Jan Schrijver
 
Bitcoin for Millennials podcast with Bram, Power Laws of Bitcoin
Stephen Perrenod
 
Mastering Financial Management in Direct Selling
Epixel MLM Software
 
Staying Human in a Machine- Accelerated World
Catalin Jora
 
“Computer Vision at Sea: Automated Fish Tracking for Sustainable Fishing,” a ...
Edge AI and Vision Alliance
 
🚀 Let’s Build Our First Slack Workflow! 🔧.pdf
SanjeetMishra29
 
POV_ Why Enterprises Need to Find Value in ZERO.pdf
darshakparmar
 
Mastering ODC + Okta Configuration - Chennai OSUG
HathiMaryA
 
[Newgen] NewgenONE Marvin Brochure 1.pdf
darshakparmar
 
How do you fast track Agentic automation use cases discovery?
DianaGray10
 
“Squinting Vision Pipelines: Detecting and Correcting Errors in Vision Models...
Edge AI and Vision Alliance
 
Future-Proof or Fall Behind? 10 Tech Trends You Can’t Afford to Ignore in 2025
DIGITALCONFEX
 
Cryptography Quiz: test your knowledge of this important security concept.
Rajni Bhardwaj Grover
 
SIZING YOUR AIR CONDITIONER---A PRACTICAL GUIDE.pdf
Muhammad Rizwan Akram
 
Ericsson LTE presentation SEMINAR 2010.ppt
npat3
 

Oracle cluster installation with grid and nfs

  • 1. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 1 ORACLE CLUSTER INSTALLTION WITH GRID, KEEPALIVE & NFS HIGH AVAILABILITY – 12C RAC
  • 2. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 2 SETTING UP PRE-REQUIREMENTS Date: date -s "9 AUG 2013 11:32:08" SETTING UP EPEL REPOSITORY ON ALL THE SERVERS yum install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y INSTALLING ORACLE ASMLIB PACKAGE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140) cd /etc/yum.repos.d ; wget https://public-yum.oracle.com/public-yum-ol6.repo --no-check-certificate wget http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle yum install kernel-uek-devel* kernel-devel oracleasm oracleasm-support elfutils-libelf-devel kmod-oracleasm oracleasmlib tcpdump htop -y yum install oracleasmlib-2.0.12-1.el6.x86_64.rpm INSTALLING ORACLE GRID AND DATABASE PRE-REQUIREMENTS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140) yum install binutils-2.* elfutils-libelf-0.* glibc-2.* glibc-common-2.* ksh-2* libaio-0.* libgcc-4.* libstdc++-4.* make-3.* elfutils-libelf-devel-* gcc-4.* gcc-c++-4.* glibc-devel-2.* glibc-headers-2.* libstdc++-devel-4.* unixODBC-2.* compat-libstdc++-33* libaio-devel-0.* unixODBC-devel-2.* sysstat-7.* -y INSTALLING BIND PRE-REQUIREMENTS ON DNS SERVER - (192.168.0.110) yum -y install bind bind-utils INSTALLING NFS SERVER PRE-REQUIREMENTS (10.75.40.31 & 10.75.40.32) yum -y install nfs-utils
  • 3. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 3 TO OVERCOME ORA-00845: MEMORY_TARGET NOT SUPPORTED ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140) SQL> startup nomount; ORA-00845: MEMORY_TARGET not supported on this system This error comes up because you tried to use the Automatic Memory Management (AMM) feature of Oracle 12C. It seems that your shared memory filesystem (shmfs) is not big enough and enlarge your shared memory filesystem to avoid the error above. First of all, login as root and have a look at the filesystem: df -hT Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_oracleem-lv_root 93G 19G 69G 22% / tmpfs 5.9G 112K 5.9G 1% /dev/shm /dev/sda1 485M 99M 362M 22% /boot We can see that tmpfs has a size of 6GB. We can change the size of that filesystem by issuing the following command (where “12g” is the size I want for my MEMORY_TARGET): mount -t tmpfs shmfs -o size=12g /dev/shm The shared memory file system should be big enough to accommodate the MEMORY_TARGET and MEMORY_MAX_TARGET values, or Oracle will throw the ORA-00845 error. Note that when changing something with the mount command, the changes are not permanent. To make the change persistent, edit your /etc/fstab file to include the option you specified above: tmpfs /dev/shm tmpfs size=12g 0 0 SQL> startup nomount ORACLE instance started. Total System Global Area 1.1758E+10 bytes Fixed Size 2239056 bytes Variable Size 5939135920 bytes Database Buffers 5804916736 bytes Redo Buffers 12128256 bytes ADDING SWAP SPACE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140) dd if=/dev/zero of=/root/newswapfile bs=1M count=8198 chmod +x /root/newswapfile mkswap /root/newswapfile swapon /root/newswapfile To make the change persistent, edit your /etc/fstab file to include the option you specified above:
  • 4. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 4 vim /etc/fstab /root/newswapfile swap swap defaults 0 0 Verify: swapon -s free -k EDIT “/ETC/SYSCONFIG/NETWORK” AS ROOT USER ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140) NETWORKING=yes HOSTNAME=kkcodb01 # Recommended value for NOZEROCONF NOZEROCONF=yes hostname kkcodb01 NETWORKING=yes HOSTNAME=kkcodb02 # Recommended value for NOZEROCONF NOZEROCONF=yes hostname kkcodb02 UPDATE /ETC/HOSTS FILE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140) Make sure that hosts file has right entries (remove or comment out lines with ipv6), make sure there is correct IP and hostname, edit /etc/hosts as root: 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 #public 192.168.0.139 kkcodb01 kkcodb01.example.com 192.168.0.140 kkcodb02 kkcodb02.example.com #vip 192.168.0.143 kkcodb01-vip kkcodb01-vip.example.com 192.168.0.144 kkcodb02-vip kkcodb02-vip.example.com #scan vip #192.168.0.145 kkcodb-scan kkcodb-scan.example.com #192.168.0.146 kkcodb-scan kkcodb-scan.example.com #192.168.0.147 kkcodb-scan kkcodb-scan.example.com #192.168.0.148 kkcodb-scan kkcodb-scan.example.com #priv 10.75.40.143 kkcodb01-priv1 kkcodb01-priv1.example.com 10.75.40.144 kkcodb02-priv1 kkcodb02-priv2.example.com
  • 5. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 5 BOND / TEAM MULTIPLE NETWORK INTERFACES (NIC) INTO A SINGLE INTERFACE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140) The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical “bonded” interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed. Modify eth0, eth1, eth3…. up to ethx config files to bond with bond0 & bond1 Create a bond0 Configuration File vim /etc/sysconfig/network-scripts/ifcfg-bond0 DEVICE=bond0 BOOTPROTO=none ONBOOT=yes NETWORK=192.168.0.0 NETMASK=255.255.255.0 IPADDR=192.168.0.139 USERCTL=no PEERDNS=no BONDING_OPTS="mode=1 miimon=100" vim /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 HWADDR= TYPE=Ethernet MASTER=bond0 SLAVE=yes ONBOOT=yes BOOTPROTO=none IPV6INIT=no USERCTL=no vim /etc/sysconfig/network-scripts/ifcfg-eth1 DEVICE=eth1 HWADDR= TYPE=Ethernet MASTER=bond0 SLAVE=yes ONBOOT=yes BOOTPROTO=none IPV6INIT=no USERCTL=no Create a bond1 Configuration File
  • 6. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 6 vim /etc/sysconfig/network-scripts/ifcfg-bond1 DEVICE=bond1 BOOTPROTO=none ONBOOT=yes NETWORK=10.75.40.0 NETMASK=255.255.255.0 IPADDR=10.75.40.143 USERCTL=no PEERDNS=no BONDING_OPTS="mode=1 miimon=100" vim /etc/sysconfig/network-scripts/ifcfg-eth2 DEVICE=eth2 HWADDR= TYPE=Ethernet MASTER=bond1 SLAVE=yes ONBOOT=yes BOOTPROTO=none IPV6INIT=no USERCTL=no vim /etc/sysconfig/network-scripts/ifcfg-eth3 DEVICE=eth3 HWADDR= TYPE=Ethernet MASTER=bond1 SLAVE=yes ONBOOT=yes BOOTPROTO=none IPV6INIT=no USERCTL=no vim /etc/modprobe.conf alias bond0 bonding alias bond1 bonding CREATE USER AND GROUPS FOR ORACLE DATABASE AND GRID ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140) groupadd -g 1000 oinstall groupadd -g 1200 dba useradd -u 1100 -g dba -G oinstall grid useradd -u 1300 -g dba -G oinstall oracle passwd grid passwd oracle
  • 7. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 7 mkdir -p /app/oracle mkdir -p /app/12.1.0/grid chown grid:dba /app chown grid:dba /app/oracle chown grid:dba /app/12.1.0 chown grid:dba /app/12.1.0/grid chmod -R 775 /app mkdir -p /u01 ; mkdir -p /u02 ; mkdir -p /u03 (Giving R/W/E permission for grid user in dba gruop) chown grid:dba /u01 chown grid:dba /u02 chown grid:dba /u03 chmod +x /u01 chmod +x /u02 chmod +x /u03 or (Giving R/W/E permission for gird/oracle - all users in dba gruop) chgrp dba /u01 chgrp dba /u02 chgrp dba /u03 chmod g+swr /u01 chmod g+swr /u02 chmod g+swr /u03 SETTING UP ENVIRONMENT VARIABLES FOR OS ACCOUNTS: GRID AND ORACLE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140) @ the kkcodb01 as the gird user su – grid vim /home/grid/.bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs PATH=$PATH:$HOME/bin export PATH # Oracle Settings TMP=/tmp; export TMP TMPDIR=$TMP; export TMPDIR
  • 8. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 8 ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME ORACLE_BASE=/app/oracle; export ORACLE_BASE GRID_HOME=/app/12.1.0/grid; export GRID_HOME DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME ORACLE_HOME=$GRID_HOME; export ORACLE_HOME ORACLE_SID=RAC1; export ORACLE_SID ORACLE_TERM=xterm; export ORACLE_TERM BASE_PATH=/usr/sbin:$PATH; export BASE_PATH PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH umask 022 @ the kkcodb02 as the gird user su – grid vim /home/grid/.bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs PATH=$PATH:$HOME/bin export PATH # Oracle Settings TMP=/tmp; export TMP TMPDIR=$TMP; export TMPDIR ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME ORACLE_BASE=/app/oracle; export ORACLE_BASE GRID_HOME=/app/12.1.0/grid; export GRID_HOME DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME ORACLE_HOME=$GRID_HOME; export ORACLE_HOME ORACLE_SID=RAC2; export ORACLE_SID ORACLE_TERM=xterm; export ORACLE_TERM BASE_PATH=/usr/sbin:$PATH; export BASE_PATH PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH umask 022
  • 9. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 9 @ the kkcodb01 as the oracle user su – oracle vim /home/grid/.bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs PATH=$PATH:$HOME/bin export PATH # Oracle Settings TMP=/tmp; export TMP TMPDIR=$TMP; export TMPDIR ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME ORACLE_BASE=/app/oracle; export ORACLE_BASE GRID_HOME=/app/12.1.0/grid; export GRID_HOME DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME ORACLE_HOME=$DB_HOME; export ORACLE_HOME ORACLE_SID=oradb1; export ORACLE_SID ORACLE_TERM=xterm; export ORACLE_TERM BASE_PATH=/usr/sbin:$PATH; export BASE_PATH PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH umask 022 @ the kkcodb02 as the oracle user su – oracle vim /home/grid/.bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs PATH=$PATH:$HOME/bin export PATH # Oracle Settings TMP=/tmp; export TMP TMPDIR=$TMP; export TMPDIR ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
  • 10. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 10 ORACLE_BASE=/app/oracle; export ORACLE_BASE GRID_HOME=/app/12.1.0/grid; export GRID_HOME DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME ORACLE_HOME=$DB_HOME; export ORACLE_HOME ORACLE_SID=oradb2; export ORACLE_SID ORACLE_TERM=xterm; export ORACLE_TERM BASE_PATH=/usr/sbin:$PATH; export BASE_PATH PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH umask 022 KERNEL PARAMETERS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140) MEMTOTAL=$(free -b | sed -n '2p' | awk '{print $2}') SHMMAX=$(expr $MEMTOTAL / 2) SHMMNI=4096 PAGESIZE=$(getconf PAGE_SIZE) cat >> /etc/sysctl.conf << EOF fs.aio-max-nr = 1048576 fs.file-max = 6815744 kernel.shmmax = $SHMMAX kernel.shmall = `expr ( $SHMMAX / $PAGESIZE ) * ( $SHMMNI / 16 )` kernel.shmmni = $SHMMNI kernel.sem = 250 32000 100 128 net.ipv4.ip_local_port_range = 9000 65500 net.core.rmem_default = 262144 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 1048576 EOF cat >> /etc/security/limits.conf <<EOF oracle soft nproc 2047 oracle hard nproc 16384 oracle soft nofile 1024 oracle hard nofile 65536 grid soft nproc 2047 grid hard nproc 16384 grid soft nofile 1024 grid hard nofile 65536 oracle hard memlock 5437300 EOF cat >> /etc/pam.d/login <<EOF session required pam_limits.so EOF
  • 11. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 11 cat >> /etc/profile <<EOF if [ $USER = "oracle" ] || [ $USER = "grid" ]; then if [ $SHELL = "/bin/ksh" ]; then ulimit -p 16384 ulimit -n 65536 else ulimit -u 16384 -n 65536 fi umask 022 fi EOF cat >> /etc/csh.login <<EOF if ( $USER == "oracle" || $USER == "grid" ) then limit maxproc 16384 limit descriptors 65536 endif EOF Execute the shutdown -r now on both nodes DOWNLOADING ORACLE DATABASE AND GRID INFRASTRUCTURE SOFTWARE You would have to download Oracle Database 12c Release 1 Grid Infrastructure (12.1.0.2.0) for Linux x86-64 – here Download – linuxamd64_12102_grid_1of2.zip Download – linuxamd64_12102_grid_2of2.zip Downloading and installing Oracle Database software You would have to download Oracle Database 12c Release (12.1.0.2.0) for Linux x86-64 – here Download – linuxamd64_12102_database_1of2.zip Download – linuxamd64_12102_database_2of2.zip Copy zip files to kkcodb01 server to /tmp directory using WinSCP As a root user, cd /tmp chmod +x *.zip for i in /tmp/linuxamd64_12102_grid_*.zip; do unzip $i -d /home/grid/stage; done for i in /tmp/linuxamd64_12102_database_*.zip; do unzip $i -d /home/oracle/stage; done
  • 12. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 12 INSTALL BIND TO CONFIGURE DNS SERVER ON 192.168.0.138 WHICH RESOLVES DOMAIN NAME OR IP ADDRESS. yum -y install bind bind-utils Configure BIND. vim /etc/named.conf // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // acl "trusted" { 192.168.0.0/24; 10.75.40.0/24; }; options { listen-on port 53 { 127.0.0.1; 192.168.0.0/24; 10.75.40.0/24;}; #listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-transfer { any; }; allow-query { localhost; trusted; }; recursion yes; dnssec-enable yes; dnssec-validation yes; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." IN { type hint; file "named.ca"; };
  • 13. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 13 include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; include "/etc/named/named.conf.local"; vim /etc/named/named.conf.local zone "example.com" { type master; file "/etc/named/zones/db.example.com"; # zone file path }; zone "0.192.in-addr.arpa" { type master; file "/etc/named/zones/db.192.0"; # 192.168.0.0/16 }; vim /etc/named/zones/db.example.com $TTL 604800 @ IN SOA ns1.example.com. root.example.com. ( 3 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; name servers - NS records IN NS ns1.example.com. ; name servers - A records ns1.example.com. IN A 192.168.0.139 ; A records kkcodb-scan IN A 192.168.0.145 kkcodb-scan IN A 192.168.0.146 kkcodb-scan IN A 192.168.0.147 kkcodb-scan IN A 192.168.0.148 ; kkcodb01-priv1 IN A 10.75.40.143 kkcodb02-priv1 IN A 10.75.40.144 ; kkcodb01 IN A 192.168.0.139 kkcodb02 IN A 192.168.0.140 ; nfs IN A 192.168.0.30 nfs-active IN A 10.75.40.31 nfs-pasive IN A 10.75.40.32
  • 14. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 14 vim /etc/named/zones/db.192.0 $TTL 604800 @ IN SOA ns1.example.com. root.example.com. ( 3 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; name servers - NS records IN NS ns1.example.com. ; PTR Records 139.0 IN PTR ns1.example.com. ; 192.168.0.139 ; 145.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.145 146.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.146 147.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.147 148.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.148 ; 143.40 IN PTR kkcodb01-priv1.example.com. ; 10.75.40.143 144.40 IN PTR kkcodb02-priv2.example.com. ; 10.75.40.144 ; 139.0 IN PTR kkcodb01.example.com. ; 192.168.0.139 140.0 IN PTR kkcodb02.example.com. ; 192.168.0.140 ; 30.0 IN PTR nfs.example.com. ; 192.168.0.30 31.40 IN PTR nfs-active.example.com. ; 10.75.40.31 32.40 IN PTR nfs-pasive.example.com. ; 10.75.40.32 chkconfig named on service named restart named-checkzone 0.192.in-addr.arpa /etc/named/zones/db.192.0 @ ALL OF THEM ARE SHOULD BE CONFIGURING DNS CLIENT SETTING AS FOLLOWS (INCLUDING BIND SERVER ALSO), vim /etc/sysconfig/networking/profiles/default/resolv.conf nameserver 192.168.0.138 search example.com vim /etc/resolv.conf nameserver 192.168.0.138 search example.com service network restart
  • 15. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016 GRID RAC Page 15 chkconfig NetworkManager off service network restart cat /etc/resolv.conf ID DEVICE eth0, eth1…ethx DOES NOT SEEM TO BE PRESENT: I was able to fix the problem by deleting the /etc/udev/rules.d/70-persistant-net.rules file and restarting the virtual machine which generated a new file and got everything set up correctly. Remove .ssh directory form individual users and restart the servers. LOGICAL VOLUME MANAGEMENT – LVM ON NFS & NFS BACKUP KEEPER SERVERS LVM is a logical volume manager for the Linux kernel that manages disk drives and similar mass-storage devices. Heinz Mauelshagen wrote the original code in 1998, taking its primary design guidelines from the HP-UX's volume manager. The installers for the CrunchBang, CentOS, Debian, Fedora, Gentoo, Mandriva, MontaVista Linux, openSUSE, Pardus, Red Hat Enterprise Linux, Slackware, SLED, SLES, Linux Mint, Kali Linux, and Ubuntu distributions are LVM-aware and can install a bootable system with a root filesystem on a logical volume. LVM IS COMMONLY USED FOR THE FOLLOWING PURPOSES: 1. Managing large hard disk farms by allowing disks to be added and replaced without downtime or service disruption, in combination with hot swapping. 2. On small systems (like a desktop at home), instead of having to estimate at installation time how big a partition might need to be in the future, LVM allows file systems to be easily resized later as needed. 3. Performing consistent backups by taking snapshots of the logical volumes. 4. Creating single logical volumes of multiple physical volumes or entire hard disks (somewhat similar to RAID 0, but more similar to JBOD), allowing for dynamic volume resizing. 5. the Ganeti solution stack relies on the Linux Logical Volume Manager 6. 
LVM can be considered a thin software layer on top of the hard disks and partitions, which creates an abstraction of continuity and ease of use for managing hard drive replacement, re-partitioning, and backup.

THE LVM CAN:
1. Resize volume groups online by absorbing new physical volumes (PVs) or ejecting existing ones.
2. Resize logical volumes (LVs) online by concatenating extents onto them or truncating extents from them.
3. Create read-only snapshots of logical volumes (LVM1).
4. Create read-write snapshots of logical volumes (LVM2).
5. Create RAID logical volumes (available in newer LVM implementations): RAID 1, RAID 5, RAID 6, etc.
6. Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
7. Configure a RAID 1 backend device (a PV) as write-mostly, so that reads are avoided on such devices.
8. Allocate thin-provisioned logical volumes from a pool.
9. Move online logical volumes between PVs. Split or merge volume groups in situ (as long as no logical volumes span the split); this can be useful when migrating whole logical volumes to or from offline storage.
10. Create hybrid volumes by using the dm-cache target, which allows one or more fast storage devices, such as flash-based solid-state drives (SSDs), to act as a cache for one or more slower hard disk drives (HDDs).
CREATE A PHYSICAL VOLUME
Input Command
pvcreate -ff /dev/sdb
Output
Physical volume "/dev/sdb" successfully created

DISPLAY THE STATUS OF PHYSICAL VOLUMES
Input Command
pvdisplay /dev/sdb
Output
"/dev/sdb" is a new physical volume of "150.00 GiB"
--- NEW Physical volume ---
PV Name   /dev/sdb
VG Name
PV Size   150.00 GiB

CREATE A VOLUME GROUP
Input Command
vgcreate volg1 /dev/sdb
Output
Volume group "volg1" successfully created

DISPLAY VOLUME GROUPS
Input Command
vgdisplay
Output
--- Volume group ---
VG Name   volg1
System ID
Format    lvm2
VG Access  read/write
VG Status  resizable
VG Size    150.00 GiB

CREATE A LOGICAL VOLUME
Input Command
lvcreate -L 149G -n lv_data volg1
NOTE: this creates a logical volume 'lv_data' of 149G in volume group 'volg1'
Output
Logical volume "lv_data" created

DISPLAY THE STATUS OF LOGICAL VOLUMES
Input Command
lvdisplay
Output
--- Logical volume ---
LV Path          /dev/volg1/lv_data
LV Name          lv_data
VG Name          volg1
LV Write Access  read/write
LV Status        available

FORMATTING THE LOGICAL VOLUME BEFORE MOUNTING IT
Input Command
mkfs.ext4 /dev/volg1/lv_data
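Requesting 149G rather than the full 150 GiB leaves a little headroom inside the volume group. LVM allocates space in physical extents (4 MiB by default), so the requested size maps to a whole number of extents — a quick back-of-the-envelope check, assuming the default PE size:

```shell
# With the default 4 MiB physical extent size, a 149 GiB LV spans:
PE_MIB=4
LV_GIB=149
extents=$(( LV_GIB * 1024 / PE_MIB ))
echo "$extents extents"   # 38144 extents
```

The same figure appears as "Current LE" in the lvdisplay output on a real system.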
MOUNTING THE LOGICAL VOLUME INTO A SPECIFIC USER’S FOLDER
Input Command
mkdir -p /u01/VM/nfs_shares
mount /dev/volg1/lv_data /u01/VM/nfs_shares

vim /etc/fstab
/dev/volg1/lv_data /u01/VM/nfs_shares ext4 defaults 0 0

CONFIGURING THE LSYNCD BACKUP SERVER AS A BACKUP KEEPER FOR THE NFS SERVER (CONFIGURE A SEPARATE NETWORK ADDRESS FOR THE BACKUP REPLICATION)
Lsyncd is a synchronization daemon built primarily on rsync. It runs on the "master" server and syncs/mirrors any file or directory change within seconds to your "slave" servers; you can have as many slave servers as you want. Lsyncd constantly watches a local directory, monitoring file system changes using inotify/fsevents. By default, lsyncd uses rsync to send the data to the slave machines, though there are other ways to do it. It does not require you to build new filesystems or block devices, and it does not harm your server's I/O performance.

yum -y install lua lua-devel pkgconfig gcc asciidoc lsyncd rsync

@ NFS SERVER (10.75.40.30/192.168.0.30)
vim /etc/lsyncd.conf
settings = {
    logfile        = "/var/log/lsyncd.log",
    statusFile     = "/tmp/lsyncd.stat",
    statusInterval = 1,
}
sync {
    default.rsync,
    source = "/u01/VM/nfs_shares",
    target = "192.168.0.31:/u01/VM/nfs_shares",
    rsync  = {
        compress = true,
        acls     = true,
        verbose  = true,
        owner    = true,
        group    = true,
        perms    = true,
        rsh      = "/usr/bin/ssh -l root -i /root/.ssh/id_rsa -p 22 -o StrictHostKeyChecking=no",
    },
}

mkdir -p /var/log/lsyncd
service lsyncd start
chkconfig lsyncd on

GENERATE THE SSH PUBLIC KEYS BETWEEN THE NFS & NFS BACKUP KEEPER SERVERS
#!/bin/sh
echo "Are both servers reachable? (y/n)"
read sslkeygen
case $sslkeygen in
y)
  echo "Please enter the IP address of the Source Linux Server node."
  read ipaddr1
  echo "Please enter the IP address of the Destination Linux Server."
  read ipaddr2
  echo ""
  echo "Generating SSH key..."
  ssh-keygen -t rsa
  echo ""
  echo "Copying SSH key to the Destination Linux Server..."
  echo "Please enter the root password for the Remote Linux Server."
  ssh root@$ipaddr2 mkdir -p /root/.ssh
  cat /root/.ssh/id_rsa.pub | ssh root@$ipaddr2 'cat >> /root/.ssh/authorized_keys'
  ssh root@$ipaddr2 "chmod 700 /root/.ssh; chmod 640 /root/.ssh/authorized_keys"
  echo ""
  echo "SSH key authentication successfully set up ... continuing with the RSA key installation from the Remote Server back to the Source Server..."
  echo "Generating SSH key on the Destination Server..."
  ssh root@$ipaddr2 ssh-keygen -t rsa
  echo ""
  echo "Copying the SSH key back to the Source Linux Server..."
  echo "Please enter the root password for the Remote Linux Server."
  mkdir -p /root/.ssh
  ssh root@$ipaddr2 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
  chmod 700 /root/.ssh; chmod 640 /root/.ssh/authorized_keys
  echo ""
  ;;
n)
  echo "Root access must be enabled on the second machine ... exiting!"
  exit 1
  ;;
*)
  echo "Unknown choice ... exiting!"
  exit 2
  ;;
esac
CONFIGURING THE NFS SERVER (10.75.40.30 / 192.168.0.30)
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g dba -G oinstall grid
useradd -u 1300 -g dba -G oinstall oracle
passwd grid
passwd oracle

mkdir -p /u01/VM/nfs_shares/shared_1
mkdir -p /u01/VM/nfs_shares/shared_2
mkdir -p /u01/VM/nfs_shares/shared_3
chown grid:dba /u01/VM/nfs_shares/shared_1
chown grid:dba /u01/VM/nfs_shares/shared_2
chown grid:dba /u01/VM/nfs_shares/shared_3
chmod +x /u01/VM/nfs_shares/shared_1
chmod +x /u01/VM/nfs_shares/shared_2
chmod +x /u01/VM/nfs_shares/shared_3

vim /etc/exports
OPTIMISED BY ~50% (ASYNC — faster writes, at the cost of durability if the server crashes):
/u01/VM/nfs_shares/shared_1 *(rw,async,no_wdelay,insecure_locks,no_root_squash)
/u01/VM/nfs_shares/shared_2 *(rw,async,no_wdelay,insecure_locks,no_root_squash)
/u01/VM/nfs_shares/shared_3 *(rw,async,no_wdelay,insecure_locks,no_root_squash)

chkconfig nfs on
service nfs restart

showmount -e 10.75.40.30
NFS MASTER RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *

showmount -e 10.75.40.32
NFS SLAVE RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *

showmount -e 10.75.40.30
NFS VIRTUAL IP RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *
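The three export entries differ only in the shared_N suffix, so they can be generated rather than typed. A sketch (writing to a temp file here; on the real server the target is /etc/exports, followed by exportfs -ra or a service restart):

```shell
# Emit one export line per shared directory, all with the async option set.
OPTS='*(rw,async,no_wdelay,insecure_locks,no_root_squash)'
EXPORTS_FILE=$(mktemp)
for i in 1 2 3; do
  echo "/u01/VM/nfs_shares/shared_$i $OPTS" >> "$EXPORTS_FILE"
done
cat "$EXPORTS_FILE"
```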
CONFIGURING NFS MOUNT POINTS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
vim /etc/fstab
10.75.40.30:/u01/VM/nfs_shares/shared_1 /u01 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
10.75.40.30:/u01/VM/nfs_shares/shared_2 /u02 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
10.75.40.30:/u01/VM/nfs_shares/shared_3 /u03 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0

mount /u01
mount /u02
mount /u03
df -hT

INSTALLING ORACLE GRID ON BOTH NODES FROM 192.168.0.139
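The long option string is easy to mistype, and a single missing option (e.g. actimeo=0 or noac) can cause cache-coherency problems between the RAC nodes. A hypothetical pre-flight helper that checks an fstab entry for the options this guide depends on:

```shell
# Hypothetical helper: verify an NFS fstab line contains every required option.
check_fstab_line() {
  for opt in bg hard noac vers=3 actimeo=0; do
    case "$1" in
      *"$opt"*) ;;                  # option present, keep checking
      *) echo "missing $opt"; return 1 ;;
    esac
  done
  echo "OK"
}
check_fstab_line "10.75.40.30:/u01/VM/nfs_shares/shared_1 /u01 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0"
```

Running it against each of the three lines above before mounting catches typos early.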
[Pages 22–31: step-by-step Oracle Grid Infrastructure installer screenshots — images not included in this text extraction.]
INSTALLING ORACLE DATABASE ON BOTH NODES FROM 192.168.0.139
[Pages 33–40: step-by-step Oracle Database installer screenshots — images not included in this text extraction.]
ADMINISTERING THE GRID INFRASTRUCTURE ON BOTH RAC NODES (as the grid user)
su - grid
crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       kkcodb01                 STABLE
               ONLINE  ONLINE       kkcodb02                 STABLE
ora.asm
               OFFLINE OFFLINE      kkcodb01                 Instance Shutdown,STABLE
               OFFLINE OFFLINE      kkcodb02                 STABLE
ora.net1.network
               ONLINE  ONLINE       kkcodb01                 STABLE
               ONLINE  ONLINE       kkcodb02                 STABLE
ora.ons
               ONLINE  ONLINE       kkcodb01                 STABLE
               ONLINE  ONLINE       kkcodb02                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.LISTENER_SCAN4.lsnr
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       kkcodb01                 169.254.225.48 10.75.40.143,STABLE
ora.cvu
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.kkcodb01.vip
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.kkcodb02.vip
      1        ONLINE  ONLINE       kkcodb02                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       kkcodb01                 Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.oradb.db
      1        ONLINE  ONLINE       kkcodb01                 Open,STABLE
      2        ONLINE  ONLINE       kkcodb02                 Open,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.scan4.vip
      1        ONLINE  ONLINE       kkcodb01                 STABLE

srvctl status instance -db oradb -node kkcodb01
srvctl status instance -db oradb -node kkcodb02
Instance oradb1 is running on node kkcodb01
Instance oradb2 is running on node kkcodb02

srvctl status instance -d oradb -i oradb1
srvctl status instance -d oradb -i oradb2
Instance oradb1 is running on node kkcodb01
Instance oradb2 is running on node kkcodb02

SUMMARY OF THE MOST IMPORTANT COMMANDS TO START / STOP / CHECK CLUSTER RESOURCES
crsctl check crs
crsctl check cluster -n kkcodb01
crsctl check ctss
crsctl config crs (requires root)
cat /etc/oracle/scls_scr/rac1/root/ohasdstr
crsctl stat res -t
crsctl stat res ora.rac.db -p
crsctl stat res ora.rac.db -f
crsctl query css votedisk
olsnodes -n -i -s -t
oifcfg getif
ocrcheck
ocrcheck -local (requires root)
ocrconfig -showbackup
ocrconfig -add +TEST
cluvfy comp crs -n rac1
srvctl status database -d oradb
srvctl status instance -d oradb -i kkcodb01
srvctl status service -d oradb
srvctl status nodeapps
srvctl status vip -n kkcodb01
srvctl status listener -l LISTENER
srvctl status asm -n kkcodb01
srvctl status scan
srvctl status scan_listener
srvctl status server -n kkcodb01
srvctl status diskgroup -g DGRAC
srvctl config database -d oradb
srvctl config service -d oradb
srvctl config nodeapps
srvctl config vip -n kkcodb01
srvctl config asm -a
srvctl config listener -l LISTENER
srvctl config scan
srvctl config scan_listener

crsctl stop cluster
crsctl start cluster
crsctl stop crs
crsctl start crs
crsctl disable crs
crsctl enable crs

srvctl stop database -d oradb -o immediate
srvctl start database -d oradb
srvctl stop instance -d oradb -i kkcodb01 -o immediate
srvctl start instance -d oradb -i kkcodb01
srvctl stop service -d oradb -s OLTP -n kkcodb01
srvctl start service -d oradb -s OLTP
srvctl stop nodeapps -n kkcodb01
srvctl start nodeapps
srvctl stop vip -n rac1
srvctl start vip -n rac1
srvctl stop asm -n rac1 -o abort -f
srvctl start asm -n rac1
srvctl stop listener -l LISTENER
srvctl start listener -l LISTENER
srvctl stop scan -i 1
srvctl start scan -i 1
srvctl stop scan_listener -i 1
srvctl start scan_listener -i 1
srvctl stop diskgroup -g TEST -n kkcodb01,kkcodb02
srvctl start diskgroup -g TEST -n kkcodb01,kkcodb02
srvctl relocate service -d RAC -s OLTP -i kkcodb01 -t kkcodb02
srvctl relocate scan_listener -i 1 rac1
DELETING THE GRID INFRASTRUCTURE ON BOTH RAC NODES
Log in as the grid user.

Removing Grid:
$ORACLE_HOME/deinstall/deinstall

Removing a broken Grid installation:
On all cluster nodes except the last, run the following command as the "root" user.
perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force

Cleanup steps for cluster node removal:
rm -rf /app/*
#rm -rf /u01/* ; rm -rf /u02/* ; rm -rf /u03/*
rm -rf /etc/oracle/*
rm -rf /etc/oraInst.loc
rm -rf /etc/oratab
rm -rf /var/tmp/.oracle/*
rm -rf /home/oracle/.ssh/
rm -rf /home/grid/.ssh/
shutdown -r now

CONNECTING WITH ORACLE SQL DEVELOPER
[Page 45: Oracle SQL Developer connection screenshot — image not included in this text extraction.]