Oracle RAC 11gR2 INSTALLATION GUIDE
BY ALLAN
MAR, 29, 2011
EMC:
After fdisk, the following commands need to be run:
/etc/init.d/naviagent restart
partprobe
Disk layout:
+CRS        three 2 GB LUNs (at least 3 for a normal-redundancy ASM disk group)
+DGDATA     three 10 GB LUNs
+DGRECOVERY two 5 GB LUNs
NETWORK CONFIGURATION:
vi /etc/hosts
#public ip
172.28.250.220 rac1
172.28.250.221 rac2
#private IP (interconnect; the two nodes' private NICs are connected directly to each other)
10.10.10.211 rac1prv
10.10.10.212 rac2prv
#VIP (used for failover connections to the other RAC node)
172.28.250.188 rac1vip
172.28.250.189 rac2vip
#SCAN IP (required)
172.28.250.190 racscan
vi /etc/sysconfig/network-scripts/ifcfg-eth1 and ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
HWADDR=84:2b:2b:49:77:34
ONBOOT=yes
NETMASK=255.255.255.0
IPADDR=172.28.250.221
GATEWAY=172.28.250.253
TYPE=Ethernet
vi /etc/resolv.conf
nameserver 172.28.250.210
nameserver 202.96.209.133
Examples:
ifconfig eth1 up / ifconfig eth1 down
service network restart
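As a quick sanity check (a sketch, not part of the original steps; hostnames follow the /etc/hosts entries above), verify that each node can resolve and reach the other over both networks:
# run on rac1; repeat on rac2 against the rac1* names
ping -c 2 rac2          # public network
ping -c 2 rac2prv       # private interconnect
getent hosts racscan    # SCAN name resolution via /etc/hosts or DNS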
/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 dba
/usr/sbin/groupadd -g 503 oper
/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd -g 505 asmoper
/usr/sbin/groupadd -g 506 asmdba
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
passwd oracle   (password: ora123)
passwd grid     (password: grid123)
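A quick check (not in the original guide) that the group memberships came out right before creating the directories:
id grid     # expected groups: oinstall, asmadmin, asmdba, asmoper, oper, dba
id oracle   # expected groups: oinstall, dba, asmdba, oper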
mkdir /oracle/app/
chown -R grid:oinstall /oracle/app/
chmod -R 775 /oracle/app/
mkdir -p /oracle/app/oraInventory
chown -R grid:oinstall /oracle/app/oraInventory
chmod -R 775 /oracle/app/oraInventory
mkdir -p /oracle/app/grid
mkdir -p /oracle/app/oracle
chown -R grid:oinstall /oracle/app/grid
chown -R oracle:oinstall /oracle/app/oracle
chmod -R 775 /oracle/app/oracle
chmod -R 775 /oracle/app/grid
Modify system parameters:
vi /etc/security/limits.conf
#ORACLE SETTING
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
vi /etc/pam.d/login
#ORACLE SETTING
session required pam_limits.so
# vi /etc/sysctl.conf
#ORACLE SETTING
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
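After saving sysctl.conf, the new values can be loaded without a reboot:
/sbin/sysctl -p        # re-reads /etc/sysctl.conf and applies the settings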
Time-synchronization settings required by grid (a new prerequisite check in 11gR2)
#Network Time Protocol Setting
/sbin/service ntpd stop
chkconfig ntpd off
rm /etc/ntp.conf
or
mv /etc/ntp.conf /etc/ntp.conf.org
Reference: e10812, page 66:
Oracle® Grid Infrastructure
Installation Guide
11g Release 2 (11.2) for Linux
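To confirm NTP stays disabled across reboots, and (after the Grid install) that Oracle's Cluster Time Synchronization Service has taken over, checks like the following can be used; crsctl is only available once Clusterware is installed:
chkconfig --list ntpd    # every runlevel should show "off"
# after Grid Infrastructure is installed:
crsctl check ctss        # CTSS should be running in active mode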
vi /etc/fstab
Handling insufficient /dev/shm (shared memory)
Solution:
For example, to increase /dev/shm to 1 GB, modify this line in /etc/fstab. Default:
none /dev/shm tmpfs defaults 0 0
Change it to:
none /dev/shm tmpfs defaults,size=1024m 0 0
The size parameter can also be given in gigabytes: size=1G.
Remount /dev/shm for the change to take effect:
# mount -o remount /dev/shm
Or:
# umount /dev/shm
# mount -a
The change can be checked immediately with "df -h".
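Illustrative check, assuming the size=1024m setting above:
df -h /dev/shm
# Filesystem            Size  Used Avail Use% Mounted on
# none                  1.0G     0  1.0G   0% /dev/shm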
Reference: e10812, pages 139-140:
MEMORY_TARGET not supported on this system
Cause: On Linux systems, insufficient /dev/shm size for PGA and SGA.
If you are installing on a Linux system, note that Memory Size (SGA and PGA),
which sets the initialization parameter MEMORY_TARGET or MEMORY_MAX_
TARGET, cannot be greater than the shared memory file system (/dev/shm) on
your operating system.
Action: Increase the /dev/shm mountpoint size. For example:
# mount -t tmpfs shmfs -o size=4g /dev/shm
Also, to make this change persistent across system restarts, add an entry in
/etc/fstab similar to the following:
shmfs /dev/shm tmpfs size=4g 0
grid user environment variables:
#grid user profile; set ORACLE_HOSTNAME as appropriate for your host
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=/oracle/app/grid/product/11.2.0; export ORACLE_HOME
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
THREADS_FLAG=native; export THREADS_FLAG
PATH=$ORACLE_HOME/bin:$PATH; export PATH
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
#oracle user profile; set ORACLE_HOSTNAME as appropriate for your host
# Oracle Settings oracle
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0; export ORACLE_HOME
# Each RAC node must have a unique ORACLE_SID. (i.e. racdb1, racdb2,...)
ORACLE_SID=racdb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK; export NLS_LANG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
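A quick way (not in the original guide) to confirm the profiles are picked up on each node, assuming the settings above live in each user's ~/.bash_profile; the expected values differ per node (+ASM1/+ASM2, racdb1/racdb2):
su - grid   -c 'echo $ORACLE_SID $ORACLE_HOME; ulimit -n'
su - oracle -c 'echo $ORACLE_SID $ORACLE_HOME; ulimit -n'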
Configure trust relationships (set up SSH user equivalence)
(The network must be configured first)
#for ‘grid’ user on both nodes
su - grid
mkdir ~/.ssh
chmod 700 ~/.ssh
# Accept the default settings.
ssh-keygen -t rsa
ssh-keygen -t dsa
#on rac1, scp to another nodes
[grid@rac1 ~]$ cat ~/.ssh/id_rsa.pub >> ./.ssh/authorized_keys
[grid@rac1 ~]$ cat ~/.ssh/id_dsa.pub >> ./.ssh/authorized_keys
[grid@rac1 ~]$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@rac1 ~]$ ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
# input ‘yes’ and grid’s password
[grid@rac1 ~]$ scp ./.ssh/authorized_keys rac2:/home/grid/.ssh/
[grid@rac1 ~]$ ssh rac1 date
[grid@rac1 ~]$ ssh rac2 date
[grid@rac1 ~]$ ssh rac1prv date
[grid@rac1 ~]$ ssh rac2prv date
#on rac2
[grid@rac2 ~]$ ssh rac1 date
[grid@rac2 ~]$ ssh rac2 date
[grid@rac2 ~]$ ssh rac1prv date
[grid@rac2 ~]$ ssh rac2prv date
#same action for ‘oracle’ user on both nodes
su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
# Accept the default settings.
ssh-keygen -t rsa
ssh-keygen -t dsa
#on rac1, scp to another nodes
[oracle@rac1 ~]$ cat ~/.ssh/id_rsa.pub >> ./.ssh/authorized_keys
[oracle@rac1 ~]$ cat ~/.ssh/id_dsa.pub >> ./.ssh/authorized_keys
[oracle@rac1 ~]$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@rac1 ~]$ ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
# input ‘yes’ and oracle’s password
[oracle@rac1 ~]$ scp ./.ssh/authorized_keys rac2:/home/oracle/.ssh/
[oracle@rac1 ~]$ ssh rac1 date
[oracle@rac1 ~]$ ssh rac2 date
[oracle@rac1 ~]$ ssh rac1prv date
[oracle@rac1 ~]$ ssh rac2prv date
#on rac2
[oracle@rac2 ~]$ ssh rac1 date
[oracle@rac2 ~]$ ssh rac2 date
[oracle@rac2 ~]$ ssh rac1prv date
[oracle@rac2 ~]$ ssh rac2prv date
NOTES: Enabling SSH User Equivalency for the Current Shell Session,
When running OUI, the secure shell tools (ssh and scp) must run without prompting for a passphrase. Even though SSH is
configured on both Oracle RAC nodes in the cluster, these commands may still prompt for a passphrase unless the keys
are loaded into ssh-agent.
[oracle@racnode1 .ssh]$ exec /usr/bin/ssh-agent $SHELL
[oracle@racnode1 .ssh]$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
These commands start the ssh-agent on the local node, and load the RSA and
DSA keys into the current session’s memory so that you are not prompted to use
pass phrases when issuing SSH commands.
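Before starting OUI, a loop like this (illustrative) confirms passwordless SSH over every hostname the installer will use; no password or passphrase prompt should appear:
for h in rac1 rac2 rac1prv rac2prv; do
  ssh $h "date; hostname"
done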
Install ASMLib
http://www.oracle.com/technology/tech/linux/asmlib/
For x86_64, the matching x86_64 RPMs need to be downloaded.
rpm -ivh --replacefiles / --replacepkgs (use these options only if a reinstall over existing packages is needed)
This installation needs to be performed on both nodes in the RAC cluster as the root user account:
1. Errors are logged in /var/log/oracleasm (cat /var/log/oracleasm); 'oracleasm update-driver' can update the driver online.
2. SELinux needs to be disabled: check /etc/selinux/config
rpm -ivh *.rpm
[root@rac1 /]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
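The driver state can be rechecked at any time with the standard ASMLib commands:
/etc/init.d/oracleasm status     # shows whether the driver is loaded and /dev/oracleasm is mounted
lsmod | grep oracleasm           # kernel module present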
Create the ASM disks
Creating the ASM disks only needs to be done on one node in the
RAC cluster as the root user account. I will be running these
commands on rac1. On the other Oracle RAC node, you will need to
perform a scandisk to recognize the new volumes.
[root@rac1 /]# /etc/init.d/oracleasm createdisk CRS1 /dev/emcpowera1
Marking disk "CRS1" as an ASM disk: [ OK ]
[root@rac1 /]# /etc/init.d/oracleasm createdisk CRS2 /dev/emcpowera2
Marking disk "CRS2" as an ASM disk: [ OK ]
...
[root@rac1 /]# /etc/init.d/oracleasm createdisk DATA1 /dev/emcpowera5
Marking disk "DATA1" as an ASM disk: [ OK ]
[root@rac1 /]# /etc/init.d/oracleasm createdisk DATA2 /dev/emcpowera6
Marking disk "DATA2" as an ASM disk: [ OK ]
[root@rac1 /]# /etc/init.d/oracleasm createdisk DATA3 /dev/emcpowera7
Marking disk "DATA3" as an ASM disk: [ OK ]
[root@rac1 /]# /etc/init.d/oracleasm createdisk REC1 /dev/emcpowera8
Marking disk "REC1" as an ASM disk: [ OK ]
[root@rac1 /]# /etc/init.d/oracleasm createdisk REC2 /dev/emcpowera9
Marking disk "REC2" as an ASM disk: [ OK ]
On all other nodes in the RAC cluster, you must perform a scandisk to recognize the new volumes:
[root@rac2 /]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac2 /]# /etc/init.d/oracleasm listdisks
CRS1
CRS2
CRS3
DATA1
DATA2
DATA3
REC1
REC2
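To double-check how a label maps back to a device (handy if the EMC power devices get renumbered), querydisk can be run on either node; the disk names follow the ones created above:
/etc/init.d/oracleasm querydisk CRS1
ls -l /dev/oracleasm/disks/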
Install and verify the cvuqdisk package
The cvuqdisk RPM is included in the rpm directory on the Oracle Grid Infrastructure installation media.
su - root
cd /$grid_install_folder/rpm/
[root@rac1 rpm]# export CVUQDISK_GRP=oinstall
[root@rac1 rpm]# rpm -e cvuqdisk
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.7-1.rpm
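A quick check that the package landed (the version may differ with your media):
rpm -qa | grep cvuqdisk      # e.g. cvuqdisk-1.0.7-1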
Verify with CVU that the Oracle Clusterware requirements are met.
Remember to run it as the grid user on the node from which the Oracle installation will be performed (rac1). SSH user
equivalence must already be configured for the grid user.
[If another Oracle database is already installed on this machine, you may have to check/modify the /etc/oraInst.loc file]
inventory_loc=/oracle/app/oraInventory
inst_group=oinstall
su - grid
cd /$grid_install_folder/
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
The above must be run as the grid user, not root. While it runs, you may get an error that /tmp/bootstrap cannot be deleted; delete or rename that directory as root.
Verify the hardware and operating system setup with CVU
./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose
[grid@rac1 grid]$ ./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose
With Oracle Clusterware 11g release 2, Oracle Universal Installer (OUI) detects when
the minimum requirements for an installation are not met, and creates shell scripts,
called fixup scripts, to finish incomplete system configuration steps. If OUI detects an
incomplete task, then it generates fixup scripts (runfixup.sh). You can run the fixup
script after you click the Fix and Check Again button.
You also can have CVU generate fixup scripts before installation
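For example (a sketch; the exact path is printed by CVU/OUI and varies), the generated script is run as root on every node it lists:
# path reported by CVU/OUI, typically under /tmp; <version> is a placeholder
/tmp/CVU_<version>_grid/runfixup.sh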
Install grid infrastructure on one NODE:
su - grid
./runInstaller
SCAN configuration:
cluster scan: sanclusters
SCAN name: racscan
SCAN port: 1521
SYSASM password: asm123
(must be consistent with the grid user environment settings)
Disable the firewall and SELinux:
service iptables stop
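To keep both the firewall and SELinux off across reboots (standard RHEL/OEL commands, not from the original text):
chkconfig iptables off
setenforce 0                                                    # immediate
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # persistent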
Apart from the firewall, failures while copying the installation files can also be caused by the /tmp directory on the
second node.
-- make sure you have same temporary directory name across the nodes.
-- do a chown -R grid:oinstall /tmp across all the nodes.
lsof -i
/forums/?messageID=9309296
/oracle/app/oraInventory/logs/installActions2011-04-01_
[root@rac2 ~]# /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 ~]# /oracle/app/grid/product/11.2.0/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/app/grid/product/11.2.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of script.
Now product-specific root actions will be performed.
2011-04-01 14:48:44: Parsing the host name
2011-04-01 14:48:44: Checking for super user privileges
2011-04-01 14:48:44: User has super user privileges
Using configuration parameter file: /oracle/app/grid/product/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
[CRS-2672/CRS-2676: the Clusterware stack resources are started one by one on 'rac2'; the individual resource names were lost in this paste.]
ASM created and started successfully.
DiskGroup CRS created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
[CRS-2672/CRS-2676: another resource is started on 'rac2'; the name was lost in this paste.]
CRS-4256: Updating the profile
Successful addition of voting disk 9a80b90fe43b4f0cbf4b8ca66ae2fe4b.
Successful addition of voting disk 1c930b3fe2654f07bf4ed841228326fc.
Successful addition of voting disk 00d02a5c24494f9bbfba7799e31761f3.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 9a80b90fe43b4f0cbf4b8ca66ae2fe4b (ORCL:CRS1) [CRS]
2. ONLINE 1c930b3fe2654f07bf4ed841228326fc (ORCL:CRS2) [CRS]
3. ONLINE 00d02a5c24494f9bbfba7799e31761f3 (ORCL:CRS3) [CRS]
Located 3 voting disk(s).
[CRS-2673/CRS-2677: the Clusterware stack is stopped on 'rac2', then CRS-2672/CRS-2676: the full stack is started again; the individual resource names were lost in this paste.]
rac2 2011/04/01 14:54:27
/oracle/app/grid/product/11.2.0/cdata/rac2/backup_20110401_
Preparing packages
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 16386 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.
Deinstall:
[Oracle RAC] Uninstalling the Oracle 11g RAC Grid software
[grid@rac1 ~]$ cd /oracle/app/grid/product/11.2.0/deinstall/
[grid@rac1 deinstall]$ ./deinstall
The database was already dropped with DBCA; now remove the Oracle RAC database software with the Oracle installer.
Edit a my_ response file with the following content:
RESPONSEFILE_VERSION=2.2.1.0.0
UNIX_GROUP_NAME=oinstall
FROM_LOCATION="/data/database/stage/"
ORACLE_BASE="/data/oracle"
ORACLE_HOME="/data/oracle/product/11.1/database"
ORACLE_HOME_NAME="OraDb11g_home1"
TOPLEVEL_COMPONENT={"oracle.server","11.1.0.6.0"}
DEINSTALL_LIST={"oracle.server","11.1.0.6.0"}
REMOVE_HOMES="/data/oracle/product/11.1/database"
CLUSTER_NODES="newtrade1","newtrade2"
Then remove it with the runInstaller command:
$ ./runInstaller -removeallfiles -silent -deinstall -responseFile
/data/database/response/my_
1. Run the rootdelete.sh script and then the rootdeinstall.sh script from the
$ORA_CRS_HOME/install directory on any nodes you are removing CRS from.
(deprecated)
2. Stop the Nodeapps on all nodes:
srvctl stop nodeapps -n <node_name>
RAC maintenance :
crs_stat
crsctl check crs
crsctl stat res -t -init
crsctl start res -init
olsnodes -n
crsctl start crs
cluvfy stage -post crsinst -n all
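A few more day-to-day checks that complement the list above (standard srvctl/crsctl syntax; racdb assumes the database named earlier has been created):
srvctl status database -d racdb
srvctl status nodeapps
srvctl status scan_listener
crsctl query css votedisk
ocrcheck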
[root@rac2 install]# cd /oracle/app/grid/product/11.2.0/crs/install
[root@rac2 install]# ./rootcrs.pl -deconfig
# /oracle/grid/crs/install/rootcrs.pl -delete -force -verbose
/oracle/app/grid/product/11.2.0/
[grid@rac1 ~]$ cluvfy stage -pre nodeadd -n rac2 -fixup -verbose
The shared directories were created on rac2.
[grid@rac1 bin]$ cd /oracle/app/grid/product/11.2.0/oui/bin/
./addNode.sh -silent "CLUSTER_NEW_NODES={node2,node3}"
cluvfy stage -post nodeadd -n node2 [-verbose]
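After addNode.sh completes, the installer prompts for the root scripts on the new node before the post-nodeadd check; the paths below follow the layout used in this guide:
# as root on the node being added
/oracle/app/oraInventory/orainstRoot.sh
/oracle/app/grid/product/11.2.0/root.sh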