14. Configure the PPAS environment (both)
vi /etc/profile
vi ~/.bashrc
Add the following line to both files:
source /opt/PostgresPlus/9.4AS/pgplus_env.sh
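After re-logging in (or sourcing the file manually), a quick sanity check can confirm the PPAS binaries are on the PATH; this assumes pgplus_env.sh sets up PATH as usual:
which psql
psql --version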
* init db (node1)
su - enterprisedb
initdb /var/lib/ppas/data
or
initdb /var/lib/pgsql/data
exit
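(Optional) A quick way to confirm that initdb created the cluster is to list the data directory; use the path chosen above:
ls -l /var/lib/ppas/data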
15. enable trusted authentication for the node and cluster IPs (node1)
(Change the IP addresses below to match your own environment)
echo "host all all 192.168.21.148/32 trust" >> /var/lib/ppas/data/pg_hba.conf
echo "host all all 192.168.21.149/32 trust" >> /var/lib/ppas/data/pg_hba.conf
echo "host all all 192.168.21.150/32 trust" >> /var/lib/ppas/data/pg_hba.conf
or
echo "host all all 192.168.21.161/32 trust" >> /var/lib/pgsql/data/pg_hba.conf
echo "host all all 192.168.21.162/32 trust" >> /var/lib/pgsql/data/pg_hba.conf
echo "host all all 192.168.21.160/32 trust" >> /var/lib/pgsql/data/pg_hba.conf
* enable PPAS to listen on all interfaces (node1)
vi /var/lib/pgsql/data/postgresql.conf
Modify it as follows:
listen_addresses = '*'
* start ppas (node1)
If it is already running, stop it first with the following commands:
pkill edb (both)
pkill pgagent (both)
/etc/init.d/ppas-9.4 start (node1)
If the following error occurs:
waiting for server to start....sh: /var/lib/ppas/data/pg_log/startup.log: No such file or directory
Create the missing directory (use the data directory path shown in the error) and fix the ownership (both):
mkdir -p /var/lib/pgsql/data/pg_log/
chown -R enterprisedb:enterprisedb /var/lib/pgsql
Start PPAS again:
/etc/init.d/ppas-9.4 start (node1)
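(Optional) Once the server is up, you can confirm it is listening on all interfaces, for example with netstat; this assumes the default PPAS port 5444 (5432 for community PostgreSQL):
netstat -plnt | grep -E '5444|5432'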
* create an admin user to manage ppas (node1)
su - enterprisedb
createuser --superuser admpgsql --pwprompt
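(Optional) You can verify the new role was created, assuming local connections as the enterprisedb OS user are allowed by the default pg_hba.conf (the default database is edb on PPAS, postgres on community PostgreSQL):
psql -d edb -c '\du'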
* create a database and populate it with pgbench (node1)
createdb pgbench
pgbench -i pgbench
* pgbench populates the database with some sample data; the objective is simply to have something to test PPAS with (node1)
* check the pgbench data (node1)
psql -U admpgsql -d pgbench
select * from pgbench_tellers;
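(Optional) After quitting psql (\q), a short benchmark run against the tables pgbench just created gives a slightly heavier smoke test; the client and transaction counts below are arbitrary:
pgbench -U admpgsql -c 4 -t 100 pgbench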
* all ppas config is done.
* Checking if PPAS will work on node2
* on node1, we need to stop ppas (node1)
/etc/init.d/ppas-9.4 stop
umount /dev/drbd0
drbdadm secondary postgres
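(Optional) Before promoting node2, you can confirm that node1 has dropped back to Secondary, for example:
drbdadm role postgres
cat /proc/drbd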
* we need to promote node2 to Primary on the DRBD resource (node2):
drbdadm primary postgres
mount -t ext4 /dev/drbd0 /var/lib/pgsql/
/etc/init.d/ppas-9.4 start
* Let's check if we can access the pgbench db on node 2:
psql -U admpgsql -d pgbench
select * from pgbench_tellers;
pgbench=# select * from pgbench_tellers;
 tid | bid | tbalance | filler
-----+-----+----------+--------
   1 |   1 |        0 |
   2 |   1 |        0 |
   3 |   1 |        0 |
   4 |   1 |        0 |
   5 |   1 |        0 |
   6 |   1 |        0 |
   7 |   1 |        0 |
   8 |   1 |        0 |
   9 |   1 |        0 |
  10 |   1 |        0 |
(10 rows)
Now that everything is OK, we should stop all the services to begin the cluster configuration:
node 2:
/etc/init.d/ppas-9.4 stop
umount /dev/drbd0
drbdadm secondary postgres
/etc/init.d/drbd stop
node 1:
drbdadm primary postgres
/etc/init.d/drbd stop
* ensure that none of the services start automatically at boot. (both)
chkconfig --level 35 drbd off
chkconfig --level 35 ppas-9.4 off
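(Optional) A quick check that the runlevel settings took effect:
chkconfig --list drbd
chkconfig --list ppas-9.4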
* Configuring Corosync
node 1:
cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
vi /etc/corosync/corosync.conf
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                # set this to the network address of the cluster interconnect
                # (e.g. 192.168.21.0 for the node addresses used above)
                bindnetaddr: 192.168.1.0
                mcastaddr: 239.255.1.1
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
        to_syslog: yes
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

aisexec {
        user: root
        group: root
}

service {
        # Load the Pacemaker Cluster Resource Manager
        name: pacemaker
        ver: 0
}
* From node1, we'll transfer the corosync config files to node2:
scp /etc/corosync/corosync.conf node2:/etc/corosync/
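(Optional) Before moving on to the cluster resource configuration, you can start Corosync on both nodes and check the ring status; this assumes the corosync init script and the corosync-cfgtool utility were installed along with the packages above:
/etc/init.d/corosync start (both)
corosync-cfgtool -s (both)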