Tuesday, April 28, 2015

PPAS (Postgres Plus Advanced Server) HA configuration with Corosync, Pacemaker, DRBD - step 6

* Pacemaker configuration (Method 2)

* Configuring Stonith

* Disable Stonith (node 1)
crm configure property stonith-enabled=false

* Check the cluster configuration; it should report no errors:
crm_verify -L

* Cluster General Configuration

* Configure the quorum policy for a 2-node cluster. For more information, see the Pacemaker documentation.
crm configure property no-quorum-policy=ignore

crm configure rsc_defaults resource-stickiness=100

crm configure show

* Configuring DBIP
crm configure primitive DBIP ocf:heartbeat:IPaddr2 params ip=192.168.21.160 cidr_netmask=24 op monitor interval=30s

* Configuring DRBD on Cluster
crm configure primitive drbd_postgres ocf:linbit:drbd params drbd_resource="postgres" op monitor interval="15s"

crm configure ms ms_drbd_postgres drbd_postgres meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

crm configure primitive postgres_fs ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/var/lib/pgsql" fstype="ext4"

* Configuring PostgreSQL on Cluster
crm configure primitive postgresql ocf:heartbeat:pgsql op monitor depth="0" timeout="30" interval="30"
-> The above is for stock PostgreSQL; for PPAS the resource probably needs to be defined as follows:
crm configure primitive postgresql ocf:heartbeat:pgsql params \
 pgctl=/opt/PostgresPlus/9.4AS/bin/pg_ctl \
 psql=/opt/PostgresPlus/9.4AS/bin/psql \
 pgdata=/var/lib/pgsql/data pgdba=enterprisedb pgport=5444 pgdb=edb \
 op monitor depth="0" timeout="30" interval="10" \
 op start timeout=120 interval=0 \
 op stop timeout=120 interval=0
(Using ocf:heartbeat:pgsql above means the defaults in /usr/lib/ocf/resource.d/heartbeat/pgsql are applied, so open that file on both nodes and check that the values are correct.)
The following 6 lines will probably need to be changed:
OCF_RESKEY_pgctl_default=/opt/PostgresPlus/9.4AS/bin/pg_ctl
OCF_RESKEY_psql_default=/opt/PostgresPlus/9.4AS/bin/psql
OCF_RESKEY_pgdata_default=/var/lib/pgsql/data
OCF_RESKEY_pgdba_default=enterprisedb
OCF_RESKEY_pghost_default=""
OCF_RESKEY_pgport_default=5444
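To double-check the agent parameters on each node (a quick sketch; crm ra info comes with crmsh, and the grep simply inspects the agent script):
crm ra info ocf:heartbeat:pgsql | head -30
grep "OCF_RESKEY_.*_default=" /usr/lib/ocf/resource.d/heartbeat/pgsql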

crm configure group postgres postgres_fs DBIP postgresql

crm configure colocation postgres_on_drbd inf: postgres ms_drbd_postgres:Master

crm configure order postgres_after_drbd inf: ms_drbd_postgres:promote postgres:start

crm configure show

* Setting the Preferential Node
crm configure location master-prefer-node1 DBIP 50: cos1.local

* Cluster management

* Accessing from the network
echo "host all all 192.168.21.0/24 md5">> /var/lib/pgsql/data/pg_hba.conf

* Restart postgres to reload the configuration:
crm resource stop postgresql
crm resource start postgresql

There was a problem, so I deleted the postgresql resource and registered it again with the following command:
crm configure primitive postgresql ocf:heartbeat:pgsql params \
 pgctl=/opt/PostgresPlus/9.4AS/bin/pg_ctl \
 psql=/opt/PostgresPlus/9.4AS/bin/psql \
 pgdata=/var/lib/ppas/data pgdba=enterprisedb pgport=5444 pgdb=edb \
 op monitor depth="0" timeout="30" interval="10" \
 op start timeout=120 interval=0 \
 op stop timeout=120 interval=0

* # crm resource cleanup [resource ]
(Running this on one node applies to all nodes, but if it does not work properly, run the command on every node.)
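For example, to clean up the postgresql resource defined above:
crm resource cleanup postgresql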

* If cat /proc/drbd shows Primary/Unknown (split brain), recover as follows.
On primary node
    drbdadm connect all
On secondary node
    drbdadm -- --discard-my-data connect all

Should work
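As a quick check after reconnecting, /proc/drbd on either node should again show cs:Connected and ro:Primary/Secondary (or Secondary/Primary), as in the step-3 output:
cat /proc/drbd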

PPAS (Postgres Plus Advanced Server) HA configuration with Corosync, Pacemaker, DRBD - step 5

* Check iptables (both)
vi /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m udp -p udp --dport 5404 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 5405 -j ACCEPT

mkdir /var/log/cluster (both)
service iptables restart (both)
/etc/init.d/corosync start (node1)

* check if the service is ok (node 1)
grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/messages

* check if corosync started on the right interface (node 1)
grep TOTEM /var/log/messages

* check if pacemaker is up (node 1)
grep pcmk_startup /var/log/messages

* check if the corosync process is up (node 1)
ps aux | grep corosync

* if everything is ok on node1, then we can bring corosync up on node2:

/etc/init.d/corosync start

* check the status of the cluster. Running on any node, the following command:
crm_mon -1

* Error: 1 nodes configured, 2 expected votes
-> Modify /etc/corosync/corosync.conf as follows:

# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.21.0
                mcastaddr: 226.94.1.1
                mcastport: 4000
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
        to_syslog: yes
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}
aisexec {
        user: root
        group: root
}
service {
        # Load the Pacemaker Cluster Resource Manager
        name: pacemaker
        ver: 0
}

* Set Corosync to automatic initialization (both nodes)
chkconfig --level 35 corosync on

* Pacemaker configuration
* Install crmsh (both)
-> The following error occurred:
error: Failed dependencies:
        pssh is needed by crmsh-2.1-1.6.x86_64
        python-dateutil is needed by crmsh-2.1-1.6.x86_64
        python-lxml is needed by crmsh-2.1-1.6.x86_64
        redhat-rpm-config is needed by crmsh-2.1-1.6.x86_64
-> yum -y install pssh python-dateutil python-lxml redhat-rpm-config
and then retry the installation.
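For example (a sketch; the rpm filename is taken from the dependency error above and assumes the package was downloaded into the current directory):
rpm -Uvh crmsh-2.1-1.6.x86_64.rpm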

Cluster general configuration (node1) (Method 1)
crm configure property stonith-enabled=false
crm configure property no-quorum-policy=ignore
crm configure rsc_defaults resource-stickiness=100

Cluster resource configuration (node1)
crm configure primitive DBIP ocf:heartbeat:IPaddr2 params ip=192.168.21.144 cidr_netmask=24 op monitor interval=30s

crm_mon -1
Checking the registration status with the command above shows the registered resource as follows.
[root@cos1 yum.repos.d]# crm_mon -1
Last updated: Tue Apr 21 03:30:03 2015
Last change: Tue Apr 21 03:29:57 2015
Stack: classic openais (with plugin)
Current DC: cos2.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
1 Resources configured


Online: [ cos1.local cos2.local ]

 DBIP   (ocf::heartbeat:IPaddr2):       Started cos1.local

* [DRBD on cluster] (node1)
Next, proceed with the DRBD configuration in the following order.

crm configure primitive drbd_postgres ocf:linbit:drbd params drbd_resource="postgres" op monitor interval="15s"

crm configure ms ms_drbd_postgres drbd_postgres meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

crm configure primitive postgres_fs ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/var/lib/ppas" fstype="ext4"

* [PostgreSQL on cluster] (node1)
Register PostgreSQL as a resource as well.

crm configure primitive postgresql ocf:heartbeat:pgsql op monitor depth="0" timeout="30" interval="30"

* [Resource Grouping]
Group the PostgreSQL-related resources together and give them an ordering.

crm configure group postgres postgres_fs DBIP postgresql

crm configure colocation postgres_on_drbd inf: postgres ms_drbd_postgres:Master

crm configure order postgres_after_drbd inf: ms_drbd_postgres:promote postgres:start

crm configure location master-prefer-node1 DBIP 50: cos1.local
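At this point the whole stack can be verified with the same commands used earlier in this post:
crm configure show
crm_mon -1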

* When deleting a crm resource, refer to the link below
https://www.suse.com/documentation/sle_ha/book_sleha/data/sec_ha_config_crm.html


PPAS (Postgres Plus Advanced Server) HA configuration with Corosync, Pacemaker, DRBD - step 4

14. PPAS environment setup (both)
vi /etc/profile
vi ~/.bashrc
Add the following line:
source /opt/PostgresPlus/9.4AS/pgplus_env.sh
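As a quick sanity check (a sketch; pgplus_env.sh prepends the PPAS bin directory to PATH), the enterprisedb user should resolve the PPAS binaries under /opt/PostgresPlus/9.4AS/bin:
su - enterprisedb -c 'which psql pg_ctl'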

* init db (node1)
su - enterprisedb
initdb /var/lib/ppas/data
or
initdb /var/lib/pgsql/data
exit

15. enable trusted authentication on the nodes and cluster IP's (node1)
(Adjust the IPs below to match your environment.)
echo "host all all 192.168.21.148/32 trust" >> /var/lib/ppas/data/pg_hba.conf
echo "host all all 192.168.21.149/32 trust" >> /var/lib/ppas/data/pg_hba.conf
echo "host all all 192.168.21.150/32 trust" >> /var/lib/ppas/data/pg_hba.conf
or
echo "host all all 192.168.21.161/32 trust" >> /var/lib/pgsql/data/pg_hba.conf
echo "host all all 192.168.21.162/32 trust" >> /var/lib/pgsql/data/pg_hba.conf
echo "host all all 192.168.21.160/32 trust" >> /var/lib/pgsql/data/pg_hba.conf

* enable PPAS to listen on all interfaces (node1)
vi /var/lib/pgsql/data/postgresql.conf
Modify it as follows:
listen_addresses = '*'

* start ppas (node1)
If it is already running, stop it with the following commands:
pkill edb (both)
pkill pgagent (both)

/etc/init.d/ppas-9.4 start (node1)
If the following error occurs:
waiting for server to start....sh: /var/lib/ppas/data/pg_log/startup.log: No such file or directory
create the directory (both):
mkdir -p /var/lib/pgsql/data/pg_log/
chown -R enterprisedb:enterprisedb /var/lib/pgsql
Start PPAS again:
/etc/init.d/ppas-9.4 start (node1)

* create an admin user to manage ppas (node1)
su - enterprisedb
createuser --superuser admpgsql --pwprompt

* create a database and populate it with pgbench (node1)
createdb pgbench
pgbench -i pgbench

* Pgbench populates the db with some data; the objective is to test ppas (node1)
pgbench -i pgbench

* (node1)
psql -U admpgsql -d pgbench
select * from pgbench_tellers;

* all ppas config is done.

* Checking if PPAS will work on node2

* on node1, we need to stop ppas (node1)

/etc/init.d/ppas-9.4 stop
umount /dev/drbd0
drbdadm secondary postgres

* Then we promote node2 to Primary on the DRBD resource (node2):
drbdadm primary postgres
mount -t ext4 /dev/drbd0 /var/lib/pgsql/
/etc/init.d/ppas-9.4 start

* Let's check if we can access the pgbench db on node 2:
psql -U admpgsql -d pgbench
select * from pgbench_tellers;

pgbench=# select * from pgbench_tellers;
 tid | bid | tbalance | filler
-----+-----+----------+--------
   1 |   1 |        0 |
   2 |   1 |        0 |
   3 |   1 |        0 |
   4 |   1 |        0 |
   5 |   1 |        0 |
   6 |   1 |        0 |
   7 |   1 |        0 |
   8 |   1 |        0 |
   9 |   1 |        0 |
  10 |   1 |        0 |
(10 rows)

Now that everything is OK, we should stop all the services to begin the cluster configuration:

node 2:
/etc/init.d/ppas-9.4 stop
umount /dev/drbd0
drbdadm secondary postgres
/etc/init.d/drbd stop

node 1:
drbdadm primary postgres
/etc/init.d/drbd stop

* ensure that all the services are disabled at the initialization. (both)
chkconfig --level 35 drbd off
chkconfig --level 35 ppas-9.4 off
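To confirm, the chkconfig listing should show both services off in all runlevels:
chkconfig --list | grep -E 'drbd|ppas-9.4'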

* Configuring Corosync
node 1:
cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf

vi /etc/corosync/corosync.conf

compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0
                mcastaddr: 239.255.1.1
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
        to_syslog: yes
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
mode: disabled
}

aisexec {
user: root
group: root
}

service {
# Load the Pacemaker Cluster Resource Manager
name: pacemaker
ver: 0
}

* From node1, we'll transfer the corosync config files to node2:
scp /etc/corosync/corosync.conf node2:/etc/corosync/



PPAS (Postgres Plus Advanced Server) HA configuration with Corosync, Pacemaker, DRBD - step 3

10. Configuring DRBD (both)
vi /etc/drbd.conf
(Adjust the IP addresses to match your environment.)
global {
    usage-count no;
}
common {
    syncer { rate 100M; }
    protocol      C;
}
resource postgres {
    startup {
        wfc-timeout 30;
        outdated-wfc-timeout 20;
        degr-wfc-timeout 30;
    }
    disk { on-io-error detach; }
    on cos1.local {
       device      /dev/drbd0;
       disk        /dev/sdb;
       address     10.0.0.139:7791;
       meta-disk   internal;
    }
    on cos2.local {
       device      /dev/drbd0;
       disk        /dev/sdb;
       address     10.0.0.140:7791;
       meta-disk   internal;
    }
}
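The configuration can be syntax-checked on both nodes; drbdadm dump parses /etc/drbd.conf and prints the resource back:
drbdadm dump postgres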

10.1 Modify iptables
vi /etc/sysconfig/iptables
Add the following rules:
-A INPUT -m state --state NEW -m udp -p udp --dport 5404 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 5405 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5444 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 7791 -j ACCEPT

service iptables restart

11. If using a VM, add a second hard disk (both)
: add the disk that will appear as /dev/sdb
Check that it is visible with fdisk -l

12. Create the metadata (both)
Create metadata for the resource named postgres declared above.
drbdadm create-md postgres

drbdadm up postgres
-> On one node the following error appeared:
postgres: Failure: (102) Local address(port) already in use.
Command 'drbdsetup-84 connect postgres ipv4:10.0.0.139:7791 ipv4:10.0.0.140:7791 --protocol=C' terminated with exit code 10
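One way to check what is holding the port (an assumption about the cause; 7791 is the DRBD port defined in /etc/drbd.conf above). If the drbd service already brought the resource up, the error can be ignored or the service restarted cleanly:
netstat -tln | grep 7791
/etc/init.d/drbd status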


node1:
drbdadm -- --overwrite-data-of-peer primary postgres

* To check the progress of the sync, and status of DRBD resource, look at /proc/drbd (both)
cat /proc/drbd

Wait until the sync reaches 100%.
Once it completes, cat /proc/drbd on node 1 and node 2 should show output like the following.
node 1:
[root@cos1 ppas]# cat /proc/drbd
version: 8.4.6 (api:1/proto:86-101)
GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by phil@Build64R6, 2015-04-09 14:35:00
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:3145596 nr:0 dw:0 dr:3146268 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

node 2:
[root@cos2 ~]# cat /proc/drbd
version: 8.4.6 (api:1/proto:86-101)
GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by phil@Build64R6, 2015-04-09 14:35:00
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:3145596 dw:3145596 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

13. Format the DRBD device (node1)
mkfs.ext4 /dev/drbd0

* mount (node1)
mount -t ext4 /dev/drbd0 /var/lib/ppas
or
mount -t ext4 /dev/drbd0 /var/lib/pgsql

* chown (node1)
chown -R enterprisedb:enterprisedb /var/lib/ppas
or
chown -R enterprisedb:enterprisedb /var/lib/pgsql



PPAS (Postgres Plus Advanced Server) HA configuration with Corosync, Pacemaker, DRBD - step 2

6. Configuring Initialization options

I like to set runlevel to 3.
vi /etc/inittab

Change this line only (leaving everything else untouched):
id:3:initdefault:
I like to remove some services from automatic initialization, keeping only the services that will really be used.
These are the active services that we'll need:
chkconfig --list | grep 3:on

6.1 At this point, we need to reboot both nodes to apply configuration.

7. Proxy configuration
vi /etc/profile
Add the following:
# proxy
http_proxy="http://a.b.c.d:8080/"
https_proxy="http://a.b.c.d:8080/"
no_proxy="localhost,127.0.0.1,192.168.21.140,127.0.0.0/8,127.0.1.1,10.0.0.0/8,192.168.0.0/16"

export http_proxy
export https_proxy
export no_proxy

7.1 Proxy configuration for root
vi /root/.bashrc
Add the following:
# proxy
http_proxy="http://a.b.c.d:8080/"
https_proxy="http://a.b.c.d:8080/"
no_proxy="localhost,127.0.0.1,192.168.21.140,127.0.0.0/8,127.0.1.1,10.0.0.0/8,192.168.0.0/16"

export http_proxy
export https_proxy
export no_proxy

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

wget -O /etc/yum.repos.d/pacemaker.repo http://clusterlabs.org/rpm/epel-5/clusterlabs.repo

rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

yum update -y

If the following error occurs:
Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again
edit /etc/yum.repos.d/epel.repo and change every
mirrorlist=https:// -> mirrorlist=http://
then retry yum update -y
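The same change can be made in one step with sed (a sketch):
sed -i 's|mirrorlist=https://|mirrorlist=http://|g' /etc/yum.repos.d/epel.repo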

yum install -y pacemaker corosync drbd84 kmod-drbd84 heartbeat

If an error like the following occurs:
Error: Package: cluster-glue-1.0.6-1.6.el5.x86_64 (clusterlabs)
           Requires: libltdl.so.3()(64bit)
Error: Package: cluster-glue-1.0.6-1.6.el5.x86_64 (clusterlabs)
           Requires: libnetsnmp.so.10()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

install the following two packages and then retry:
rpm -Uvh http://ftp.muug.mb.ca/mirror/centos/6.6/os/x86_64/Packages/cluster-glue-libs-1.0.5-6.el6.x86_64.rpm
rpm -Uvh http://ftp.muug.mb.ca/mirror/centos/6.6/os/x86_64/Packages/cluster-glue-1.0.5-6.el6.x86_64.rpm
-> If the following error occurs:
error: Failed dependencies:
        perl-TimeDate is needed by cluster-glue-1.0.5-6.el6.x86_64
-> yum install perl-TimeDate

8. Clone the VM
With node1 set up to this point, clone it to create node2.
If ifconfig shows the interfaces renamed (eth0 -> eth1, eth1 -> eth2), refer to the link below to fix it.
http://www.cyberciti.biz/tips/vmware-linux-lost-eth0-after-cloning-image.html
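In short, the usual fix described at that link is to remove the cached udev rule on the clone, fix HWADDR in the ifcfg file, and reboot (a sketch):
rm -f /etc/udev/rules.d/70-persistent-net.rules
vi /etc/sysconfig/network-scripts/ifcfg-eth0   # set HWADDR to the clone's MAC, or remove the line
reboot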

9. Install PPAS (both)
# ./ppasmeta-9.4.1.3-linux-x64.run --optionfile ppas.cfg
           - ppas.cfg:
                     mode=unattended
                     serverport=5444
                     serviceaccount=enterprisedb
                     servicepassword=enterprisedb
                     servicename=ppas-9.4
                     superaccount=enterprisedb
                     superpassword=enterprisedb
                     datadir=/var/lib/pgsql/data
                     xlogdir=/var/lib/pgsql/data/pg_xlog



PPAS (Postgres Plus Advanced Server) HA configuration with Corosync, Pacemaker, DRBD - step 1

Installation environment
VMware VM 1, 2:
  RAM: 4GB
  HDD 1: 20GB
  HDD 2: 3GB
  NIC: eth0, eth1
OS: CentOS 6.6
PPAS: 9.4 version

1. Disabling SELINUX
vi /etc/selinux/config
SELINUX=disabled
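To apply it immediately without a reboot (the config change above takes full effect on the next boot):
setenforce 0
getenforce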

2. Setting Hostname
vi /etc/sysconfig/network

node1:
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=cos1.local
GATEWAY=192.168.21.2

node2:
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=cos2.local
GATEWAY=192.168.21.2

3. Configuring network interfaces
node1:
vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
BOOTPROTO="static"
IPADDR=192.168.21.144
HWADDR="00:0C:29:5A:63:1A"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="93baf4a8-9049-4069-adcb-f50cc0bc2cec"
DNS1=8.8.8.8

vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE="eth1"
BOOTPROTO="static"
IPADDR=10.0.0.139
HWADDR="00:0C:29:5A:63:24"
ONBOOT="yes"
TYPE="Ethernet"

node 2:
vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
BOOTPROTO="static"
IPADDR=192.168.21.144
HWADDR="00:0C:29:5A:63:1A"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="93baf4a8-9049-4069-adcb-f50cc0bc2cec"
DNS1=8.8.8.8
(Note: node2's IPADDR, HWADDR, and UUID must differ from node1; the values above look copied from node1 and should be adjusted.)

vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE="eth1"
BOOTPROTO="static"
IPADDR=10.0.0.140
HWADDR="00:0C:29:5A:63:24"
ONBOOT="yes"
TYPE="Ethernet"

4. Configuring basic hostname resolution
Configuring /etc/hosts (same config on both nodes):

vi /etc/hosts
127.0.0.1   localhost cos1.local
::1         localhost cos1.local

192.168.21.160  dbip.local      dbip
192.168.21.161  cos1.local      cos1
192.168.21.162  cos2.local      cos2

5. Checking network connectivity
Let's check if everything is fine:
node1:
Pinging node2 (thru LAN interface):
ping -c 2 node2

[root@node1 ~]# ping -c 2 node2
PING node2 (10.0.0.192) 56(84) bytes of data.
64 bytes from node2 (10.0.0.192): icmp_seq=1 ttl=64 time=0.089 ms
64 bytes from node2 (10.0.0.192): icmp_seq=2 ttl=64 time=0.082 ms
--- node2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.082/0.085/0.089/0.009 ms
Pinging node2 (thru cross-over interface):
ping -c 2 172.16.0.2

[root@node1 ~]# ping -c 2 172.16.0.2
PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=0.083 ms
64 bytes from 172.16.0.2: icmp_seq=2 ttl=64 time=0.083 ms
--- 172.16.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
node2:
Pinging node1 (thru LAN interface):
ping -c 2 node1

[root@node2 ~]# ping -c 2 node1
PING node1 (10.0.0.191) 56(84) bytes of data.
64 bytes from node1 (10.0.0.191): icmp_seq=1 ttl=64 time=0.068 ms
64 bytes from node1 (10.0.0.191): icmp_seq=2 ttl=64 time=0.063 ms
--- node1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.063/0.065/0.068/0.008 ms
Pinging node1 (thru cross-over interface):
ping -c 2 172.16.0.1

[root@node2 ~]# ping -c 2 172.16.0.1
PING 172.16.0.1 (172.16.0.1) 56(84) bytes of data.
64 bytes from 172.16.0.1: icmp_seq=1 ttl=64 time=1.36 ms
64 bytes from 172.16.0.1: icmp_seq=2 ttl=64 time=0.075 ms
--- 172.16.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.075/0.722/1.369/0.647 ms



trove cluster create

$ trove cluster-create products mongodb "2.6" \
    --instance flavor_id=7,volume=2 \
    --instance flavor_id=7,volume=2 \
    --instance flavor_id=7,volume=2
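Afterwards the cluster can be checked with (assuming the standard python-troveclient cluster commands):
trove cluster-list
trove cluster-show products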

Friday, April 17, 2015

Setting up a proxy on CentOS

1. Proxy configuration for the shell
vi /etc/profile

http_proxy=http://[IP]:[PORT]
export no_proxy=[제외할 경로]
export http_proxy

2. Proxy configuration for yum
vi /etc/yum.conf

proxy=http://your.proxy.server:8080

$ yum clean all

Thursday, April 16, 2015

Registering a CentOS image with glance

1. Download a qcow2 image from one of the sites below
http://cloud.centos.org/centos/6/images/
http://cloud.centos.org/centos/7/images/

2. Register it using glance
glance image-create --name centos-6.5 --disk-format=qcow2 --container-format=bare --is-public True --file /tmp/centos-6.5.img

When image registration fails after a devstack install (e.g. via redstack kick-start)

Check whether the size allocated to swift is too small, and if so increase it as follows.
vi ~/devstack/lib/swift

SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=10G

Debugging bash shell script

Use the -xv option:

$ bash -xv filesize.sh

Tuesday, April 14, 2015

vim configuration for Python

1. Python syntax coloring
To set up Python syntax coloring, download the python.vim file from http://www.vim.org/scripts/script.php?script_id=790 and copy it to the ~/.vim/syntax/ directory. Then add the following to ~/.vimrc:
syntax on
filetype plugin indent on 

2. Tab settings
Add the Tab settings below to ~/.vim/ftplugin/python.vim (create the file if it does not exist):
set tabstop=4
set softtabstop=4
set shiftwidth=4
set textwidth=100
set expandtab
set smartindent cinwords=if,elif,else,for,while,try,except,finally,def,class
set nocindent
let python_version_2 = 1 " follow Python 2 syntax
let python_highlight_all = 1 " enable all highlighting (color) features
3. Install exuberant-ctags
$ sudo apt-get install exuberant-ctags

After installation, ctags --help should print the help text.

4. How to create the tags file
Run ctags -R in the desired directory.
However, this also tags Python variables and imported package names,
and for code browsing it is better to include only classes and functions.
To do this, add the following to the ~/.ctags file:

--python-kinds=-iv
--exclude=build
--exclude=dist

The -iv option above excludes imports and variables from tagging.
The --exclude option specifies directories to skip when tagging.

5. Auto-completion
Installing the Jedi plugin enables Python code auto-completion in vim. The installation steps are as follows.

5.1. Install the Vundle plugin.
git clone https://github.com/gmarik/vundle.git ~/.vim/bundle/vundle

5.2. Modify ~/.vimrc as follows.
syntax on
set nocompatible              
filetype off                  

set rtp+=~/.vim/bundle/vundle/
call vundle#rc()
" let Vundle manage Vundle
" " required! 
Bundle 'gmarik/vundle'
" " My bundles here:
Bundle 'davidhalter/jedi-vim'

filetype plugin indent on     " required!

5.3. Start vim and run :BundleInstall.

6. Install a file browser
We want to install The NERD Tree; the link below
https://github.com/scrooloose/nerdtree
recommends installing pathogen.vim first, so we do that.
6.1 Install pathogen.vim
mkdir -p ~/.vim/autoload ~/.vim/bundle && \
curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
6.2 Add this to your vimrc:
execute pathogen#infect()
6.3 Install NERD Tree
cd ~/.vim/bundle
git clone https://github.com/scrooloose/nerdtree.git
Then reload vim, run :Helptags, and check out :help NERD_tree.txt.

6.4 Key mappings
map <F3> :NERDTreeToggle<CR> or
map <C-n> :NERDTreeToggle<CR>
7. Install taglist
7.1 Installation
Download the latest taglist_xx.zip from the site below:
http://vim.sourceforge.net/scripts/script.php?script_id=273
After unzipping, there should be two files: plugin/taglist.vim (the main taglist plugin file) and doc/taglist.txt (the documentation/help file).
Copy these files to one of the following locations:
$HOME/.vim or the $HOME/vimfiles or the $VIM/vimfiles

Change to the $HOME/.vim/doc or $HOME/vimfiles/doc or $VIM/vimfiles/doc directory, start Vim and run the ":helptags ." command to process the taglist help file. Without this step, you cannot jump to the taglist help topics.
If the exuberant ctags utility is not present in your PATH, then set the Tlist_Ctags_Cmd variable to point to the location of the exuberant ctags utility (not to the directory) in the .vimrc file.
If you are running a terminal/console version of Vim and the terminal doesn't support changing the window width then set the 'Tlist_Inc_Winwidth' variable to 0 in the .vimrc file.
Restart Vim.
You can now use the ":TlistToggle" command (previously ":Tlist") to open/close the taglist window. You can use the ":help taglist" command to get more information about using the taglist plugin.

7.2 Modify vimrc
Adjust it with reference to the following:
let Tlist_Ctags_Cmd="/usr/bin/ctags"
let Tlist_inc_Winwidth=0
let Tlist_Exit_OnlyWindow=1
let Tlist_Display_Tag_Scope = 1
let Tlist_Display_Prototype = 1
let Tlist_Use_Right_Window = 1
let Tlist_Sort_Type = "name"
let Tlist_WinWidth = 60

map <F4> :Tlist<cr>
nnoremap <F11> <C-t>
nnoremap <F12> <C-]>

7.3 Using taglist
ctrl + ] jumps to a tag and
ctrl + t returns to the previous location, but the key mappings defined in vimrc above are used instead.


Friday, April 3, 2015

PPAS (Postgres Plus Advanced Server) connection test

1. Using psql
psql -d pdb -U enterprisedb

2. Using EDB*Plus
Connecting to the DB with EDB*Plus (local):
/opt/PostgresPlus/9.4AS/edbplus$ ./edbplus.sh
    User: enterprisedb/enterprisedb@localhost:5444/edb

3. Allowing remote connections
Adding the following line to /opt/PostgresPlus/9.4AS/data/pg_hba.conf allows connections from anywhere:
host    all             all             0.0.0.0/0               md5
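For the change to take effect, reload the server configuration (a sketch; the data directory is the one shown above, and listen_addresses must also allow remote connections):
su - enterprisedb -c '/opt/PostgresPlus/9.4AS/bin/pg_ctl reload -D /opt/PostgresPlus/9.4AS/data'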

Thursday, April 2, 2015

How to glance image create

glance image-create --name "abc" --is-public=true --disk-format qcow2 --container-format bare --file ~/images/abc.qcow2

How to install Java 8 on Ubuntu

1.
 sudo add-apt-repository -y ppa:webupd8team/java

2.
sudo apt-get update

3. accept the oracle jdk8 license automatically
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections

4.
sudo apt-get install oracle-java8-installer

5.
sudo apt-get install oracle-java8-set-default

6. verify installed java version
java -version