
Installing Postgres-XL

  • My server-side projects have always used PostgreSQL as the database. Why not MySQL? Partly personal preference, but the two do differ: PostgreSQL has features MySQL lacks. Why, then, is MySQL so much more popular than PostgreSQL in China? One likely reason is that PostgreSQL never served the PHP world as well. The two also handle connections very differently: PostgreSQL forks a process per connection while MySQL uses threads, so under high concurrency PostgreSQL exhausts its connection slots quickly. That is when a connection pool is needed in front of the clients.
  • Options include PgBouncer, pgpool and Postgres-XL.

Installing the Host System

  • The host in this article runs Debian 8; install the qemu-kvm hypervisor and the related packages.
# apt-get install qemu-kvm openvswitch-{switch,common} libvirt-bin virtinst virt-manager
[...]

# /etc/init.d/openvswitch-switch start

[...]


# ovs-vsctl show
504bd042-d300-4fc3-be49-5609ea5df49f
    Bridge "ovs-br0"
        Port "ovs-br0"
            Interface "ovs-br0"
                type: internal
    ovs_version: "2.5.0"
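The `ovs-vsctl show` output above already lists a bridge named ovs-br0, but the command that created it is not shown. A minimal, guarded sketch (the bridge name is taken from the output above; creating it needs a running Open vSwitch and root privileges):

```shell
#!/bin/sh
# Create the ovs-br0 internal bridge reported by `ovs-vsctl show`.
# Guarded so the script is a harmless no-op where Open vSwitch is absent.
if command -v ovs-vsctl >/dev/null 2>&1; then
  ovs-vsctl --may-exist add-br ovs-br0   # --may-exist makes this idempotent
  ip link set ovs-br0 up                 # bring the internal port up
else
  echo "ovs-vsctl not found; skipping bridge setup" >&2
fi
```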


Installing the Base System

  • The experiment VMs in this article run CentOS 7. Their configuration, dumped with virsh dumpxml vhost0, is shown below.
  • Two CPUs and 8 GB of RAM; eth0 on 192.168.120.0/24, eth1 on 192.168.25.0/24. eth0 bridges the host's Open vSwitch virtual NIC and serves as the host-only internal network, while eth1 bridges the host's physical NIC so the guest can reach the Internet to install software.
# virsh dumpxml vhost0
<domain type='kvm' id='53'>
<name>vhost0</name>
<uuid>0fdbcbe7-f7e9-443e-bf9b-7ac4a6d27612</uuid>
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<vcpu placement='static'>2</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-xenial'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>Broadwell-noTSX</model>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/kvm-spice</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/data/centos7.qcow2'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<alias name='usb'/>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<alias name='usb'/>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<alias name='usb'/>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>
<interface type='direct'>
<mac address='52:54:00:a5:93:d5'/>
<source dev='ovs-system' mode='bridge'/>
<target dev='macvtap10'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='direct'>
<mac address='52:54:00:14:02:69'/>
<source dev='enp5s0' mode='bridge'/>
<target dev='macvtap11'/>
<model type='virtio'/>
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
<console type='pty' tty='/dev/pts/18'>
<source path='/dev/pts/18'/>
<target type='virtio' port='0'/>
<alias name='console0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-pgsql-xl-gmt/org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
<alias name='channel1'/>
<address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
<input type='tablet' bus='usb'>
<alias name='input0'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='spice' port='5905' autoport='yes' listen='127.0.0.1'>
<listen type='address' address='127.0.0.1'/>
</graphics>
<video>
<model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<redirdev bus='usb' type='spicevmc'>
<alias name='redir0'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
<alias name='redir1'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
<alias name='redir2'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
<alias name='redir3'/>
</redirdev>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='apparmor' relabel='yes'>
<label>libvirt-0fdbcbe7-f7e9-443e-bf9b-7ac4a6d27612</label>
<imagelabel>libvirt-0fdbcbe7-f7e9-443e-bf9b-7ac4a6d27612</imagelabel>
</seclabel>
</domain>
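For reference, a guest of roughly this shape could be created with virt-install. The sketch below is an assumption reconstructed from the XML above (disk path, memory, vCPUs and the two direct-attached NICs), not the exact command originally used:

```shell
#!/bin/sh
# Recreate a vhost0-like guest from the values in the dumped XML above.
# Adjust names, paths and network sources to your own environment.
if command -v virt-install >/dev/null 2>&1; then
  virt-install \
    --name vhost0 \
    --memory 8192 --vcpus 2 \
    --disk path=/data/centos7.qcow2,bus=virtio \
    --network type=direct,source=ovs-system,source_mode=bridge,model=virtio \
    --network type=direct,source=enp5s0,source_mode=bridge,model=virtio \
    --graphics spice,listen=127.0.0.1 \
    --import --noautoconsole
else
  echo "virt-install not found; skipping" >&2
fi
```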

Configuring the Base System

  • Inside the VM, install the required packages and set up the environment, including passwordless certificate-based SSH login between all the VMs.
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:a5:93:d5 brd ff:ff:ff:ff:ff:ff
inet 192.168.120.20/24 brd 192.168.120.255 scope global eth0
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:14:02:69 brd ff:ff:ff:ff:ff:ff
inet 192.168.25.20/24 brd 192.168.25.255 scope global eth1
valid_lft forever preferred_lft forever
  • Install the following packages:

// Install the build environment
# yum install -y flex bison readline-devel zlib-devel openjade docbook-style-dsssl gcc git tmux libuuid libuuid-devel pcre pcre-devel openssl-devel perl curl

// Add the postgres user

# groupadd postgres
# useradd -g postgres postgres
# mkdir /home/postgres && chown -R postgres:postgres /home/postgres

// Disable the firewall, otherwise it causes many problems.
// (CentOS 7 ships firewalld by default; iptables.service only exists if iptables-services is installed.)
# systemctl disable --now firewalld
# iptables -F ; iptables -X

// Disable SELinux: edit /etc/selinux/config and set SELINUX to disabled
# setenforce 0
// OR
# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

// Pre-populate the hostname-to-IP mapping as follows

# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.120.19 coord1
192.168.120.18 gtmproxy
192.168.120.20 gtmnode
192.168.120.21 node01
192.168.120.22 node02
192.168.120.23 node03
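With /etc/hosts in place, a quick loop can confirm that every node resolves and answers ping (the host list mirrors the file above):

```shell
#!/bin/sh
# Print one reachability status line per cluster node.
for name in coord1 gtmproxy gtmnode node01 node02 node03; do
  if ping -c1 -W1 "$name" >/dev/null 2>&1; then
    echo "$name ok"
  else
    echo "$name UNREACHABLE"
  fi
done
```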

  • Tune the kernel parameters and system limits; some of the values should be adjusted to your own needs:
vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.tcp_max_tw_buckets = 10000
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 3240000
net.ipv4.tcp_synack_retries = 2
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.conf.lo.arp_announce = 2
fs.file-max = 40000500
fs.nr_open = 40000500
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_keepalive_time = 1
net.ipv4.tcp_keepalive_intvl = 4
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_mem = 768432 2097152 15242880
net.ipv4.tcp_rmem = 4096 4096 33554432
net.ipv4.tcp_wmem = 4096 4096 33554432
net.core.somaxconn = 32768
net.ipv4.ip_local_port_range = 1025 65000
net.core.wmem_default = 183888608
net.core.rmem_default = 183888608
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.netdev_max_backlog = 2621244
kernel.shmmni = 655360
kernel.shmmax = 34359738368
kernel.msgmni = 65535
kernel.msgmax = 65536
kernel.msgmnb = 65536
net.netfilter.nf_conntrack_max = 1000000
net.nf_conntrack_max = 1000000
kernel.perf_event_max_sample_rate = 6250
net.ipv4.tcp_max_orphans = 1048576
kernel.sched_migration_cost_ns = 5000000
net.core.optmem_max = 25165824
kernel.sem = 10000 2560000 10000 256

# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 515045
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 20000500
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 515045
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
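The ulimit output above shows limits far beyond the CentOS defaults. For the postgres user these are typically raised in /etc/security/limits.conf (a sketch with the values from the output above; a re-login is required for them to take effect), while the sysctl settings above are applied with `sysctl -p` after editing /etc/sysctl.conf:

```
# /etc/security/limits.conf (sketch; values match the ulimit -a output above)
postgres soft nofile 20000500
postgres hard nofile 20000500
postgres soft nproc  515045
postgres hard nproc  515045
```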


Building PostgreSQL-XL

  • Fetch the postgresql-xl source code with git clone, or download the source tarball:
# curl -O http://files.postgres-xl.org/postgres-xl-9.5r1.4.tar.gz
# tar xvf postgres-xl-9.5r1.4.tar.gz
# cd postgres-xl-9.5r1.4 && ./configure && make && make install

[...]

// Installed to this location by default
# ls /usr/local/pgsql/
bin include lib share
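A quick sanity check that the build actually landed (paths as in the listing above):

```shell
#!/bin/sh
# Confirm the Postgres-XL binaries exist and print their versions.
PGBIN=/usr/local/pgsql/bin
if [ -x "$PGBIN/postgres" ]; then
  "$PGBIN/postgres" --version
  "$PGBIN/initdb" --version
else
  echo "Postgres-XL binaries not found under $PGBIN" >&2
fi
```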

Setting the postgres User's Environment Variables

$ cat .bashrc
# .bashrc
# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi

export PGUSER=postgres
# export PGHOME=/usr/local/pgxl-9.5-stable
export PGHOME=/usr/local/pgsql
export LD_LIBRARY_PATH=$PGHOME/lib:$LD_LIBRARY_PATH
export PATH=$HOME/bin:$PGHOME/bin:$PATH

Configuring PostgreSQL-XL

  • The steps above prepared one VM with the environment configured and Postgres-XL compiled. Now clone five more VMs from it, then change their NICs and hostnames.
// Make sure passwordless SSH public-key login works first
# for name in node01 node02 node03 coord1 gtmnode gtmproxy; do ssh ${name} "echo ${name} > /etc/hostname"; done
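The loop above assumes key-based SSH from this host to every node is already in place. A sketch of setting that up (the host list mirrors /etc/hosts; the echo only prints the ssh-copy-id commands, drop it to actually run them, entering each node's password once):

```shell
#!/bin/sh
# Generate a passphrase-less key pair (if absent) and show the
# ssh-copy-id commands that would distribute it to every node.
KEY_DIR=${KEY_DIR:-$HOME/.ssh}
mkdir -p "$KEY_DIR" && chmod 700 "$KEY_DIR"
KEY=$KEY_DIR/id_rsa
[ -f "$KEY" ] || ssh-keygen -t rsa -b 2048 -N '' -f "$KEY" -q
for host in node01 node02 node03 coord1 gtmnode gtmproxy; do
  echo ssh-copy-id -i "${KEY}.pub" "root@${host}"
done
```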
# virsh  list
Id Name State
----------------------------------------------------
52 pgsql-xl-coord1 running
53 pgsql-xl-gmtnode running
54 pgsql-xl-node01 running
55 pgsql-xl-node02 running
56 pgsql-xl-node03 running
57 pgsql-xl-gtmproxy running
  • Every VM now has its hostname and IP set, and the machines can log in to one another automatically. Log in as the postgres user on gtmnode to deploy the Postgres-XL cluster.
  • Run pgxc_ctl prepare config minimal to generate a default configuration file. It contains a lot of settings, which need to be adjusted accordingly.
$ tree /home/postgres/
/home/postgres/
├── pgxc_ctl
│   ├── coordExtraConfig
│   ├── pgxc_ctl.conf
  • Based on the number of VMs defined earlier, the following changes were made:


[postgres@gtmnode ~]$ cat pgxc_ctl/pgxc_ctl.conf | grep -v '^#' | grep -v '^$'
pgxcInstallDir=$HOME/pgxc
pgxcOwner=$USER # owner of the Postgres-XC databaseo cluster. Here, we use this
# both as linus user and database user. This must be
# the super user of each coordinator and datanode.
pgxcUser=$pgxcOwner # OS user of Postgres-XC owner
tmpDir=/tmp # temporary dir used in XC servers
localTmpDir=$tmpDir # temporary dir used here locally
configBackup=n # If you want config file backup, specify y to this value.
configBackupHost=pgxc-linker # host to backup config file
configBackupDir=$HOME/pgxc # Backup directory
configBackupFile=pgxc_ctl.bak # Backup file name --> Need to synchronize when original changed.
gtmMasterServer=192.168.120.20
gtmMasterPort=20001
gtmMasterDir=$HOME/pgxc/nodes/gtm
gtmExtraConfig=none # Will be added gtm.conf for both Master and Slave (done at initilization only)
gtmMasterSpecificExtraConfig=none # Will be added to Master's gtm.conf (done at initialization only)
gtmSlave=n # Specify y if you configure GTM Slave. Otherwise, GTM slave will not be configured and
# all the following variables will be reset.
gtmSlaveName=gtmSlave
gtmSlaveServer=node12 # value none means GTM slave is not available. Give none if you don't configure GTM Slave.
gtmSlavePort=20001 # Not used if you don't configure GTM slave.
gtmSlaveDir=$HOME/pgxc/nodes/gtm # Not used if you don't configure GTM slave.
gtmSlaveSpecificExtraConfig=none # Will be added to Slave's gtm.conf (done at initialization only)
gtmProxyDir=$HOME/pgxc/nodes/gtm_pxy
gtmProxy=n # Specify y if you conifugre at least one GTM proxy. You may not configure gtm proxies
# only when you dont' configure GTM slaves.
# If you specify this value not to y, the following parameters will be set to default empty values.
# If we find there're no valid Proxy server names (means, every servers are specified
# as none), then gtmProxy value will be set to "n" and all the entries will be set to
# empty values.
gtmProxyNames=(gtm_pxy1 gtm_pxy2 gtm_pxy3 gtm_pxy4) # No used if it is not configured
gtmProxyServers=(node06 node07 node08 node09) # Specify none if you dont' configure it.
gtmProxyPorts=(20001 20001 20001 20001) # Not used if it is not configured.
gtmProxyDirs=($gtmProxyDir $gtmProxyDir $gtmProxyDir $gtmProxyDir) # Not used if it is not configured.
gtmPxyExtraConfig=none # Extra configuration parameter for gtm_proxy. Coordinator section has an example.
gtmPxySpecificExtraConfig=(none none none none)
coordMasterDir=$HOME/pgxc/nodes/coord
coordSlaveDir=$HOME/pgxc/nodes/coord_slave
coordArchLogDir=$HOME/pgxc/nodes/coord_archlog
coordNames=(coord1 coord2 ) # Master and slave use the same name
coordPorts=(20004 20004 ) # Master ports
poolerPorts=(6433 6433 ) # Master pooler ports
coordPgHbaEntries=(192.168.120.0/24) # Assumes that all the coordinator (master/slave) accepts
# the same connection
# This entry allows only $pgxcOwner to connect.
# If you'd like to setup another connection, you should
# supply these entries through files specified below.
coordMasterServers=(coord1 coord2) # none means this master is not available
coordMasterDirs=($coordMasterDir $coordMasterDir )
coordMaxWALsernder=5 # max_wal_senders: needed to configure slave. If zero value is specified,
# it is expected to supply this parameter explicitly by external files
# specified in the following. If you don't configure slaves, leave this value to zero.
coordMaxWALSenders=($coordMaxWALsernder $coordMaxWALsernder )
# max_wal_senders configuration for each coordinator.
# postgresql.conf
cat > $coordExtraConfig <<EOF
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 100
EOF
coordSpecificExtraConfig=(none none none none)
coordExtraPgHba=none # Extra entry for pg_hba.conf. This file will be added to all the coordinators' pg_hba.conf
coordSpecificExtraPgHba=(none none none none)
coordAdditionalSlaves=n # Additional slave can be specified as follows: where you
coordAdditionalSlaveSet=(cad1) # Each specifies set of slaves. This case, two set of slaves are
# configured
cad1_Sync=n # All the slaves at "cad1" are connected with asynchronous mode.
# If not, specify "y"
# The following lines specifies detailed configuration for each
# slave tag, cad1. You can define cad2 similarly.
cad1_Servers=(node08 node09 node06 node07) # Hosts
cad1_dir=$HOME/pgxc/nodes/coord_slave_cad1
cad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)
cad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1
cad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)
datanodeMasterDir=$HOME/pgxc/nodes/dn_master
datanodeSlaveDir=$HOME/pgxc/nodes/dn_slave
datanodeArchLogDir=$HOME/pgxc/nodes/datanode_archlog
primaryDatanode=node01
datanodeNames=(node01 node02 node03)
datanodePorts=(5432 5432 5432 ) # Master ports
datanodePoolerPorts=(6432 6432 6432 ) # Master pooler ports
datanodePgHbaEntries=(192.168.120.0/24) # Assumes that all the coordinator (master/slave) accepts
# the same connection
# This list sets up pg_hba.conf for $pgxcOwner user.
# If you'd like to setup other entries, supply them
# through extra configuration files specified below.
datanodeMasterServers=(node01 node02 node03) # none means this master is not available.
# This means that there should be the master but is down.
# The cluster is not operational until the master is
# recovered and ready to run.
datanodeMasterDirs=($datanodeMasterDir $datanodeMasterDir $datanodeMasterDir )
datanodeMaxWalSender=5 # max_wal_senders: needed to configure slave. If zero value is
# specified, it is expected this parameter is explicitly supplied
# by external configuration files.
# If you don't configure slaves, leave this value zero.
datanodeMaxWALSenders=($datanodeMaxWalSender $datanodeMaxWalSender $datanodeMaxWalSender )
# max_wal_senders configuration for each datanode
datanodeSlave=n # Specify y if you configure at least one coordiantor slave. Otherwise, the following
# configuration parameters will be set to empty values.
# If no effective server names are found (that is, every servers are specified as none),
# then datanodeSlave value will be set to n and all the following values will be set to
# empty values.
datanodeSlaveServers=(node07 node08 node09 node06) # value none means this slave is not available
datanodeSlavePorts=(20008 20009 20008 20009) # value none means this slave is not available


datanodeSlavePoolerPorts=(20012 20013 20012 20013) # value none means this slave is not available
datanodeSlaveSync=y # If datanode slave is connected in synchronized mode
datanodeSlaveDirs=($datanodeSlaveDir $datanodeSlaveDir $datanodeSlaveDir $datanodeSlaveDir)
datanodeArchLogDirs=( $datanodeArchLogDir $datanodeArchLogDir $datanodeArchLogDir $datanodeArchLogDir )
datanodeExtraConfig=none # Extra configuration file for datanodes. This file will be added to all the
# datanodes' postgresql.conf
datanodeSpecificExtraConfig=(none none none none)
datanodeExtraPgHba=none # Extra entry for pg_hba.conf. This file will be added to all the datanodes' postgresql.conf
datanodeSpecificExtraPgHba=(none none none none)
datanodeAdditionalSlaves=n # Additional slave can be specified as follows: where you
# configured
# If not, specify "y"
# The following lines specifies detailed configuration for each
# slave tag, cad1. You can define cad2 similarly.
walArchive=n # If you'd like to configure WAL archive, edit this section.
# Pgxc_ctl assumes that if you configure WAL archive, you configure it
# for all the coordinators and datanodes.
# Default is "no". Please specify "y" here to turn it on.
walArchiveSet=(war1 war2)
war1_source=(master) # you can specify master, slave or ano other additional slaves as a source of WAL archive.
# Default is the master
wal1_source=(slave)
wal1_source=(additiona_coordinator_slave_set additional_datanode_slave_set)
war1_host=node10 # All the nodes are backed up at the same host for a given archive set
war1_backupdir=$HOME/pgxc/backup_war1
wal2_source=(master)
war2_host=node11
war2_backupdir=$HOME/pgxc/backup_war2
coordSlave=n
coordNames=( coord1 none )
coordMasterDirs=( /home/postgres/pgxc/nodes/coord none )
coordPorts=( 20004 -1 )
poolerPorts=( 6433 -1 )
coordMasterServers=( coord1 none )
coordMaxWALSenders=( 5 0 )
coordSlaveServers=( none none )
coordSlavePorts=( none none )
coordSlavePoolerPorts=( none none )
coordSlaveDirs=( none none )
coordArchLogDirs=( none none )
coordSpecificExtraConfig=( none none none none )


### The following was changed from inside the pgxc_ctl command; this gtmProxy=y overrides the earlier gtmProxy=n.


gtmProxy=y
gtmProxyNames=( gtm_pxy )
gtmProxyServers=( 192.168.120.18 )
gtmProxyPorts=( 20009 )
gtmProxyDirs=( $HOME/pgxc/nodes/gtm_pxy )
gtmPxySpecificExtraConfig=( none )


  • Note that the hostnames in the configuration must map one-to-one to the machines, and the firewall must be off. Now initialize the cluster:
[postgres@gtmnode ~]$ pgxc_ctl init all

[...]


[postgres@gtmnode ~]$ pgxc_ctl
/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
******** PGXC_CTL START ***************

Current directory: /home/postgres/pgxc_ctl
PGXC help
You are using pgxc_ctl, the configuration utility for PGXL
Type:
help <command>
where <command> is either add, Createdb, Createuser, clean,
configure, deploy, failover, init, kill, log, monitor,
prepare, q, reconnect, remove, set, show, start,
stop or unregister
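Once `init all` has finished, one way to confirm the topology is to query the pgxc_node catalog through the coordinator (host and port as configured in pgxc_ctl.conf above):

```shell
#!/bin/sh
# Show the nodes registered in the coordinator's pgxc_node catalog.
# Assumes the coordinator from the configuration above (coord1:20004).
if command -v psql >/dev/null 2>&1; then
  psql -h coord1 -p 20004 -U postgres -d postgres \
    -c 'SELECT node_name, node_type, node_host, node_port FROM pgxc_node;' \
    || echo "coordinator not reachable yet" >&2
else
  echo "psql not found in PATH; skipping" >&2
fi
```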

Tuning the Database

  • The Coordinator node's postgresql.conf:
max_connections = 200          # maximum concurrent connections
max_prepared_transactions = 200 # maximum prepared transactions; matters for write-heavy workloads
shared_buffers = 8GB # memory used to cache data (1/4 of RAM is recommended)
temp_buffers = 80MB
work_mem = 100MB # memory for internal sorts and complex queries
dynamic_shared_memory_type = posix
max_files_per_process = 2000
effective_io_concurrency = 8
max_worker_processes = 64
effective_cache_size = 8GB
max_pool_size = 1600 # maximum pooled connections between the Coordinator and Datanodes
pool_conn_keepalive = 600
persistent_datanode_connections = on

  • The Datanode nodes' postgresql.conf:
max_connections = 1600
shared_buffers = 8GB
temp_buffers = 80MB
dynamic_shared_memory_type = posix
effective_io_concurrency = 8
max_worker_processes = 32
random_page_cost = 1.5
effective_cache_size = 24GB
max_prepared_transactions = 100
persistent_datanode_connections = on

  • The GTM Proxy node's gtm_pxy.conf:
worker_threads = 2
keepalives_idle = 120
keepalives_interval = 30
keepalives_count = 2

The GTM node's gtm.conf

keepalives_idle = 120
keepalives_interval = 30
keepalives_count = 50

Installing the PgBouncer Connection Pool

  • PgBouncer is a lightweight connection pool for PostgreSQL that presents clients with a single, uniform view of the database.

  • What PgBouncer does:
    a) It bridges the front-end applications and the back-end database, handling the back-end connections itself.
    b) It caps client connections, protecting the database from excessive or malicious connection floods.

  • PgBouncer's characteristics:
    a) Low memory use (about 2 kB per connection by default).
    b) It can front databases on different machines while remaining transparent to clients.
    c) It supports online reconfiguration without a restart.
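A minimal pgbouncer.ini sketch that fronts the coordinator configured above (coord1:20004); the listen port, pool mode and auth file path here are illustrative assumptions, not settings from the original article:

```ini
; /etc/pgbouncer/pgbouncer.ini (sketch)
[databases]
; route the "postgres" database to the Postgres-XL coordinator
postgres = host=coord1 port=20004 dbname=postgres

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
max_client_conn = 2000
default_pool_size = 50
```

Start it with `pgbouncer -d /etc/pgbouncer/pgbouncer.ini` and point clients at port 6432.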


Thanks for your support
