
Some Bash Shell Test Scripts from Work

  • Server development inevitably involves testing work: functional tests, stress tests, environment setup, data collection, and data analysis. Much of it is repetitive and monotonous, and repeating the tests by hand is not only time-consuming but also error-prone. This post records some small scripts for future reference.

Extracting Data from the top Command

$ cat grep_pid.sh
#!/bin/bash
srvdir="/root/netty_srv_dir/"

## Summarize the CPU and memory figures from the top output
grep -a "$1 root" $srvdir$1.log | sort -k9 -n | head -n1 | awk '{printf "\nCPU min:%4s\t", $9}'
grep -a "$1 root" $srvdir$1.log | awk '{sum+=$9;n++}END{printf "avg: %4s\t", sum /n }'
grep -a "$1 root" $srvdir$1.log | sort -k9 -n | tail -n1 | awk '{printf " max:%4s\n", $9}'
## print MEM
grep -a "$1 root" $srvdir$1.log | sort -k10 -n | tail -n1 | awk '{printf "MEM: %4s\n",$10}'
cat $srvdir$1.log > $1bak.log
echo "" > $srvdir$1.log
#grep "total" $srvdir$1.net | sort -k 3 |tail -n1 | awk -F";" 'BEGIN{print "Bytes Out\tBytes In\tPackets Out\tPackets In"}{ printf "%sKB/s\t%sKB/s\t%sKB/s\t%sKB/s\n", $3 / 1000,$4/1000,$6/1000,$7/1000}'
#grep "total" $srvdir$1.net | awk 'BEGIN {FS =";";print "BOut\tBin\tPOut\tPin"}{if(NF==16){ if($3>1000){printf "%sKB/s\t",$3/1000}else{printf "%sb/s\t",$3};if($4>1000){printf "%sKB/s\n",$4/1000}else{printf "%sb/s\n",$4}}}' 18028.net
cat /proc/net/sockstat | grep "TCP" | awk '{print $1,$9}'
awk 'BEGIN{print "\nBytesOut\t\tByteIn\t\tPacketsOut\tPacketsIn"}'

## Summarize the CSV-format data from the bwm-ng output.
grep "total" $srvdir$1.net | sort -t ";" -n -k 3 | tail -n1 | awk -F ";" '{if (NF==16){if($3>1000){printf "%sKB/s\t",$3/1000}else{printf "%sb/s\t\t",$3}}}'
grep "total" $srvdir$1.net | sort -t ";" -n -k 4 | tail -n1 | awk -F ";" '{if (NF==16){if($4>1000){printf "%sKB/s\t",$4/1000}else{printf "%sb/s\t\t",$4}}}'
grep "total" $srvdir$1.net | sort -t ";" -n -k 6 | tail -n1 | awk -F ";" '{if (NF==16){if($6>1000){printf "%sKB/s\t",$6/1000}else{printf "%sb/s\t\t",$6}}}'
grep "total" $srvdir$1.net | sort -t ";" -n -k 7 | tail -n1 | awk -F ";" '{if (NF==16){if($7>1000){printf "%sKB/s\n",$7/1000}else{printf "%sb/s\n",$7}}}'
echo "" > $srvdir$1.net
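The three grep/sort pipelines above scan the same log three times. As a sketch, the min, average, and max of the CPU column can be collected in a single awk pass (the sample data and the /tmp path here are made up for illustration):

```shell
# Two fake "top -b" lines; field 9 is %CPU, matching the script above
printf '123 root 20 0 1g 2g 3m S 1.5 0.8 0:01 srv\n123 root 20 0 1g 2g 3m S 3.5 0.9 0:02 srv\n' > /tmp/demo_top.log

# One pass: track min, max, and a running sum for the average
awk '{s+=$9; n++; if (min=="" || $9<min) min=$9; if ($9>max) max=$9}
     END{printf "CPU min:%s avg:%s max:%s\n", min, s/n, max}' /tmp/demo_top.log
```

On the sample data this prints `CPU min:1.5 avg:2.5 max:3.5`.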

Automatically Answering yes in Batches

  • This is an auto-login script: the first time you connect to each server, ssh prompts you to accept its host key.
#!/usr/bin/expect -f

set timeout 10

set num [lindex $argv 0]

spawn ssh -b 172.168.$num.10 root@172.168.$num.100
expect {
"*yes/no*" { send "yes\r"; exp_continue}
}
expect "#*"
send "exit\r"
expect eof
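An alternative to driving ssh with expect, as a sketch: pre-seed known_hosts with ssh-keyscan so the yes/no prompt never appears. The loop below only builds a host list following the `172.168.$num.100` address pattern above; the actual keyscan step is commented out because it needs network access to those servers.

```shell
# Build the host list following the 172.168.$num.100 pattern used above
hosts=""
for num in 1 2 3; do
    hosts="$hosts 172.168.$num.100"
done
echo "$hosts"

# Fetch and record the host keys so ssh never asks yes/no
# (needs network access to the servers, so commented out here):
# ssh-keyscan -T 5 $hosts >> ~/.ssh/known_hosts
```

Newer OpenSSH (7.6+) also supports `ssh -o StrictHostKeyChecking=accept-new`, which accepts keys of previously unseen hosts without a prompt.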

Loop Test Script

  • This script relies on camera_client.py printing "done" when it finishes; the while loop watches for that marker, then cleans it up.
root@debian-t6:~# cat test_loop.sh
#!/bin/bash

srv="192.168.25.100"
python camera_client.py dev -H $srv -f $1 &
dpid=$!
echo $dpid
while true;
do
grep "done" camera.log && python camera_client.py app -H $srv -f $1 && kill -9 $dpid && break || sleep 1
done
cat camera.log
echo "" > camera.log
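The while loop above polls forever if "done" never shows up. A bounded variant, as a sketch (the log path and the simulated client here are stand-ins):

```shell
log=/tmp/camera_demo.log
echo "starting" > "$log"
( sleep 1; echo "done" >> "$log" ) &    # stand-in for the real client

max_tries=10
tries=0
until grep -q "done" "$log"; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max_tries" ]; then
        echo "timed out waiting for done"
        break
    fi
    sleep 1
done
grep -q "done" "$log" && echo "client finished"
```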

Server Startup and Data Collection Script

$ cat test_start_srv.sh
#!/bin/bash

## Raise the system file-descriptor limits
ulimit -Hn 20000500
ulimit -Sn 20000500
#cd netty_srv_dir

## Kill the bwm-ng and top processes left over from the previous run.
pkill bwm-ng
pkill top
srv="asio_p2p_srv"
## Kill the server process from the previous run
pkill $srv
## Start the server, capture its pid in SPID, and save it to /root/srv.pid on 192.168.25.100
$srv & SPID=$!
ssh -t root@192.168.25.100 "echo $SPID > srv.pid"

## bwm-ng monitors traffic on eth1 and writes CSV-format output to a file named after SPID with a .net suffix; its pid is captured in NPID
bwm-ng -o csv -T rate -I eth1 -F $SPID.net & NPID=$!
## top reports only the process with pid SPID, redirected to a <pid>.log file
top -p $SPID -b > $SPID.log &
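The `$srv & SPID=$!` pattern above deserves a standalone sketch: `$!` holds the pid of the most recent background job, which can later be handed to `top -p`, `kill`, or a pidfile. Here `sleep` stands in for the real server:

```shell
sleep 30 & SPID=$!           # start a stand-in "server" and capture its pid
echo "server pid: $SPID"

kill -0 "$SPID" && echo "process is alive"   # kill -0 only tests existence
kill "$SPID"                                 # clean up the stand-in
```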

Server Test Scripts

# cat start_srv.py
import paramiko
import select
ssh = paramiko.SSHClient()
SRV = "192.168.25.100"
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
mykey = paramiko.RSAKey.from_private_key_file("/root/.ssh/id_rsa")
ssh.connect("%s" % SRV,username="root",pkey=mykey)
ssh.exec_command("pkill test_start_srv")
stdin,stdout,stderr = ssh.exec_command("/root/test_start_srv.sh")
while not stdout.channel.exit_status_ready():
    if stdout.channel.recv_ready():
        rl,wl,xl = select.select([stdout.channel],[],[],0.0)
        if len(rl) > 0:
            print stdout.channel.recv(1024)
            break

~$ cat test_devices_only.sh
#!/bin/bash

pid=`python start_srv.py`

./test_camera_client.sh $1
ssh -t root@192.168.25.100 "/root/grep_pid.sh $pid && pkill test_start_srv && pkill socket_srv"


~$ cat test_app_dev.sh
#!/bin/bash

test_one(){
tt=${2}00
## Kill the previous processes on the remote server.
ssh -t root@192.168.25.100 "pkill test_start_srv && pkill socket_srv"
pid=`python start_srv.py`
srv="192.168.25.100"

## Start the first type of client, DEV
python camera_client.py dev -H $srv -f $1 &
dpid=$!
while true;
do
## Once DEV prints done, start the other type of client, APP
grep "done" camera.log && python camera_client.py app -H $srv -f $1 && break || sleep 2
done

## After both clients finish, append the collected data to the corresponding log file.
cat camera.log >> $1.log
echo "" > camera.log
pkill "python"

ssh -t root@192.168.25.100 "pkill test_start_srv && pkill socket_srv && /root/grep_pid.sh $pid" >> $1.log
echo ---------------------------------------------------------------
}


run_ten_test()
{
test_one $1 $2 >> $1.log
echo "$1 test result" >> $1.result
## Aggregate the DEV and APP log figures and compute their averages.
grep "DEV Run Time" $1.log | awk '{sum += $2;n++} END {if (n > 0) printf "Dev Run Time: %s\n", sum / n}' >> $1.result
grep "DEV Avg Time" $1.log | awk '{sum += $2;n++} END {if (n > 0) printf "Dev Avg Time: %s\n", sum / n}' >> $1.result
grep "APP Run Time" $1.log | awk '{sum += $2;n++} END {if (n > 0) printf "APP Run Time: %s\n", sum / n}' >> $1.result
grep "APP Avg Time" $1.log | awk '{sum += $2;n++} END {if (n > 0) printf "APP Avg Time: %s\n", sum / n}' >> $1.result

## Compute the averages of the memory and CPU figures from the top output
grep "CPU" $1.log | awk '{sum += $2;n++} END {if (n > 0) printf "AVG CPU: %s\n", sum / n}' >> $1.result
grep "MEM" $1.log | awk '{sum += $2;n++} END {if (n > 0) printf "AVG MEM: %s\n", sum / n}' >> $1.result
echo "$1 test result---------------------------------------------------------" >> $1.result
}
test_all()
{
ext="0000.txt"
pre="user_"
for i in `seq 1 6`;
do
echo "start test $pre$i$ext"
test_one $pre$i$ext $i
echo "test $pre$i$ext done"
echo "wait 30 seconds"
sleep 30
done
}
test_all
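The `sum/n` averaging idiom used throughout run_ten_test, shown on a few fake log lines (the field number follows this sample data, not necessarily the real camera.log format):

```shell
# Average the second field of the lines matching "DEV"
printf 'DEV 10\nDEV 20\nAPP 5\n' |
  grep "DEV" |
  awk '{sum += $2; n++} END {if (n > 0) printf "DEV avg: %s\n", sum / n}'
```

On the sample data this prints `DEV avg: 15`; the `if (n > 0)` guard keeps awk from dividing by zero when grep matched nothing.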

Copying a Directory and Generating Bytecode Files

cat ./rsync_openresty.sh
#!/bin/bash

LUAJIT=/usr/local/openresty/luajit/bin/luajit
ORG=openresty-china
BYTECODE=openresty-byte

## Remove the old output directory
rm -rf ${BYTECODE}


## Use find to list the directories, recreating each one with ORG replaced by BYTECODE.
for i in `find ${ORG} -type d`;do
mkdir -p "${i/"${ORG}"/"${BYTECODE}"}"
done

## Use find to locate the *.lua files, compile each to bytecode with LUAJIT, and write it to the matching directory under BYTECODE
for i in `find ${ORG} -iname "*.lua"`;do
${LUAJIT} -b $i "${i/"${ORG}"/"${BYTECODE}"}"
done

cp ${ORG}/conf/* ${BYTECODE}/conf

rsync --exclude="*.git" --exclude="*.sh" --exclude="config.lua" --exclude="*.conf" -a ${BYTECODE}/* root@example.com:/home/www/test_openresty
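The path rewriting above relies on bash's `${var/pattern/replacement}` substitution. A minimal demo with a made-up source path:

```shell
ORG=openresty-china
BYTECODE=openresty-byte

# A hypothetical source file path
i="openresty-china/lua/src/main.lua"

# Replace the first occurrence of $ORG with $BYTECODE
echo "${i/$ORG/$BYTECODE}"
```

This prints `openresty-byte/lua/src/main.lua`.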

  • Turn a single column of N*M lines into an N-row, M-column table, for example:
A
A
A
B
B
B
C
C
C

~$ awk '{a[FNR%3] = a[FNR%3] == "" ? $0 : a[FNR%3] "\t" $0} END{for(i=1;i<=3;i++) print a[i%3]}'

A B C
A B C
A B C
  • Or like this:
~$ cat <<EOF | awk -F\| '{a[FNR%4] = a[FNR%4] == "" ? $0 : a[FNR%4] "\t" $0} END{for(i=1;i<=4;i++) print a[i%4]}'
> ADC_CS_N|
> ADC_SADDR|
> ADC_SDAT|
> ADC_SCLK|
> PIN_A10|
> PIN_B10|
> PIN_A9|
> PIN_B14|
> Chip select|
> Digital data input|
> Digital data output|
> Digital clock input|
> 3.3V|
> 3.3V|
> 3.3V|
> 3.3V|
> EOF
ADC_CS_N| PIN_A10| Chip select| 3.3V|
ADC_SADDR| PIN_B10| Digital data input| 3.3V|
ADC_SDAT| PIN_A9| Digital data output| 3.3V|
ADC_SCLK| PIN_B14| Digital clock input| 3.3V|
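With GNU coreutils, `pr` can do the same column-major transpose: `-3` splits the input into three balanced columns filled top to bottom, and `-t` suppresses the page headers. The `tr` squeeze is only there to normalize pr's column padding:

```shell
printf 'A\nA\nA\nB\nB\nB\nC\nC\nC\n' | pr -t -3 | tr -s ' \t' ' '
```

This prints three `A B C` rows, matching the awk version above.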


Thanks for Your Support
