Use the fio tool.
1. Installation:
yum -y install fio
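The yum command above applies to RHEL/CentOS; on Debian/Ubuntu the package is normally installed with apt instead:
apt-get -y install fio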
2. FIO usage:
Random read:
fio -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=20G -numjobs=10 -runtime=1000 -group_reporting -name=mytest
Parameter notes (a job-file version of the random-read command is sketched after this list):
filename=/dev/sdb1  The file or device to test; usually a file in the data directory of the disk you want to measure. Pointing a write test at a raw device such as /dev/sdb1 will destroy the data on it.
direct=1  Bypass the OS page cache (O_DIRECT) during the test, which makes the results more realistic.
rw=randwrite  Test random-write I/O.
rw=randrw  Test mixed random read and write I/O.
bs=16k  Block size of 16 KB per I/O.
bsrange=512-2048  Same idea, but specifies a range of block sizes.
size=5g  Total test file size for this run is 5 GB, read or written one block at a time.
numjobs=30  Run 30 test threads/jobs for this run.
runtime=1000  Run for 1000 seconds; if omitted, fio keeps going until the whole 5 GB file has been processed block by block.
ioengine=psync  Use the psync I/O engine (synchronous pread/pwrite).
rwmixwrite=30  In mixed read/write mode, writes account for 30% of the I/O.
group_reporting  Controls how results are displayed: aggregate the statistics of all jobs instead of reporting each one separately.
In addition:
lockmem=1g  Use only 1 GB of memory for the test.
zero_buffers  Initialize the I/O buffers with zeros.
nrfiles=8  Number of files generated per job.
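The same options can also live in a job file instead of on the command line. A minimal sketch, assuming a file named randread.fio (the file name, job name and sizes are arbitrary), roughly equivalent to the random-read command above:
[global]
direct=1
thread
ioengine=psync
bs=16k
size=2G
runtime=60
group_reporting

[mytest]
rw=randread
numjobs=10
Run it with: fio randread.fio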
Sequential read:
fio -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
Random write:
fio -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
Sequential write:
fio -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
Mixed random read/write:
fio -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=100 -group_reporting -name=mytest -ioscheduler=noop
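All of the commands above use the synchronous psync engine, for which -iodepth 1 is effectively the only meaningful depth. As a rough sketch (the engine, depth and sizes here are just example choices), a deeper queue can be exercised by switching to an asynchronous engine such as libaio:
fio -direct=1 -iodepth 32 -rw=randread -ioengine=libaio -bs=4k -size=20G -numjobs=4 -runtime=300 -group_reporting -name=mytest-libaio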
3. Example test run:
[root@localhost tmp]# fio -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=5 -group_reporting -name=mytest
mytest: (g=0): rw=randread, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.8
Starting 10 threads
mytest: Laying out IO file(s) (1 file(s) / 2048MB)
mytest: Laying out IO file(s) (1 file(s) / 2048MB)
mytest: Laying out IO file(s) (1 file(s) / 2048MB)
Jobs: 10 (f=10): [r(10)] [100.0% done] [3152KB/0KB/0KB /s] [197/0/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=10): err= 0: pid=15442: Thu Sep 24 16:51:05 2015
read : io=14784KB, bw=2918.3KB/s, iops=182, runt= 5066msec
clat (msec): min=3, max=460, avg=54.41, stdev=63.26
lat (msec): min=3, max=460, avg=54.41, stdev=63.26
clat percentiles (msec):
| 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 16],
| 30.00th=[ 19], 40.00th=[ 24], 50.00th=[ 31], 60.00th=[ 43],
| 70.00th=[ 56], 80.00th=[ 80], 90.00th=[ 128], 95.00th=[ 184],
| 99.00th=[ 338], 99.50th=[ 379], 99.90th=[ 461], 99.95th=[ 461],
| 99.99th=[ 461]
bw (KB /s): min= 98, max= 593, per=10.18%, avg=297.06, stdev=109.73
lat (msec) : 4=0.76%, 10=7.03%, 20=25.76%, 50=32.03%, 100=19.59%
lat (msec) : 250=12.77%, 500=2.06%
cpu : usr=0.01%, sys=0.13%, ctx=942, majf=0, minf=41
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=924/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=14784KB, aggrb=2918KB/s, minb=2918KB/s, maxb=2918KB/s, mint=5066msec, maxt=5066msec
Disk stats (read/write):
dm-1: ios=905/2, merge=0/0, ticks=53056/149, in_queue=53856, util=98.12%, aggrios=928/3, aggrmerge=0/0, aggrticks=55020/149, aggrin_queue=55168, aggrutil=97.95%
sda: ios=928/3, merge=0/0, ticks=55020/149, in_queue=55168, util=97.95%
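The key figures in this output are the aggregate read bandwidth (bw=2918.3KB/s), the IOPS (iops=182), and the completion-latency (clat) percentiles. If the results need to be post-processed by a script, fio can also write them as JSON; a sketch of the same run with JSON output (the output file name is arbitrary):
fio -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=5 -group_reporting -name=mytest --output-format=json --output=result.json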