FIO tool in action
fio overview
FIO
A powerful tool for testing the Linux I/O subsystem and schedulers.
> Fio was written by Jens Axboe <axboe@kernel.dk> to enable flexible testing of the Linux I/O subsystem and schedulers. He got tired of writing specific test applications to simulate a given workload, and found that the existing I/O benchmark/test tools out there weren't flexible enough to do what he wanted.
FIO install
```
# centos (fio is available from the EPEL repository on older releases)
yum install -y fio
```
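fio is packaged for the other platforms touched on later in this post as well; assuming apt on Debian/Ubuntu and Homebrew on macOS:

```
# debian / ubuntu
sudo apt-get install -y fio
# macos
brew install fio
```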
fio test
Pick the disk device to test:
```
# check disk info
lsblk
```
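Since the tests below target a raw block device and any write workload will destroy whatever is on it, it is worth confirming the device is not mounted first. A quick check, using this post's example device:

```
$ lsblk /dev/vdd
$ mount | grep vdd   # no output means the device is not mounted
```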
test action
- Sequential read test:
```
$ sudo fio -direct=1 -iodepth=128 -rw=read -ioengine=libaio -bs=128k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/vdd -name=test
test: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=128
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=251MiB/s,w=0KiB/s][r=2010,w=0 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=4305: Wed Nov 9 10:58:41 2022
read: IOPS=1853, BW=232MiB/s (243MB/s)(226GiB/1000062msec)
slat (usec): min=3, max=176, avg=10.48, stdev= 4.28
clat (msec): min=21, max=189, avg=69.04, stdev= 6.86
lat (msec): min=21, max=189, avg=69.05, stdev= 6.86
clat percentiles (msec):
| 1.00th=[ 56], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 65],
| 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 68], 60.00th=[ 69],
| 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 78], 95.00th=[ 81],
| 99.00th=[ 93], 99.50th=[ 102], 99.90th=[ 118], 99.95th=[ 138],
| 99.99th=[ 169]
bw ( KiB/s): min=192000, max=265216, per=99.99%, avg=237268.10, stdev=10703.72, samples=2000
iops : min= 1500, max= 2072, avg=1853.65, stdev=83.62, samples=2000
lat (msec) : 50=0.01%, 100=99.37%, 250=0.62%
cpu : usr=0.35%, sys=2.62%, ctx=488619, majf=0, minf=4107
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=1853894,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
READ: bw=232MiB/s (243MB/s), 232MiB/s-232MiB/s (243MB/s-243MB/s), io=226GiB (243GB), run=1000062-1000062msec
Disk stats (read/write):
vdd: ios=1854792/1, merge=273/0, ticks=128357846/1, in_queue=127954002, util=100.00%
```

- Random read test:
```
$ sudo fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/vdd -name=Rand_Read_Testing
Rand_Read_Testing: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [r(1)][6.8%][r=3224KiB/s,w=0KiB/s][r=806,w=0 IOPS][eta 05m:00s]
fio: terminating on signal 2
Rand_Read_Testing: (groupid=0, jobs=1): err= 0: pid=17095: Wed Nov 9 11:57:38 2022
read: IOPS=815, BW=3262KiB/s (3340kB/s)(72.8MiB/22866msec)
slat (nsec): min=2115, max=45294, avg=5667.10, stdev=2685.80
clat (msec): min=2, max=1058, avg=156.91, stdev=105.39
lat (msec): min=2, max=1058, avg=156.91, stdev=105.39
clat percentiles (msec):
| 1.00th=[ 16], 5.00th=[ 31], 10.00th=[ 45], 20.00th=[ 69],
| 30.00th=[ 94], 40.00th=[ 117], 50.00th=[ 140], 60.00th=[ 165],
| 70.00th=[ 190], 80.00th=[ 220], 90.00th=[ 284], 95.00th=[ 359],
| 99.00th=[ 527], 99.50th=[ 567], 99.90th=[ 768], 99.95th=[ 818],
| 99.99th=[ 1003]
bw ( KiB/s): min= 1616, max= 3704, per=100.00%, avg=3277.33, stdev=485.40, samples=45
iops : min= 404, max= 926, avg=819.33, stdev=121.35, samples=45
lat (msec) : 4=0.01%, 10=0.18%, 20=1.74%, 50=10.42%, 100=20.43%
lat (msec) : 250=53.37%, 500=12.51%, 750=1.24%, 1000=0.10%
cpu : usr=0.38%, sys=1.29%, ctx=18627, majf=0, minf=138
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=18648,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
READ: bw=3262KiB/s (3340kB/s), 3262KiB/s-3262KiB/s (3340kB/s-3340kB/s), io=72.8MiB (76.4MB), run=22866-22866msec
Disk stats (read/write):
vdd: ios=18557/1, merge=0/0, ticks=2909219/1, in_queue=2897777, util=99.59%
```
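The parameter table at the end of this post describes the random-write IOPS test; assembling those values gives a command of the following shape (again, destructive to any data on /dev/vdd):

```
$ sudo fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/vdd -name=Rand_Write_Testing
```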
fio test: macOS SSD performance
https://www.nivas.hr/blog/2017/09/19/measuring-disk-io-performance-macos/
```
# start test (flags reconstructed from the result line below; size, runtime and filename are illustrative)
fio --name=test --rw=randrw --bs=4k --ioengine=posixaio --iodepth=64 --size=1g --runtime=60 --filename=fio_testfile
```
Result:
```
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=64
```
Standardized test script
A standardized script for running FIO tests against a local disk on CentOS:
```
cat << 'EOF' > disk_fio_test.sh
```
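A minimal sketch of what such a standardized script might contain, assuming it wraps the four canonical workloads from this post and takes the target device as its first argument (quoting EOF above keeps `$DEV` from expanding when the file is written):

```bash
#!/bin/bash
# disk_fio_test.sh -- minimal sketch; workload parameters mirror the commands above
DEV=${1:?usage: disk_fio_test.sh /dev/sdX}   # target device (all data on it will be destroyed)

# random write / random read IOPS (small blocks)
fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=1G \
    -numjobs=1 -runtime=1000 -group_reporting -filename=$DEV -name=Rand_Write_Testing
fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G \
    -numjobs=1 -runtime=1000 -group_reporting -filename=$DEV -name=Rand_Read_Testing

# sequential write / sequential read throughput (large blocks)
fio -direct=1 -iodepth=128 -rw=write -ioengine=libaio -bs=1024k -size=1G \
    -numjobs=1 -runtime=1000 -group_reporting -filename=$DEV -name=Seq_Write_Testing
fio -direct=1 -iodepth=128 -rw=read -ioengine=libaio -bs=1024k -size=1G \
    -numjobs=1 -runtime=1000 -group_reporting -filename=$DEV -name=Seq_Read_Testing
```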
Run the standardized script to test, using /dev/sdm as the target disk:
```
sudo su -
sh disk_fio_test.sh /dev/sdm   # run the script created above (argument handling assumed)
```
FIO parameter reference
The table below explains each parameter, using the cloud-disk random-write IOPS test (randwrite) command as the example.
| Parameter | Description |
|---|---|
| `-direct=1` | Bypass the I/O cache during the test and write directly to the device (direct I/O). |
| `-iodepth=128` | With asynchronous I/O (AIO), keep at most 128 I/O requests in flight at a time. |
| `-rw=randwrite` | I/O pattern for the test: random writes. Other values: `randread` (random reads), `read` (sequential reads), `write` (sequential writes), `randrw` (mixed random reads and writes). |
| `-ioengine=libaio` | Use libaio (Linux native AIO). Applications issue I/O in two ways. Synchronous: one I/O request is issued at a time and the caller waits for the kernel to complete it, so per-thread iodepth never exceeds 1; concurrency must come from many threads, typically 16 to 32 working in parallel to keep the queue full. Asynchronous: engines such as libaio submit a batch of requests and then wait for a batch of completions, reducing round trips and improving efficiency. |
| `-bs=4k` | Block size of a single I/O: 4 KiB (also the default). For IOPS tests, use a small bs such as 4k; for throughput tests, use a large bs such as 1024k. |
| `-size=1G` | Size of the test file: 1 GiB. |
| `-numjobs=1` | Number of test threads: 1. |
| `-runtime=1000` | Test duration: 1000 seconds. If unset, the test keeps issuing `-bs`-sized I/Os until the file of the size given by `-size` has been fully written. |
| `-group_reporting` | Aggregate statistics across all jobs into one summary instead of reporting each job separately. |
| `-filename=/dev/your_device` | Device name of the cloud disk under test, e.g. `/dev/your_device`. |
| `-name=Rand_Write_Testing` | Job name, here Rand_Write_Testing; it can be set to anything. |
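As an aside, the same parameters can live in an ini-style fio job file instead of a long command line, which is easier to keep in version control. A sketch, with the file name randwrite.fio chosen arbitrarily:

```
cat << 'EOF' > randwrite.fio
[global]
direct=1
iodepth=128
ioengine=libaio
bs=4k
size=1G
numjobs=1
runtime=1000
group_reporting

[Rand_Write_Testing]
rw=randwrite
filename=/dev/your_device
EOF

sudo fio randwrite.fio
```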