Optimizing the System I/O Scheduler
I/O scheduler
An I/O scheduler decides how a storage device's read and write requests are dispatched; tuning it is about improving read/write performance.
I/O optimization during SQL tuning
While studying SQL optimization, I found that the system-side optimization boils down to tuning the I/O scheduler. Many of the articles I read on I/O scheduler tuning did not match my Arch Linux box: my scheduler is none, while most of them recommend noop. The reason is simply that those articles are outdated.
Starting with Linux kernel 5.0, multiqueue I/O scheduling replaced non-multiqueue I/O scheduling, and Linux 5.3 dropped the non-multiqueue schedulers entirely. The old schedulers such as noop were single-queue and are now obsolete.
Viewing and changing the I/O scheduler
Viewing
cat /sys/block/nvme0n1/queue/scheduler
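On a typical NVMe system this prints something like the following (the exact list depends on the kernel version and which scheduler modules are loaded):

[none] mq-deadline kyber bfq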
The scheduler shown in square brackets is the one currently in use; here that is none, which is exactly the right choice for an NVMe SSD.
Temporary change
echo bfq > /sys/block/nvme0n1/queue/scheduler
Although that is the documented syntax, I could not make the change even as root. In any case, a change made this way is lost after a reboot.
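A likely cause is that the bfq module was not loaded, or that the redirection was evaluated by an unprivileged shell. A sketch of the usual workaround, mirroring the commands shown later in this article (nvme0n1 is my device; substitute your own):

# load the bfq scheduler module so it appears in the scheduler list
sudo modprobe bfq
# let tee perform the privileged write instead of shell redirection
echo bfq | sudo tee /sys/block/nvme0n1/queue/scheduler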
Permanent change
vim /etc/default/grub
My system needs no change, though, since the default is already none.
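For completeness: on modern multiqueue kernels a udev rule is a commonly used way to make the choice persistent without touching GRUB. A minimal sketch, assuming a udev-based distro; the file name is illustrative:

# /etc/udev/rules.d/60-ioschedulers.rules
# use "none" for NVMe drives and "mq-deadline" for SATA/SCSI disks
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="mq-deadline"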
Multiqueue I/O schedulers
The following I/O schedulers are designed for multiqueue devices. These map I/O requests to multiple queues, which are serviced by kernel threads distributed across multiple CPUs.
bfq (Budget Fair Queuing) (Multiqueue)
Designed to provide good interactive response, especially for slower I/O devices. This is a complex I/O scheduler and has a relatively high per-operation overhead so it is not ideal for devices with slow CPUs or high throughput I/O devices. Fair sharing is based on the number of sectors requested and heuristics rather than a time slice. Desktop users may like to experiment with this I/O scheduler as it can be advantageous when loading large applications.
kyber (Multiqueue)
Designed for fast multi-queue devices and is relatively simple. Has two request queues:
- Synchronous requests (e.g. blocked reads)
- Asynchronous requests (e.g. writes)
There are strict limits on the number of request operations sent to the queues. In theory this limits the time waiting for requests to be dispatched, and hence should provide quick completion time for requests that are high priority.
none (Multiqueue)
The multi-queue no-op I/O scheduler. Does no reordering of requests, minimal overhead. Ideal for fast random I/O devices such as NVMe.
mq-deadline (Multiqueue)
This is an adaptation of the deadline I/O scheduler but designed for multiqueue devices. A good all-rounder with fairly low CPU overhead.
Non-multiqueue I/O schedulers
NOTE: Non-multiqueue I/O schedulers have been deprecated from Ubuntu Eoan Ermine 19.10 onwards, as they are no longer supported in the Linux 5.3 kernel.
deadline
This fixes starvation issues seen in other schedulers. It uses 3 queues for I/O requests:
- Sorted
- Read FIFO - read requests stored chronologically
- Write FIFO - write requests stored chronologically
Requests are issued from the sorted queue unless a request at the head of the read or write FIFO expires. Read requests are preferred over write requests. Read requests have a 500 ms expiration time, write requests a 5 s expiration time.
cfq (Completely Fair Queueing)
- Per-process sorted queues for synchronous I/O requests.
- Fewer queues for asynchronous I/O requests.
- Priorities from ionice are taken into account (see the example below).
Each queue is allocated a time slice for fair queuing. There may be wasteful idle time if a time slice quantum has not expired.
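For instance, one can raise a process's I/O priority with the ionice utility; the PID here is hypothetical:

# best-effort class (-c 2), highest priority within it (-n 0), for PID 1234
sudo ionice -c 2 -n 0 -p 1234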
noop (No-operation)
Performs merging of I/O requests but no sorting. Good for random access devices (flash, ramdisk, etc) and for devices that sort I/O requests such as advanced storage controllers.
Selecting I/O Schedulers
Prior to Ubuntu 19.04 with Linux 5.0 or Ubuntu 18.04.3 with Linux 4.15, multiqueue I/O scheduling was not enabled by default and only the deadline, cfq and noop I/O schedulers were available.
For Ubuntu 19.04 with Linux 5.0 or Ubuntu 18.04.3 with Linux 5.0 onwards, multiqueue is enabled by default, providing the bfq, kyber, mq-deadline and none I/O schedulers. For Ubuntu 19.10 with Linux 5.3, the deadline, cfq and noop I/O schedulers are deprecated.
With the Linux 5.0 kernels, one can disable these and fall back to the non-multiqueue I/O schedulers using a kernel parameter; for example, for SCSI devices one can use:
scsi_mod.use_blk_mq=0
Add this to the GRUB_CMDLINE_LINUX_DEFAULT string in /etc/default/grub and run sudo update-grub to enable this option.
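A sketch of what the resulting line might look like (quiet and splash are placeholders for whatever flags your file already contains):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash scsi_mod.use_blk_mq=0"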
Changing an I/O scheduler is performed on a per block device basis. For example, for the non-multiqueue device /dev/sda one can see the currently available I/O schedulers using the following:
cat /sys/block/sda/queue/scheduler
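On such a device this typically prints something like (the active scheduler is bracketed):

noop deadline [cfq]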
to change this to deadline use:
1 | echo "deadline" | sudo tee /sys/block/sda/queue/scheduler |
For multiqueue devices the default will show:
cat /sys/block/sda/queue/scheduler
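On a default multiqueue setup the output is typically:

[mq-deadline] none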
To use kyber, load the module:
sudo modprobe kyber-iosched
and enable it:
1 | echo "kyber" | sudo tee /sys/block/sda/queue/scheduler |
To use bfq, load the module:
sudo modprobe bfq
and enable it:
1 | echo "bfq" | sudo tee /sys/block/sda/queue/scheduler |
Tuning I/O Schedulers
Each I/O scheduler has a default set of tunable options that may be adjusted to help improve performance or fair sharing for your particular use case. The following kernel documentation covers these per-I/O scheduler tunable options:
- deadline (and mq-deadline) deadline-iosched.txt
- cfq cfq-iosched.txt
- bfq bfq-iosched.txt
- kyber kyber-iosched.txt
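These tunables live under the block device's iosched directory in sysfs. A quick sketch, assuming device sda is running mq-deadline (expiry values are in milliseconds):

# list the active scheduler's tunables
ls /sys/block/sda/queue/iosched/
# e.g. relax mq-deadline's write expiry from its 5000 ms default
echo 10000 | sudo tee /sys/block/sda/queue/iosched/write_expire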
Best I/O scheduler to use
Different I/O requirements may benefit from changing from the Ubuntu distro default. A quick start guide to select a suitable I/O scheduler is below. The results are based on running 25 different synthetic I/O patterns generated using fio on ext4, xfs and btrfs with the various I/O schedulers using the 5.3 kernel.
SSD or NVMe drives
It is worth noting that there is little difference in throughput between the mq-deadline/none/bfq I/O schedulers when using fast multi-queue SSD configurations or fast NVME devices. In these cases it may be preferable to use the ‘none’ I/O scheduler to reduce CPU overhead.
HDD
Avoid using the none/noop I/O schedulers for an HDD, as sorting requests on block addresses reduces seek latencies and neither of these I/O schedulers supports this feature. mq-deadline has been shown to be advantageous for the more demanding server-related I/O; however, desktop users may like to experiment with bfq, as it has been shown to load some applications faster.
Of course, your use-case may differ, the above are just suggestions to start with based on some synthetic tests. You may find other choices with adjustments to the I/O scheduler tunables produce better results.
Summary
- bfq: fair-sharing strategy; it spends CPU cycles computing how to share I/O, so raw throughput is lower, but a personal computer running many programs feels more responsive.
- kyber: a fast, simple scheduler for multi-queue devices; suitable for servers.
- none: suited to fast random-I/O devices such as NVMe SSDs; minimizes CPU overhead.
- mq-deadline: handles heavy read/write loads well; suitable for databases.
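To decide which of the above applies to each disk, here is a small sketch that lists every block device with its rotational flag (1 = HDD, 0 = SSD/NVMe) and its active scheduler; the device glob patterns are illustrative:

for dev in /sys/block/sd* /sys/block/nvme*n*; do
    [ -e "$dev" ] || continue    # skip patterns that matched nothing
    printf '%s rotational=%s scheduler=%s\n' "${dev##*/}" \
        "$(cat "$dev/queue/rotational")" "$(cat "$dev/queue/scheduler")"
done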