Enabling TRIM on Linux
On an SSD that has been in use for a long time and whose free lists are exhausted, write performance degrades severely, even if 90% of the disk space is still free.
Note: this is because an SSD stores data in flash, and a previously used block must be erased before it can be rewritten, so every overwrite generates extra I/O and consumes additional system resources. A mechanical disk simply rewrites in place without erasing. As a result, under overwrite-heavy workloads, or when no free-list space remains, an SSD shows severe I/O degradation.
1. On Linux, enabling TRIM lets the drive regenerate its free lists automatically.
How to enable TRIM:
1. Use the ext4 filesystem (recommended).
2. The kernel must be 2.6.28 or later.
3. Check whether the drive supports TRIM: hdparm -I /dev/sda
* Data Set Management TRIM supported (this line indicates support)
4. Add the discard mount option in fstab:
/dev/sda1 / ext4 discard,defaults 0 1
5. For a swap partition, add the discard option to its fstab entry as well. Lowering swappiness additionally reduces swap writes to the SSD:
echo 1 > /proc/sys/vm/swappiness
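Steps 3 and 4 can be combined into a quick check before touching fstab. This is a minimal sketch: /dev/sda is an assumption (substitute your drive), and `hdparm -I` needs root.

```shell
#!/bin/sh
# Sketch: check whether a drive advertises TRIM support before
# adding 'discard' to fstab. /dev/sda is an assumption.

# Returns success if `hdparm -I` output on stdin advertises TRIM.
trim_supported() {
    grep -q 'Data Set Management TRIM supported'
}

if hdparm -I /dev/sda 2>/dev/null | trim_supported; then
    echo "TRIM supported: safe to add 'discard' to the fstab entry"
else
    echo "TRIM not reported (drive unsupported, or hdparm unavailable)"
fi
```

If the drive reports support, the discard option takes effect on the next remount.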
2. The noop scheduler is recommended for SSDs
Linux has several different disk schedulers, which are responsible for determining in which order read and write requests to the disk are handled. Using the noop scheduler means that Linux will simply handle requests in the order they are received, without giving any consideration to where the data physically resides on the disk. This is good for solid-state drives because they have no moving parts, and seek times are identical for all sectors on the disk.
The 2.6 Linux kernel includes selectable I/O schedulers. They control the way the kernel commits reads and writes to disks – the intention of providing different schedulers is to allow better optimisation for different classes of workload.
Without an I/O scheduler, the kernel would basically just issue each request to disk in the order that it received them. This could result in massive hard disk thrashing: if one process was reading from one part of the disk, and one writing to another, the heads would have to seek back and forth across the disk for every operation. The scheduler's main goal is to optimise disk access times.
An I/O scheduler can use the following techniques to improve performance:
- Request merging
- The scheduler merges adjacent requests together to reduce disk seeking
- Elevator
- The scheduler orders requests based on their physical location on the block device, and it basically tries to seek in one direction as much as possible.
- Prioritisation
- The scheduler has complete control over how it prioritises requests, and can do so in a number of ways
All I/O schedulers should also take into account resource starvation, to ensure requests eventually do get serviced!
The Schedulers
There are currently 4 available:
- No-op Scheduler
- Anticipatory IO Scheduler (AS)
- Deadline Scheduler
- Complete Fair Queueing Scheduler (CFQ)
No-op Scheduler
This scheduler only implements request merging.
Anticipatory IO Scheduler
The anticipatory scheduler is the default scheduler in older 2.6 kernels – if you've not specified one, this is the one that will be loaded. It implements request merging, a one-way elevator, read and write request batching, and attempts some anticipatory reads by holding off a bit after a read batch if it thinks a user is going to ask for more data. It tries to optimise for physical disks by avoiding head movements if possible – one downside to this is that it probably gives highly erratic performance on database or storage systems.
Deadline Scheduler
The deadline scheduler implements request merging, a one-way elevator, and imposes a deadline on all operations to prevent resource starvation. Because writes return instantly within Linux, with the actual data being held in cache, the deadline scheduler will also prefer readers – as long as the deadline for a write request hasn't passed. The kernel docs suggest this is the preferred scheduler for database systems, especially if you have TCQ aware disks, or any system with high disk performance.
Complete Fair Queueing Scheduler (CFQ)
The complete fair queueing scheduler implements both request merging and the elevator, and attempts to give all users of a particular device the same number of IO requests over a particular time interval. This should make it more efficient for multiuser systems. It seems that Novell SLES sets cfq as the scheduler by default, as does the latest Ubuntu release. As of the 2.6.18 kernel, this is the default scheduler in kernel.org releases.
Changing Schedulers
The most reliable way to change schedulers is to set the kernel option “elevator” at boot time. You can set it to one of “as”, “cfq”, “deadline” or “noop”, to set the appropriate scheduler.
It seems under more recent 2.6 kernels (2.6.11, possibly earlier), you can change the scheduler at runtime by echoing the name of the scheduler into /sys/block/$devicename/queue/scheduler, where the device name is the basename of the block device, eg "sda" for /dev/sda.
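The runtime method above can be sketched as a short script. The device name "sda" is an assumption, and actually switching schedulers requires root, so the write is left commented out.

```shell
#!/bin/sh
# Sketch: inspect (and optionally switch) the I/O scheduler at runtime.
# "sda" is an assumption -- substitute the basename of your block device.
dev=sda
sched_file="/sys/block/$dev/queue/scheduler"

# Prints the active scheduler, which sysfs shows in [brackets],
# e.g. "cfq" from "noop anticipatory deadline [cfq]".
active_sched() {
    sed -n 's/.*\[\(.*\)\].*/\1/p'
}

if [ -r "$sched_file" ]; then
    echo "active scheduler for $dev: $(active_sched < "$sched_file")"
    # To switch at runtime (root required):
    # echo noop > "$sched_file"
else
    echo "no scheduler interface for $dev"
fi

# For a permanent choice, pass elevator=noop (or as/cfq/deadline)
# on the kernel command line at boot, as described above.
```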
Which one should I use?
I've not personally done any testing on this, so I can't speak from experience yet. The anticipatory scheduler will be the default one for a reason however - it is optimised for the common case. If you've only got single disk systems (ie, no RAID - hardware or software) then this scheduler is probably the right one for you. If it's a multiuser system, you will probably find CFQ or deadline providing better performance, and the numbers seem to back deadline giving the best performance for database systems.
The noop scheduler has minimal cpu overhead in managing the queues and may be well suited to systems with either low seek times, such as an SSD or systems using a hardware RAID controller, which often has its own IO scheduler designed around the RAID semantics.
Tuning the I/O schedulers
The schedulers may have parameters that can be tuned at runtime. Read the Linux kernel documentation on the schedulers listed in the References section below.
More information
Read the documents mentioned in the References section below, especially the Linux kernel documentation on the anticipatory and deadline schedulers.
Source: https://www.wlug.org.nz/LinuxIoScheduler
3. Re-trim the SSD's free space with the wiper tool
wiper.sh ships with the hdparm source, but RHEL 5 and 6 do not include it by default, so building hdparm from source is recommended.
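A minimal sketch of invoking wiper.sh, assuming it has been obtained from the wiper/ subdirectory of the hdparm source tarball; the path and device below are placeholders for your system. wiper.sh performs a dry run unless --commit is given, and --commit can destroy data if pointed at the wrong device.

```shell
#!/bin/sh
# Sketch: re-trim free space with wiper.sh from the hdparm source tree.
# The path and device are assumptions -- adjust for your system.
WIPER=./wiper.sh            # e.g. extracted from the hdparm tarball's wiper/ dir
DEVICE=/dev/sda1

if [ -f "$WIPER" ]; then
    sh "$WIPER" "$DEVICE"              # dry run: reports what would be trimmed
    # sh "$WIPER" --commit "$DEVICE"   # actually issues TRIM (destructive if misused)
else
    echo "wiper.sh not found; build hdparm from source to obtain it"
fi
```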
Last updated: 2017-04-03 21:30:14