	--- GEOM BASED DISK SCHEDULERS FOR FREEBSD ---

This code contains a framework for GEOM-based disk schedulers and a
couple of sample scheduling algorithms that use the framework and
implement two forms of "anticipatory scheduling" (see below for more
details).

As a quick example of what this code can give you, try to run "dd",
"tar", or some other program with a highly SEQUENTIAL access pattern
together with "cvs", "cvsup", "svn" or some other program with a
highly RANDOM access pattern (this is not a made-up example: it is
pretty common for developers to have one or more apps doing random
accesses, and others doing sequential accesses, e.g., loading large
binaries from disk, checking the integrity of tarballs, watching
media streams and so on).

These are the results we get on a local machine (AMD BE2400 dual
core CPU, SATA 250GB disk):

    /mnt is a partition mounted on /dev/ad0s1f

    cvs:      cvs -d /mnt/home/ncvs-local update -Pd /mnt/ports
    dd-read:  dd bs=128k of=/dev/null if=/dev/ad0 (or ad0.sched.)
    dd-write: dd bs=128k if=/dev/zero of=/mnt/largefile

                       NO SCHEDULER        RR SCHEDULER
    dd-read only        72 MB/s  ---        72 MB/s  ---
    dd-write only       55 MB/s  ---        55 MB/s  ---
    dd-read+cvs          6 MB/s  ok         30 MB/s  ok
    dd-write+cvs        55 MB/s  slooow     14 MB/s  ok

As you can see, when cvs runs concurrently with dd, performance
drops dramatically, and depending on whether dd reads or writes,
one of the two is severely penalized. The use of the RR scheduler
in this example makes the dd reader go much faster when competing
with cvs, and lets cvs make progress when competing with a writer.

To try the code out:

1. PLEASE MAKE SURE THAT THE DISK THAT YOU WILL BE USING FOR TESTS
   DOES NOT CONTAIN PRECIOUS DATA.
   This is experimental code, so we make no guarantees, though
   I am routinely using it on my desktop and laptop.

2. EXTRACT AND BUILD THE PROGRAMS
   A 'make install' in the directory should work (with root privs),
   or you can even try the binary modules.
   If you want to build the modules yourself, look at the Makefile.

3. LOAD THE MODULE, CREATE A GEOM NODE, RUN TESTS

   The scheduler's module must be loaded first:

	# kldload gsched_rr

   substitute with gsched_as to test AS. Then, supposing that you are
   using /dev/ad0 for testing, a scheduler can be attached to it with:

	# geom sched insert ad0

   The scheduler is inserted transparently into the geom chain, so
   mounted partitions and filesystems will keep working, but
   requests will now go through the scheduler.

   To change the scheduler on the fly, you can reconfigure the geom:

	# geom sched configure -a as ad0.sched.

   assuming that gsched_as was loaded previously.

4. REMOVE THE SCHEDULER

   In principle it is possible to remove the scheduler module
   even on an active chain by doing

	# geom sched destroy ad0.sched.

   However, there is a race in the geom subsystem which makes
   the removal unsafe if there are active requests on a chain.
   So, in order to reduce the risk of data loss, make sure
   you don't remove a scheduler from a chain with ongoing transactions.

--- NOTES ON THE SCHEDULERS ---

The important contribution of this code is the framework to experiment
with different scheduling algorithms. 'Anticipatory scheduling'
is a very powerful technique based on the following reasoning:

    The disk throughput is much better if it serves sequential requests.
    If we have a mix of sequential and random requests, and we see a
    non-sequential request, do not serve it immediately but instead wait
    a little bit (2..5ms) to see if there is another one coming that
    the disk can serve more efficiently.
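
In C, the core decision might look like the sketch below. This is
illustrative only: the type and function names are made up, not the
actual gsched_rr/gsched_as code.

	enum action { DISPATCH_NOW, WAIT_A_BIT };

	struct request {
		long long offset;	/* first byte addressed */
		long long length;	/* bytes transferred */
	};

	/*
	 * Decide what to do with the request at the head of the queue.
	 * 'last_end' is where the previously served request finished;
	 * a request starting exactly there keeps the head streaming,
	 * so it is served at once.  Anything else is held for a short
	 * anticipation window, in the hope that the process that just
	 * completed an I/O issues the next sequential one.
	 */
	static enum action
	anticipate(const struct request *req, long long last_end)
	{
		if (req->offset == last_end)
			return (DISPATCH_NOW);
		return (WAIT_A_BIT);	/* arm a 2..5ms timer, then serve anyway */
	}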

There are many details that should be added to make sure that the
mechanism is effective with different workloads and systems, to
gain a few extra percent in performance, to improve fairness,
insulation among processes etc. A discussion of the vast literature
on the subject is beyond the purpose of this short note.
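
As for the framework itself, a scheduling algorithm typically plugs
in as a small table of callbacks invoked as requests arrive, are
dispatched, and complete. The sketch below only illustrates that
pattern; it is not the module's actual interface.

	struct bio;			/* a disk request, opaque here */

	struct gsched_ops {
		const char *name;	/* e.g. "rr" or "as" */
		void *(*init)(void);	/* allocate per-disk state */
		void (*fini)(void *sc);
		void (*enqueue)(void *sc, struct bio *bp); /* new request */
		struct bio *(*next)(void *sc);	/* pick next, NULL to idle */
		void (*done)(void *sc, struct bio *bp);	/* completion event */
	};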

--------------------------------------------------------------------------

TRANSPARENT INSERT/DELETE

geom_sched is an ordinary geom module; however, it is convenient
to plug it transparently into the geom graph, so that one can
enable or disable scheduling on a mounted filesystem, and the
names in /etc/fstab do not depend on the presence of the scheduler.

To understand how this works in practice, remember that in GEOM
we have "provider" and "geom" objects.
Say that we want to hook a scheduler on provider "ad0",
accessible through pointer 'pp'. Originally, pp is attached to
geom "ad0" (same name, different object), accessible through
pointer old_gp:

    BEFORE        ---> [ pp --> old_gp ... ]

A normal "geom sched create ad0" call would create a new geom node
on top of provider ad0/pp, and export a newly created provider
("ad0.sched.", accessible through pointer newpp):

    AFTER create  ---> [ newpp --> gp --> cp ] ---> [ pp --> old_gp ... ]

On top of newpp, a whole tree will be created automatically, and we
can e.g. mount partitions on /dev/ad0.sched.s1d; those requests
will go through the scheduler, whereas any partition mounted on
the pre-existing device entries will not.

With the transparent insert mechanism, the original provider "ad0"/pp
is hooked to the newly created geom, as follows:

    AFTER insert  ---> [ pp --> gp --> cp ] ---> [ newpp --> old_gp ... ]

so anything that was previously using provider pp will now have
the requests routed through the scheduler node.
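
In code, the swap amounts to exchanging the geoms that the two
providers point to. The toy model below mirrors the diagrams above;
the structs are illustrative, not the kernel's actual g_provider
and g_geom definitions.

	struct geom;			/* opaque here */

	struct provider {
		struct geom *gp;	/* geom currently serving this provider */
	};

	/*
	 * Exchange the serving geoms of the original provider (pp) and
	 * the newly created one (newpp): pp now feeds the scheduler
	 * geom, while newpp exposes the original path, so existing
	 * device names transparently gain scheduling.
	 */
	static void
	transparent_insert(struct provider *pp, struct provider *newpp)
	{
		struct geom *tmp = pp->gp;

		pp->gp = newpp->gp;
		newpp->gp = tmp;
	}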

A removal ("geom sched destroy ad0.sched.") will restore the original
configuration.