.\" Copyright (c) 1985 The Regents of the University of California.
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\"    may be used to endorse or promote products derived from this software
.\"    without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" @(#)3.t	5.1 (Berkeley) 4/17/91
.ds RH Results of our observations
Results of our observations
When 4.2BSD was first installed on several large timesharing systems,
the degradation in performance was significant.
Informal measurements showed 4.2BSD providing 80% of the throughput
of 4.1BSD (based on load averages observed under a normal timesharing load).
Many of the initial problems found were caused by programs that were
not part of 4.1BSD. Using the techniques described in the previous
section and standard process profiling, several problems were identified.
Later work concentrated on the operation of the kernel itself.
In this section we discuss the problems uncovered; in the next
section we describe the changes made to the system.
The mail system was the first culprit identified as a major
contributor to the degradation in system performance.
At Lucasfilm the mail system is heavily used
on one machine, a VAX-11/780 with eight megabytes of memory.\**
\** During part of these observations the machine had only four
megabytes of memory.
Message traffic is usually between users on the same machine and ranges from
person-to-person telephone messages to per-organization distribution
lists. After conversion to 4.2BSD, it was
immediately noticed that mail to distribution lists of 20 or more people
caused the system load to jump by anywhere from 3 to 6 points.
The number of processes spawned by the \fIsendmail\fP program and
the messages sent from \fIsendmail\fP to the system logging
process, \fIsyslog\fP, generated significant load both from their
execution and their interference with basic system operation. The
number of context switches and disk transfers often doubled while
\fIsendmail\fP operated; the system call rate jumped dramatically.
System accounting information consistently
showed \fIsendmail\fP as the top cpu user on the system.
The network services provided in 4.2BSD add new capabilities to the system,
but are not without cost. The system uses one daemon process to accept
requests for each network service provided. The presence of many
such daemons increases the number of active processes and files,
and requires a larger configuration to support the same number of users.
The overhead of the routing and status updates can consume
several percent of the cpu.
Remote logins and shells incur more overhead
than their local equivalents.
For example, a remote login uses three processes and a
pseudo-terminal handler in addition to the local hardware terminal
handler. When using a screen editor, sending and echoing a single
character involves four processes on two machines.
The additional processes, context switching, network traffic, and
terminal handler overhead can roughly triple the load presented by one
local user.
To measure the costs of various functions in the kernel,
a profiling system was run for a 17 hour
period on one of our general timesharing machines.
While this is not as reproducible as a synthetic workload,
it certainly represents a realistic test.
This test was run on several occasions over a three month period.
Despite the long period of time that elapsed
between the test runs, the shape of the profiles,
as measured by the number of times each system call
entry point was called, was remarkably similar.
These profiles turned up several bottlenecks that are
discussed in the next section.
Several of these were new to 4.2BSD,
but most were caused by overloading of mechanisms
which worked acceptably well in previous BSD systems.
The general conclusion from our measurements was that
the ratio of user to system time had increased from
45% system / 55% user in 4.1BSD to 57% system / 43% user
in 4.2BSD.
Micro-operation benchmarks
To compare certain basic system operations
between 4.1BSD and 4.2BSD, a suite of benchmark
programs was constructed and run on a VAX-11/750 with 4.5 megabytes
of physical memory and two disks on a MASSBUS controller.
Tests were run with the machine operating in single user mode
under both 4.1BSD and 4.2BSD. Paging was localized to the drive
where the root file system was located.
The benchmark programs were modeled after the Kashtan benchmarks
[Kashtan80], with identical sources compiled under each system.
The programs and their intended purpose are described briefly
before the presentation of the results. The benchmark scripts
were run twice with the results shown as the average of
the two runs.
The source code for each program and the shell scripts used during
the benchmarks are included in the Appendix.
The set of tests shown in Table 1 was concerned with
system operations other than paging. The intent of most
benchmarks is clear. The result of running \fIsignocsw\fP is
deducted from the \fIcsw\fP benchmark to calculate the context
switch overhead. The \fIexec\fP tests use two different jobs to gauge
the cost of overlaying a larger program with a smaller one
and vice versa. The
``null job'' and ``big job'' differ solely in the size of their data
segments, 1 kilobyte versus 256 kilobytes. In both cases the
text segment of the parent is larger than that of the child.\**
\** These tests should also have measured the cost of expanding the
text segment; unfortunately time did not permit running additional tests.
All programs were compiled into the default load format that causes
the text segment to be demand paged out of the file system and shared
between processes.
syscall	perform 100,000 \fIgetpid\fP system calls
csw	perform 10,000 context switches using signals
signocsw	send 10,000 signals to yourself
pipeself4	send 10,000 4-byte messages to yourself
pipeself512	send 10,000 512-byte messages to yourself
pipediscard4	send 10,000 4-byte messages to child who discards
pipediscard512	send 10,000 512-byte messages to child who discards
pipeback4	exchange 10,000 4-byte messages with child
pipeback512	exchange 10,000 512-byte messages with child
forks0	fork-exit-wait 1,000 times
forks1k	sbrk(1024), fault page, fork-exit-wait 1,000 times
forks100k	sbrk(102400), fault pages, fork-exit-wait 1,000 times
vforks0	vfork-exit-wait 1,000 times
vforks1k	sbrk(1024), fault page, vfork-exit-wait 1,000 times
vforks100k	sbrk(102400), fault pages, vfork-exit-wait 1,000 times
execs0null	fork-exec ``null job''-exit-wait 1,000 times
execs0null (1K env)	execs0null above, with 1K environment added
execs1knull	sbrk(1024), fault page, fork-exec ``null job''-exit-wait 1,000 times
execs1knull (1K env)	execs1knull above, with 1K environment added
execs100knull	sbrk(102400), fault pages, fork-exec ``null job''-exit-wait 1,000 times
vexecs0null	vfork-exec ``null job''-exit-wait 1,000 times
vexecs1knull	sbrk(1024), fault page, vfork-exec ``null job''-exit-wait 1,000 times
vexecs100knull	sbrk(102400), fault pages, vfork-exec ``null job''-exit-wait 1,000 times
execs0big	fork-exec ``big job''-exit-wait 1,000 times
execs1kbig	sbrk(1024), fault page, fork-exec ``big job''-exit-wait 1,000 times
execs100kbig	sbrk(102400), fault pages, fork-exec ``big job''-exit-wait 1,000 times
vexecs0big	vfork-exec ``big job''-exit-wait 1,000 times
vexecs1kbig	sbrk(1024), fault page, vfork-exec ``big job''-exit-wait 1,000 times
vexecs100kbig	sbrk(102400), fault pages, vfork-exec ``big job''-exit-wait 1,000 times
Table 1. Kernel Benchmark programs.
The results of these tests are shown in Table 2. If the 4.1BSD results
are scaled to reflect their being run on a VAX-11/750, they
correspond closely to those found in [Joy80].\**
\** We assume that a VAX-11/750 runs at 60% of the speed of a VAX-11/780
(not considering floating point operations).
c || c s s || c s s || c s s
c || c s s || c s s || c s s
c || c | c | c || c | c | c || c | c | c
l || n | n | n || n | n | n || n | n | n.
Berkeley Software Distribution UNIX Systems
Test	Elapsed Time	User Time	System Time
\^	4.1	4.2	4.3	4.1	4.2	4.3	4.1	4.2	4.3
syscall	28.0	29.0	23.0	4.5	5.3	3.5	23.9	23.7	20.4
csw	45.0	60.0	45.0	3.5	4.3	3.3	19.5	25.4	19.0
signocsw	16.5	23.0	16.0	1.9	3.0	1.1	14.6	20.1	15.2
pipeself4	21.5	29.0	26.0	1.1	1.1	0.8	20.1	28.0	25.6
pipeself512	47.5	59.0	55.0	1.2	1.2	1.0	46.1	58.3	54.2
pipediscard4	32.0	42.0	36.0	3.2	3.7	3.0	15.5	18.8	15.6
pipediscard512	61.0	76.0	69.0	3.1	2.1	2.0	29.7	36.4	33.2
pipeback4	57.0	75.0	66.0	2.9	3.2	3.3	25.1	34.2	29.7
pipeback512	110.0	138.0	125.0	3.1	3.4	2.2	52.2	65.7	57.7
forks0	37.5	41.0	22.0	0.5	0.3	0.3	34.5	37.6	21.5
forks1k	40.0	43.0	22.0	0.4	0.3	0.3	36.0	38.8	21.6
forks100k	217.5	223.0	176.0	0.7	0.6	0.4	214.3	218.4	175.2
vforks0	34.5	37.0	22.0	0.5	0.6	0.5	27.3	28.5	17.9
vforks1k	35.0	37.0	22.0	0.6	0.8	0.5	27.2	28.6	17.9
vforks100k	35.0	37.0	22.0	0.6	0.8	0.6	27.6	28.9	17.9
execs0null	97.5	92.0	66.0	3.8	2.4	0.6	68.7	82.5	48.6
execs0null (1K env)	197.0	229.0	75.0	4.1	2.6	0.9	167.8	212.3	62.6
execs1knull	99.0	100.0	66.0	4.1	1.9	0.6	70.5	86.8	48.7
execs1knull (1K env)	199.0	230.0	75.0	4.2	2.6	0.7	170.4	214.9	62.7
execs100knull	283.5	278.0	216.0	4.8	2.8	1.1	251.9	269.3	202.0
vexecs0null	100.0	92.0	66.0	5.1	2.7	1.1	63.7	76.8	45.1
vexecs1knull	100.0	91.0	66.0	5.2	2.8	1.1	63.2	77.1	45.1
vexecs100knull	100.0	92.0	66.0	5.1	3.0	1.1	64.0	77.7	45.6
execs0big	129.0	201.0	101.0	4.0	3.0	1.0	102.6	153.5	92.7
execs1kbig	130.0	202.0	101.0	3.7	3.0	1.0	104.7	155.5	93.0
execs100kbig	318.0	385.0	263.0	4.8	3.1	1.1	286.6	339.1	247.9
vexecs0big	128.0	200.0	101.0	4.6	3.5	1.6	98.5	149.6	90.4
vexecs1kbig	125.0	200.0	101.0	4.7	3.5	1.3	98.9	149.3	88.6
vexecs100kbig	126.0	200.0	101.0	4.2	3.4	1.3	99.5	151.0	89.0
Table 2. Kernel Benchmark results (all times in seconds).
In studying the measurements we found that the basic system call
and context switch overhead did not change significantly
between 4.1BSD and 4.2BSD. The \fIsignocsw\fP results were caused by
the changes to the \fIsignal\fP interface, resulting
in an additional subroutine invocation for each call, not
to mention additional complexity in the system's implementation.
The times for the use of pipes are significantly higher under
4.2BSD because of their implementation on top of the interprocess
communication facilities. Under 4.1BSD pipes were implemented
without the complexity of the socket data structures and with
simpler code. Further, while not obviously a factor here,
4.2BSD pipes have less system buffer space provided them than
they had under 4.1BSD.
The \fIexec\fP tests shown in Table 2 were performed with 34 bytes of
environment information under 4.1BSD and 40 bytes under 4.2BSD.
To figure the cost of passing data through the environment,
the execs0null and execs1knull tests were rerun with
1065 additional bytes of data. The results are shown in Table 3.
c || c s || c s || c s
c || c s || c s || c s
c || c | c || c | c || c | c
l || n | n || n | n || n | n.
Test	Real	User	System
\^	4.1	4.2	4.1	4.2	4.1	4.2
execs0null	197.0	229.0	4.1	2.6	167.8	212.3
execs1knull	199.0	230.0	4.2	2.6	170.4	214.9
Table 3. Benchmark results with ``large'' environment (all times in seconds).
These results show that passing argument data is significantly
slower than under 4.1BSD: 121 microseconds/byte versus 93 microseconds/byte.
Even using this factor to adjust the basic overhead of an \fIexec\fP system
call, this facility is more costly under 4.2BSD than under 4.1BSD.
Path name translation
The single most expensive function performed by the kernel
is path name translation.
This has been true in almost every UNIX kernel [Mosher80];
we find that our general timesharing systems do about
500,000 name translations per day.
Name translations became more expensive in 4.2BSD for several reasons.
The single most expensive addition was the symbolic link.
Symbolic links
have the effect of increasing the average number of components
in path names to be translated.
As an insidious example,
consider the system manager who decides to change /tmp
to be a symbolic link to /usr/tmp.
A name such as /tmp/tmp1234 that previously required two component
translations now requires four component translations plus the cost
of reading the contents of the symbolic link.
The new directory format also changes the characteristics of
name translation.
The more complex format requires more computation to determine
where to place new entries in a directory.
Conversely, the additional information allows the system to only
look at active entries when searching;
hence searches of directories that had once grown large
but currently have few active entries are completed quickly.
The new format also stores the length of each name so that
costly string comparisons are done only on names that are the
same length as the name being sought.
The net effect of the changes is that the average time to
translate a path name in 4.2BSD is 24.2 milliseconds,
representing 40% of the time processing system calls,
that is, 19% of the total cycles in the kernel,
or 11% of all cycles executed on the machine.
The times are shown in Table 4. We have no comparable times
for \fInamei\fP under 4.1 though they are certain to
be significantly less.
part	time	% of kernel
self	14.3 ms/call	11.3%
child	9.9 ms/call	7.9%
total	24.2 ms/call	19.2%
Table 4. Call times for \fInamei\fP in 4.2BSD.
Nearly 25% of the time spent in the kernel is spent in the clock
handling routines.
(This is a clear indication that to avoid sampling bias when profiling the
kernel with our tools
we need to drive them from an independent clock.)
These routines are responsible for implementing timeouts,
scheduling the processor,
maintaining kernel statistics,
and tending various hardware operations such as
draining the terminal input silos.
Only minimal work is done in the hardware clock interrupt
routine (at high priority); the rest is performed (at a lower priority)
in a software interrupt handler scheduled by the hardware interrupt
handler.
In the worst case, with a clock rate of 100 Hz
and with every hardware interrupt scheduling a software
interrupt, the processor must field 200 interrupts per second.
The overhead of simply trapping and returning
is 3% of the machine cycles;
figuring out that there is nothing to do
requires an additional 2%.
Terminal multiplexors
The terminal multiplexors supported by 4.2BSD have programmable receiver
silos that may be used in two ways.
With the silo disabled, each character received causes an interrupt
to the processor.
Enabling the receiver silo allows the silo to fill before
generating an interrupt, allowing multiple characters to be read
for each interrupt.
At low rates of input, received characters will not be processed
for some time unless the silo is emptied periodically.
The 4.2BSD kernel uses the input silos of each terminal multiplexor,
and empties each silo on each clock interrupt.
This allows high input rates without the cost of per-character interrupts
while assuring low latency.
However, as character input rates on most machines are usually
low (about 25 characters per second),
this can result in excessive overhead.
At the current clock rate of 100 Hz, a machine with 5 terminal multiplexors
configured makes 500 calls to the receiver interrupt routines per second.
In addition, to achieve acceptable input latency
for flow control, each clock interrupt must schedule
a software interrupt to run the silo draining routines.\**
\** It is not possible to check the input silos at
the time of the actual clock interrupt without modifying the terminal
line disciplines, as the input queues may not be in a consistent state.\**
\** This implies that the worst case estimate for clock processing
is the basic overhead for clock processing.
Process table management
In 4.2BSD there are numerous places in the kernel where a linear search
of the process table is performed:
in \fIexit\fP to locate and wakeup a process's parent;
in \fIwait\fP when searching for \fB\s-2ZOMBIE\s+2\fP and
\fB\s-2STOPPED\s+2\fP processes;
in \fIfork\fP when allocating a new process table slot and
counting the number of processes already created by a user;
in \fInewproc\fP, to verify
that a process id assigned to a new process is not currently
in use;
in \fIkill\fP and \fIgsignal\fP to locate all processes to
which a signal should be delivered;
in \fIschedcpu\fP when adjusting the process priorities every
second; and
in \fIsched\fP when locating a process to swap out and/or swap
in.
These linear searches can incur significant overhead. The rule
for calculating the size of the process table is:
nproc = 20 + 8 * maxusers
meaning that a 48 user system will have a 404 slot process table.
With the addition of network services in 4.2BSD, as many as a dozen
server processes may be maintained simply to await incoming requests.
These servers are normally created at boot time, which causes them
to be allocated slots near the beginning of the process table. This
means that process table searches under 4.2BSD are likely to take
significantly longer than under 4.1BSD. System profiling shows
that as much as 20% of the time spent in the kernel on a loaded
system (a VAX-11/780) can be spent in \fIschedcpu\fP and, on average,
5-10% of the kernel time is spent in \fIschedcpu\fP.
The other searches of the proc table are similarly affected.
This shows the system can no longer tolerate using linear searches of
the process table.
File system buffer cache
The trace facilities described in section 2.3 were used
to gather statistics on the performance of the buffer cache.
We were interested in measuring the effectiveness of the
cache and the read-ahead policies.
With the file system block size in 4.2BSD four to
eight times that of a 4.1BSD file system, we were concerned
that large amounts of read-ahead might be performed without
being used. Also, we were interested in seeing if the
rules used to size the buffer cache at boot time were severely
affecting the overall cache operation.
The tracing package was run over a three hour period during
a peak mid-afternoon period on a VAX 11/780 with four megabytes
of memory.
This resulted in a buffer cache containing 400 kilobytes of memory
spread among 50 to 200 buffers
(the actual number of buffers depends on the size mix of
disk blocks being read at any given time).
The pertinent configuration information is shown in Table 5.
Controller	Drive	Device	File System
DEC MASSBUS	DEC RP06	hp0d	/usr
Emulex SC780	Fujitsu Eagle	hp1a	/usr/spool/news
	Fujitsu Eagle	hp2a	/tmp
Table 5. Active file systems during buffer cache tests.
During the test period the load average ranged from 2 to 13
with an average of 5.
The system had no idle time, 43% user time, and 57% system time.
The system averaged 90 interrupts per second
(excluding the system clock interrupts),
220 system calls per second,
and 50 context switches per second (40 voluntary, 10 involuntary).
The active virtual memory (the sum of the address space sizes of
all jobs that have run in the previous twenty seconds)
over the period ranged from 2 to 6 megabytes with an average
There was no swapping, though the page daemon was inspecting
about 25 pages per second.
On average 250 requests to read disk blocks were initiated
per second.
These include read requests for file blocks made by user
programs as well as requests initiated by the system.
System reads include requests for indexing information to determine
where a file's next data block resides,
file system layout maps to allocate new data blocks,
and requests for directory contents needed to do path name translations.
On average, an 85% cache hit rate was observed for read requests.
Thus only 37 disk reads were initiated per second.
In addition, 5 read-ahead requests were made each second,
filling about 20% of the buffer pool.
Despite the policies to rapidly reuse read-ahead buffers
that remain unclaimed, more than 90% of the read-ahead
buffers were used.
These measurements showed that the buffer cache was working
effectively. Independent tests have also shown that the size
of the buffer cache may be reduced significantly on memory-poor
systems without severe effects;
we have not yet tested this hypothesis [Shannon83].
The overhead associated with the
network facilities found in 4.2BSD is often
difficult to gauge without profiling the system.
This is because most input processing is performed
in modules scheduled with software interrupts.
As a result, the system time spent performing protocol
processing is rarely attributed to the processes that
really receive the data. Since the protocols supported
by 4.2BSD can involve significant overhead, this was a serious
concern. Results from a profiled kernel show an average
of 5% of the system time is spent
performing network input and timer processing in our environment
(a 3Mb/s Ethernet with most traffic using TCP).
This figure can vary significantly depending on
the network hardware used, the average message
size, and whether packet reassembly is required at the network
layer. On one machine profiled over a 17 hour
period (our gateway to the ARPANET),
206,000 input messages accounted for 2.4% of the system time,
while another 0.6% of the system time was spent performing
protocol timer processing.
This machine was configured with an ACC LH/DH IMP interface
and a DMA 3Mb/s Ethernet controller.
The performance of TCP over slower long-haul networks
was degraded substantially by two problems.
The first problem was a bug that prevented round-trip timing measurements
from being made, thus increasing retransmissions unnecessarily.
The second was a problem with the maximum segment size chosen by TCP,
which was well-tuned for Ethernet but poorly chosen for
the ARPANET, where it caused packet fragmentation. (The maximum
segment size was actually negotiated upwards to a value that
resulted in excessive fragmentation.)
When benchmarked in Ethernet environments the main memory buffer management
of the network subsystem presented some performance anomalies.
The overhead of processing small ``mbufs'' severely affected throughput for a
substantial range of message sizes.
In spite of the fact that most system utilities made use of the
throughput-optimal 1024 byte size, user processes faced large degradations
for some arbitrary sizes. This was especially true for TCP/IP transmissions
[Cabrera84,
Virtual memory subsystem
We ran a set of tests intended to exercise the virtual
memory system under both 4.1BSD and 4.2BSD.
The tests are described in Table 6.
The test programs dynamically allocated
a 7.3 megabyte array (using \fIsbrk\fP\|(2)), then referenced
pages in the array either sequentially, in a purely random
fashion, or such that the distance between
successive pages accessed was randomly selected from a Gaussian
distribution. In the last case, successive runs were made with
increasing standard deviations.
seqpage	sequentially touch pages, 10 iterations
seqpage-v	as above, but first make \fIvadvise\fP\|(2) call
randpage	touch random page 30,000 times
randpage-v	as above, but first make \fIvadvise\fP call
gausspage.1	30,000 Gaussian accesses, standard deviation of 1
gausspage.10	as above, standard deviation of 10
gausspage.30	as above, standard deviation of 30
gausspage.40	as above, standard deviation of 40
gausspage.50	as above, standard deviation of 50
gausspage.60	as above, standard deviation of 60
gausspage.80	as above, standard deviation of 80
gausspage.inf	as above, standard deviation of 10,000
Table 6. Paging benchmark programs.
The results in Table 7 show how the additional
memory requirements
of 4.2BSD can generate more work for the paging system.
Under 4.1BSD,
the system used 0.5 of the 4.5 megabytes of physical memory;
under 4.2BSD it used nearly 1 megabyte of physical memory.\**
\** The 4.1BSD system used for testing was really a 4.1a
system
with networking facilities and code to support
remote file access. The
4.2BSD system also included the remote file access code.
Since both
systems would be larger than similarly configured ``vanilla''
4.1BSD or 4.2BSD systems, we consider our conclusions to still be valid.
This resulted in more page faults and, hence, more system time.
To establish a common ground on which to compare the paging
routines of each system, we compare instead the average page fault
service times for those test runs that had a statistically significant
number of random page faults. These figures, shown in Table 8, show
no significant difference between the two systems in
the area of page fault servicing. We currently have
no explanation for the results of the sequential
paging tests.
l || c s || c s || c s || c s
l || c s || c s || c s || c s
l || c | c || c | c || c | c || c | c
l || n | n || n | n || n | n || n | n.
Test	Real	User	System	Page Faults
\^	4.1	4.2	4.1	4.2	4.1	4.2	4.1	4.2
seqpage	959	1126	16.7	12.8	197.0	213.0	17132	17113
seqpage-v	579	812	3.8	5.3	216.0	237.7	8394	8351
randpage	571	569	6.7	7.6	64.0	77.2	8085	9776
randpage-v	572	562	6.1	7.3	62.2	77.5	8126	9852
gausspage.1	25	24	23.6	23.8	0.8	0.8	8	8
gausspage.10	26	26	22.7	23.0	3.2	3.6	2	2
gausspage.30	34	33	25.0	24.8	8.6	8.9	2	2
gausspage.40	42	81	23.9	25.0	11.5	13.6	3	260
gausspage.50	113	175	24.2	26.2	19.6	26.3	784	1851
gausspage.60	191	234	27.6	26.7	27.4	36.0	2067	3177
gausspage.80	312	329	28.0	27.9	41.5	52.0	3933	5105
gausspage.inf	619	621	82.9	85.6	68.3	81.5	8046	9650
Table 7. Paging benchmark results (all times in seconds).
Test	Page Faults	PFST
\^	4.1	4.2	4.1	4.2
randpage	8085	9776	791	789
randpage-v	8126	9852	765	786
gausspage.inf	8046	9650	848	844
Table 8. Page fault service times (all times in microseconds).