.\" Copyright (C) 2001 Matthew Dillon. All rights reserved.
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.Nd performance tuning under FreeBSD
.Sh SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP
The swap partition should typically be approximately 2x the size of
main memory
for systems with less than 4GB of RAM, or approximately equal to
the size of main memory
if you have more.
Keep in mind future memory
expansion when sizing the swap partition.
Configuring too little swap can lead
to inefficiencies in the VM page scanning code as well as create issues
later on if you add more memory to your machine.
On systems
with multiple SCSI disks (or multiple IDE disks operating on different
controllers), configure swap on each drive.
The swap partitions on the drives should be approximately the same size.
The kernel can handle arbitrary sizes but
internal data structures scale to 4 times the largest swap partition.
Keeping
the swap partitions near the same size will allow the kernel to optimally
stripe swap space across the N disks.
Do not worry about overdoing it a
little, swap space is the saving grace of
.Ux ,
and even if you do not normally use much swap, it can give you more time to
recover from a runaway program before being forced to reboot.
.Pp
It is not a good idea to make one large partition.
First,
each partition has different operational characteristics and separating them
allows the file system to tune itself to those characteristics.
For example, the root and
.Pa /usr
partitions are read-mostly, with very little writing, while
a lot of reading and writing could occur in
.Pa /var/tmp .
By properly
partitioning your system, fragmentation introduced in the smaller more
heavily write-loaded partitions will not bleed over into the mostly-read
partitions.
.Pp
Properly partitioning your system also allows you to tune
.Xr newfs 8
and
.Xr tunefs 8
parameters.
The only
.Xr tunefs 8
option worthwhile turning on is
.Em softupdates
with
.Dq Li "tunefs -n enable /filesystem" .
Softupdates drastically improves meta-data performance, mainly file
creation and deletion.
We recommend enabling softupdates on most file systems; however, there
are two limitations to softupdates that you should be aware of when
determining whether to use it on a file system.
First, softupdates guarantees file system consistency in the
case of a crash but could very easily be several seconds (even a minute!\&)
behind on pending writes to the physical disk.
If you crash you may lose more work
than otherwise.
Secondly, softupdates delays the freeing of file system
blocks.
If you have a file system (such as the root file system) which is
close to full, doing a major update of it, e.g.,\&
.Dq Li "make installworld" ,
can run it out of space and cause the update to fail.
For this reason, softupdates will not be enabled on the root file system
during a typical install.
There is no loss of performance since the root
file system is rarely written to.
.Pp
A number of run-time
.Xr mount 8
options exist that can help you tune the system.
The most obvious and most dangerous one is
.Cm async .
Only use this option in conjunction with
.Xr gjournal 8 ,
as it is far too dangerous on a normal file system.
A less dangerous and more
useful
.Xr mount 8
option is
.Cm noatime .
.Ux
file systems normally update the last-accessed time of a file or
directory whenever it is accessed.
This operation is handled
with a delayed write and normally does not create a burden on the system.
However, if your system is accessing a huge number of files on a continuing
basis the buffer cache can wind up getting polluted with atime updates,
creating a burden on the system.
For example, if you are running a heavily
loaded web site, or a news server with lots of readers, you might want to
consider turning off atime updates on your larger partitions with this
.Xr mount 8
option.
However, you should not gratuitously turn off atime
updates everywhere.
For example, the
.Pa /var
file system customarily
holds mailboxes, and atime (in combination with mtime) is used to
determine whether a mailbox has new mail.
You might as well leave
atime turned on for mostly read-only partitions such as
.Pa /
and
.Pa /usr
as well.
This is especially useful for
.Pa /
since some system utilities
use the atime field for reporting.
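For example, a
.Xr fstab 5
entry for a hypothetical news spool partition with atime updates disabled
might look like this; the device and mount point are illustrative
assumptions, not values taken from any particular system:
.Bd -literal -offset indent
# Device        Mountpoint  FStype  Options     Dump  Pass
/dev/ada0s1f    /news       ufs     rw,noatime  2     2
.Ed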
.Sh STRIPING DISKS
In larger systems you can stripe partitions from several drives together
to create a much larger overall partition.
Striping can also improve
the performance of a file system by splitting I/O operations across two
or more disks.
The
.Xr gstripe 8
and
.Xr gvinum 8
utilities may be used to create simple striped file systems.
Generally
speaking, striping smaller partitions such as the root and
.Pa /var/tmp ,
or essentially read-only partitions such as
.Pa /usr
is a complete waste of time.
You should only stripe partitions that require serious I/O performance,
typically
.Pa /var , /home ,
or custom partitions used to hold databases and web pages.
Choosing the proper stripe size is also
important.
File systems tend to store meta-data on power-of-2 boundaries
and you usually want to reduce seeking rather than increase seeking.
This
means you want to use a large off-center stripe size such as 1152 sectors
so sequential I/O does not seek both disks and so meta-data is distributed
across both disks rather than concentrated on a single disk.
If
you really need to get sophisticated, we recommend using a real hardware
RAID controller from the list of
.Fx
supported controllers.
.Sh SYSCTL TUNING
.Xr sysctl 8
variables permit system behavior to be monitored and controlled at
run-time.
Some sysctls simply report on the behavior of the system; others allow
the system behavior to be modified;
some may be set at boot time using
.Xr rc.conf 5 ,
but most will be set via
.Xr sysctl.conf 5 .
There are several hundred sysctls in the system, including many that appear
to be candidates for tuning but actually are not.
In this document we will only cover the ones that have the greatest effect
on the system.
.Pp
The
.Va vm.overcommit
sysctl defines the overcommit behaviour of the vm subsystem.
The virtual memory system always does accounting of the swap space
reservation, both total for system and per-user.
The values
are available through the
.Va vm.swap_total
sysctl, which gives the total bytes available for swapping, and
.Va vm.swap_reserved ,
which gives the number of bytes that may be needed to back all currently
allocated anonymous memory.
.Pp
Setting bit 0 of the
.Va vm.overcommit
sysctl causes the virtual memory system to return failure
to the process when allocation of memory causes
.Va vm.swap_reserved
to exceed
.Va vm.swap_total .
Bit 1 of the sysctl enforces the
.Dv RLIMIT_SWAP
limit
(see
.Xr getrlimit 2 ) .
Root is exempt from this limit.
Bit 2 allows counting most of the physical
memory as allocatable, except wired and free reserved pages
(accounted by the
.Va vm.stats.vm.v_free_target
and
.Va vm.stats.vm.v_wire_count
sysctls, respectively).
.Pp
The
.Va kern.ipc.maxpipekva
loader tunable is used to set a hard limit on the
amount of kernel address space allocated to mapping of pipe buffers.
Use of the mapping allows the kernel to eliminate a copy of the
data from writer address space into the kernel, directly copying
the content of the mapped buffer to the reader.
Increasing this value to a higher setting, such as 25165824, might
improve performance on systems where space for mapping pipe buffers
is quickly exhausted.
This exhaustion is not fatal, however, and it will only cause pipes
to fall back to using double-copy.
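As a sketch, the limit could be raised via
.Xr loader.conf 5 ;
the value shown is the example figure from above (24 MB) and is an
illustration, not a general recommendation:
.Bd -literal -offset indent
# /boot/loader.conf
kern.ipc.maxpipekva="25165824"
.Ed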
.Pp
The
.Va kern.ipc.shm_use_phys
sysctl defaults to 0 (off) and may be set to 0 (off) or 1 (on).
Setting
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
number of processes.
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
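On a machine dedicated to such a workload (a large database server, for
instance), the setting could be made persistent in
.Xr sysctl.conf 5 ;
this is a sketch for that scenario only, not a general recommendation:
.Bd -literal -offset indent
# /etc/sysctl.conf
kern.ipc.shm_use_phys=1
.Ed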
.Pp
The
.Va vfs.vmiodirenable
sysctl defaults to 1 (on).
This parameter controls how directories are cached
by the system.
Most directories are small and use but a single fragment
(typically 2K) in the file system and even less (typically 512 bytes) in
the buffer cache.
However, when operating in the default mode the buffer
cache will only cache a fixed number of directories even if you have a huge
amount of memory.
Turning on this sysctl allows the buffer cache to use
the VM Page Cache to cache the directories.
The advantage is that all of
your memory is now available for caching directories.
The disadvantage is that
the minimum in-core memory used to cache a directory is the physical page
size (typically 4K) rather than 512 bytes.
We recommend turning this option off in memory-constrained environments;
however, when on, it will substantially improve the performance of services
that manipulate a large number of files.
Such services can include web caches, large mail systems, and news systems.
Turning on this option will generally not reduce performance even with the
wasted memory but you should experiment to find out.
.Pp
The
.Va vfs.write_behind
sysctl defaults to 1 (on).
This tells the file system to issue media
writes as full clusters are collected, which typically occurs when writing
large sequential files.
The idea is to avoid saturating the buffer
cache with dirty buffers when it would not benefit I/O performance.
However,
this may stall processes and under certain circumstances you may wish to turn
it off.
.Pp
The
.Va vfs.hirunningspace
sysctl determines how much outstanding write I/O may be queued to
disk controllers system-wide at any given time.
It is used by the UFS file system.
The default is self-tuned and
usually sufficient, but on machines with advanced controllers and lots
of disks this may be tuned up to match what the controllers buffer.
Configuring this setting to match the tagged queuing capabilities of
controllers or drives with the average IO size used in production works
best (for example: 16 MiB will use 128 tags with IO requests of 128 KiB).
Note that setting too high a value
(exceeding the buffer cache's write threshold) can lead to extremely
bad clustering performance.
Do not set this value arbitrarily high!
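As a worked example of the sizing rule above, a controller that queues
128 tagged commands with a production IO size of 128 KiB suggests
128 x 128 KiB = 16 MiB; the figures are illustrative only and should be
matched to your actual hardware:
.Bd -literal -offset indent
# /etc/sysctl.conf
vfs.hirunningspace=16777216
.Ed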
Higher write queuing values may also add latency to reads occurring at
the same time.
.Pp
The
.Va vfs.read_max
sysctl governs VFS read-ahead and is expressed as the number of blocks
to pre-read if the heuristics algorithm decides that the reads are
issued sequentially.
It is used by the UFS, ext2fs and msdosfs file systems.
With the default UFS block size of 32 KiB, a setting of 64 will allow
speculatively reading up to 2 MiB.
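For instance, to permit roughly 2 MiB of read-ahead as described
(64 blocks x 32 KiB), one could set:
.Bd -literal -offset indent
# /etc/sysctl.conf
vfs.read_max=64
.Ed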
This setting may be increased to get around disk I/O latencies, especially
where these latencies are large such as in virtual machine emulated
environments.
It may be tuned down in specific cases where the I/O load is such that
read-ahead adversely affects performance or where system memory is really
tight.
.Pp
The
.Va vfs.ncsizefactor
sysctl defines how large the VFS namecache may grow.
The number of currently allocated entries in the namecache is provided by
the
.Va debug.numcache
sysctl and the condition
debug.numcache < kern.maxvnodes * vfs.ncsizefactor
is adhered to.
.Pp
The
.Va vfs.ncnegfactor
sysctl defines how many negative entries the VFS namecache is allowed to
create.
The number of currently allocated negative entries is provided by
the
.Va debug.numneg
sysctl and the condition
vfs.ncnegfactor * debug.numneg < debug.numcache
is adhered to.
.Pp
There are various other buffer-cache and VM page cache related sysctls.
We do not recommend modifying these values.
As of
.Fx 4.3 ,
the VM system does an extremely good job tuning itself.
.Pp
The
.Va net.inet.tcp.sendspace
and
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
applications.
They control the amount of send and receive buffer space
allowed for any given TCP connection.
The default sending buffer is 32K; the default receiving buffer
is 64K.
You can often
improve bandwidth utilization by increasing the default at the cost of
eating up more kernel memory for each connection.
We do not recommend
increasing the defaults if you are serving hundreds or thousands of
simultaneous connections because it is possible to quickly run the system
out of memory due to stalled connections building up.
But if you need
high bandwidth over a smaller number of connections, especially if you have
gigabit Ethernet, increasing these defaults can make a huge difference.
You can adjust the buffer size for incoming and outgoing data separately.
For example, if your machine is primarily doing web serving you may want
to decrease the recvspace in order to be able to increase the
sendspace without eating too much kernel memory.
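A hypothetical web-serving tradeoff along these lines might look as
follows; the values are illustrative, not recommendations:
.Bd -literal -offset indent
# /etc/sysctl.conf
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=16384
.Ed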
Note that the routing table (see
.Xr route 8 )
can be used to introduce route-specific send and receive buffer size
defaults.
.Pp
As an additional management tool you can use pipes in your
firewall rules (see
.Xr ipfw 8 )
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
.Pp
Setting the send or receive TCP buffer to values larger than 65535 will result
in a marginal performance improvement unless both hosts support the window
scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
sysctl.
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65536 in order to obtain good performance from
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
RFC1323 support is enabled by default.
.Pp
The
.Va net.inet.tcp.always_keepalive
sysctl determines whether or not the TCP implementation should attempt
to detect dead TCP connections by intermittently delivering
.Dq keepalives
on the connection.
By default, this is enabled for all applications; by setting this
sysctl to 0, only applications that specifically request keepalives
will use them.
In most environments, TCP keepalives will improve the management of
system state by expiring dead TCP connections, particularly for
systems serving dialup users who may not always terminate individual
TCP connections before disconnecting from the network.
However, in some environments, temporary network outages may be
incorrectly identified as dead sessions, resulting in unexpectedly
terminated TCP connections.
In such environments, setting the sysctl to 0 may reduce the occurrence of
TCP session disconnections.
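In that case the change could be made persistent as follows; this is a
sketch for outage-prone networks only:
.Bd -literal -offset indent
# /etc/sysctl.conf
net.inet.tcp.always_keepalive=0
.Ed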
.Pp
The
.Va net.inet.tcp.delayed_ack
TCP feature is largely misunderstood.
Historically speaking, this feature
was designed to allow the acknowledgement of transmitted data to be returned
along with the response.
For example, when you type over a remote shell,
the acknowledgement to the character you send can be returned along with the
data representing the echo of the character.
With delayed acks turned off,
the acknowledgement may be sent in its own packet, before the remote service
has a chance to echo the data it just received.
This same concept also
applies to any interactive protocol (e.g.,\& SMTP, WWW, POP3), and can cut the
number of tiny packets flowing across the network in half.
The
.Fx
delayed ACK implementation also follows the TCP protocol rule that
at least every other packet be acknowledged even if the standard 100ms
timeout has not yet passed.
Normally the worst a delayed ACK can do is
slightly delay the teardown of a connection, or slightly delay the ramp-up
of a slow-start TCP connection.
While we are not sure, we believe that
the several FAQs related to packages such as SAMBA and SQUID which advise
turning off delayed acks may be referring to the slow-start issue.
.Pp
The
.Va net.inet.ip.portrange.*
sysctls control the port number ranges automatically bound to TCP and UDP
sockets.
There are three ranges: a low range, a default range, and a
high range, selectable via the
.Dv IP_PORTRANGE
.Xr setsockopt 2
call.
Most
network programs use the default range which is controlled by
.Va net.inet.ip.portrange.first
and
.Va net.inet.ip.portrange.last ,
which default to 49152 and 65535, respectively.
Bound port ranges are
used for outgoing connections, and it is possible to run the system out
of ports under certain circumstances.
This most commonly occurs when you are
running a heavily loaded web proxy.
The port range is not an issue
when running a server which handles mainly incoming connections, such as a
normal web server, or has a limited number of outgoing connections, such
as a mail relay.
For situations where you may run out of ports,
we recommend decreasing
.Va net.inet.ip.portrange.first
a bit.
A range of 10000 to 30000 ports may be reasonable.
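Such a range could be configured as follows; treat the figures as a
starting point, not a prescription:
.Bd -literal -offset indent
# /etc/sysctl.conf
net.inet.ip.portrange.first=10000
net.inet.ip.portrange.last=30000
.Ed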
You should also consider firewall effects when changing the port range.
Some firewalls
may block large ranges of ports (usually low-numbered ports) and expect systems
to use higher ranges of ports for outgoing connections.
By default,
.Va net.inet.ip.portrange.last
is set at the maximum allowable port number.
.Pp
The
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
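For example, in
.Xr sysctl.conf 5 :
.Bd -literal -offset indent
# /etc/sysctl.conf
kern.ipc.somaxconn=1024
.Ed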
The service daemon
may itself limit the listen queue size (e.g.,\&
.Xr sendmail 8 ,
or apache) but will
often have a directive in its configuration file to adjust the queue size up.
Larger listen queues also do a better job of fending off denial of service
attacks.
.Pp
The
.Va kern.maxfiles
sysctl determines how many open files the system supports.
The default is
typically a few thousand but you may need to bump this up to ten or twenty
thousand if you are running databases or large descriptor-heavy daemons.
The read-only
.Va kern.openfiles
sysctl may be interrogated to determine the current number of open files
on the system.
.Pp
The
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
Such systems
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
seconds) via
.Va vm.swap_idle_threshold1
and
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than with the normal pageout algorithm.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
In a small system
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
.Sh LOADER TUNABLES
Some aspects of the system behavior may not be tunable at runtime because
memory allocations they perform must occur early in the boot process.
To change loader tunables, you must set their values in
.Xr loader.conf 5
and reboot the system.
.Pp
.Va kern.maxusers
controls the scaling of a number of static system tables, including defaults
for the maximum number of open files, sizing of network memory resources, etc.
As of
.Fx 4.5 ,
.Va kern.maxusers
is automatically sized at boot based on the amount of memory available in
the system, and may be determined at run-time by inspecting the value of the
read-only
.Va kern.maxusers
sysctl.
Some sites will require larger or smaller values of
.Va kern.maxusers
and may set it as a loader tunable; values of 64, 128, and 256 are not
uncommon.
We do not recommend going above 256 unless you need a huge number
of file descriptors; many of the tunable values set to their defaults by
.Va kern.maxusers
may be individually overridden at boot-time or run-time as described
elsewhere in this document.
Systems older than
.Fx 4.4
must set this value via the kernel
.Xr config 8
option
.Cd maxusers
instead.
.Pp
The
.Va kern.dfldsiz
and
.Va kern.dflssiz
tunables set the default soft limits for process data and stack size
respectively.
Processes may increase these up to the hard limits by calling
.Xr setrlimit 2 .
.Pp
The
.Va kern.maxdsiz ,
.Va kern.maxssiz ,
and
.Va kern.maxtsiz
tunables set the hard limits for process data, stack, and text size
respectively; processes may not exceed these limits.
.Pp
The
.Va kern.sgrowsiz
tunable controls how much the stack segment will grow when a process
needs to allocate more stack.
.Pp
The
.Va kern.ipc.nmbclusters
tunable may be adjusted to increase the number of network mbufs the system is
willing to allocate.
Each cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
buffers.
You can do a simple calculation to figure out how many you need.
If you have a web server which maxes out at 1000 simultaneous connections,
and each connection eats a 16K receive and 16K send buffer, you need
approximately 32MB worth of network buffers to deal with it.
A good rule of
thumb is to multiply by 2, so 32MB x 2 = 64MB, and 64MB / 2K = 32768 clusters.
So in this case
you would want to set
.Va kern.ipc.nmbclusters
to 32768.
We recommend values between
1024 and 4096 for machines with moderate amounts of memory, and between 4096
and 32768 for machines with greater amounts of memory.
Under no circumstances
should you specify an arbitrarily high value for this parameter; it could
lead to a boot-time crash.
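The arithmetic above could be captured in
.Xr loader.conf 5
as follows; the figure assumes the hypothetical 1000-connection web
server from the example:
.Bd -literal -offset indent
# /boot/loader.conf
# 1000 conns x (16K + 16K) = 32MB; x2 headroom = 64MB / 2K = 32768
kern.ipc.nmbclusters="32768"
.Ed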
.Pp
The
.Dq Li "netstat -m"
command
may be used to observe network cluster use.
Older versions of
.Fx
do not have this tunable and require that the
kernel
.Xr config 8
option
.Dv NMBCLUSTERS
be set instead.
.Pp
More and more programs are using the
.Xr sendfile 2
system call to transmit files over the network.
The
.Va kern.ipc.nsfbufs
sysctl controls the number of file system buffers
.Xr sendfile 2
is allowed to use to perform its work.
This parameter nominally scales
with
.Va kern.maxusers
so you should not need to modify this parameter except under extreme
circumstances.
See the
.Sx TUNING
section in the
.Xr sendfile 2
manual page for details.
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large-scale system.
In order to change these options you need to be
able to compile a new kernel from source.
The
.Xr config 8
manual page and the handbook are good starting points for learning how to
do this.
Generally the first thing you do when creating your own custom
kernel is to strip out all the drivers and services you do not use.
Removing things like
.Dv INET6
and drivers you do not have will reduce the size of your kernel, sometimes
by a megabyte or more, leaving more memory available for applications.
.Pp
The
.Dv SCSI_DELAY
option
may be used to reduce system boot times.
The defaults are fairly high and
can be responsible for 5+ seconds of delay in the boot process.
Reducing
.Dv SCSI_DELAY
to something below 5 seconds could work (especially with modern drives).
.Pp
There are a number of
.Dv *_CPU
options that can be commented out.
If you only want the kernel to run
on a Pentium class CPU, you can easily remove
.Dv I386_CPU
and
.Dv I486_CPU
if you are sure your CPU is being recognized as a Pentium II or better.
Some clones may be recognized as a Pentium or even a 486 and not be able
to boot without those options.
If it works, great!
The operating system
will be able to better use higher-end CPU features for MMU, task switching,
timebase, and even device operations.
Additionally, higher-end CPUs support
4MB MMU pages, which the kernel uses to map the kernel itself into memory,
increasing its efficiency under heavy syscall loads.
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU
or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
If your system
is paging to swap a lot you need to consider adding more memory.
If the
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
.Xr systat 1
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
If disk performance is an issue and you
are using IDE drives, switching to SCSI can help a great deal.
While modern
IDE drives compare with SCSI in raw sequential bandwidth, the moment you
start seeking around the disk SCSI drives usually win.
.Pp
Finally, you might run out of network suds.
Optimize the network path
as much as possible.
For example, in
.Xr firewall 7
we describe a firewall protecting internal hosts with a topology where
the externally visible hosts are not routed through it.
Use 100BaseT rather than 10BaseT, or use 1000BaseT rather
than 100BaseT, depending on your needs.
Most bottlenecks occur at the WAN link (e.g.,\&
modem, T1, DSL, whatever).
If expanding the link is not an option it may be possible to use the
.Xr dummynet 4
feature to implement peak shaving or other forms of traffic shaping to
prevent the overloaded service (such as web services) from affecting other
services (such as email), or vice versa.
In home installations this could
be used to give interactive traffic (your browser,
.Xr ssh 1 ,
etc.) priority
over services you export from your box (web services, email).
.Sh HISTORY
The
.Nm
manual page was originally written by
.An Matthew Dillon
and first appeared in
.Fx 4.3 ,
May 2001.
The manual page was greatly modified by
.An Eitan Adler Aq Mt eadler@FreeBSD.org .