1 .\" Copyright (C) 2001 Matthew Dillon. All rights reserved.
3 .\" Redistribution and use in source and binary forms, with or without
4 .\" modification, are permitted provided that the following conditions
6 .\" 1. Redistributions of source code must retain the above copyright
7 .\" notice, this list of conditions and the following disclaimer.
8 .\" 2. Redistributions in binary form must reproduce the above copyright
9 .\" notice, this list of conditions and the following disclaimer in the
10 .\" documentation and/or other materials provided with the distribution.
12 .\" THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND
13 .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
14 .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
15 .\" ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
16 .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
17 .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
18 .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
19 .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
20 .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
21 .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.Sh NAME
.Nm tuning
.Nd performance tuning under FreeBSD
.Sh SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP
The swap partition should typically be approximately 2x the size of
main memory
for systems with less than 4GB of RAM, or approximately equal to
the size of main memory
if you have more.
Keep in mind future memory
expansion when sizing the swap partition.
Configuring too little swap can lead
to inefficiencies in the VM page scanning code as well as create issues
later on if you add more memory to your machine.
On larger systems
with multiple SCSI disks (or multiple IDE disks operating on different
controllers), configure swap on each drive.
The swap partitions on the drives should be approximately the same size.
The kernel can handle arbitrary sizes but
internal data structures scale to 4 times the largest swap partition.
Keeping
the swap partitions near the same size will allow the kernel to optimally
stripe swap space across the N disks.
Do not worry about overdoing it a
little, swap space is the saving grace of
.Ux ,
and even if you do not normally use much swap, it can give you more time to
recover from a runaway program before being forced to reboot.
It is not a good idea to make one large partition.
First,
each partition has different operational characteristics and separating them
allows the file system to tune itself to those characteristics.
For example, the root and
.Pa /usr
partitions are read-mostly, with very little writing, while
a lot of reading and writing could occur in
.Pa /var
and
.Pa /var/tmp .
By properly
partitioning your system, fragmentation introduced in the smaller, more
heavily write-loaded partitions will not bleed over into the mostly-read
partitions.
.Pp
Properly partitioning your system also allows you to tune
.Xr newfs 8
and
.Xr tunefs 8
parameters.
The only
.Xr tunefs 8
option worthwhile turning on is
.Em softupdates
with
.Dq Li "tunefs -n enable /filesystem" .
Softupdates drastically improves meta-data performance, mainly file
creation and deletion.
We recommend enabling softupdates on most file systems; however, there
are two limitations to softupdates that you should be aware of when
determining whether to use it on a file system.
First, softupdates guarantees file system consistency in the
case of a crash but could very easily be several seconds (even a minute!\&)
behind on pending writes to the physical disk.
If you crash you may lose more work
than otherwise.
Secondly, softupdates delays the freeing of file system
blocks.
If you have a file system (such as the root file system) which is
close to full, doing a major update of it, e.g.,\&
.Dq Li "make installworld" ,
can run it out of space and cause the update to fail.
For this reason, softupdates will not be enabled on the root file system
during a typical install.
There is no loss of performance since the root
file system is rarely written to.
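As a sketch of the command quoted above (the device and mount point here are hypothetical, and will differ on your system), softupdates is toggled with tunefs while the target file system is not mounted read-write:

```
# Hypothetical device and mount point; tunefs requires the target
# file system to be unmounted (or mounted read-only).
umount /usr
tunefs -n enable /dev/ada0s1f
mount /usr
```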
.Pp
A number of run-time
.Xr mount 8
options exist that can help you tune the system.
The most obvious and most dangerous one is
.Cm async .
Only use this option in conjunction with
.Xr gjournal 8 ,
as it is far too dangerous on a normal file system.
A less dangerous and more
useful
.Xr mount 8
option is
.Cm noatime .
.Ux
file systems normally update the last-accessed time of a file or
directory whenever it is accessed.
This operation is handled in
the buffer cache
with a delayed write and normally does not create a burden on the system.
However, if your system is accessing a huge number of files on a continuing
basis the buffer cache can wind up getting polluted with atime updates,
creating a burden on the system.
For example, if you are running a heavily
loaded web site, or a news server with lots of readers, you might want to
consider turning off atime updates on your larger partitions with this
.Xr mount 8
option.
However, you should not gratuitously turn off atime
updates everywhere.
For example, the
.Pa /var
file system customarily
holds mailboxes, and atime (in combination with mtime) is used to
determine whether a mailbox has new mail.
You might as well leave
atime turned on for mostly read-only partitions such as
.Pa /
and
.Pa /usr
as well.
This is especially useful for
.Pa /
since some system utilities
use the atime field for reporting.
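A hedged /etc/fstab sketch of this advice (device names and the /www partition are hypothetical): noatime on a large, busy content partition, while / and /var keep atime updates:

```
# /etc/fstab (sketch): noatime only on the heavily accessed partition.
# /var keeps atime so mailbox new-mail checks continue to work.
# Device        Mountpoint  FStype  Options      Dump  Pass#
/dev/ada0s1a    /           ufs     rw           1     1
/dev/ada0s1d    /var        ufs     rw           2     2
/dev/ada0s1e    /www        ufs     rw,noatime   2     2
```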
.Sh STRIPING DISKS
In larger systems you can stripe partitions from several drives together
to create a much larger overall partition.
Striping can also improve
the performance of a file system by splitting I/O operations across two
or more disks.
The
.Xr gstripe 8
and
.Xr gvinum 8
utilities may be used to create simple striped file systems.
Generally
speaking, striping smaller partitions such as the root and
.Pa /var/tmp ,
or essentially read-only partitions such as
.Pa /usr
is a complete waste of time.
You should only stripe partitions that require serious I/O performance,
typically
.Pa /var , /home ,
or custom partitions used to hold databases and web pages.
Choosing the proper stripe size is also
important.
File systems tend to store meta-data on power-of-2 boundaries
and you usually want to reduce seeking rather than increase seeking.
This
means you want to use a large off-center stripe size such as 1152 sectors
so sequential I/O does not seek both disks and so meta-data is distributed
across both disks rather than concentrated on a single disk.
If
you really need to get sophisticated, we recommend using a real hardware
RAID controller from the list of
.Fx
supported controllers.
.Sh SYSCTL TUNING
.Xr sysctl 8
variables permit system behavior to be monitored and controlled at
run-time.
Some sysctls simply report on the behavior of the system; others allow
the system behavior to be modified;
some may be set at boot time using
.Xr rc.conf 5 ,
but most will be set via
.Xr sysctl.conf 5 .
There are several hundred sysctls in the system, including many that appear
to be candidates for tuning but actually are not.
In this document we will only cover the ones that have the greatest effect
on the system.
.Pp
The
.Va vm.overcommit
sysctl defines the overcommit behaviour of the vm subsystem.
The virtual memory system always does accounting of the swap space
reservation, both total for system and per-user.
Corresponding values
are available through sysctl
.Va vm.swap_total ,
that gives the total bytes available for swapping, and
.Va vm.swap_reserved ,
that gives number of bytes that may be needed to back all currently
allocated anonymous memory.
.Pp
Setting bit 0 of the
.Va vm.overcommit
sysctl causes the virtual memory system to return failure
to the process when allocation of memory causes
.Va vm.swap_reserved
to exceed
.Va vm.swap_total .
Bit 1 of the sysctl enforces the
.Dv RLIMIT_SWAP
limit
(see
.Xr getrlimit 2 ) .
Root is exempt from this limit.
Bit 2 allows counting most of the physical
memory as allocatable, except wired and free reserved pages
(accounted by the
.Va vm.stats.vm.v_free_target
and
.Va vm.stats.vm.v_wire_count
sysctls, respectively).
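For instance, strict swap accounting could be enabled with a sketch like the following (the value 1 sets bit 0; whether restricting overcommit suits your workload is a judgment call, not a general recommendation):

```
# /etc/sysctl.conf (sketch): set bit 0 so allocations that would
# overcommit swap fail instead of succeeding optimistically.
# The default of 0 leaves overcommit unrestricted.
vm.overcommit=1
```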
.Pp
The
.Va kern.ipc.maxpipekva
loader tunable is used to set a hard limit on the
amount of kernel address space allocated to mapping of pipe buffers.
Use of the mapping allows the kernel to eliminate a copy of the
data from writer address space into the kernel, directly copying
the content of mapped buffer to the reader.
Increasing this value to a higher setting, such as 25165824, might
improve performance on systems where space for mapping pipe buffers
is quickly exhausted.
This exhaustion is not fatal, however; it will only cause pipes
to fall back to using double-copy.
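Since this is a loader tunable, it goes in loader.conf rather than sysctl.conf; a sketch using the example value from the text:

```
# /boot/loader.conf (sketch): raise the pipe-buffer mapping limit to
# 24 MB (25165824 bytes), the illustrative value from the text above.
kern.ipc.maxpipekva="25165824"
```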
.Pp
The
.Va kern.ipc.shm_use_phys
sysctl defaults to 0 (off) and may be set to 0 (off) or 1 (on).
Setting
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
number of processes.
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
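A sketch of enabling this for, say, a database server whose shared buffers match case (B) above (whether wiring the memory is a net win depends on your workload):

```
# /etc/sysctl.conf (sketch): wire SysV shared memory into physical RAM,
# trading swappability for lower page-tracking overhead.
kern.ipc.shm_use_phys=1
```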
.Pp
The
.Va vfs.vmiodirenable
sysctl defaults to 1 (on).
This parameter controls how directories are cached
by the system.
Most directories are small and use but a single fragment
(typically 2K) in the file system and even less (typically 512 bytes) in
the buffer cache.
However, when operating in the default mode the buffer
cache will only cache a fixed number of directories even if you have a huge
amount of memory.
Turning on this sysctl allows the buffer cache to use
the VM Page Cache to cache the directories.
The advantage is that all of
your memory is now available for caching directories.
The disadvantage is that
the minimum in-core memory used to cache a directory is the physical page
size (typically 4K) rather than 512 bytes.
We recommend turning this option off in memory-constrained environments;
however, when on, it will substantially improve the performance of services
that manipulate a large number of files.
Such services can include web caches, large mail systems, and news systems.
Turning on this option will generally not reduce performance even with the
wasted memory but you should experiment to find out.
.Pp
The
.Va vfs.write_behind
sysctl defaults to 1 (on).
This tells the file system to issue media
writes as full clusters are collected, which typically occurs when writing
large sequential files.
The idea is to avoid saturating the buffer
cache with dirty buffers when it would not benefit I/O performance.
However,
this may stall processes and under certain circumstances you may wish to turn
it off.
.Pp
The
.Va vfs.hirunningspace
sysctl determines how much outstanding write I/O may be queued to
disk controllers system-wide at any given time.
It is used by the UFS file system.
The default is self-tuned and
usually sufficient but on machines with advanced controllers and lots
of disks this may be tuned up to match what the controllers buffer.
Configuring this setting to match the tagged queuing capabilities of
controllers or drives with the average I/O size used in production works
best (for example: 16 MiB will use 128 tags with I/O requests of 128 KiB).
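The parenthetical sizing rule can be reproduced with simple arithmetic; the tag count and I/O size here are the text's example figures, not a recommendation for your hardware:

```shell
# 128 tags, each carrying an average 128 KiB request, keeps 16 MiB
# of write I/O in flight -- the example figure from the text.
tags=128
iosize=$((128 * 1024))               # 128 KiB expressed in bytes
inflight=$((tags * iosize))          # bytes of queued write I/O
echo $inflight                       # 16777216 bytes = 16 MiB
```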
Note that setting too high a value
(exceeding the buffer cache's write threshold) can lead to extremely
bad clustering performance.
Do not set this value arbitrarily high!
Higher write queuing values may also add latency to reads occurring at
the same time.
.Pp
The
.Va vfs.read_max
sysctl governs VFS read-ahead and is expressed as the number of blocks
to pre-read if the heuristics algorithm decides that the reads are
issued sequentially.
It is used by the UFS, ext2fs and msdosfs file systems.
With the default UFS block size of 32 KiB, a setting of 64 will allow
speculatively reading up to 2 MiB.
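As a sketch, that example configuration would look like this (64 is the illustrative value from the text, not a universal recommendation):

```
# /etc/sysctl.conf (sketch): with 32 KiB UFS blocks, 64 blocks of
# read-ahead permits up to 2 MiB of speculative sequential reads.
vfs.read_max=64
```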
This setting may be increased to get around disk I/O latencies, especially
where these latencies are large such as in virtual machine emulated
environments.
It may be tuned down in specific cases where the I/O load is such that
read-ahead adversely affects performance or where system memory is really
low.
.Pp
The
.Va vfs.ncsizefactor
sysctl defines how large the VFS namecache may grow.
The number of currently allocated entries in the namecache is provided by
the
.Va debug.numcache
sysctl and the condition
debug.numcache < kern.maxvnodes * vfs.ncsizefactor
is adhered to.
.Pp
The
.Va vfs.ncnegfactor
sysctl defines how many negative entries the VFS namecache is allowed to
create.
The number of currently allocated negative entries is provided by
the
.Va debug.numneg
sysctl and the condition
vfs.ncnegfactor * debug.numneg < debug.numcache
is adhered to.
.Pp
There are various other buffer-cache and VM page cache related sysctls.
We do not recommend modifying these values.
As of
.Fx 4.3 ,
the VM system does an extremely good job tuning itself.
.Pp
The
.Va net.inet.tcp.sendspace
and
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
applications.
They control the amount of send and receive buffer space
allowed for any given TCP connection.
The default sending buffer is 32K; the default receiving buffer
is 64K.
You can often
improve bandwidth utilization by increasing the default at the cost of
eating up more kernel memory for each connection.
We do not recommend
increasing the defaults if you are serving hundreds or thousands of
simultaneous connections because it is possible to quickly run the system
out of memory due to stalled connections building up.
But if you need
high bandwidth over a smaller number of connections, especially if you have
gigabit Ethernet, increasing these defaults can make a huge difference.
You can adjust the buffer size for incoming and outgoing data separately.
For example, if your machine is primarily doing web serving you may want
to decrease the recvspace in order to be able to increase the
sendspace without eating too much kernel memory.
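A hedged sketch of that web-serving profile (the exact byte counts are illustrative assumptions; tune them against your connection counts and kernel memory):

```
# /etc/sysctl.conf (sketch): favor a larger send buffer for outgoing
# pages and a smaller receive buffer for the tiny incoming requests.
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=16384
```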
Note that the routing table (see
.Xr route 8 )
can be used to introduce route-specific send and receive buffer size
defaults.
.Pp
As an additional management tool you can use pipes in your
.Xr ipfw 8
configuration
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
.Pp
Setting the send or receive TCP buffer to values larger than 65535 will result
in a marginal performance improvement unless both hosts support the window
scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
sysctl.
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65536 in order to obtain good performance from
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
RFC1323 support is enabled by default.
.Pp
The
.Va net.inet.tcp.always_keepalive
sysctl determines whether or not the TCP implementation should attempt
to detect dead TCP connections by intermittently delivering
.Dq keepalives
on the connection.
By default, this is enabled for all applications; by setting this
sysctl to 0, only applications that specifically request keepalives
will use them.
In most environments, TCP keepalives will improve the management of
system state by expiring dead TCP connections, particularly for
systems serving dialup users who may not always terminate individual
TCP connections before disconnecting from the network.
However, in some environments, temporary network outages may be
incorrectly identified as dead sessions, resulting in unexpectedly
terminated TCP connections.
In such environments, setting the sysctl to 0 may reduce the occurrence of
TCP session disconnections.
.Pp
The
.Va net.inet.tcp.delayed_ack
TCP feature is largely misunderstood.
Historically speaking, this feature
was designed to allow the acknowledgement for transmitted data to be returned
along with the response.
For example, when you type over a remote shell,
the acknowledgement to the character you send can be returned along with the
data representing the echo of the character.
With delayed acks turned off,
the acknowledgement may be sent in its own packet, before the remote service
has a chance to echo the data it just received.
This same concept also
applies to any interactive protocol (e.g.,\& SMTP, WWW, POP3), and can cut the
number of tiny packets flowing across the network in half.
The
.Fx
delayed ACK implementation also follows the TCP protocol rule that
at least every other packet be acknowledged even if the standard 100ms
timeout has not yet passed.
Normally the worst a delayed ACK can do is
slightly delay the teardown of a connection, or slightly delay the ramp-up
of a slow-start TCP connection.
While we are not sure, we believe that
the several FAQs related to packages such as SAMBA and SQUID which advise
turning off delayed acks may be referring to the slow-start issue.
If this is the case,
it would be more beneficial to increase the slow-start flightsize via
the
.Va net.inet.tcp.slowstart_flightsize
sysctl rather than disable delayed acks.
.Pp
The
.Va net.inet.ip.portrange.*
sysctls control the port number ranges automatically bound to TCP and UDP
sockets.
There are three ranges: a low range, a default range, and a
high range, selectable via the
.Dv IP_PORTRANGE
.Xr setsockopt 2
call.
Most
network programs use the default range which is controlled by
.Va net.inet.ip.portrange.first
and
.Va net.inet.ip.portrange.last ,
which default to 49152 and 65535, respectively.
Bound port ranges are
used for outgoing connections, and it is possible to run the system out
of ports under certain circumstances.
This most commonly occurs when you are
running a heavily loaded web proxy.
The port range is not an issue
when running a server which handles mainly incoming connections, such as a
normal web server, or has a limited number of outgoing connections, such
as a mail relay.
For situations where you may run out of ports,
we recommend decreasing
.Va net.inet.ip.portrange.first
slightly.
A range of 10000 to 30000 ports may be reasonable.
You should also consider firewall effects when changing the port range.
Some firewalls
may block large ranges of ports (usually low-numbered ports) and expect systems
to use higher ranges of ports for outgoing connections.
By default
.Va net.inet.ip.portrange.last
is set at the maximum allowable port number.
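One way to realize the 10000 to 30000 example range from the text is a sketch like the following (whether to lower the upper bound as well depends on your firewall policy):

```
# /etc/sysctl.conf (sketch): widen the outgoing-port pool for a busy
# proxy using the text's illustrative 10000-30000 range.
net.inet.ip.portrange.first=10000
net.inet.ip.portrange.last=30000
```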
.Pp
The
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
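That recommendation as a configuration sketch:

```
# /etc/sysctl.conf (sketch): deeper accept queue for a heavily loaded
# server, per the recommendation above.
kern.ipc.somaxconn=1024
```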
The service daemon
may itself limit the listen queue size (e.g.,\&
.Xr sendmail 8 ,
or apache) but will
often have a directive in its configuration file to adjust the queue size up.
Larger listen queues also do a better job of fending off denial of service
attacks.
.Pp
The
.Va kern.maxfiles
sysctl determines how many open files the system supports.
The default is
typically a few thousand but you may need to bump this up to ten or twenty
thousand if you are running databases or large descriptor-heavy daemons.
The read-only
.Va kern.openfiles
sysctl may be interrogated to determine the current number of open files
on the system.
.Pp
The
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
Such systems
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
seconds) via
.Va vm.swap_idle_threshold1
and
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than the normal pageout algorithm.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
In a small system
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
.Sh LOADER TUNABLES
Some aspects of the system behavior may not be tunable at runtime because
memory allocations they perform must occur early in the boot process.
To change loader tunables, you must set their values in
.Xr loader.conf 5
and reboot the system.
.Pp
.Va kern.maxusers
controls the scaling of a number of static system tables, including defaults
for the maximum number of open files, sizing of network memory resources, etc.
As of
.Fx 4.5 ,
.Va kern.maxusers
is automatically sized at boot based on the amount of memory available in
the system, and may be determined at run-time by inspecting the value of the
read-only
.Va kern.maxusers
sysctl.
Some sites will require larger or smaller values of
.Va kern.maxusers
and may set it as a loader tunable; values of 64, 128, and 256 are not
uncommon.
We do not recommend going above 256 unless you need a huge number
of file descriptors; many of the tunable values set to their defaults by
.Va kern.maxusers
may be individually overridden at boot-time or run-time as described
elsewhere in this document.
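Where a site does pin this knob instead of relying on auto-sizing, it is a loader tunable; a sketch (256 is the upper bound suggested above, not a universal value):

```
# /boot/loader.conf (sketch): pin the table-scaling knob rather than
# letting it auto-size at boot.
kern.maxusers="256"
```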
.Pp
Older versions of
.Fx
must set this value via the kernel
.Cd options MAXUSERS
declaration.
.Pp
The
.Va kern.dfldsiz
and
.Va kern.dflssiz
tunables set the default soft limits for process data and stack size
respectively.
Processes may increase these up to the hard limits by calling
.Xr setrlimit 2 .
The
.Va kern.maxdsiz ,
.Va kern.maxssiz ,
and
.Va kern.maxtsiz
tunables set the hard limits for process data, stack, and text size
respectively; processes may not exceed these limits.
The
.Va kern.sgrowsiz
tunable controls how much the stack segment will grow when a process
needs to allocate more stack.
.Pp
.Va kern.ipc.nmbclusters
may be adjusted to increase the number of network mbufs the system is
willing to allocate.
Each cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
buffers.
You can do a simple calculation to figure out how many you need.
If you have a web server which maxes out at 1000 simultaneous connections,
and each connection eats a 16K receive and 16K send buffer, you need
approximately 32MB worth of network buffers to deal with it.
A good rule of
thumb is to multiply by 2, so 32MB x 2 = 64MB, and 64MB / 2K = 32768 clusters.
So in this case
you would want to set
.Va kern.ipc.nmbclusters
to 32768.
We recommend values between
1024 and 4096 for machines with moderate amounts of memory, and between 4096
and 32768 for machines with greater amounts of memory.
Under no circumstances
should you specify an arbitrarily high value for this parameter; it could
lead to a boot-time crash.
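The rule-of-thumb arithmetic above can be reproduced exactly; note that 1000 connections at 32 KiB each is a little over 31 MB, which the text rounds up to 32 MB and 32768 clusters:

```shell
# 1000 connections, each holding a 16K receive and a 16K send buffer.
buffers=$((1000 * (16384 + 16384)))   # ~32 MB of buffer space
doubled=$((buffers * 2))              # rule of thumb: multiply by 2
clusters=$((doubled / 2048))          # each mbuf cluster is ~2K
echo $clusters                        # 32000, rounded up to 32768 in the text
```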
The
.Fl m
option to
.Xr netstat 1
may be used to observe network cluster use.
Older versions of
.Fx
do not have this tunable and require that the
kernel
.Xr config 8
option
.Dv NMBCLUSTERS
be set instead.
.Pp
More and more programs are using the
.Xr sendfile 2
system call to transmit files over the network.
The
.Va kern.ipc.nsfbufs
sysctl controls the number of file system buffers
.Xr sendfile 2
is allowed to use to perform its work.
This parameter nominally scales
with
.Va kern.maxusers
so you should not need to modify this parameter except under extreme
circumstances.
See the
.Sx TUNING
section in the
.Xr sendfile 2
manual page for details.
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large-scale system.
In order to change these options you need to be
able to compile a new kernel from source.
The
.Xr config 8
manual page and the handbook are good starting points for learning how to
do this.
Generally the first thing you do when creating your own custom
kernel is to strip out all the drivers and services you do not use.
Removing things like
.Dv INET6
and drivers you do not have will reduce the size of your kernel, sometimes
by a megabyte or more, leaving more memory available for applications.
.Pp
.Cd SCSI_DELAY
may be used to reduce system boot times.
The defaults are fairly high and
can be responsible for 5+ seconds of delay in the boot process.
Reducing
.Cd SCSI_DELAY
to something below 5 seconds could work (especially with modern drives).
.Pp
There are a number of
.Dv *_CPU
options that can be commented out.
If you only want the kernel to run
on a Pentium class CPU, you can easily remove
.Dv I486_CPU ,
but only remove
.Dv I586_CPU
if you are sure your CPU is being recognized as a Pentium II or better.
Some clones may be recognized as a Pentium or even a 486 and not be able
to boot without those options.
If it works, great!
The kernel
will be able to better use higher-end CPU features for MMU, task switching,
timebase, and even device operations.
Additionally, higher-end CPUs support
4MB MMU pages, which the kernel uses to map the kernel itself into memory,
increasing its efficiency under heavy syscall loads.
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU
or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
If your system
is paging to swap a lot you need to consider adding more memory.
If your
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
.Xr systat 1
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
If disk performance is an issue and you
are using IDE drives, switching to SCSI can help a great deal.
While modern
IDE drives compare with SCSI in raw sequential bandwidth, the moment you
start seeking around the disk SCSI drives usually win.
.Pp
Finally, you might run out of network suds.
Optimize the network path
as much as possible.
For example, in
.Xr firewall 7
we describe a firewall protecting internal hosts with a topology where
the externally visible hosts are not routed through it.
Use 100BaseT rather than 10BaseT, or use 1000BaseT rather
than 100BaseT, depending on your needs.
Most bottlenecks occur at the WAN link (e.g.,\&
modem, T1, DSL, whatever).
If expanding the link is not an option it may be possible to use the
.Xr dummynet 4
feature to implement peak shaving or other forms of traffic shaping to
prevent the overloaded service (such as web services) from affecting other
services (such as email), or vice versa.
In home installations this could
be used to give interactive traffic (your browser,
.Xr ssh 1
logins) priority
over services you export from your box (web services, email).
.Sh HISTORY
The
.Nm
manual page was originally written by
.An Matthew Dillon .
The manual page was greatly modified by
.An Eitan Adler Aq eadler@FreeBSD.org .