.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2008, Sun Microsystems, Inc. All Rights Reserved.
.Dt LOCKSTAT 1
.Os
.Sh NAME
.Nm lockstat
.Nd report kernel lock and profiling statistics
.Sh SYNOPSIS
.Nm
.Op Fl ACEHI
.Op Fl e Ar event-list
.Op Fl i Ar rate
.Op Fl b | t | h | s Ar depth
.Op Fl n Ar num-records
.Op Fl l Ar lock Oo Ns , Ns Ar size Oc
.Op Fl d Ar duration
.Op Fl f Ar function Oo Ns , Ns Ar size Oc
.Op Fl T
.Op Fl ckgwWRpP
.Op Fl D Ar count
.Op Fl o Ar filename
.Op Fl x Ar opt Oo Ns = Ns Ar val Oc
.Ar command
.Op Ar args
.Sh DESCRIPTION
The
.Nm
utility gathers and displays kernel locking and profiling statistics.
.Nm
allows you to specify which events to watch (for example, spin on adaptive
mutex, block on read access to rwlock due to waiting writers, and so forth),
how much data to gather for each event, and how to display the data.
.Pp
By default,
.Nm
monitors all lock contention events, gathers frequency and timing data about
those events, and displays the data in decreasing frequency order, so that the
most common events appear first.
.Pp
.Nm
gathers data until the specified command completes.
For example, to gather statistics for a fixed-time interval, use
.Xr sleep 1
as the command, as follows:
.Pp
.Dl # lockstat sleep 5
.Pp
When the
.Fl I
option is specified,
.Nm
establishes a per-processor high-level periodic interrupt source to gather
profiling data.
The interrupt handler simply generates a
.Nm
event whose caller is the interrupted PC (program counter).
The profiling event is just like any other
.Nm
event, so all of the normal
.Nm
options are applicable.
.Pp
.Nm
relies on DTrace to modify the running kernel's text to intercept events of
interest.
This imposes a small but measurable overhead on all system activity, so access
to
.Nm
is restricted to super-user by default.
.Sh OPTIONS
The following options are supported:
.Ss Event Selection
If no event selection options are specified, the default is
.Fl C .
.Bl -tag -width indent
.It Fl A
Watch all lock events.
.Fl A
is equivalent to
.Fl CEH .
.It Fl C
Watch contention events.
.It Fl E
Watch error events.
.It Fl e Ar event-list
Only watch the specified events.
The
.Ar event-list
is a comma-separated list of events or ranges of events, such as 1,4-7,35.
Run
.Nm
with no arguments to get a brief description of all events.
.It Fl H
Watch hold events.
.It Fl I
Watch profiling interrupt events.
.It Fl i Ar rate
Interrupt rate (per second) for
.Fl I .
The default is 97 Hz, so that profiling doesn't run in lockstep with the clock
interrupt (which runs at 100 Hz).
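.Pp
For example, to profile at roughly 997 Hz instead of the default (997 is an
arbitrary choice here; any rate that does not divide evenly into the clock
rate avoids lockstep sampling):
.Pp
.Dl # lockstat -I -i 997 sleep 10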
.El
.Ss DTrace Options
.Bl -tag -width indent
.It Fl x Ar opt Oo Ns = Ns Ar val Oc
Enable or modify a DTrace runtime option or D compiler option.
Boolean options are enabled by specifying their name.
Options with values are set by separating the option name and value with an
equals sign (=).
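.Pp
For example, to raise the principal buffer size using the standard DTrace
.Sy bufsize
runtime option (the value shown is arbitrary):
.Pp
.Dl # lockstat -x bufsize=2m sleep 5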
.El
.Ss "Data Gathering (Mutually Exclusive)"
.Bl -tag -width indent
.It Fl b
Basic statistics: lock, caller, number of events.
.It Fl h
Histogram: timing plus time-distribution histograms.
.It Fl s Ar depth
Stack trace: histogram plus stack traces up to
.Ar depth
frames deep.
.It Fl t
Timing: Basic plus timing for all events (default).
.El
.Ss Data Filtering
.Bl -tag -width indent
.It Fl d Ar duration
Only watch events longer than
.Ar duration .
.It Fl f Ar func Ns Oo Ns , Ns Ar size Oc
Only watch events generated by
.Ar func ,
which can be specified as a symbolic name or hex address.
.Ar size
defaults to the ELF symbol size if available, or 1 if not.
.It Fl l Ar lock Ns Oo Ns , Ns Ar size Oc
Only watch events involving
.Ar lock ,
which can be specified as a symbolic name or hex address.
.Ar size
defaults to the ELF symbol size if available, or 1 if not.
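.Pp
For example, to watch only events involving the
.Sy pvh_global_lock
rwlock (a lock symbol taken from the sample output below; any kernel lock
symbol or hex address can be substituted):
.Pp
.Dl # lockstat -l pvh_global_lock sleep 5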
.It Fl n Ar num-records
Maximum number of data records.
.It Fl T
Trace (rather than sample) events.
This is off by default.
.El
.Ss Data Reporting
.Bl -tag -width indent
.It Fl c
Coalesce lock data for lock arrays.
.It Fl D Ar count
Only display the top
.Ar count
events of each type.
.It Fl g
Show total events generated by function.
For example, if
.Fn foo
calls
.Fn bar
in a loop, the work done by
.Fn bar
counts as work generated by
.Fn foo
(along with any work done by
.Fn foo
itself).
The
.Fl g
option works by counting the total number of stack frames in which each
function appears.
This implies two things: (1) the data reported by
.Fl g
can be misleading if the stack traces are not deep enough, and (2) functions
that are called recursively might show greater than 100% activity.
In light of issue (1), the default data gathering mode when using
.Fl g
is
.Fl s Ar 50 .
.It Fl k
Coalesce PCs within functions.
.It Fl o Ar filename
Direct output to
.Ar filename .
.It Fl P
Sort data by (\fIcount * time\fR) product.
.It Fl p
Parsable output format.
.It Fl R
Display rates (events per second) rather than counts.
.It Fl W
Whichever: distinguish events only by caller, not by lock.
.It Fl w
Wherever: distinguish events only by lock, not by caller.
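.Pp
For example, to display only the ten entries of each event type with the
largest count * time products, reported as per-second rates (an illustrative
combination of the reporting options above):
.Pp
.Dl # lockstat -P -R -D 10 sleep 5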
.El
.Sh DISPLAY FORMATS
The following headers appear over various columns of data.
.Bl -tag -width indent
.It Sy Count No or Sy ops/s
Number of times this event occurred, or the rate (times per second) if
.Fl R
was specified.
.It Sy indv
Percentage of all events represented by this individual event.
.It Sy genr
Percentage of all events generated by this function.
.It Sy cuml
Cumulative percentage; a running total of the individuals.
.It Sy rcnt
Average reference count.
This will always be 1 for exclusive locks (mutexes,
spin locks, rwlocks held as writer) but can be greater than 1 for shared locks
(rwlocks held as reader).
.It Sy nsec
Average duration of the events in nanoseconds, as appropriate for the event.
For the profiling event, duration means interrupt latency.
.It Sy Lock
Address of the lock; displayed symbolically if possible.
.It Sy CPU+PIL
CPU plus processor interrupt level (PIL).
For example, if CPU 4 is interrupted while at PIL 6, this will be reported as
.Sy cpu[4]+6 .
.It Sy Caller
Address of the caller; displayed symbolically if possible.
.El
.Sh EXAMPLES
.Bl -tag -width indent
.It Example 1 Measuring Kernel Lock Contention
.Pp
.Li # lockstat sleep 5
.Bd -literal
Adaptive mutex spin: 41411 events in 5.011 seconds (8263 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller
-------------------------------------------------------------------------------
13750  33%  33% 0.00       72 vm_page_queue_free_mtx vm_page_free_toq+0x12e
13648  33%  66% 0.00       66 vm_page_queue_free_mtx vm_page_alloc+0x138
 4023  10%  76% 0.00       51 vm_dom+0x80            vm_page_dequeue+0x68
 2672   6%  82% 0.00      186 vm_dom+0x80            vm_page_enqueue+0x63
  618   1%  84% 0.00       31 0xfffff8000cd83a88     qsyncvp+0x37
  506   1%  85% 0.00      164 0xfffff8000cb3f098     vputx+0x5a
  477   1%  86% 0.00       69 0xfffff8000c7eb180     uma_dbg_getslab+0x5b
  288   1%  87% 0.00       77 0xfffff8000cd8b000     vn_finished_write+0x29
  263   1%  88% 0.00      103 0xfffff8000cbad448     vinactive+0xdc
  259   1%  88% 0.00       53 0xfffff8000cd8b000     vfs_ref+0x24
  237   1%  89% 0.00       20 0xfffff8000cbad448     vfs_hash_get+0xcc
  233   1%  89% 0.00       22 0xfffff8000bfd9480     uma_dbg_getslab+0x5b
  223   1%  90% 0.00       20 0xfffff8000cb3f098     cache_lookup+0x561
  193   0%  90% 0.00       16 0xfffff8000cb40ba8     vref+0x27
  175   0%  91% 0.00       34 0xfffff8000cbad448     vputx+0x5a
  169   0%  91% 0.00       51 0xfffff8000cd8b000     vfs_unbusy+0x27
  164   0%  92% 0.00       31 0xfffff8000cb40ba8     vputx+0x5a
-------------------------------------------------------------------------------

Adaptive mutex block: 10 events in 5.011 seconds (2 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller
-------------------------------------------------------------------------------
    3  30%  30% 0.00    17592 vm_page_queue_free_mtx vm_page_alloc+0x138
    2  20%  50% 0.00    20528 vm_dom+0x80            vm_page_enqueue+0x63
    2  20%  70% 0.00    55502 0xfffff8000cb40ba8     vputx+0x5a
    1  10%  80% 0.00    12007 vm_page_queue_free_mtx vm_page_free_toq+0x12e
    1  10%  90% 0.00     9125 0xfffff8000cbad448     vfs_hash_get+0xcc
    1  10% 100% 0.00     7864 0xfffff8000cd83a88     qsyncvp+0x37
-------------------------------------------------------------------------------
.Ed
.It Example 2 Measuring Hold Times
.Pp
.Li # lockstat -H -D 10 sleep 1
.Bd -literal
Adaptive mutex hold: 109589 events in 1.039 seconds (105526 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller
-------------------------------------------------------------------------------
 8998   8%   8% 0.00      617 0xfffff8000c7eb180     uma_dbg_getslab+0xd4
 5901   5%  14% 0.00      917 vm_page_queue_free_mtx vm_object_terminate+0x16a
 5040   5%  18% 0.00      902 vm_dom+0x80            vm_page_free_toq+0x88
 4884   4%  23% 0.00     1056 vm_page_queue_free_mtx vm_page_alloc+0x44e
 4664   4%  27% 0.00      759 vm_dom+0x80            vm_fault_hold+0x1a13
 4011   4%  31% 0.00      888 vm_dom                 vm_page_advise+0x11b
 4010   4%  34% 0.00      957 vm_dom+0x80            _vm_page_deactivate+0x5c
 3743   3%  38% 0.00      582 0xfffff8000cf04838     pmap_is_prefaultable+0x158
 2254   2%  40% 0.00      952 vm_dom                 vm_page_free_toq+0x88
 1639   1%  41% 0.00      591 0xfffff800d60065b8     trap_pfault+0x1f7
-------------------------------------------------------------------------------

R/W writer hold: 64314 events in 1.039 seconds (61929 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller
-------------------------------------------------------------------------------
 7421  12%  12% 0.00     2994 pvh_global_lock        pmap_page_is_mapped+0xb6
 4668   7%  19% 0.00     3313 pvh_global_lock        pmap_enter+0x9ae
 1639   3%  21% 0.00      733 0xfffff80168d10200     vm_object_deallocate+0x683
 1639   3%  24% 0.00     3061 0xfffff80168d10200     unlock_and_deallocate+0x2b
 1639   3%  26% 0.00     2966 0xfffff80168d10200     vm_fault_hold+0x16ee
 1567   2%  29% 0.00      733 0xfffff80168d10200     vm_fault_hold+0x19bc
  821   1%  30% 0.00      786 0xfffff801eb0cc000     vm_object_madvise+0x32d
  649   1%  31% 0.00     4918 0xfffff80191105300     vm_fault_hold+0x16ee
  648   1%  32% 0.00     8112 0xfffff80191105300     unlock_and_deallocate+0x2b
  647   1%  33% 0.00     1261 0xfffff80191105300     vm_object_deallocate+0x683
-------------------------------------------------------------------------------
.Ed
.It Example 3 Measuring Hold Times for Stack Traces Containing a Specific Function
.Pp
.Li # lockstat -H -f tcp_input -s 50 -D 10 sleep 1
.Bd -literal
Adaptive mutex hold: 68 events in 1.026 seconds (66 events/sec)

-------------------------------------------------------------------------------
Count indv cuml rcnt     nsec Lock                   Caller
   32  47%  47% 0.00     1631 0xfffff800686f50d8     tcp_do_segment+0x284b

      nsec ------ Time Distribution ------ count     Stack
      1024 |@@@@@@@@@@                        11     tcp_input+0xf54
      2048 |@@@@@@@@@@@@@                     14     ip_input+0xc8
      4096 |@@@@@                              6     swi_net+0x192
      8192 |                                   1     intr_event_execute_handlers+0x93
-------------------------------------------------------------------------------
Count indv cuml rcnt     nsec Lock                   Caller
   29  43%  90% 0.00     4851 0xfffff800686f50d8     sowakeup+0xf8

      nsec ------ Time Distribution ------ count     Stack
      4096 |@@@@@@@@@@@@@@@                   15     tcp_do_segment+0x2423
      8192 |@@@@@@@@@@@@                      12     tcp_input+0xf54
     16384 |@@                                 2     ip_input+0xc8
                                                     swi_net+0x192
                                                     intr_event_execute_handlers+0x93
-------------------------------------------------------------------------------
.Ed
.El
.Sh NOTES
Tail-call elimination can affect call sites.
For example, if foo()+0x50 calls bar(), and the last thing bar() does is call
mutex_exit(), the compiler can arrange for bar() to branch to mutex_exit()
with a return address of foo()+0x58.
Thus, the mutex_exit() in bar() will appear as though it occurred at
foo()+0x58.
.Pp
The PC in the stack frame in which an interrupt occurs can be bogus because,
between function calls, the compiler is free to use the return address register
for scratch space.
.Pp
When using the
.Fl I
and
.Fl s
options together, the interrupted PC will usually not appear anywhere in the
stack since the interrupt handler is entered asynchronously, not by a function
call.