# Debugging performance issues in Go programs

-- originally written by Dmitry Vyukov

Let's assume you have a Go program and want to improve its performance. There
are several tools available that can help with this task. These tools can help
you identify various types of hotspots (CPU, IO, memory); hotspots are the
places that you need to concentrate on in order to significantly improve
performance. However, another outcome is possible -- the tools can help you
identify obvious performance defects in the program. For example, the program
prepares an SQL statement before each query while it could be prepared once at
program startup. Another example is an O(N^2) algorithm that somehow slipped in
where an obvious O(N) one exists and is expected. In order to identify such
cases you need to sanity check what you see in profiles. For example, in the
first case significant time spent in SQL statement preparation would be the
red flag.

It's also important to understand various bounding factors for performance. For
example, if the program communicates via a 100 Mbps network link and already
utilizes >90 Mbps of it, there is not much you can do with the program to
improve its performance. There are similar bounding factors for disk IO, memory
consumption and computational tasks. With that in mind we can look at the
available tools.

Note: The tools can interfere with each other. For example, precise memory
profiling skews CPU profiles, goroutine blocking profiling affects scheduler
trace, etc. Use tools in isolation to get more precise info.

## CPU Profiler

The Go runtime contains a built-in CPU profiler, which shows what functions
consume what percent of CPU time. There are 3 ways you can get access to it:

1. The simplest one is the -cpuprofile flag of the 'go test'
    (http://golang.org/cmd/go/#hdr-Description_of_testing_flags) command. For
    example, the following command:

        $ go test -run=none -bench=ClientServerParallel4 -cpuprofile=cprof net/http

    will profile the given benchmark and write the CPU profile into the
    'cprof' file. Then:

        $ go tool pprof --text http.test cprof

    will print a list of the hottest functions.

    There are several output types available; the most useful ones are
    `--text`, `--web` and `--list`. Run `go tool pprof` to get the complete
    list. The obvious drawback of this option is that it works only for tests.

2. [net/http/pprof](http://golang.org/pkg/net/http/pprof): This is the ideal
    solution for network servers. You merely need to import `net/http/pprof`
    (see the setup sketch after this list), and collect profiles with:

        $ go tool pprof --text mybin http://myserver:6060/debug/pprof/profile

3. Manual profile collection. You need to import
    [runtime/pprof](http://golang.org/pkg/runtime/pprof/) and add the following
    code to the main function:

        if *flagCpuprofile != "" {
            f, err := os.Create(*flagCpuprofile)
            if err != nil {
                log.Fatal(err)
            }
            pprof.StartCPUProfile(f)
            defer pprof.StopCPUProfile()
        }

    The profile will be written to the specified file; visualize it the same
    way as in the first option.

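For the second option, the only wiring needed is the side-effect import and a
running HTTP server. Here is a minimal sketch, assuming the program does not
already serve HTTP (the localhost:6060 address is just a convention):

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
    )

    func main() {
        go func() {
            // Serve the profiling endpoints in the background.
            log.Println(http.ListenAndServe("localhost:6060", nil))
        }()
        // ... the rest of the program ...
    }
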
Here is an example of a profile visualized with the `--web` option:
[cpu_profile.png]

You can investigate a single function with the `--list=funcname` option. For
example, the following profile shows that the time was spent in the `append`
function:

    . . 93: func (bp *buffer) WriteRune(r rune) error {
    . . 94:     if r < utf8.RuneSelf {
    5 5 95:         *bp = append(*bp, byte(r))
    . . 96:         return nil
    . . 97:     }
    . . 98:
    . . 99:     b := *bp
    . . 100:    n := len(b)
    . . 101:    for n+utf8.UTFMax > cap(b) {
    . . 102:        b = append(b, 0)
    . . 103:    }
    . . 104:    w := utf8.EncodeRune(b[n:n+utf8.UTFMax], r)
    . . 105:    *bp = b[:n+w]
    . . 106:    return nil
    . . 107: }

There are also 3 special entries that the profiler uses when it can't unwind
the stack: GC, System and ExternalCode. GC means time spent during garbage
collection; see the Memory Profiler and Garbage Collector Trace sections below
for optimization suggestions. System means time spent in the goroutine
scheduler, stack management code and other auxiliary runtime code. ExternalCode
means time spent in native dynamic libraries.

Here are some hints on how to interpret what you see in the profile.

If you see lots of time spent in the `runtime.mallocgc` function, the program
potentially makes an excessive number of small memory allocations. The profile
will tell you where the allocations are coming from. See the memory profiler
section for suggestions on how to optimize this case.

If lots of time is spent in channel operations, `sync.Mutex` code and other
synchronization primitives, or in the System component, the program probably
suffers from contention. Consider restructuring the program to eliminate
frequently accessed shared resources. Common techniques for this include
sharding/partitioning, local buffering/batching and the copy-on-write
technique.

If lots of time is spent in `syscall.Read/Write`, the program potentially makes
an excessive number of small reads and writes. `bufio` wrappers around os.File
or net.Conn can help in this case, as in the sketch below.

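Here is a minimal sketch of the `bufio` fix for a connection handler (the
function and the line-based protocol are made up for the example):

    func handle(conn net.Conn) error {
        // Wrap the connection once; subsequent small reads and writes hit
        // the in-memory buffers instead of issuing a syscall each time.
        r := bufio.NewReader(conn)
        w := bufio.NewWriter(conn)
        defer w.Flush() // flush buffered output before returning

        line, err := r.ReadString('\n') // buffered read
        if err != nil {
            return err
        }
        _, err = w.WriteString(line) // buffered write
        return err
    }
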
If lots of time is spent in the GC component, the program either allocates too
many transient objects or the heap size is very small so garbage collections
happen too frequently. See the Garbage Collector Trace and Memory Profiler
sections for optimization suggestions.

Note: The CPU profiler currently [does not work on
darwin](https://code.google.com/p/go/issues/detail?id=6047).

Note: On windows you need to install Cygwin, Perl and Graphviz to generate
svg/web profiles.


## Memory Profiler

The memory profiler shows what functions allocate heap memory. You can collect
it in similar ways as the CPU profile: with `go test --memprofile`
(http://golang.org/cmd/go/#hdr-Description_of_testing_flags), with
[net/http/pprof](http://golang.org/pkg/net/http/pprof) via
http://myserver:6060/debug/pprof/heap or by calling
[runtime/pprof.WriteHeapProfile](http://golang.org/pkg/runtime/pprof/#WriteHeapProfile).

You can visualize only allocations live at the time of profile collection
(the `--inuse_space` flag to `pprof`, default), or all allocations that have
happened since program start (the `--alloc_space` flag to `pprof`). The former
is useful for profiles collected with net/http/pprof on live applications, the
latter is useful for profiles collected at program end (otherwise you will see
an almost empty profile).

Note: the memory profiler is sampling, that is, it collects information only
about some subset of memory allocations. The probability of sampling an object
is proportional to its size. You can change the sampling rate with the go test
`--memprofilerate` flag, or by setting the `runtime.MemProfileRate` variable at
program startup. A rate of 1 will lead to collection of information about all
allocations, but it can slow down execution. The default sampling rate is 1
sample per 512KB of allocated memory.

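For example, here is a sketch of precise (but slow) allocation profiling; the
rate must be set as early as possible, before the allocations of interest
happen:

    func main() {
        // Record every allocation (the default is one sample per 512KB).
        // This slows the program down noticeably, so use it only for
        // dedicated profiling runs.
        runtime.MemProfileRate = 1
        // ... run the workload, then write the profile as shown above ...
    }
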
You can also visualize either the number of bytes allocated or the number of
objects allocated (`--inuse/alloc_space` and `--inuse/alloc_objects` flags,
respectively). The profiler tends to sample larger objects more during
profiling. But it's important to understand that large objects affect memory
consumption and GC time, while a large number of tiny allocations affects
execution speed (and GC time to some degree as well). So it may be useful to
look at both.

Objects can be persistent or transient. If you have several large persistent
objects allocated at program start, they will most likely be sampled by the
profiler (because they are large). Such objects do affect memory consumption
and GC time, but they do not affect normal execution speed (no memory
management operations happen on them). On the other hand, if you have a large
number of objects with very short lifetimes, they can be barely represented in
the profile (if you use the default `--inuse_space` mode). But they do
significantly affect execution speed, because they are constantly allocated
and freed. So, once again, it may be useful to look at both types of objects.
So, generally, if you want to reduce memory consumption, you need to look at
the `--inuse_space` profile collected during normal program operation. If you
want to improve execution speed, look at the `--alloc_objects` profile
collected after significant running time or at program end.

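For example (the binary and profile file names are assumptions):

    $ go tool pprof --inuse_space mybin heap.prof
    $ go tool pprof --alloc_objects mybin heap.prof
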
There are several flags that control reporting granularity. `--functions` makes
pprof report at function level (default). `--lines` makes pprof report at
source line level, which is useful if hot functions allocate on different
lines. And there are also `--addresses` and `--files` for exact instruction
address and file level, respectively.

There is a useful option for the memory profile -- you can look at it right in
the browser (provided that you imported `net/http/pprof`). If you open
http://myserver:6060/debug/pprof/heap?debug=1 you will see the heap profile
along the lines of:

    heap profile: 4: 266528 [123: 11284472] @ heap/1048576
    1: 262144 [4: 376832] @ 0x28d9f 0x2a201 0x2a28a 0x2624d 0x26188 0x94ca3 0x94a0b 0x17add6 0x17ae9f 0x1069d3 0xfe911 0xf0a3e 0xf0d22 0x21a70
    # 0x2a201 cnew+0xc1 runtime/malloc.goc:718
    # 0x2a28a runtime.cnewarray+0x3a runtime/malloc.goc:731
    # 0x2624d makeslice1+0x4d runtime/slice.c:57
    # 0x26188 runtime.makeslice+0x98 runtime/slice.c:38
    # 0x94ca3 bytes.makeSlice+0x63 bytes/buffer.go:191
    # 0x94a0b bytes.(*Buffer).ReadFrom+0xcb bytes/buffer.go:163
    # 0x17add6 io/ioutil.readAll+0x156 io/ioutil/ioutil.go:32
    # 0x17ae9f io/ioutil.ReadAll+0x3f io/ioutil/ioutil.go:41
    # 0x1069d3 godoc/vfs.ReadFile+0x133 godoc/vfs/vfs.go:44
    # 0xfe911 godoc.func·023+0x471 godoc/meta.go:80
    # 0xf0a3e godoc.(*Corpus).updateMetadata+0x9e godoc/meta.go:101
    # 0xf0d22 godoc.(*Corpus).refreshMetadataLoop+0x42 godoc/meta.go:141

    2: 4096 [2: 4096] @ 0x28d9f 0x29059 0x1d252 0x1d450 0x106993 0xf1225 0xe1489 0xfbcad 0x21a70
    # 0x1d252 newdefer+0x112 runtime/panic.c:49
    # 0x1d450 runtime.deferproc+0x10 runtime/panic.c:132
    # 0x106993 godoc/vfs.ReadFile+0xf3 godoc/vfs/vfs.go:43
    # 0xf1225 godoc.(*Corpus).parseFile+0x75 godoc/parser.go:20
    # 0xe1489 godoc.(*treeBuilder).newDirTree+0x8e9 godoc/dirtrees.go:108
    # 0xfbcad godoc.func·002+0x15d godoc/dirtrees.go:100

The numbers at the beginning of each entry ("1: 262144 [4: 376832]") represent
the number of currently live objects, the amount of memory occupied by live
objects, the total number of allocations and the amount of memory occupied by
all allocations, respectively.

Optimizations are usually application-specific, but here are some common
suggestions.

1. Combine objects into larger objects. For example, replace a `*bytes.Buffer`
    struct member with `bytes.Buffer` (you can preallocate the buffer for
    writing by calling `bytes.Buffer.Grow` later); see the sketch after this
    list. This will reduce the number of memory allocations (faster) and also
    reduce pressure on the garbage collector (faster garbage collections).

2. Local variables that escape from their declaration scope get promoted into
    heap allocations. The compiler generally can't prove that several variables
    have the same lifetime, so it allocates each such variable separately. So
    you can use the above advice for local variables as well. For example,
    replace:

        for k, v := range m {
            k, v := k, v // copy for capturing by the goroutine
            go func() {
                // use k and v
            }()
        }

    with:

        for k, v := range m {
            x := struct{ k, v string }{k, v} // copy for capturing by the goroutine
            go func() {
                // use x.k and x.v
            }()
        }

    This replaces two memory allocations with a single allocation. However,
    this optimization usually negatively affects code readability, so use it
    reasonably.

3. A special case of allocation combining is slice array preallocation. If you
    know a typical size of the slice, you can preallocate a backing array for
    it as follows:

        type X struct {
            buf      []byte
            bufArray [16]byte // Buf usually does not grow beyond 16 bytes.
        }

        func MakeX() *X {
            x := &X{}
            // Preinitialize buf with the backing array.
            x.buf = x.bufArray[:0]
            return x
        }

4. If possible use smaller data types. For example, use `int8` instead of
    `int`.

5. Objects that do not contain any pointers (note that strings, slices, maps
    and chans contain implicit pointers) are not scanned by the garbage
    collector. For example, a 1GB byte slice virtually does not affect garbage
    collection time. So if you remove pointers from actively used objects, it
    can positively impact garbage collection time. Some possibilities are:
    replace pointers with indices, or split the object into two parts, one of
    which does not contain pointers.

6. Use freelists to reuse transient objects and reduce the number of
    allocations. The standard library contains the
    [sync.Pool](http://tip.golang.org/pkg/sync/#Pool) type that allows reusing
    the same object several times in between garbage collections. However, be
    aware that, as with any manual memory management scheme, incorrect use of
    sync.Pool can lead to use-after-free bugs.

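To illustrate the first suggestion, here is a sketch of combining a buffer into
its owning object (the type names are made up for the example):

    // Before: every Writer requires a second allocation for its buffer.
    type Writer struct {
        buf *bytes.Buffer
    }

    // After: the buffer is allocated together with the Writer itself.
    type Writer2 struct {
        buf bytes.Buffer
    }

    func NewWriter2() *Writer2 {
        w := &Writer2{}
        w.buf.Grow(4096) // optionally preallocate space for writing
        return w
    }
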
You can also use the Garbage Collector Trace (see below) to get some insights
into memory issues.

TODO(dvyukov): mention that stats are updated in a deferred way: "Memprof stats
are updated in a deferred way. This is required in order to present a
consistent picture in the situation when allocs are coming continuously and
frees are coming in batches afterwards. Several consecutive GCs push the update
pipeline forward. That's what you observe. So if you profile a live server then
any sample will give you a consistent snapshot. However, if the program
completes some activity and you want to collect the snapshot after this
activity, then you need to execute 2 or 3 GCs before the collection."

## Blocking Profiler

The blocking profiler shows where goroutines block waiting on synchronization
primitives (including timer channels). You can collect it in similar ways as
the CPU profile: with `go test --blockprofile`
(http://golang.org/cmd/go/#hdr-Description_of_testing_flags), with
[net/http/pprof](http://golang.org/pkg/net/http/pprof) via
http://myserver:6060/debug/pprof/block or by calling
[runtime/pprof.Lookup("block").WriteTo](http://golang.org/pkg/runtime/pprof/#Profile.WriteTo).

But there is a significant caveat -- the blocking profiler is not enabled by
default. `go test --blockprofile` will enable it for you automatically.
However, if you use `net/http/pprof` or `runtime/pprof`, you need to enable it
manually (otherwise the profile will be empty). To enable the blocking
profiler, call
[runtime.SetBlockProfileRate](http://golang.org/pkg/runtime/#SetBlockProfileRate).
SetBlockProfileRate controls the fraction of goroutine blocking events that are
reported in the blocking profile. The profiler aims to sample an average of one
blocking event per the specified amount of nanoseconds spent blocked. To
include every blocking event in the profile, set the rate to 1.

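For example, here is a minimal sketch of enabling it at program start (a rate
of 1 captures every event and is the most expensive setting):

    func main() {
        // Sample every blocking event. For production servers a larger
        // rate (in nanoseconds) is usually a better trade-off.
        runtime.SetBlockProfileRate(1)
        // ... the rest of the program ...
    }
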
If a function contains several blocking operations and it's not obvious which
one leads to blocking, use the `--lines` flag to pprof.

Note that not all blocking is bad. When a goroutine blocks, the underlying
worker thread simply switches to another goroutine. So blocking in the
cooperative Go environment is very different from blocking on a mutex in
non-cooperative systems (e.g. typical C++ or Java threading libraries, where
blocking leads to thread idling and expensive thread context switches). To give
you a feeling for it, let's consider some examples.

Blocking on a time.Ticker is usually OK. If a goroutine blocks on a Ticker for
10 seconds, you will see 10 seconds of blocking in the profile, which is
perfectly fine. Blocking on `sync.WaitGroup` is frequently OK. For example, if
a task takes 10 seconds, the goroutine waiting on a WaitGroup for completion
will account for 10 seconds of blocking in the profile. Blocking on sync.Cond
may or may not be OK, depending on the situation. A consumer blocking on a
channel suggests slow producers or lack of work. A producer blocking on a
channel suggests that consumers are slower, but this is frequently OK. Blocking
on a channel-based semaphore shows how many goroutines are gated on the
semaphore. Blocking on a sync.Mutex or sync.RWMutex is usually bad. You can use
the `--ignore` flag to pprof to exclude known uninteresting blocking from a
profile during visualization.

Blocking of goroutines can lead to two negative consequences:

1. The program does not scale with processors due to lack of work. Scheduler
    Trace can help to identify this case.

2. Excessive goroutine blocking/unblocking consumes CPU time. The CPU Profiler
    can help to identify this case (look at the System component).

Here are some common suggestions that can help to reduce goroutine blocking:

1. Use sufficiently buffered channels in producer-consumer scenarios.
    Unbuffered channels substantially limit available parallelism in the
    program.

2. Use `sync.RWMutex` instead of `sync.Mutex` for read-mostly workloads.
    Readers never block other readers in `sync.RWMutex`, even at the
    implementation level.

3. In some cases it's possible to remove mutexes entirely by using the
    copy-on-write technique. If the protected data structure is modified
    infrequently and it's feasible to make copies of it, then it can be updated
    as follows:

        type Config struct {
            Routes   map[string]net.Addr
            Backends []net.Addr
        }

        var config atomic.Value // actual type is *Config

        // Worker goroutines use this function to obtain the current config.
        // UpdateConfig must be called at least once before this func.
        func CurrentConfig() *Config {
            return config.Load().(*Config)
        }

        // A background goroutine periodically creates a new Config object
        // and sets it as current using this function.
        func UpdateConfig(cfg *Config) {
            config.Store(cfg)
        }

    This pattern prevents the writer from blocking readers during update.

4. Partitioning is another general technique for reducing contention/blocking
    on shared mutable data structures. Below is an example of how to partition
    a hashmap:

        type Partition struct {
            sync.RWMutex
            m map[string]string
        }

        const partCount = 64
        var m [partCount]Partition

        func Find(k string) string {
            idx := hash(k) % partCount
            part := &m[idx]
            part.RLock()
            v := part.m[k]
            part.RUnlock()
            return v
        }

5. Local caching and batching of updates can help to reduce contention on
    un-partitionable data structures. Below you can see how to batch sends to
    a channel:

        const CacheSize = 16

        type Cache struct {
            buf [CacheSize]int
            pos int
        }

        func Send(c chan [CacheSize]int, cache *Cache, value int) {
            cache.buf[cache.pos] = value
            cache.pos++
            if cache.pos == CacheSize {
                c <- cache.buf
                cache.pos = 0
            }
        }

    This technique is not limited to channels. It can be used to batch updates
    to a map, batch allocations, etc.

6. Use [sync.Pool](http://tip.golang.org/pkg/sync/#Pool) for freelists instead
    of chan-based or mutex-protected freelists, as in the sketch below.
    sync.Pool uses smart techniques internally to reduce blocking.

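Here is a sketch of such a sync.Pool-based freelist (the buffer size is an
assumption):

    var bufPool = sync.Pool{
        New: func() interface{} {
            return make([]byte, 4096)
        },
    }

    func process() {
        buf := bufPool.Get().([]byte)
        defer bufPool.Put(buf) // return the buffer for reuse
        // ... use buf; do not retain it after Put ...
    }
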
## Goroutine Profiler

The goroutine profiler simply gives you current stacks of all live goroutines
in the process. It can be handy for debugging load balancing issues (see the
Scheduler Trace section below), or for debugging deadlocks. The profile makes
sense only for a running app, so go test does not expose it. You can collect
the profile with [net/http/pprof](http://golang.org/pkg/net/http/pprof) via
http://myserver:6060/debug/pprof/goroutine and visualize it to svg/pdf, or by
calling
[runtime/pprof.Lookup("goroutine").WriteTo](http://golang.org/pkg/runtime/pprof/#Profile.WriteTo).
But the most useful way is to type
http://myserver:6060/debug/pprof/goroutine?debug=2 in your browser, which will
give you symbolized stacks similar to what you see when a program crashes. Note
that goroutines in "syscall" state consume an OS thread, while other goroutines
do not (except for goroutines that called runtime.LockOSThread, which is,
unfortunately, not visible in the profile). Note that goroutines in "IO wait"
state also do not consume threads; they are parked on the non-blocking network
poller (which uses epoll/kqueue/GetQueuedCompletionStatus to unpark goroutines
later).

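If the program does not run an HTTP server, the same symbolized dump can be
produced programmatically; a minimal sketch (assuming `runtime/pprof` and `os`
are imported):

    // Write the stacks of all goroutines to stderr. The debug=2 argument
    // requests full symbolized stacks, like the ?debug=2 web page.
    pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
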
## Garbage Collector Trace

Aside from the profiling tools, there is another kind of tool available --
tracers. They allow you to trace garbage collections, the memory allocator and
the goroutine scheduler state. To enable the garbage collector (GC) trace, run
the program with the `GODEBUG=gctrace=1` environment variable:

    $ GODEBUG=gctrace=1 ./myserver

Then the program will print output similar to the following during execution:

    gc9(2): 12+1+744+8 us, 2 -> 10 MB, 108615 (593983-485368) objects, 4825/3620/0 sweeps, 0(0) handoff, 6(91) steal, 16/1/0 yields
    gc10(2): 12+6769+767+3 us, 1 -> 1 MB, 4222 (593983-589761) objects, 4825/0/1898 sweeps, 0(0) handoff, 6(93) steal, 16/10/2 yields
    gc11(2): 799+3+2050+3 us, 1 -> 69 MB, 831819 (1484009-652190) objects, 4825/691/0 sweeps, 0(0) handoff, 5(105) steal, 16/1/0 yields

Let's consider the meaning of these numbers. One line per GC is printed. The
first number ("gc9") is the GC number (this is the 9th GC since program start).
The number in parens ("(2)") is the number of worker threads that participated
in the GC. The next 4 numbers ("12+1+744+8 us") mean stop-the-world, sweeping,
marking and waiting for worker threads to finish, in microseconds,
respectively. The next 2 numbers ("2 -> 10 MB") mean the size of the live heap
after the previous GC and the full heap size (including garbage) before the
current GC. The next 3 numbers ("108615 (593983-485368) objects") are the total
number of objects in the heap (including garbage) and the total numbers of
memory allocation and free operations. The next 3 numbers ("4825/3620/0
sweeps") characterize the sweep phase (of the previous GC): there were 4825
memory spans in total, 3620 were swept on demand or in background, 0 were swept
during the stop-the-world phase (the rest were unused spans). The next 4
numbers ("0(0) handoff, 6(91) steal") characterize load balancing during the
parallel mark phase: there were 0 object handoff operations (0 objects were
handed off), and 6 steal operations (91 objects were stolen). The last 3
numbers ("16/1/0 yields") characterize the effectiveness of the parallel mark
phase: there were a total of 17 yield operations while waiting for another
thread.

The GC is a [mark-and-sweep
collector](http://www.brpreiss.com/books/opus5/html/page424.html). Total GC
time can be expressed as:

    Tgc = Tseq + Tmark + Tsweep

where Tseq is time to stop user goroutines and some preparation activities
(usually small); Tmark is heap marking time, marking happens when all user
goroutines are stopped, and thus can significantly affect latency of
processing; Tsweep is heap sweeping time, sweeping generally happens
concurrently with normal program execution, and so is not so critical for
latency.

Marking time can be approximately expressed as:

    Tmark = C1*Nlive + C2*MEMlive_ptr + C3*Nlive_ptr

where `Nlive` is the number of live objects in the heap during GC,
`MEMlive_ptr` is the amount of memory occupied by live objects with pointers,
and `Nlive_ptr` is the number of pointers in live objects.

Sweeping time can be approximately expressed as:

    Tsweep = C4*MEMtotal + C5*MEMgarbage

where `MEMtotal` is the total amount of heap memory and `MEMgarbage` is the
amount of garbage in the heap.

The next GC happens after the program has allocated an extra amount of memory
proportional to the amount already in use. The proportion is controlled by the
GOGC environment variable (100 by default). If GOGC=100 and the program is
using 4M of heap memory, the runtime will trigger a GC again when the program
gets to 8M. This keeps the GC cost in linear proportion to the allocation cost.
Adjusting GOGC changes the linear constant and also the amount of extra memory
used.

Only sweeping depends on the total size of the heap, and sweeping happens
concurrently with normal program execution. So it can make sense to set GOGC to
a higher value (200, 300, 500, etc.) if you can afford the extra memory
consumption. For example, GOGC=300 can reduce garbage collection overhead by up
to 2 times while keeping latencies the same (at the cost of a 2 times larger
heap).

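GOGC can be set for a single run in the same way as the GODEBUG variables shown
elsewhere in this document, for example:

    $ GOGC=300 ./myserver
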
GC is parallel and generally scales well with hardware parallelism. So it can
make sense to set GOMAXPROCS to a higher value even for sequential programs
just to speed up garbage collections. However, note that the number of garbage
collector threads is currently bounded by 8.

## Memory Allocator Trace

The memory allocator trace simply dumps all memory allocation and free
operations to the console. It's enabled with the GODEBUG=allocfreetrace=1
environment variable. The output looks along the lines of:

    tracealloc(0xc208062500, 0x100, array of parse.Node)
    goroutine 16 [running]:
    runtime.mallocgc(0x100, 0x3eb7c1, 0x0)
            runtime/malloc.goc:190 +0x145 fp=0xc2080b39f8
    runtime.growslice(0x31f840, 0xc208060700, 0x8, 0x8, 0x1, 0x0, 0x0, 0x0)
            runtime/slice.goc:76 +0xbb fp=0xc2080b3a90
    text/template/parse.(*Tree).parse(0xc2080820e0, 0xc208023620, 0x0, 0x0)
            text/template/parse/parse.go:289 +0x549 fp=0xc2080b3c50
    ...

    tracefree(0xc208002d80, 0x120)
    goroutine 16 [running]:
    runtime.MSpan_Sweep(0x73b080)
            runtime/mgc0.c:1880 +0x514 fp=0xc20804b8f0
    runtime.MCentral_CacheSpan(0x69c858)
            runtime/mcentral.c:48 +0x2b5 fp=0xc20804b920
    runtime.MCache_Refill(0x737000, 0xc200000012)
            runtime/mcache.c:78 +0x119 fp=0xc20804b950
    ...

The trace contains the address of the memory block, size, type, goroutine id
and the stack trace. It's probably more useful for debugging, but can give very
fine-grained information for allocation optimizations as well.

## Scheduler Trace

The scheduler trace can provide insights into the dynamic behavior of the
goroutine scheduler and allows debugging load balancing and scalability issues.
To enable the scheduler trace, run the program with the GODEBUG=schedtrace=1000
environment variable (the value means the period of output, in ms; in this case
it's once per second):

    $ GODEBUG=schedtrace=1000 ./myserver

Then the program will print output similar to the following during execution:

    SCHED 1004ms: gomaxprocs=4 idleprocs=0 threads=11 idlethreads=4 runqueue=8 [0 1 0 3]
    SCHED 2005ms: gomaxprocs=4 idleprocs=0 threads=11 idlethreads=5 runqueue=6 [1 5 4 0]
    SCHED 3008ms: gomaxprocs=4 idleprocs=0 threads=11 idlethreads=4 runqueue=10 [2 2 2 1]

The first number ("1004ms") is the time since program start. Gomaxprocs is the
current value of GOMAXPROCS. Idleprocs is the number of idling processors (the
rest are executing Go code). Threads is the total number of worker threads
created by the scheduler (threads can be in 3 states: executing Go code
(gomaxprocs-idleprocs), executing syscalls/cgocalls, or idling). Idlethreads is
the number of idling worker threads. Runqueue is the length of the global queue
with runnable goroutines. The numbers in square brackets ("[0 1 0 3]") are the
lengths of the per-processor queues with runnable goroutines. The sum of the
lengths of the global and local queues represents the total number of
goroutines available for execution.

Note: You can combine any of the tracers as
GODEBUG=gctrace=1,allocfreetrace=1,schedtrace=1000.

Note: There is also a detailed scheduler trace, which you can enable with
GODEBUG=schedtrace=1000,scheddetail=1. It prints detailed info about every
goroutine, worker thread and processor. We won't describe its format here as
it's mainly useful for scheduler developers; but you can find details in
[src/pkg/runtime/proc.c](https://code.google.com/p/go/source/browse/src/pkg/runtime/proc.c).

The scheduler trace is useful when a program does not scale linearly with
GOMAXPROCS and/or does not consume 100% of CPU time. The ideal situation is
when all processors are busy executing Go code, the number of threads is
reasonable, there is plenty of work in all queues and the work is reasonably
evenly distributed:

    gomaxprocs=8 idleprocs=0 threads=40 idlethreads=5 runqueue=10 [20 20 20 20 20 20 20 20]

A bad situation is when any of the above does not hold. For example, the
following sample demonstrates a shortage of work to keep all processors busy:

    gomaxprocs=8 idleprocs=6 threads=40 idlethreads=30 runqueue=0 [0 2 0 0 0 1 0 0]

Note: use OS-provided means to measure actual CPU utilization as the ultimate
characteristic. On the Unix family of operating systems it is the top command;
on Windows it is the Task Manager.

You can use the goroutine profiler to understand where goroutines block in the
case of work shortage. Note that load imbalance is not ultimately bad as long
as all processors are busy; it will just cause some moderate load balancing
overhead.

## Memory Statistics

The Go runtime exposes coarse-grained memory statistics via the
[runtime.ReadMemStats](http://golang.org/pkg/runtime/#ReadMemStats) function
(see the sketch after the list below). The statistics are also exposed via
net/http/pprof at the bottom of http://myserver:6060/debug/pprof/heap?debug=1.
The statistics are described [here](http://golang.org/pkg/runtime/#MemStats).
Some of the interesting fields are:

623
6241. HeapAlloc - current heap size.
6252. HeapSys - total heap size.
6263. HeapObjects - total number of objects in the heap.
6274. HeapReleased - amount of memory released to the OS; runtime releases to the
628 OS memory unused for 5 minutes, you can force this process with
629 runtime/debug.FreeOSMemory.
6305. Sys - total amount of memory allocated from OS.
6316. Sys-HeapReleased - effective memory consumption of the program.
6327. StackSys - memory consumed for goroutine stacks (note that some stacks are
633 allocated from heap and are not accounted here, unfortunately there is no
634 way to get total size of stacks
635 (https://code.google.com/p/go/issues/detail?id=7468)).
6368. MSpanSys/MCacheSys/BuckHashSys/GCSys/OtherSys - amount of memory allocated
637 by runtime for various auxiliary purposes; they are generally not
638 interesting, unless they are too high.
6399. PauseNs - durations of last garbage collections.
640
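Here is a minimal sketch of reading a few of these fields directly:

    var ms runtime.MemStats
    runtime.ReadMemStats(&ms) // briefly stops the world, so use it sparingly
    log.Printf("heap: alloc=%vMB sys=%vMB released=%vMB objects=%v",
        ms.HeapAlloc>>20, ms.HeapSys>>20, ms.HeapReleased>>20, ms.HeapObjects)
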
## Heap Dumper

The last available tool is the heap dumper; it can write the state of the whole
heap into a file for future exploration. It can be useful for identifying
memory leaks and getting insights into program memory consumption.

First, you need to write the dump using the
[runtime/debug.WriteHeapDump](http://tip.golang.org/pkg/runtime/debug/#WriteHeapDump)
function:

    f, err := os.Create("heapdump")
    if err != nil {
        log.Fatal(err)
    }
    debug.WriteHeapDump(f.Fd())

Then you can either render it to a dot file with a graphical representation of
the heap or convert it to hprof format. To render it to a dot file:

    $ go get github.com/randall77/hprof/dumptodot
    $ dumptodot heapdump mybinary > heap.dot

and open `heap.dot` with Graphviz.

To convert it to `hprof` format:

    $ go get github.com/randall77/hprof/dumptohprof
    $ dumptohprof heapdump heap.hprof
    $ jhat heap.hprof

and navigate your browser to http://myserver:7000.

## Concluding Remarks

Optimization is an open problem; there are simple recipes that you can use to
improve performance, but sometimes optimization requires a complete
re-architecture of the program. We hope that the tools described here will be a
valuable addition to your toolbox, and that you can use them to at least
analyze and understand what happens. [Profiling Go
Programs](http://blog.golang.org/profiling-go-programs) is a good tutorial on
usage of the CPU and memory profilers to optimize a simple program.