# Debugging performance issues in Go programs

-- originally written by Dmitry Vyukov

Let's assume you have a Go program and want to improve its performance. There
are several tools available that can help with this task. These tools can help
you to identify various types of hotspots (CPU, IO, memory); hotspots are the
places that you need to concentrate on in order to significantly improve
performance. However, another outcome is possible -- the tools can help you
identify obvious performance defects in the program. For example, you prepare an
SQL statement before each query while you could prepare it once at program
startup. Another example is an O(N^2) algorithm that somehow slipped in where an
obvious O(N) one exists and is expected. In order to identify such cases you
need to sanity check what you see in profiles. For example, in the first case
significant time spent in SQL statement preparation would be the red flag.

It's also important to understand various bounding factors for performance. For
example, if the program communicates via a 100 Mbps network link and it already
utilizes >90 Mbps, there is not much you can do with the program to improve its
performance. There are similar bounding factors for disk IO, memory consumption
and computational tasks. With that in mind we can look at the available tools.

Note: The tools can interfere with each other. For example, precise memory
profiling skews CPU profiles, goroutine blocking profiling affects scheduler
trace, etc. Use tools in isolation to get more precise info.

## CPU Profiler

The Go runtime contains a built-in CPU profiler, which shows which functions
consume what percent of CPU time. There are 3 ways you can get access to it:

1. The simplest one is the -cpuprofile flag of the 'go test'
   (http://golang.org/cmd/go/#hdr-Description_of_testing_flags) command. For
   example, the following command:

        $ go test -run=none -bench=ClientServerParallel4 -cpuprofile=cprof net/http

   will profile the given benchmark and write the CPU profile into the 'cprof'
   file. Then:

        $ go tool pprof --text http.test cprof

   will print a list of the hottest functions.

   There are several output types available; the most useful ones are:
   `--text`, `--web` and `--list`. Run `go tool pprof` to get the complete
   list. The obvious drawback of this option is that it works only for tests.

2. [net/http/pprof](http://golang.org/pkg/net/http/pprof): This is the ideal
   solution for network servers. You merely need to import `net/http/pprof`
   (see the sketch after this list), and collect profiles with:

        $ go tool pprof --text mybin http://myserver:6060/debug/pprof/profile

3. Manual profile collection. You need to import
   [runtime/pprof](http://golang.org/pkg/runtime/pprof/) and add the following
   code to the main function:

        if *flagCpuprofile != "" {
            f, err := os.Create(*flagCpuprofile)
            if err != nil {
                log.Fatal(err)
            }
            pprof.StartCPUProfile(f)
            defer pprof.StopCPUProfile()
        }

   The profile will be written to the specified file; visualize it the same way
   as in the first option.
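
For the second option, a minimal sketch of a server that exposes the profiling
endpoints might look like the following (the listen address is only
illustrative; importing the package for its side effect is what registers the
/debug/pprof/ handlers):

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers /debug/pprof/ handlers on http.DefaultServeMux
    )

    func main() {
        // Passing nil uses http.DefaultServeMux, where the pprof handlers live.
        go func() {
            log.Println(http.ListenAndServe("localhost:6060", nil))
        }()
        // ... the rest of the server ...
    }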

Here is an example of a profile visualized with the `--web` option:
[cpu_profile.png]

You can investigate a single function with the `--list=funcname` option. For
example, the following profile shows that the time was spent in the `append`
function:

    .      .   93: func (bp *buffer) WriteRune(r rune) error {
    .      .   94:     if r < utf8.RuneSelf {
    5      5   95:         *bp = append(*bp, byte(r))
    .      .   96:         return nil
    .      .   97:     }
    .      .   98:
    .      .   99:     b := *bp
    .      .  100:     n := len(b)
    .      .  101:     for n+utf8.UTFMax > cap(b) {
    .      .  102:         b = append(b, 0)
    .      .  103:     }
    .      .  104:     w := utf8.EncodeRune(b[n:n+utf8.UTFMax], r)
    .      .  105:     *bp = b[:n+w]
    .      .  106:     return nil
    .      .  107: }

There are also 3 special entries that the profiler uses when it can't unwind
the stack: GC, System and ExternalCode. GC means time spent during garbage
collection; see the Memory Profiler and Garbage Collector Trace sections below
for optimization suggestions. System means time spent in the goroutine
scheduler, stack management code and other auxiliary runtime code. ExternalCode
means time spent in native dynamic libraries.

Here are some hints with respect to how to interpret what you see in the
profile.

If you see lots of time spent in the `runtime.mallocgc` function, the program
potentially makes an excessive number of small memory allocations. The profile
will tell you where the allocations are coming from. See the memory profiler
section for suggestions on how to optimize this case.

If lots of time is spent in channel operations, sync.Mutex code and other
synchronization primitives or the System component, the program probably
suffers from contention. Consider restructuring the program to eliminate
frequently accessed shared resources. Common techniques for this include
sharding/partitioning, local buffering/batching and copy-on-write.

If lots of time is spent in `syscall.Read/Write`, the program potentially makes
an excessive number of small reads and writes. `bufio` wrappers around os.File
or net.Conn can help in this case.
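
As a rough sketch of that fix, buffering small writes to a net.Conn (the
endpoint and message format below are purely illustrative) turns many small
syscalls into one large one:

    // Assumes the usual "bufio", "fmt", "log" and "net" imports.
    conn, err := net.Dial("tcp", "example.com:80") // illustrative endpoint
    if err != nil {
        log.Fatal(err)
    }
    w := bufio.NewWriter(conn)
    for i := 0; i < 1000; i++ {
        fmt.Fprintf(w, "item %d\n", i) // accumulated in the in-memory buffer
    }
    if err := w.Flush(); err != nil { // one large write instead of a thousand small ones
        log.Fatal(err)
    }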

If lots of time is spent in the GC component, the program either allocates too
many transient objects or the heap size is very small so garbage collections
happen too frequently. See the Garbage Collector Trace and Memory Profiler
sections for optimization suggestions.

Note: the CPU profiler currently [does not work on
darwin](https://code.google.com/p/go/issues/detail?id=6047).

Note: On Windows you need to install Cygwin, Perl and Graphviz to generate
svg/web profiles.


## Memory Profiler

The memory profiler shows which functions allocate heap memory. You can collect
it in similar ways as the CPU profile: with `go test --memprofile`
(http://golang.org/cmd/go/#hdr-Description_of_testing_flags), with
[net/http/pprof](http://golang.org/pkg/net/http/pprof) via
http://myserver:6060/debug/pprof/heap or by calling
[runtime/pprof.WriteHeapProfile](http://golang.org/pkg/runtime/pprof/#WriteHeapProfile).

You can visualize only allocations live at the time of profile collection
(`--inuse_space` flag to pprof, default), or all allocations that have happened
since program start (`--alloc_space` flag to pprof). The former is useful for
profiles collected with net/http/pprof on live applications, the latter is
useful for profiles collected at program end (otherwise you will see an almost
empty profile).
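
For profiles collected at program end, a minimal sketch might look like this
(the `flagMemprofile` flag is illustrative; running a couple of collections
first lets the deferred profile statistics catch up, see the note at the end of
this section):

    if *flagMemprofile != "" {
        f, err := os.Create(*flagMemprofile)
        if err != nil {
            log.Fatal(err)
        }
        // Give the profile a chance to reflect recently freed memory.
        runtime.GC()
        runtime.GC()
        pprof.WriteHeapProfile(f)
        f.Close()
    }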

Note: the memory profiler is sampling, that is, it collects information only
about some subset of memory allocations. The probability of sampling an object
is proportional to its size. You can change the sampling rate with the go test
`--memprofilerate` flag, or by setting the `runtime.MemProfileRate` variable at
program startup. A rate of 1 will lead to collection of information about all
allocations, but it can slow down execution. The default sampling rate is 1
sample per 512KB of allocated memory.

You can also visualize either the number of bytes allocated or the number of
objects allocated (`--inuse/alloc_space` and `--inuse/alloc_objects` flags,
respectively). The profiler tends to sample larger objects more during
profiling. But it's important to understand that large objects affect memory
consumption and GC time, while a large number of tiny allocations affects
execution speed (and GC time to some degree as well). So it may be useful to
look at both.

Objects can be persistent or transient. If you have several large persistent
objects allocated at program start, they will most likely be sampled by the
profiler (because they are large). Such objects do affect memory consumption
and GC time, but they do not affect normal execution speed (no memory
management operations happen on them). On the other hand, if you have a large
number of objects with very short lifetimes, they can be barely represented in
the profile (if you use the default `--inuse_space` mode). But they do
significantly affect execution speed, because they are constantly allocated and
freed. So, once again, it may be useful to look at both types of objects.
Generally, if you want to reduce memory consumption, you need to look at the
`--inuse_space` profile collected during normal program operation. If you want
to improve execution speed, look at the `--alloc_objects` profile collected
after significant running time or at program end.

There are several flags that control reporting granularity. `--functions` makes
pprof report at the function level (default). `--lines` makes pprof report at
the source line level, which is useful if hot functions allocate on different
lines. And there are also `--addresses` and `--files` for exact instruction
address and file level, respectively.

There is a useful option for the memory profile -- you can look at it right in
the browser (provided that you imported `net/http/pprof`). If you open
http://myserver:6060/debug/pprof/heap?debug=1 you should see a heap profile
along the lines of:

    heap profile: 4: 266528 [123: 11284472] @ heap/1048576
    1: 262144 [4: 376832] @ 0x28d9f 0x2a201 0x2a28a 0x2624d 0x26188 0x94ca3 0x94a0b 0x17add6 0x17ae9f 0x1069d3 0xfe911 0xf0a3e 0xf0d22 0x21a70
    #	0x2a201	cnew+0xc1	runtime/malloc.goc:718
    #	0x2a28a	runtime.cnewarray+0x3a	runtime/malloc.goc:731
    #	0x2624d	makeslice1+0x4d	runtime/slice.c:57
    #	0x26188	runtime.makeslice+0x98	runtime/slice.c:38
    #	0x94ca3	bytes.makeSlice+0x63	bytes/buffer.go:191
    #	0x94a0b	bytes.(*Buffer).ReadFrom+0xcb	bytes/buffer.go:163
    #	0x17add6	io/ioutil.readAll+0x156	io/ioutil/ioutil.go:32
    #	0x17ae9f	io/ioutil.ReadAll+0x3f	io/ioutil/ioutil.go:41
    #	0x1069d3	godoc/vfs.ReadFile+0x133	godoc/vfs/vfs.go:44
    #	0xfe911	godoc.func·023+0x471	godoc/meta.go:80
    #	0xf0a3e	godoc.(*Corpus).updateMetadata+0x9e	godoc/meta.go:101
    #	0xf0d22	godoc.(*Corpus).refreshMetadataLoop+0x42	godoc/meta.go:141

    2: 4096 [2: 4096] @ 0x28d9f 0x29059 0x1d252 0x1d450 0x106993 0xf1225 0xe1489 0xfbcad 0x21a70
    #	0x1d252	newdefer+0x112	runtime/panic.c:49
    #	0x1d450	runtime.deferproc+0x10	runtime/panic.c:132
    #	0x106993	godoc/vfs.ReadFile+0xf3	godoc/vfs/vfs.go:43
    #	0xf1225	godoc.(*Corpus).parseFile+0x75	godoc/parser.go:20
    #	0xe1489	godoc.(*treeBuilder).newDirTree+0x8e9	godoc/dirtrees.go:108
    #	0xfbcad	godoc.func·002+0x15d	godoc/dirtrees.go:100

The numbers at the beginning of each entry ("1: 262144 [4: 376832]") represent
the number of currently live objects, the amount of memory occupied by live
objects, the total number of allocations and the amount of memory occupied by
all allocations, respectively.

Optimizations are usually application-specific, but here are some common
suggestions.

1. Combine objects into larger objects. For example, replace a `*bytes.Buffer`
   struct member with `bytes.Buffer` (you can preallocate the buffer for
   writing by calling `bytes.Buffer.Grow` later). This will reduce the number
   of memory allocations (faster) and also reduce pressure on the garbage
   collector (faster garbage collections).

2. Local variables that escape from their declaration scope get promoted into
   heap allocations. The compiler generally can't prove that several variables
   have the same lifetime, so it allocates each such variable separately. So
   you can use the above advice for local variables as well. For example,
   replace:

        for k, v := range m {
            k, v := k, v // copy for capturing by the goroutine
            go func() {
                // use k and v
            }()
        }

   with:

        for k, v := range m {
            x := struct{ k, v string }{k, v} // copy for capturing by the goroutine
            go func() {
                // use x.k and x.v
            }()
        }

   This replaces two memory allocations with a single allocation. However, this
   optimization usually negatively affects code readability, so use it
   reasonably.

3. A special case of allocation combining is slice array preallocation. If you
   know a typical size of the slice, you can preallocate a backing array for it
   as follows:

        type X struct {
            buf      []byte
            bufArray [16]byte // Buf usually does not grow beyond 16 bytes.
        }

        func MakeX() *X {
            x := &X{}
            // Preinitialize buf with the backing array.
            x.buf = x.bufArray[:0]
            return x
        }

4. If possible use smaller data types. For example, use `int8` instead of
   `int`.

5. Objects that do not contain any pointers (note that strings, slices, maps
   and chans contain implicit pointers) are not scanned by the garbage
   collector. For example, a 1GB byte slice virtually does not affect garbage
   collection time. So if you remove pointers from actively used objects, it
   can positively impact garbage collection time. Some possibilities are:
   replace pointers with indices, or split the object into two parts, one of
   which does not contain pointers.

6. Use freelists to reuse transient objects and reduce the number of
   allocations. The standard library contains the
   [sync.Pool](http://tip.golang.org/pkg/sync/#Pool) type that allows reusing
   the same object several times between garbage collections. However, be
   aware that, as with any manual memory management scheme, incorrect use of
   sync.Pool can lead to use-after-free bugs.
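
A rough sketch of the sync.Pool approach (the buffer type and the `process`
function are illustrative):

    var bufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    func process(data []byte) {
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset()
        buf.Write(data)
        // ... use buf as scratch space ...
        // Do not retain any reference to buf after putting it back.
        bufPool.Put(buf)
    }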

You can also use the Garbage Collector Trace (see below) to get some insights
into memory issues.

TODO(dvyukov): mention that stats are updated in a deferred way: "Memprof stats
are updated in a deferred way. This is required in order to present a consistent
picture in the situation when allocs are coming continuously and frees are
coming in batches afterwards. Several consecutive GCs push the update pipeline
forward. That's what you observe. So if you profile a live server then any
sample will give you a consistent snapshot. However, if the program completes
some activity and you want to collect the snapshot after this activity, then you
need to execute 2 or 3 GCs before the collection."

## Blocking Profiler

The blocking profiler shows where goroutines block waiting on synchronization
primitives (including timer channels). You can collect it in similar ways as the
CPU profile: with `go test --blockprofile`
(http://golang.org/cmd/go/#hdr-Description_of_testing_flags), with
[net/http/pprof](http://golang.org/pkg/net/http/pprof) via
http://myserver:6060/debug/pprof/block or by calling
[runtime/pprof.Lookup("block").WriteTo](http://golang.org/pkg/runtime/pprof/#Profile.WriteTo).

But there is a significant caveat -- the blocking profiler is not enabled by
default. `go test --blockprofile` will enable it for you automatically. However,
if you use `net/http/pprof` or `runtime/pprof`, you need to enable it manually
(otherwise the profile will be empty). To enable the blocking profiler call
[runtime.SetBlockProfileRate](http://golang.org/pkg/runtime/#SetBlockProfileRate).
SetBlockProfileRate controls the fraction of goroutine blocking events that are
reported in the blocking profile. The profiler aims to sample an average of one
blocking event per the specified amount of nanoseconds spent blocked. To include
every blocking event in the profile, set the rate to 1.
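
A minimal sketch that enables the profiler at startup and dumps the profile at
exit might look like this (the file name is illustrative; a rate of 1 records
every blocking event and adds noticeable overhead, so treat it as a debugging
setting):

    func main() {
        runtime.SetBlockProfileRate(1) // record every blocking event
        defer func() {
            f, err := os.Create("block.prof")
            if err != nil {
                log.Fatal(err)
            }
            pprof.Lookup("block").WriteTo(f, 0)
            f.Close()
        }()
        // ... the rest of the program ...
    }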

If a function contains several blocking operations and it's not obvious which
one leads to blocking, use the `--lines` flag to pprof.

Note that not all blocking is bad. When a goroutine blocks, the underlying
worker thread simply switches to another goroutine. So blocking in the
cooperative Go environment is very different from blocking on a mutex in
non-cooperative systems (e.g. typical C++ or Java threading libraries, where
blocking leads to thread idling and expensive thread context switches). To give
you some feeling, let's consider some examples.

Blocking on a time.Ticker is usually OK. If a goroutine blocks on a Ticker for
10 seconds, you will see 10 seconds of blocking in the profile, which is
perfectly fine. Blocking on `sync.WaitGroup` is frequently OK. For example, if a
task takes 10 seconds, the goroutine waiting on a WaitGroup for completion will
account for 10 seconds of blocking in the profile. Blocking on sync.Cond may or
may not be OK, depending on the situation. A consumer blocking on a channel
suggests slow producers or lack of work. A producer blocking on a channel
suggests that consumers are slower, but this is frequently OK. Blocking on a
channel-based semaphore shows how many goroutines are gated on the semaphore.
Blocking on a sync.Mutex or sync.RWMutex is usually bad. You can use the
`--ignore` flag to pprof to exclude known uninteresting blocking from a profile
during visualization.

Blocking of goroutines can lead to two negative consequences:

1. The program does not scale with processors due to lack of work. Scheduler
   Trace can help to identify this case.

2. Excessive goroutine blocking/unblocking consumes CPU time. The CPU Profiler
   can help to identify this case (look at the System component).

Here are some common suggestions that can help to reduce goroutine blocking:

1. Use sufficiently buffered channels in producer-consumer scenarios.
   Unbuffered channels substantially limit available parallelism in the
   program.

2. Use `sync.RWMutex` instead of `sync.Mutex` for read-mostly workloads.
   Readers never block other readers in `sync.RWMutex`, even at the
   implementation level.

3. In some cases it's possible to remove mutexes entirely by using the
   copy-on-write technique. If the protected data structure is modified
   infrequently and it's feasible to make copies of it, then it can be updated
   as follows:

        type Config struct {
            Routes   map[string]net.Addr
            Backends []net.Addr
        }

        var config atomic.Value // actual type is *Config

        // Worker goroutines use this function to obtain the current config.
        // UpdateConfig must be called at least once before this func.
        func CurrentConfig() *Config {
            return config.Load().(*Config)
        }

        // A background goroutine periodically creates a new Config object
        // and sets it as current using this function.
        func UpdateConfig(cfg *Config) {
            config.Store(cfg)
        }

   This pattern prevents the writer from blocking readers during the update.

4. Partitioning is another general technique for reducing contention/blocking
   on shared mutable data structures. Below is an example of how to partition a
   hashmap:

        type Partition struct {
            sync.RWMutex
            m map[string]string
        }

        const partCount = 64
        var m [partCount]Partition

        func Find(k string) string {
            idx := hash(k) % partCount
            part := &m[idx]
            part.RLock()
            v := part.m[k]
            part.RUnlock()
            return v
        }

5. Local caching and batching of updates can help to reduce contention on
   un-partitionable data structures. Below you can see how to batch sends to a
   channel:

        const CacheSize = 16

        type Cache struct {
            buf [CacheSize]int
            pos int
        }

        func Send(c chan [CacheSize]int, cache *Cache, value int) {
            cache.buf[cache.pos] = value
            cache.pos++
            if cache.pos == CacheSize {
                c <- cache.buf
                cache.pos = 0
            }
        }

   This technique is not limited to channels. It can be used to batch updates
   to a map, batch allocations, etc.

6. Use [sync.Pool](http://tip.golang.org/pkg/sync/#Pool) for freelists instead
   of chan-based or mutex-protected freelists. sync.Pool uses smart techniques
   internally to reduce blocking.

## Goroutine Profiler

The goroutine profiler simply gives you the current stacks of all live
goroutines in the process. It can be handy for debugging load balancing issues
(see the Scheduler Trace section below) or deadlocks. The profile makes sense
only for a running app, so go test does not expose it. You can collect the
profile with [net/http/pprof](http://golang.org/pkg/net/http/pprof) via
http://myserver:6060/debug/pprof/goroutine and visualize it to svg/pdf, or by
calling
[runtime/pprof.Lookup("goroutine").WriteTo](http://golang.org/pkg/runtime/pprof/#Profile.WriteTo).
But the most useful way is to type
http://myserver:6060/debug/pprof/goroutine?debug=2 in your browser, which will
give you symbolized stacks similar to what you see when a program crashes. Note
that goroutines in "syscall" state consume an OS thread, while other goroutines
do not (except for goroutines that called runtime.LockOSThread, which is,
unfortunately, not visible in the profile). Note that goroutines in "IO wait"
state also do not consume threads; they are parked on the non-blocking network
poller (which uses epoll/kqueue/GetQueuedCompletionStatus to unpark goroutines
later).

## Garbage Collector Trace

Aside from the profiling tools, there is another kind of tool available --
tracers. They allow you to trace garbage collections, the memory allocator and
the goroutine scheduler state. To enable the garbage collector (GC) trace, run
the program with the `GODEBUG=gctrace=1` environment variable:

    $ GODEBUG=gctrace=1 ./myserver

Then the program will print output similar to the following during execution:

    gc9(2): 12+1+744+8 us, 2 -> 10 MB, 108615 (593983-485368) objects, 4825/3620/0 sweeps, 0(0) handoff, 6(91) steal, 16/1/0 yields
    gc10(2): 12+6769+767+3 us, 1 -> 1 MB, 4222 (593983-589761) objects, 4825/0/1898 sweeps, 0(0) handoff, 6(93) steal, 16/10/2 yields
    gc11(2): 799+3+2050+3 us, 1 -> 69 MB, 831819 (1484009-652190) objects, 4825/691/0 sweeps, 0(0) handoff, 5(105) steal, 16/1/0 yields

Let's consider the meaning of these numbers. One line per GC is printed. The
first number ("gc9") is the number of the GC (this is the 9th GC since program
start). The number in parens ("(2)") is the number of worker threads that
participated in the GC. The next 4 numbers ("12+1+744+8 us") mean
stop-the-world, sweeping, marking and waiting for worker threads to finish, in
microseconds, respectively. The next 2 numbers ("2 -> 10 MB") mean the size of
the live heap after the previous GC and the full heap size (including garbage)
before the current GC. The next 3 numbers ("108615 (593983-485368) objects")
are the total number of objects in the heap (including garbage) and the total
number of memory allocation and free operations. The next 3 numbers
("4825/3620/0 sweeps") characterize the sweep phase (of the previous GC): there
were a total of 4825 memory spans, 3620 were swept on demand or in background,
0 were swept during the stop-the-world phase (the rest were unused spans). The
next 4 numbers ("0(0) handoff, 6(91) steal") characterize load balancing during
the parallel mark phase: there were 0 object handoff operations (0 objects were
handed off), and 6 steal operations (91 objects were stolen). The last 3
numbers ("16/1/0 yields") characterize the effectiveness of the parallel mark
phase: there were a total of 17 yield operations while waiting for another
thread.

The GC is [mark-and-sweep
type](http://www.brpreiss.com/books/opus5/html/page424.html). Total GC time can
be expressed as:

    Tgc = Tseq + Tmark + Tsweep

where Tseq is the time to stop user goroutines and some preparation activities
(usually small); Tmark is the heap marking time, marking happens when all user
goroutines are stopped, and thus can significantly affect latency of
processing; Tsweep is the heap sweeping time, sweeping generally happens
concurrently with normal program execution, and so is not so critical for
latency.

Marking time can be approximately expressed as:

    Tmark = C1*Nlive + C2*MEMlive_ptr + C3*Nlive_ptr

where `Nlive` is the number of live objects in the heap during GC,
`MEMlive_ptr` is the amount of memory occupied by live objects with pointers,
and `Nlive_ptr` is the number of pointers in live objects.

Sweeping time can be approximately expressed as:

    Tsweep = C4*MEMtotal + C5*MEMgarbage

where `MEMtotal` is the total amount of heap memory and `MEMgarbage` is the
amount of garbage in the heap.

The next GC happens after the program has allocated an extra amount of memory
proportional to the amount already in use. The proportion is controlled by the
GOGC environment variable (100 by default). If GOGC=100 and the program is
using 4M of heap memory, the runtime will trigger a GC again when the program
gets to 8M. This keeps the GC cost in linear proportion to the allocation cost.
Adjusting GOGC changes the linear constant and also the amount of extra memory
used.

Only sweeping depends on the total size of the heap, and sweeping happens
concurrently with normal program execution. So it can make sense to set GOGC to
a higher value (200, 300, 500, etc) if you can afford extra memory consumption.
For example, GOGC=300 can reduce garbage collection overhead by up to 2 times
while keeping latencies the same (at the cost of a 2 times larger heap).
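
The same knob is also available programmatically via
[runtime/debug.SetGCPercent](http://golang.org/pkg/runtime/debug/#SetGCPercent);
a minimal sketch:

    // Let the heap grow by 300% of the live set before the next collection,
    // the same effect as running with GOGC=300. The previous setting is returned.
    prev := debug.SetGCPercent(300)
    _ = prev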

The GC is parallel and generally scales well with hardware parallelism. So it
can make sense to set GOMAXPROCS to a higher value even for sequential programs
just to speed up garbage collections. However, note that the number of garbage
collector threads is currently bounded by 8.

## Memory Allocator Trace

The memory allocator trace simply dumps all memory allocation and free
operations onto the console. It's enabled with the GODEBUG=allocfreetrace=1
environment variable. The output looks along the lines of:

    tracealloc(0xc208062500, 0x100, array of parse.Node)
    goroutine 16 [running]:
    runtime.mallocgc(0x100, 0x3eb7c1, 0x0)
    	runtime/malloc.goc:190 +0x145 fp=0xc2080b39f8
    runtime.growslice(0x31f840, 0xc208060700, 0x8, 0x8, 0x1, 0x0, 0x0, 0x0)
    	runtime/slice.goc:76 +0xbb fp=0xc2080b3a90
    text/template/parse.(*Tree).parse(0xc2080820e0, 0xc208023620, 0x0, 0x0)
    	text/template/parse/parse.go:289 +0x549 fp=0xc2080b3c50
    ...

    tracefree(0xc208002d80, 0x120)
    goroutine 16 [running]:
    runtime.MSpan_Sweep(0x73b080)
    	runtime/mgc0.c:1880 +0x514 fp=0xc20804b8f0
    runtime.MCentral_CacheSpan(0x69c858)
    	runtime/mcentral.c:48 +0x2b5 fp=0xc20804b920
    runtime.MCache_Refill(0x737000, 0xc200000012)
    	runtime/mcache.c:78 +0x119 fp=0xc20804b950
    ...

The trace contains the address of the memory block, its size, type, goroutine
id and the stack trace. It's probably more useful for debugging, but can give
very fine-grained info for allocation optimizations as well.

## Scheduler Trace

The scheduler trace can provide insights into the dynamic behavior of the
goroutine scheduler and allows debugging of load balancing and scalability
issues. To enable the scheduler trace, run the program with the
GODEBUG=schedtrace=1000 environment variable (the value means the period of
output, in ms; in this case it's once per second):

    $ GODEBUG=schedtrace=1000 ./myserver

Then the program will print output similar to the following during execution:

    SCHED 1004ms: gomaxprocs=4 idleprocs=0 threads=11 idlethreads=4 runqueue=8 [0 1 0 3]
    SCHED 2005ms: gomaxprocs=4 idleprocs=0 threads=11 idlethreads=5 runqueue=6 [1 5 4 0]
    SCHED 3008ms: gomaxprocs=4 idleprocs=0 threads=11 idlethreads=4 runqueue=10 [2 2 2 1]

The first number ("1004ms") is the time since program start. Gomaxprocs is the
current value of GOMAXPROCS. Idleprocs is the number of idling processors (the
rest are executing Go code). Threads is the total number of worker threads
created by the scheduler (threads can be in 3 states: executing Go code
(gomaxprocs-idleprocs), executing syscalls/cgocalls, or idle). Idlethreads is
the number of idling worker threads. Runqueue is the length of the global queue
of runnable goroutines. The numbers in square brackets ("[0 1 0 3]") are the
lengths of per-processor queues of runnable goroutines. The sum of the lengths
of the global and local queues represents the total number of goroutines
available for execution.

Note: You can combine any of the tracers as
GODEBUG=gctrace=1,allocfreetrace=1,schedtrace=1000.

Note: There is also a detailed scheduler trace, which you can enable with
GODEBUG=schedtrace=1000,scheddetail=1. It prints detailed info about every
goroutine, worker thread and processor. We won't describe its format here as
it's mainly useful for scheduler developers; you can find details in
[src/pkg/runtime/proc.c](https://code.google.com/p/go/source/browse/src/pkg/runtime/proc.c).

The scheduler trace is useful when a program does not scale linearly with
GOMAXPROCS and/or does not consume 100% of CPU time. The ideal situation is
when all processors are busy executing Go code, the number of threads is
reasonable, there is plenty of work in all queues and the work is reasonably
evenly distributed:

    gomaxprocs=8 idleprocs=0 threads=40 idlethreads=5 runqueue=10 [20 20 20 20 20 20 20 20]

A bad situation is when something of the above does not hold. For example, the
following sample demonstrates a shortage of work to keep all processors busy:

    gomaxprocs=8 idleprocs=6 threads=40 idlethreads=30 runqueue=0 [0 2 0 0 0 1 0 0]

Note: use OS-provided means to measure actual CPU utilization as the ultimate
characteristic. On the Unix family of operating systems it is the top command;
on Windows it is the Task Manager.

You can use the goroutine profiler to understand where goroutines block in the
case of work shortage. Note that load imbalance is not ultimately bad as long
as all processors are busy; it will just cause some moderate load balancing
overhead.

## Memory Statistics

The Go runtime exposes coarse-grained memory statistics via the
[runtime.ReadMemStats](http://golang.org/pkg/runtime/#ReadMemStats) function.
The statistics are also exposed via net/http/pprof at the bottom of
http://myserver:6060/debug/pprof/heap?debug=1. The statistics are described
[here](http://golang.org/pkg/runtime/#MemStats). Some of the interesting fields
are:

1. HeapAlloc - current heap size.
2. HeapSys - total heap size.
3. HeapObjects - total number of objects in the heap.
4. HeapReleased - amount of memory released to the OS; the runtime releases
   memory unused for 5 minutes back to the OS, and you can force this process
   with runtime/debug.FreeOSMemory.
5. Sys - total amount of memory allocated from the OS.
6. Sys-HeapReleased - effective memory consumption of the program.
7. StackSys - memory consumed for goroutine stacks (note that some stacks are
   allocated from the heap and are not accounted for here; unfortunately there
   is no way to get the total size of stacks
   (https://code.google.com/p/go/issues/detail?id=7468)).
8. MSpanSys/MCacheSys/BuckHashSys/GCSys/OtherSys - amount of memory allocated
   by the runtime for various auxiliary purposes; they are generally not
   interesting, unless they are too high.
9. PauseNs - durations of the last garbage collections.
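
A small sketch of reading some of these fields directly (note that
runtime.ReadMemStats briefly stops the world, so don't call it at a high
frequency):

    var ms runtime.MemStats
    runtime.ReadMemStats(&ms)
    log.Printf("heap=%vMB sys=%vMB released=%vMB objects=%v lastPause=%vms",
        ms.HeapAlloc>>20, ms.Sys>>20, ms.HeapReleased>>20, ms.HeapObjects,
        ms.PauseNs[(ms.NumGC+255)%256]/1000000) // PauseNs is a circular buffer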

## Heap Dumper

The last available tool is the heap dumper; it can write the state of the whole
heap into a file for later exploration. It can be useful for identifying memory
leaks and getting insights into program memory consumption.

First, you need to write the dump using the
[runtime/debug.WriteHeapDump](http://tip.golang.org/pkg/runtime/debug/#WriteHeapDump)
function:

    f, err := os.Create("heapdump")
    if err != nil { ... }
    debug.WriteHeapDump(f.Fd())

Then you can either render it to a dot file with a graphical representation of
the heap or convert it to hprof format. To render it to a dot file:

    $ go get github.com/randall77/hprof/dumptodot
    $ dumptodot heapdump mybinary > heap.dot

and open `heap.dot` with Graphviz.

To convert it to `hprof` format:

    $ go get github.com/randall77/hprof/dumptohprof
    $ dumptohprof heapdump heap.hprof
    $ jhat heap.hprof

and navigate your browser to http://myserver:7000.

## Concluding Remarks

Optimization is an open problem; there are simple recipes that you can use to
improve performance, and sometimes optimization requires a complete
re-architecture of the program. But we hope that these tools will be a valuable
addition to your toolbox, which you can use to at least analyze and understand
what is happening. [Profiling Go Programs](http://blog.golang.org/profiling-go-programs)
is a good tutorial on using the CPU and memory profilers to optimize a simple
program.