runtime: scavenge memory upon allocating from scavenged memory

Because scavenged and unscavenged spans no longer coalesce, memory that
is freed no longer has a high likelihood of being re-scavenged. As a
result, if an application allocates at a high enough rate, it can undo
all of the scavenging work performed by the runtime's current
scavenging mechanisms. This behavior is exacerbated by the
global best-fit allocation policy the runtime uses, since scavenged
spans are just as likely to be chosen as unscavenged spans on average.

To remedy that, we treat each allocation of scavenged space as a heap
growth, and scavenge other memory to make up for the allocation.
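
The following is a minimal, self-contained sketch of the idea, not the
runtime's implementation: the span and heap types and the bestFit,
alloc, and scavengeLargest bodies below are invented stand-ins (the
real runtime tracks free spans in a treap and calls its own
scavengeLargest, as the diff below shows). It illustrates both points
above: best-fit selection ignores whether a span is scavenged, and an
allocation satisfied from scavenged memory triggers an equivalent
amount of scavenging elsewhere.

    package main

    import (
        "fmt"
        "sort"
    )

    // span is a toy stand-in for a run of free pages.
    type span struct {
        bytes     int
        scavenged bool // pages returned to the OS, not counted in RSS
    }

    type heap struct {
        free []span
    }

    // bestFit picks the smallest free span that satisfies the request.
    // The scavenged bit plays no role in the choice, which is why a
    // scavenged span is chosen just as readily as an unscavenged one.
    func (h *heap) bestFit(want int) int {
        best := -1
        for i, s := range h.free {
            if s.bytes >= want && (best < 0 || s.bytes < h.free[best].bytes) {
                best = i
            }
        }
        return best
    }

    // alloc removes the best-fit span from the free list. If it was
    // scavenged, the allocation just grew RSS, so scavenge an equal
    // amount of other free memory to compensate.
    func (h *heap) alloc(want int) {
        i := h.bestFit(want)
        if i < 0 {
            return // the real runtime would grow the heap here
        }
        s := h.free[i]
        h.free = append(h.free[:i], h.free[i+1:]...)
        if s.scavenged {
            h.scavengeLargest(s.bytes)
        }
    }

    // scavengeLargest releases free spans to the OS, largest first,
    // until at least n bytes have been scavenged (a sorted slice
    // stands in for the runtime's treap of free spans).
    func (h *heap) scavengeLargest(n int) {
        sort.Slice(h.free, func(i, j int) bool { return h.free[i].bytes > h.free[j].bytes })
        for i := range h.free {
            if n <= 0 {
                return
            }
            if !h.free[i].scavenged {
                h.free[i].scavenged = true
                n -= h.free[i].bytes
            }
        }
    }

    func main() {
        h := &heap{free: []span{
            {bytes: 16 << 10, scavenged: true},
            {bytes: 32 << 10},
            {bytes: 64 << 10},
        }}
        h.alloc(16 << 10)   // best fit is the scavenged 16 KiB span
        fmt.Println(h.free) // the largest free span (64 KiB) is now scavenged to make up for it
    }

In the real change this compensation happens at the point where a
scavenged span is handed out by the page allocator, as shown in the
diff below.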

This change makes the runtime's allocation performance slightly worse,
since we now scavenge more often during allocation. The regression is
most visible in the garbage benchmark (3%), but most of the Go1
benchmarks are within the margin of noise. A follow-up change should
help.

Garbage: https://perf.golang.org/search?q=upload:20190131.3
Go1:     https://perf.golang.org/search?q=upload:20190131.2

Updates #14045.

Change-Id: I44a7e6586eca33b5f97b6d40418db53a8a7ae715
Reviewed-on: https://go-review.googlesource.com/c/159500
Reviewed-by: Austin Clements <austin@google.com>
diff --git a/src/runtime/mheap.go b/src/runtime/mheap.go
index 1bf7bbe..055dfee 100644
--- a/src/runtime/mheap.go
+++ b/src/runtime/mheap.go
@@ -1190,6 +1190,16 @@
 		// heap_released since we already did so earlier.
 		sysUsed(unsafe.Pointer(s.base()), s.npages<<_PageShift)
 		s.scavenged = false
+
+		// Since we allocated out of a scavenged span, we just
+		// grew the RSS. Mitigate this by scavenging enough free
+		// space to make up for it.
+		//
+		// Also, scavengeLargest may cause coalescing, so prevent
+		// coalescing with s by temporarily changing its state.
+		s.state = mSpanManual
+		h.scavengeLargest(s.npages * pageSize)
+		s.state = mSpanFree
 	}
 	s.unusedsince = 0