PerformanceMonitoring.md: add new page and remove PerfDashboard.md

This change removes the old and out-of-date PerfDashboard.md and
replaces it with documentation on our actual working performance
monitoring system.

Change-Id: I8aeb735bae5563cd4caf43523b0f6409b6a2d8e3
Reviewed-on: https://go-review.googlesource.com/c/wiki/+/563495
Reviewed-by: David Chase <drchase@google.com>
diff --git a/PerfDashboard.md b/PerfDashboard.md
deleted file mode 100644
index be066bf..0000000
--- a/PerfDashboard.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: PerfDashboard
----
-
-*The perf dashboard is unmaintained and currently not active*
-
-## Introduction
-
-[Performance Dashboard](http://build.golang.org/perf) does continuous monitoring of performance characteristics of the Go implementation. It notifies codereview threads about any significant changes caused by the commit, allows to see performance changes caused by [recent commits](http://build.golang.org/perf), allows to investigate changes [in detail](http://build.golang.org/perfdetail?commit=fb3d6c1631c3f3141f33a01afb4c0a23ef0ea2cf&commit0=82f48826c6c79a3d5697d5e06cac8451f3dc3c7f&kind=builder&builder=linux-amd64-perf&benchmark=http) .
-
-## Builders
-
-The dashboard uses two builders: linux-amd64 running Ubuntu 14.04 and windows-amd64 running Windows 8.1. Both builders has the same hardware: 2 x Intel Xeon E5620 @ 2.4GHz, 8 HT cores, 12GB RAM.
-
-## Benchmarks
-
-The builders run benchmarks from the [x/benchmarks](https://golang.org/x/benchmarks) repo:
-  * ` json `: marshals and unmarshals large json object, in several goroutines independently.
-  * ` http `: http client and server serving "hello world", uses persistent connections and read/write timeouts.
-  * ` garbage `: parses net package using go/parser, in a loop in several goroutines; half of packages are instantly discarded, the other half is preserved indefinitely; this creates significant pressure on the garbage collector.
-  * ` build `: does 'go build -a std'.
-
-## Metrics
-
-Metrics collected are:
-  * ` allocated `: amount of memory allocated, per iteration, in bytes
-  * ` allocs `: number of memory allocations, per iteration
-  * ` cputime `: total CPU time (user+sys from time Unix utility output), can be larger than time when GOMAXPROCS>1, per iteration, in ns
-  * ` gc-pause-one `: duration of a single garbage collector pause, in ns
-  * ` gc-pause-total `: total duration of garbage collector pauses, per iteration, ns
-  * ` latency-50/95/99 `: request latency percentile, in ns
-  * ` rss `: max memory consumption as reported by OS, in bytes
-  * ` sys-gc `: memory consumed by garbage collector metadata (` MemStats.GCSys `), in bytes
-  * ` sys-heap `: memory consumed by heap (` MemStats.HeapSys `), in bytes
-  * ` sys-other `: unclassified memory consumption (` MemStats.OtherSys `), in bytes
-  * ` sys-stack `: memory consumed by stacks (` MemStats.StackSys `), in bytes
-  * ` sys-total `: total memory allocated from OS (` MemStats.Sys `), in bytes
-  * ` time `: real time (essentially the same as std Go benchmarks output), per iteration, in ns
-  * ` virtual-mem `: virtual memory consumption as reported by OS, in bytes
-
-And for build benchmark:
-  * ` binary-size `: size of the go command, in bytes
-  * ` build-cputime `: CPU time spent on the build, in ns
-  * ` build-rss `: max memory consumption of the build process as reported by OS, in bytes
-  * ` build-time `: real time of the build, in ns
-
-## Profiles
-
-The dashboard also collects a set of profiles for every commit, they are available from the [details page](http://build.golang.org/perfdetail?commit=fb3d6c1631c3f3141f33a01afb4c0a23ef0ea2cf&commit0=82f48826c6c79a3d5697d5e06cac8451f3dc3c7f&kind=builder&builder=linux-amd64-perf&benchmark=http). For usual benchmarks [CPU](http://build.golang.org/log/b023711522ca6511f2c9bfb46cdfb511fd77e967) and [memory](http://build.golang.org/log/06bd072aa0dec4936a05b7aa13b9f906b6989865) profiles are collected. For build benchmark - [perf profile](http://build.golang.org/log/34c4f0c7b7ea3521e5356b91775a026607e72d44), [per-process split of CPU time](http://build.golang.org/log/da517b4f6892af8a6b4900dbe58311b665ced00f) and [per-section size](http://build.golang.org/log/fc4287d6a9e280bf35c572c038dbc4414d60bcf8).
-
-## Perf Changes View
-
-The [view](http://build.golang.org/perf) allows to see aggregate information about significant performance changes caused by recent commits.
-
-Rows:
-  * The first row shows difference between the latest release and tip.
-  * The rest of the rows show deltas caused by individual commits.
-
-Columns:
-  * The first column is commit hash.
-  * Second - number of benchmarks that were executed for the commit to far.
-  * Third - metric name, or the special 'failure' metric for build/runtime crashes.
-  * Fourth - negative deltas.
-  * Fifth - positive deltas.
-  * The rest describe commit.
-
-You can click on any positive/negative delta to see details about the change.
-
diff --git a/PerformanceMonitoring.md b/PerformanceMonitoring.md
new file mode 100644
index 0000000..442a34a
--- /dev/null
+++ b/PerformanceMonitoring.md
@@ -0,0 +1,111 @@
+---
+title: PerformanceMonitoring
+---
+
+The Go project monitors the performance characteristics of the Go implementation
+as well as those of subrepositories like golang.org/x/tools.
+
+## Benchmarks
+
+`golang.org/x/benchmarks/cmd/bench` is the entrypoint for our performance tests.
+For Go implementations, this runs both the
+[Sweet](https://golang.org/x/benchmarks/sweet) (end-to-end benchmarks)
+and [bent](https://golang.org/x/benchmarks/cmd/bent) (microbenchmarks)
+benchmarking suites.
+
+For the `golang.org/x/tools` project, it runs the repository's benchmarks.
+
+These benchmarks can all be invoked manually, as can `cmd/bench`, but invoking
+Sweet and bent directly will likely offer a better user experience.
+See their documentation for more details.
+
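+As a rough sketch of running Sweet by hand (the install path and subcommands
+below are taken from the Sweet README and may change; treat them as
+assumptions, not an authoritative recipe):
+
+```shell
+# Install the Sweet CLI from x/benchmarks.
+go install golang.org/x/benchmarks/sweet/cmd/sweet@latest
+
+# Download the benchmark assets Sweet needs, then run the suite against a
+# configuration file describing the toolchains to compare.
+sweet get
+sweet run config.toml
+```
+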
+## Performance testing principles
+
+### Change with the times
+
+Our set of benchmarks is curated.
+It is allowed to change over time.
+Sticking to a single benchmark set over a long period of time can easily land
+us in a situation where we're optimizing for the wrong thing.
+
+### Always perform a comparison
+
+We never report performance numbers in isolation, only relative to some
+baseline.
+This strategy comes from the fact that comparing performance data taken far
+apart in time, even on the same hardware, can result in a lot of noise that
+goes unaccounted for.
+The state of a machine or VM on one day is likely to be very different from
+its state on the next.
+
+We refer to the tested version of source code as the "experiment" and the
+baseline version of source code as the "baseline."
+
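+The arithmetic behind a reported delta is simple. This hedged Go sketch (not
+the dashboard's actual code) shows how a percent change of an experiment
+measurement relative to a baseline might be computed:
+
+```go
+package main
+
+import "fmt"
+
+// relDelta reports the percent change of an experiment measurement relative
+// to a baseline measurement. For time-like units such as sec/op, a negative
+// delta means the experiment is faster than the baseline.
+func relDelta(baseline, experiment float64) float64 {
+	return (experiment - baseline) / baseline * 100
+}
+
+func main() {
+	// Hypothetical ns/op measurements for one benchmark.
+	baseline := 120.0
+	experiment := 114.0
+	fmt.Printf("delta: %+.2f%%\n", relDelta(baseline, experiment))
+}
+```
+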
+## Presubmit
+
+Do you have a Gerrit change that you want to run against our benchmarks?
+
+Select a builder containing the word `perf` in the "Choose Tryjobs" dialog that
+appears when selecting a [SlowBot](https://go.dev/wiki/SlowBots).
+
+There are two kinds of presubmit builders for performance testing:
+- `perf_vs_parent`, which measures the performance delta of a change in isolation.
+- `perf_vs_tip`, which measures the performance delta versus the current
+  tip-of-tree for whichever repository the change is for.
+  (Remember to rebase your change(s) before using this one!)
+
+There's also a third presubmit builder for the tools repository, whose name
+contains the string `perf_vs_gopls_0_11`.
+It measures the performance delta versus the `release-branch-gopls.0.11` branch
+of the tools repository.
+
+## Postsubmit
+
+The [performance dashboard](https://perf.golang.org/dashboard) provides
+continuous monitoring of benchmark performance for every commit that is made to
+the main Go repository and other subrepositories.
+The dashboard, more specifically, displays graphs showing the change in certain
+performance metrics (also called "units") over time for different benchmarks.
+Use the navigation interface at the top of the page to explore further.
+
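+The units shown on the dashboard, such as `sec/op` or `B/op`, are derived from
+the standard Go benchmark result format. A raw result line (with hypothetical
+numbers) looks like:
+
+```
+BenchmarkHTTP-8    10000    120000 ns/op    4096 B/op    10 allocs/op
+```
+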
+The [regressions page](https://perf.golang.org/dashboard/?benchmark=regressions)
+displays all benchmarks in order of biggest regression to biggest improvement,
+followed by all benchmarks for which there is no statistically clear answer.
+
+On the graphs, red means regression, blue means improvement.
+
+### Baselines
+
+In post-submit, the baseline version for Go repository performance tests is
+automatically determined.
+For performance tests against changes on release branches, the baseline is always
+the latest release for that branch (for example, the latest minor release for
+Go 1.21 on `release-branch.go1.21`).
+For performance tests against tip-of-tree, the baseline is always the latest
+overall release of Go.
+This is indicated by the name of the builder that produces these benchmark
+results, which contains the string `perf_vs_release`.
+This means that on every minor release of Go, the baseline shifts.
+These baseline shifts can be observed in the [per-metric view](#per-metric-view).
+
+Performance tests on subrepositories typically operate against some known
+long-term fixed baseline.
+For the tools repository, it's the tip of the `release-branch-gopls.0.11`
+branch.
+
+### Per-metric view
+
+Click on any graph's performance metric name to view a more detailed timeline
+of the performance deltas for that metric.
+
+![Image displaying which link to click to reach the per-metric
+page.](images/performance-monitoring-per-metric-link.png)
+
+This view is particularly useful for identifying the culprit behind a regression
+and for pinpointing the source of an improvement.
+
+Sometimes, a performance change happens because a benchmark has changed or
+because the baseline version being used has changed.
+This view also displays information about the baseline versions and the version
+of `golang.org/x/benchmarks` that was used to produce the results to help identify
+when this happens.
diff --git a/images/performance-monitoring-per-metric-link.png b/images/performance-monitoring-per-metric-link.png
new file mode 100644
index 0000000..255c5b5
--- /dev/null
+++ b/images/performance-monitoring-per-metric-link.png
Binary files differ