Sweet is a set of benchmarks derived from the Go community which are intended to represent a breadth of real-world applications. The primary use-case of this suite is to perform an evaluation of the difference in CPU and memory performance between two Go implementations.
If you use this benchmarking suite for any measurements, please ensure you use a versioned release, and note that version alongside any results you report.
The sweet tool itself only depends on having a stable version of Go installed. Some benchmarks, however, have additional requirements for building; please ensure your system has the necessary tools installed and available in your system's PATH.
Furthermore, some benchmarks are able to produce additional information on some platforms. For instance, running on platforms where systemd is available adds an average RSS measurement for the go-build benchmark.
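If you're unsure whether a machine is running systemd, one quick check on Linux (where systemd, when present, runs as PID 1) is:

$ ps -p 1 -o comm=

If this prints systemd, the go-build benchmark should report the extra RSS measurement.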
The gVisor benchmark has additional requirements:
- It can only run on linux/amd64. Nothing else is supported or ever will be.
- The ptrace API must be enabled on your system. Set /proc/sys/kernel/yama/ptrace_scope appropriately (0 and 1 work, 2 might, 3 will not); see the example below.
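For example, to inspect the current value and, if necessary, relax it until the next reboot (the sysctl key below corresponds to the file above):

$ cat /proc/sys/kernel/yama/ptrace_scope
$ sudo sysctl -w kernel.yama.ptrace_scope=1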
Build the sweet tool from the root of the Sweet subdirectory:

$ go build ./cmd/sweet
Download the benchmark assets:

$ ./sweet get
Create a configuration file called
config.toml with the following contents:
[[config]]
  name = "myconfig"
  goroot = "<insert some GOROOT here>"
Then run the benchmarks:
$ ./sweet run -shell config.toml
Benchmark results will appear in the results directory by default.
The -shell flag will cause the tool to print each action it performs as a shell command. Note that while the shell commands are valid for many systems, they may depend on tools being available on your system that sweet itself does not require.
Note that by default
sweet run expects to be executed in
/path/to/x/benchmarks/sweet, that is, the root of the Sweet subdirectory in the
x/benchmarks repository. To execute it from somewhere else, point the tool at the benchmarks explicitly (see ./sweet help run for the relevant flags).
Use the -short flag with the run subcommand to get much faster feedback on whether each benchmark builds and runs.
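For example, a quick smoke test over the full suite, using only flags mentioned above, might look like:

$ ./sweet run -short -shell config.toml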
If a benchmark fails, check its results file (e.g. /path/to/results/biogo-igor/myconfig.results), which is really just the stderr (and usually stdout too) of the benchmark. You can also try to re-run the benchmark yourself with the output of -shell: copy and re-run the last command it printed to get the full output. TODO(mknyszek): Dump the output to the terminal.
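For instance, to inspect the end of the results file from the example above:

$ tail -n 20 /path/to/results/biogo-igor/myconfig.results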
These benchmarks generally try to stress the Go runtime in interesting ways, and some may end up with very large heaps. Therefore, it's recommended to run the suite on a system with at least 16 GiB of RAM available to minimize the chance that results are lost due to an out-of-memory error.
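On Linux, for example, you can confirm how much memory is available before a run with:

$ free -h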
The configuration file format is TOML-based, and a more detailed description of its fields may be found in the help docs for the run subcommand:
$ ./sweet help run
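For example, a configuration comparing two toolchains side by side might look like the following. This is a sketch using only the fields shown earlier; the GOROOT paths are placeholders for real toolchain roots.

[[config]]
  name = "config1"
  goroot = "/path/to/baseline/goroot"

[[config]]
  name = "config2"
  goroot = "/path/to/experiment/goroot"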
Results are produced into a single directory containing each benchmark as a sub-directory. Within each sub-directory is one file per configuration containing the stderr (and usually combined stdout) of the benchmark run, which also doubles as the benchmark output format.
All results are reported in the standard Go testing package format, such that results may be compared using the benchstat tool.
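If benchstat isn't already installed, it can be fetched with a recent Go toolchain:

$ go install golang.org/x/perf/cmd/benchstat@latest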
Results may then also be composed together for easy viewing. For example, if one runs sweet with two configurations named config1 and config2, then to quickly compare all results, do:
$ cat results/*/config1.results > config1.results
$ cat results/*/config2.results > config2.results
$ benchstat config1.results config2.results
This benchmark suite tries to keep noise low in measurements where possible.
Do not compare results produced by separate invocations of the sweet tool; to compare configurations fairly, benchmark them together in a single invocation.