This repository holds the source for various packages and tools that support Go's build system and the development of the Go programming language.
Warning: Packages here are internal to Go's build system and its needs. Some may one day be promoted to another golang.org/x
repository, or they may be modified arbitrarily or even disappear altogether. In short, code in this repository is not subject to the Go 1 compatibility promise nor the Release Policy.
This repository uses Gerrit for code changes. To contribute, see https://go.dev/doc/contribute.
The git repository is https://go.googlesource.com/build.
The main issue tracker for the build repository is located at https://go.dev/issues. Prefix your issue with “x/build/DIR:” in the subject line.
The main components of the Go build system are:
The coordinator, in cmd/coordinator/, serves https://farmer.golang.org/ and https://build.golang.org/. It runs on GKE and coordinates the whole build system. It finds work to do (both pre-submit “TryBot” work, and post-submit work) and executes builds, allocating machines to run the builds. It is the owner of all machines. It holds the state for which builds passed or failed, and the build logs.
The Go package in buildenv/ contains constants for where the dashboard and coordinator run, for prod, staging, and local development.
The buildlet, in cmd/buildlet/, is the HTTP server that runs on each worker machine to execute builds on the coordinator's behalf. This runs on every possible GOOS/GOARCH value. The buildlet binaries are stored on Google Cloud Storage and fetched per-build, so we can update the buildlet binary independently of the underlying machine images. The buildlet is the most insecure server possible: it has HTTP handlers to read & write arbitrary content to disk, and to execute any file on disk. It also has an SSH tunnel handler. The buildlet must never be exposed to the Internet. The coordinator provisions buildlets in one of three ways:
by creating VMs on Google Compute Engine (GCE) with custom images configured to fetch & run the buildlet on boot, listening on port 80 in a private network.
by running Linux containers (on either Google Kubernetes Engine or GCE with the Container-Optimized OS image), with the container images configured to fetch & run the buildlet on start, also listening on port 80 in a private network.
by taking buildlets out of a pool of connected, dedicated machines. The buildlet can run in either listen mode (as on GCE and GKE) or in reverse mode. In reverse mode, the buildlet connects out to https://farmer.golang.org/ and registers itself with the coordinator. The TCP connection is then logically reversed (using revdial), and when the coordinator needs to do a build, it makes HTTP requests to the buildlet over the already-open TCP connection.
These three pools can be viewed at the coordinator's https://farmer.golang.org/#pools.
The env/ directory describes build environments. It contains scripts to create VM images, Dockerfiles to create Kubernetes containers, and instructions and tools for dedicated machines.
maintner in maintner/ is a library for slurping all of Go's GitHub and Gerrit state into memory. The daemon maintnerd in maintner/maintnerd/ runs on GKE and serves https://maintner.golang.org/. The daemon watches GitHub and Gerrit and appends to a mutation log whenever it sees new activity. The logs are stored on GCS and served to clients.
The godata package in maintner/godata/ provides a trivial API to let anybody write programs against Go's maintner corpus (all of our GitHub and Gerrit history), live up to the second. It takes a few seconds to load into memory and a few hundred MB of RAM after it downloads the mutation log from the network.
pubsubhelper in cmd/pubsubhelper/ is a dependency of maintnerd. It runs on GKE, is available at https://pubsubhelper.golang.org/, and runs an HTTP server to receive webhook updates from GitHub on new activity and an SMTP server to receive new-activity emails from Gerrit. It then acts as a pubsub system that maintnerd subscribes to.
The gitmirror server in cmd/gitmirror/ mirrors Gerrit to GitHub, and also serves a mirror of the Gerrit code to the coordinator for builds, so we don't overwhelm Gerrit and blow our quota.
The Go gopherbot bot logic runs on GKE. The code is in cmd/gopherbot. It depends on maintner via the godata package.
The developer dashboard at https://dev.golang.org/ runs on GKE. Its code is in devapp/. It also depends on maintner via the godata package.
cmd/retrybuilds: a Go client program to delete build results from the dashboard.
The perfdata server, in perfdata/appengine, serves https://perfdata.golang.org/. It runs on App Engine and serves the benchmark result storage system.
The perf server, in perf/appengine, serves https://perf.golang.org/. It runs on App Engine and serves the benchmark result analysis system. See its README for how to start a local testing instance.
If you wish to run a Go builder, please email golang-dev@googlegroups.com first. There is documentation at https://golang.org/wiki/DashboardBuilders, but depending on the type of builder, we may want to run it ourselves, after you prepare an environment description of it (resulting in a VM image). See the env directory.