crfs: delete

CRFS has moved from x/build to https://github.com/google/crfs.

Change-Id: Ib482cf0670a657d4a6238c1ab87f4f1aa78d9338
Reviewed-on: https://go-review.googlesource.com/c/build/+/196020
Run-TryBot: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
diff --git a/crfs/README.md b/crfs/README.md
index 6aee368..fc9c458 100644
--- a/crfs/README.md
+++ b/crfs/README.md
@@ -1,148 +1,7 @@
 # CRFS: Container Registry Filesystem
 
-Discussion: https://github.com/golang/go/issues/30829
+## Moved
 
-# Moved
-
-This project has moved to https://github.com/google/crfs
+This project has moved to https://github.com/google/crfs.
 
 It's more widely applicable than just for use by Go's build system.
-
-## Overview
-
-**CRFS** is a read-only FUSE filesystem that lets you mount a
-container image, served directly from a container registry (such as
-[gcr.io](https://gcr.io/)), without pulling it all locally first.
-
-## Background
-
-Starting a container should be fast. Currently, however, starting a
-container in many environments requires doing a `pull` operation from
-a container registry to read the entire container image from the
-registry and write the entire container image to the local machine's
-disk. It's pretty silly (and wasteful) that a read operation becomes a
-write operation. For small containers, this problem is rarely noticed.
-For larger containers, though, the pull operation quickly becomes the
-slowest part of launching a container, especially on a cold node.
-Contrast this with launching a VM on major cloud providers: even with
-a VM image that's hundreds of gigabytes, the VM boots in seconds.
-That's because the hypervisors' block devices are reading from the
-network on demand. The cloud providers all have great internal
-networks. Why aren't we using those great internal networks to read
-our container images on demand?
-
-## Why does Go want this?
-
-Go's continuous build system tests Go on [many operating systems and
-architectures](https://build.golang.org/), using a mix of containers
-(mostly for Linux) and VMs (for other operating systems). We
-prioritize fast builds, targeting a 5-minute turnaround for pre-submit
-tests when testing new changes. For isolation and other reasons, we
-run all our containers in single-use fresh VMs. Generally our
-containers do start quickly, but some of our containers are very large
-and take a long time to start. To work around that, we've automated
-the creation of VM images where our heavy containers are pre-pulled.
-This is all a silly workaround. It'd be much better if we could just
-read the bytes over the network from the right place, without all
-the hoops.
-
-## Tar files
-
-One reason that reading the bytes directly from the source on demand
-is somewhat non-trivial is that container images are, somewhat
-regrettably, represented by *tar.gz* files, and tar files are
-unindexed, and gzip streams are not seekable. This means that trying
-to read 1KB out of a file named `/var/lib/foo/data` still involves
-pulling hundreds of gigabytes, uncompressing the stream, and decoding
-the entire tar file until you find the entry you're looking for. You can't
-look it up by its path name.
-
-## Introducing Stargz
-
-Fortunately, we can fix the fact that *tar.gz* files are unindexed and
-unseekable, while still making the file a valid *tar.gz* file by
-taking advantage of the fact that two gzip streams can be concatenated
-and still be a valid gzip stream. So you can just make a tar file
-where each tar entry is its own gzip stream.
-
-We introduce a format, **Stargz**, a **S**eekable
-**tar.gz** format that's still a valid tar.gz file for everything else
-that's unaware of these details.
-
-In summary:
-
-* The traditional `*.tar.gz` format is: `Gzip(TarF(file1) + TarF(file2) + TarF(file3) + TarFooter)`
-* Stargz's format is: `Gzip(TarF(file1)) + Gzip(TarF(file2)) + Gzip(TarF(file3_chunk1)) + Gzip(F(file3_chunk2)) + Gzip(F(index of earlier files in magic file), TarFooter)`, where the trailing ZIP-like index contains offsets for each file/chunk's GZIP header in the overall **stargz** file.
-
-This makes images a few percent larger (due to more gzip headers and
-loss of compression context between files), but it's plenty
-acceptable.
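-
-To make the trick concrete, here's a minimal Go sketch (an
-illustration, not CRFS code) showing that independently written gzip
-streams concatenate into a single valid gzip stream:
-
-```go
-package main
-
-import (
-	"bytes"
-	"compress/gzip"
-	"fmt"
-	"io/ioutil"
-)
-
-func main() {
-	var buf bytes.Buffer
-	for _, part := range []string{"hello, ", "stargz"} {
-		zw := gzip.NewWriter(&buf) // a fresh gzip stream per part
-		zw.Write([]byte(part))
-		zw.Close() // finish this stream; the next is appended after it
-	}
-	// A standard gzip reader decodes the concatenation as one stream.
-	zr, err := gzip.NewReader(&buf)
-	if err != nil {
-		panic(err)
-	}
-	all, _ := ioutil.ReadAll(zr)
-	fmt.Printf("%s\n", all) // prints "hello, stargz"
-}
-```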
-
-## Converting images
-
-If you're using `docker push` to push to a registry, you can't use
-CRFS to mount the image. Maybe one day `docker push` will push
-*stargz* files (or something with similar properties) by default, but
-not yet. So for now we need to convert the storage image layers from
-*tar.gz* into *stargz*. The `stargzify` tool (under
-`crfs/stargz/stargzify`) does that.
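-
-For example, a hypothetical invocation (the `-in`/`-out` flags are
-the tool's real flags; the file names are made up):
-
-```
-stargzify -in ubuntu.tar.gz -out ubuntu.stargz
-```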
-
-## Operation
-
-When mounting an image, the FUSE filesystem makes a couple of Docker
-Registry HTTP API requests to the container registry to get the
-metadata for the container and all its layers.
-
-It then does HTTP Range requests to read just the **stargz** index out
-of the end of each of the layers. The index is stored much like the
-ZIP format's TOC: a pointer to the index sits at the very end of the
-file. Generally it takes 1 HTTP request to read the
-index, but no more than 2. In any case, we're assuming a fast network
-(GCE VMs to gcr.io, or similar) with low latency to the container
-registry. Each layer needs these 1 or 2 HTTP requests, but they can
-all be done in parallel.
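-
-As a sketch of that footer read (illustrative only; real registry
-access also needs auth tokens and redirect handling):
-
-```go
-package main
-
-import (
-	"fmt"
-	"io/ioutil"
-	"log"
-	"net/http"
-)
-
-// fetchLastBytes reads the final n bytes of the blob at blobURL
-// using a single HTTP Range request.
-func fetchLastBytes(blobURL string, n int) ([]byte, error) {
-	req, err := http.NewRequest("GET", blobURL, nil)
-	if err != nil {
-		return nil, err
-	}
-	// "bytes=-47" means: the last 47 bytes of the resource.
-	req.Header.Set("Range", fmt.Sprintf("bytes=-%d", n))
-	res, err := http.DefaultClient.Do(req)
-	if err != nil {
-		return nil, err
-	}
-	defer res.Body.Close()
-	if res.StatusCode != http.StatusPartialContent {
-		return nil, fmt.Errorf("unexpected status %v", res.Status)
-	}
-	return ioutil.ReadAll(res.Body)
-}
-
-func main() {
-	// Hypothetical blob URL; 47 is the fixed stargz footer size.
-	footer, err := fetchLastBytes("https://example.com/layer.blob", 47)
-	if err != nil {
-		log.Fatal(err)
-	}
-	fmt.Printf("read %d footer bytes\n", len(footer))
-}
-```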
-
-From that, we keep the index in memory, so `readdir`, `stat`, and
-friends are all served from memory. For reading data, the index
-contains the offset of each file's `GZIP(TAR(file data))` range of the
-overall *stargz* file. To make it possible to efficiently read a small
-amount of data from large files, there can actually be multiple
-**stargz** index entries for large files (e.g., a new gzip stream
-every 16MB of a large file).
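-
-The same random-access path is available as a library. A sketch using
-this repo's `stargz` package (file and path names made up):
-
-```go
-package main
-
-import (
-	"io"
-	"log"
-	"os"
-
-	"golang.org/x/build/crfs/stargz"
-)
-
-func main() {
-	f, err := os.Open("ubuntu.stargz") // hypothetical local layer
-	if err != nil {
-		log.Fatal(err)
-	}
-	defer f.Close()
-	fi, err := f.Stat()
-	if err != nil {
-		log.Fatal(err)
-	}
-	r, err := stargz.Open(io.NewSectionReader(f, 0, fi.Size()))
-	if err != nil {
-		log.Fatal(err)
-	}
-	// Read 1KB from the middle of a (hypothetical) large file; only
-	// the gzip chunk(s) covering that range get decompressed.
-	sr, err := r.OpenFile("var/lib/foo/data")
-	if err != nil {
-		log.Fatal(err)
-	}
-	buf := make([]byte, 1024)
-	if _, err := sr.ReadAt(buf, 1<<20); err != nil && err != io.EOF {
-		log.Fatal(err)
-	}
-}
-```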
-
-## Union/overlay filesystems
-
-CRFS can do the aufs/overlay2-ish unification of multiple read-only
-*stargz* layers, but it will stop short of trying to unify a writable
-filesystem layer atop. For that, you can just use the traditional
-Linux filesystems.
-
-## Using with Docker, without modifying Docker
-
-Ideally container runtimes would support something like this whole
-scheme natively, but in the meantime a workaround is that when
-converting an image into *stargz* format, the converter tool can also
-produce an image variant that only has metadata (environment,
-entrypoints, etc) and no file contents. Then you can bind mount in the
-contents from the CRFS FUSE filesystem.
-
-That is, the convert tool can do:
-
-**Input**: `gcr.io/your-proj/container:v2`
-
-**Output**: `gcr.io/your-proj/container:v2meta` + `gcr.io/your-proj/container:v2stargz`
-
-What you actually run on Docker or Kubernetes then is the `v2meta`
-version, so your container host's `docker pull` or equivalent only
-pulls a few KB. The gigabytes of remaining data are read lazily via
-CRFS from the `v2stargz` layer directly from the container registry.
-
-## Status
-
-WIP. Enough parts are implemented & tested for me to realize this
-isn't crazy. I'm publishing this document first for discussion while I
-finish things up. Maybe somebody will point me to an existing
-implementation, which would be great.
-
-## Discussion
-
-See https://github.com/golang/go/issues/30829
diff --git a/crfs/crfs.go b/crfs/crfs.go
deleted file mode 100644
index 78ea552..0000000
--- a/crfs/crfs.go
+++ /dev/null
@@ -1,223 +0,0 @@
-// Copyright 2019 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-// The crfs command runs the Container Registry Filesystem, providing a read-only
-// FUSE filesystem for container images.
-//
-// For purposes of documentation, we'll assume you've mounted this at /crfs.
-//
-// Currently (as of 2019-03-21) it only mounts a single layer at the top level.
-// In the future it'll have paths like:
-//
-//    /crfs/image/gcr.io/foo-proj/image/latest
-//    /crfs/layer/gcr.io/foo-proj/image/latest/xxxxxxxxxxxxxx
-//
-// For mounting a squashed image and a layer, respectively, with the
-// host, owner, image name, and version encoded in the path
-// components.
-package main
-
-import (
-	"context"
-	"errors"
-	"flag"
-	"fmt"
-	"io"
-	"log"
-	"os"
-	"sort"
-	"syscall"
-	"time"
-	"unsafe"
-
-	"bazil.org/fuse"
-	fspkg "bazil.org/fuse/fs"
-	"golang.org/x/build/crfs/stargz"
-)
-
-const debug = false
-
-func usage() {
-	fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
-	fmt.Fprintf(os.Stderr, "   %s <MOUNT_POINT>  (defaults to /crfs)\n", os.Args[0])
-	flag.PrintDefaults()
-}
-
-var stargzFile = flag.String("test_stargz", "", "local stargz file for testing a single layer mount, without hitting a container registry")
-
-func main() {
-	flag.Parse()
-	mntPoint := "/crfs"
-	if flag.NArg() > 1 {
-		usage()
-		os.Exit(2)
-	}
-	if flag.NArg() == 1 {
-		mntPoint = flag.Arg(0)
-	}
-
-	if *stargzFile == "" {
-		log.Fatalf("TODO: network mode not done yet. Use --test_stargz for now")
-	}
-	fs, err := NewLocalStargzFileFS(*stargzFile)
-	if err != nil {
-		log.Fatal(err)
-	}
-
-	c, err := fuse.Mount(mntPoint, fuse.FSName("crfs"), fuse.Subtype("crfs"))
-	if err != nil {
-		log.Fatal(err)
-	}
-	defer c.Close()
-
-	err = fspkg.Serve(c, fs)
-	if err != nil {
-		log.Fatal(err)
-	}
-
-	// check if the mount process has an error to report
-	<-c.Ready
-	if err := c.MountError; err != nil {
-		log.Fatal(err)
-	}
-}
-
-// FS is the CRFS filesystem.
-// It implements https://godoc.org/bazil.org/fuse/fs#FS
-type FS struct {
-	r *stargz.Reader
-}
-
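-// NewLocalStargzFileFS returns an FS serving the contents of the
-// named local stargz file. It backs the --test_stargz testing mode.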
-func NewLocalStargzFileFS(file string) (*FS, error) {
-	f, err := os.Open(file)
-	if err != nil {
-		return nil, err
-	}
-	fi, err := f.Stat()
-	if err != nil {
-		return nil, err
-	}
-	r, err := stargz.Open(io.NewSectionReader(f, 0, fi.Size()))
-	if err != nil {
-		return nil, err
-	}
-	return &FS{r: r}, nil
-}
-
-// Root returns the root filesystem node for the CRFS filesystem.
-// See https://godoc.org/bazil.org/fuse/fs#FS
-func (fs *FS) Root() (fspkg.Node, error) {
-	te, ok := fs.r.Lookup("")
-	if !ok {
-		return nil, errors.New("failed to find root in stargz")
-	}
-	return &node{fs, te}, nil
-}
-
-func inodeOfEnt(ent *stargz.TOCEntry) uint64 {
-	return uint64(uintptr(unsafe.Pointer(ent)))
-}
-
-func direntType(ent *stargz.TOCEntry) fuse.DirentType {
-	switch ent.Type {
-	case "dir":
-		return fuse.DT_Dir
-	case "reg":
-		return fuse.DT_File
-	case "symlink":
-		return fuse.DT_Link
-	}
-	// TODO: socket, block, char, fifo as needed
-	return fuse.DT_Unknown
-}
-
-// node is a CRFS node in the FUSE filesystem.
-// See https://godoc.org/bazil.org/fuse/fs#Node
-type node struct {
-	fs *FS
-	te *stargz.TOCEntry
-}
-
-var (
-	_ fspkg.HandleReadDirAller = (*node)(nil)
-	_ fspkg.Node               = (*node)(nil)
-	_ fspkg.NodeStringLookuper = (*node)(nil)
-	_ fspkg.NodeReadlinker     = (*node)(nil)
-	_ fspkg.HandleReader       = (*node)(nil)
-)
-
-// Attr populates a with the attributes of n.
-// See https://godoc.org/bazil.org/fuse/fs#Node
-func (n *node) Attr(ctx context.Context, a *fuse.Attr) error {
-	fi := n.te.Stat()
-	a.Valid = 30 * 24 * time.Hour
-	a.Inode = inodeOfEnt(n.te)
-	a.Size = uint64(fi.Size())
-	a.Blocks = a.Size / 512
-	a.Mtime = fi.ModTime()
-	a.Mode = fi.Mode()
-	a.Uid = uint32(n.te.Uid)
-	a.Gid = uint32(n.te.Gid)
-	if debug {
-		log.Printf("attr of %s: %s", n.te.Name, *a)
-	}
-	return nil
-}
-
-// ReadDirAll returns all directory entries in the directory node n.
-//
-// https://godoc.org/bazil.org/fuse/fs#HandleReadDirAller
-func (n *node) ReadDirAll(ctx context.Context) (ents []fuse.Dirent, err error) {
-	n.te.ForeachChild(func(baseName string, ent *stargz.TOCEntry) bool {
-		ents = append(ents, fuse.Dirent{
-			Inode: inodeOfEnt(ent),
-			Type:  direntType(ent),
-			Name:  baseName,
-		})
-		return true
-	})
-	sort.Slice(ents, func(i, j int) bool { return ents[i].Name < ents[j].Name })
-	return ents, nil
-}
-
-// Lookup looks up a child entry of the directory node n.
-//
-// See https://godoc.org/bazil.org/fuse/fs#NodeStringLookuper
-func (n *node) Lookup(ctx context.Context, name string) (fspkg.Node, error) {
-	e, ok := n.te.LookupChild(name)
-	if !ok {
-		return nil, syscall.ENOENT
-	}
-	return &node{n.fs, e}, nil
-}
-
-// Readlink reads the target of a symlink.
-//
-// See https://godoc.org/bazil.org/fuse/fs#NodeReadlinker
-func (n *node) Readlink(ctx context.Context, req *fuse.ReadlinkRequest) (string, error) {
-	if n.te.Type != "symlink" {
-		return "", syscall.EINVAL
-	}
-	return n.te.LinkName, nil
-}
-
-// Read reads data from a regular file n.
-//
-// See https://godoc.org/bazil.org/fuse/fs#HandleReader
-func (n *node) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) error {
-	sr, err := n.fs.r.OpenFile(n.te.Name)
-	if err != nil {
-		return err
-	}
-
-	resp.Data = make([]byte, req.Size)
-	nr, err := sr.ReadAt(resp.Data, req.Offset)
-	// A short read at EOF is expected; report only real errors.
-	if err != nil && err != io.EOF {
-		return err
-	}
-	if nr < req.Size {
-		resp.Data = resp.Data[:nr]
-	}
-	if debug {
-		log.Printf("Read response: size=%d @ %d, read %d", req.Size, req.Offset, nr)
-	}
-	return nil
-}
diff --git a/crfs/go.mod b/crfs/go.mod
deleted file mode 100644
index fb14db3..0000000
--- a/crfs/go.mod
+++ /dev/null
@@ -1,9 +0,0 @@
-module golang.org/x/build/crfs
-
-go 1.12
-
-require (
-	bazil.org/fuse v0.0.0-20180421153158-65cc252bf669
-	golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53 // indirect
-	golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54 // indirect
-)
diff --git a/crfs/go.sum b/crfs/go.sum
deleted file mode 100644
index 7d36c6b..0000000
--- a/crfs/go.sum
+++ /dev/null
@@ -1,9 +0,0 @@
-bazil.org/fuse v0.0.0-20180421153158-65cc252bf669 h1:FNCRpXiquG1aoyqcIWVFmpTSKVcx2bQD38uZZeGtdlw=
-bazil.org/fuse v0.0.0-20180421153158-65cc252bf669/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
-golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
-golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53 h1:kcXqo9vE6fsZY5X5Rd7R1l7fTgnWaDCVmln65REefiE=
-golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
-golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54 h1:xe1/2UUJRmA9iDglQSlkx8c5n3twv58+K0mPpC2zmhA=
-golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
diff --git a/crfs/stargz/stargz.go b/crfs/stargz/stargz.go
deleted file mode 100644
index 4523b4b..0000000
--- a/crfs/stargz/stargz.go
+++ /dev/null
@@ -1,686 +0,0 @@
-// Copyright 2019 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-// The stargz package reads & writes tar.gz ("tarball") files in a
-// seekable, indexed format called "stargz". A stargz file is still a
-// valid tarball, but it's slightly bigger with new gzip streams for
-// each new file & throughout large files, and has an index in a magic
-// file at the end.
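-//
-// A rough usage sketch (error handling elided; see Reader and Writer
-// below):
-//
-//	sr := io.NewSectionReader(f, 0, size) // f holds a stargz file
-//	r, _ := stargz.Open(sr)               // locates & parses the TOC
-//	fr, _ := r.OpenFile("foo/bar.txt")    // random-access reader
-//	fr.ReadAt(buf, off)                   // decompresses only as needed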
-package stargz
-
-import (
-	"archive/tar"
-	"bufio"
-	"bytes"
-	"compress/gzip"
-	"encoding/json"
-	"errors"
-	"fmt"
-	"io"
-	"io/ioutil"
-	"os"
-	"path"
-	"sort"
-	"strconv"
-	"strings"
-	"time"
-)
-
-// TOCTarName is the name of the JSON file in the tar archive in the
-// table of contents gzip stream.
-const TOCTarName = "stargz.index.json"
-
-// FooterSize is the number of bytes in the stargz footer.
-//
-// The footer is an empty gzip stream with no compression and an Extra
-// header of the form "%016xSTARGZ", where the 64 bit hex-encoded
-// number is the offset to the gzip stream of JSON TOC.
-//
-// 47 comes from:
-//
-//   10 byte gzip header +
-//   2 byte (LE16) length of extra, encoding 22 (16 hex digits + len("STARGZ")) == "\x16\x00" +
-//   22 bytes of extra (fmt.Sprintf("%016xSTARGZ", tocGzipOffset)) +
-//   5 byte flate header +
-//   8 byte gzip footer (two little endian uint32s: digest, size)
-const FooterSize = 47
-
-// A Reader permits random access reads from a stargz file.
-type Reader struct {
-	sr  *io.SectionReader
-	toc *jtoc
-
-	// m stores all non-chunk entries, keyed by name.
-	m map[string]*TOCEntry
-
-	// chunks stores all TOCEntry values for regular files that
-	// are split up. For a file with a single chunk, it's only
-	// stored in m.
-	chunks map[string][]*TOCEntry
-}
-
-// Open opens a stargz file for reading.
-func Open(sr *io.SectionReader) (*Reader, error) {
-	if sr.Size() < FooterSize {
-		return nil, fmt.Errorf("stargz size %d is smaller than the stargz footer size", sr.Size())
-	}
-	// TODO: read a bigger chunk (1MB?) at once here to hopefully
-	// get the TOC + footer in one go.
-	var footer [FooterSize]byte
-	if _, err := sr.ReadAt(footer[:], sr.Size()-FooterSize); err != nil {
-		return nil, fmt.Errorf("error reading footer: %v", err)
-	}
-	tocOff, ok := parseFooter(footer[:])
-	if !ok {
-		return nil, fmt.Errorf("error parsing footer")
-	}
-	tocTargz := make([]byte, sr.Size()-tocOff-FooterSize)
-	if _, err := sr.ReadAt(tocTargz, tocOff); err != nil {
-		return nil, fmt.Errorf("error reading %d byte TOC targz: %v", len(tocTargz), err)
-	}
-	zr, err := gzip.NewReader(bytes.NewReader(tocTargz))
-	if err != nil {
-		return nil, fmt.Errorf("malformed TOC gzip header: %v", err)
-	}
-	zr.Multistream(false)
-	tr := tar.NewReader(zr)
-	h, err := tr.Next()
-	if err != nil {
-		return nil, fmt.Errorf("failed to find tar header in TOC gzip stream: %v", err)
-	}
-	if h.Name != TOCTarName {
-		return nil, fmt.Errorf("TOC tar entry had name %q; expected %q", h.Name, TOCTarName)
-	}
-	toc := new(jtoc)
-	if err := json.NewDecoder(tr).Decode(&toc); err != nil {
-		return nil, fmt.Errorf("error decoding TOC JSON: %v", err)
-	}
-	r := &Reader{sr: sr, toc: toc}
-	r.initFields()
-	return r, nil
-}
-
-// TOCEntry is an entry in the stargz file's TOC (Table of Contents).
-type TOCEntry struct {
-	// Name is the tar entry's name. It is the complete path
-	// stored in the tar file, not just the base name.
-	Name string `json:"name"`
-
-	// Type is one of "dir", "reg", "symlink", "hardlink", or "chunk".
-	// The "chunk" type is used for regular file data chunks past the first
-	// TOCEntry; the 2nd chunk and on have only Type ("chunk"), Offset,
-	// ChunkOffset, and ChunkSize populated.
-	Type string `json:"type"`
-
-	// Size, for regular files, is the logical size of the file.
-	Size int64 `json:"size,omitempty"`
-
-	// ModTime3339 is the modification time of the tar entry. Empty
-	// means zero or unknown. Otherwise it's in UTC RFC3339
-	// format. Use the ModTime method to access the time.Time value.
-	ModTime3339 string `json:"modtime,omitempty"`
-	modTime     time.Time
-
-	// LinkName, for symlinks and hardlinks, is the link target.
-	LinkName string `json:"linkName,omitempty"`
-
-	// Mode is the permission and mode bits.
-	Mode int64 `json:"mode,omitempty"`
-
-	// Uid is the user ID of the owner.
-	Uid int `json:"uid,omitempty"`
-
-	// Gid is the group ID of the owner.
-	Gid int `json:"gid,omitempty"`
-
-	// Uname is the username of the owner.
-	//
-	// In the serialized JSON, this field may only be present for
-	// the first entry with the same Uid.
-	Uname string `json:"userName,omitempty"`
-
-	// Gname is the group name of the owner.
-	//
-	// In the serialized JSON, this field may only be present for
-	// the first entry with the same Gid.
-	Gname string `json:"groupName,omitempty"`
-
-	// Offset, for regular files, provides the offset in the
-	// stargz file to the file's data bytes. See ChunkOffset and
-	// ChunkSize.
-	Offset int64 `json:"offset,omitempty"`
-
-	// ChunkOffset is non-zero if this is a chunk of a large,
-	// regular file. If so, the Offset is where the gzip header of
-	// ChunkSize bytes at ChunkOffset in Name begins. If both
-	// ChunkOffset and ChunkSize are zero, the file contents are
-	// completely represented in the tar gzip stream starting at
-	// Offset.
-	ChunkOffset int64 `json:"chunkOffset,omitempty"`
-	ChunkSize   int64 `json:"chunkSize,omitempty"`
-
-	children map[string]*TOCEntry
-}
-
-// ModTime returns the entry's modification time.
-func (e *TOCEntry) ModTime() time.Time { return e.modTime }
-
-func (e *TOCEntry) addChild(baseName string, child *TOCEntry) {
-	if e.children == nil {
-		e.children = make(map[string]*TOCEntry)
-	}
-	e.children[baseName] = child
-}
-
-// jtoc is the JSON-serialized table of contents index of the files in the stargz file.
-type jtoc struct {
-	Version int         `json:"version"`
-	Entries []*TOCEntry `json:"entries"`
-}
-
-// Stat returns a FileInfo value representing e.
-func (e *TOCEntry) Stat() os.FileInfo { return fileInfo{e} }
-
-// ForeachChild calls f for each child item. If f returns false, iteration ends.
-// If e is not a directory, f is not called.
-func (e *TOCEntry) ForeachChild(f func(baseName string, ent *TOCEntry) bool) {
-	for name, ent := range e.children {
-		if !f(name, ent) {
-			return
-		}
-	}
-}
-
-// LookupChild returns the directory e's child by its base name.
-func (e *TOCEntry) LookupChild(baseName string) (child *TOCEntry, ok bool) {
-	child, ok = e.children[baseName]
-	return
-}
-
-// fileInfo implements os.FileInfo using the wrapped *TOCEntry.
-type fileInfo struct{ e *TOCEntry }
-
-var _ os.FileInfo = fileInfo{}
-
-func (fi fileInfo) Name() string       { return path.Base(fi.e.Name) }
-func (fi fileInfo) IsDir() bool        { return fi.e.Type == "dir" }
-func (fi fileInfo) Size() int64        { return fi.e.Size }
-func (fi fileInfo) ModTime() time.Time { return fi.e.ModTime() }
-func (fi fileInfo) Sys() interface{}   { return fi.e }
-func (fi fileInfo) Mode() (m os.FileMode) {
-	m = os.FileMode(fi.e.Mode) & os.ModePerm
-	switch fi.e.Type {
-	case "dir":
-		m |= os.ModeDir
-	case "symlink":
-		m |= os.ModeSymlink
-	}
-	return m
-}
-
-// initFields populates the Reader from r.toc after decoding it from
-// JSON.
-//
-// Unexported fields are initialized, and TOCEntry fields that were
-// implicit in the JSON are filled in.
-func (r *Reader) initFields() {
-	r.m = make(map[string]*TOCEntry, len(r.toc.Entries))
-	r.chunks = make(map[string][]*TOCEntry)
-	var lastPath string
-	uname := map[int]string{}
-	gname := map[int]string{}
-	for _, ent := range r.toc.Entries {
-		ent.Name = strings.TrimPrefix(ent.Name, "./")
-		if ent.Type == "chunk" {
-			ent.Name = lastPath
-			r.chunks[ent.Name] = append(r.chunks[ent.Name], ent)
-		} else {
-			lastPath = ent.Name
-
-			if ent.Uname != "" {
-				uname[ent.Uid] = ent.Uname
-			} else {
-				ent.Uname = uname[ent.Uid]
-			}
-			if ent.Gname != "" {
-				gname[ent.Gid] = ent.Gname
-			} else {
-				ent.Gname = gname[ent.Gid]
-			}
-
-			ent.modTime, _ = time.Parse(time.RFC3339, ent.ModTime3339)
-
-			r.m[ent.Name] = ent
-		}
-		if ent.Type == "reg" && ent.ChunkSize > 0 && ent.ChunkSize < ent.Size {
-			r.chunks[ent.Name] = make([]*TOCEntry, 0, ent.Size/ent.ChunkSize+1)
-			r.chunks[ent.Name] = append(r.chunks[ent.Name], ent)
-		}
-	}
-
-	// Populate children, add implicit directories:
-	for _, ent := range r.toc.Entries {
-		if ent.Type == "chunk" {
-			continue
-		}
-		// add "foo/":
-		//    add "foo" child to "" (creating "" if necessary)
-		//
-		// add "foo/bar/":
-		//    add "bar" child to "foo" (creating "foo" if necessary)
-		//
-		// add "foo/bar.txt":
-		//    add "bar.txt" child to "foo" (creating "foo" if necessary)
-		//
-		// add "a/b/c/d/e/f.txt":
-		//    create "a/b/c/d/e" node
-		//    add "f.txt" child to "e"
-
-		name := ent.Name
-		if ent.Type == "dir" {
-			name = strings.TrimSuffix(name, "/")
-		}
-		pdir := r.getOrCreateDir(parentDir(name))
-		pdir.addChild(path.Base(name), ent)
-	}
-}
-
-func parentDir(p string) string {
-	dir, _ := path.Split(p)
-	return strings.TrimSuffix(dir, "/")
-}
-
-func (r *Reader) getOrCreateDir(d string) *TOCEntry {
-	e, ok := r.m[d]
-	if !ok {
-		e = &TOCEntry{
-			Name: d,
-			Type: "dir",
-			Mode: 0755,
-		}
-		r.m[d] = e
-		if d != "" {
-			pdir := r.getOrCreateDir(parentDir(d))
-			pdir.addChild(path.Base(d), e)
-		}
-	}
-	return e
-}
-
-// Lookup returns the Table of Contents entry for the given path.
-//
-// To get the root directory, use the empty string.
-func (r *Reader) Lookup(path string) (e *TOCEntry, ok bool) {
-	if r == nil {
-		return
-	}
-	// TODO: decide at which stage to handle hard links. Probably
-	// here? And it probably needs a link count field stored in
-	// the TOCEntry.
-	e, ok = r.m[path]
-	return
-}
-
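-// OpenFile returns a reader for the contents of the regular file at
-// the given path within the stargz file. Reads decompress only the
-// gzip stream(s) covering the requested byte range.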
-func (r *Reader) OpenFile(name string) (*io.SectionReader, error) {
-	ent, ok := r.Lookup(name)
-	if !ok {
-		// TODO: come up with some error plan. This is lazy:
-		return nil, &os.PathError{
-			Path: name,
-			Op:   "OpenFile",
-			Err:  os.ErrNotExist,
-		}
-	}
-	if ent.Type != "reg" {
-		return nil, &os.PathError{
-			Path: name,
-			Op:   "OpenFile",
-			Err:  errors.New("not a regular file"),
-		}
-	}
-	fr := &fileReader{
-		r:    r,
-		size: ent.Size,
-		ents: []*TOCEntry{ent},
-	}
-	if ents, ok := r.chunks[name]; ok {
-		fr.ents = ents
-	}
-	return io.NewSectionReader(fr, 0, fr.size), nil
-}
-
-type fileReader struct {
-	r    *Reader
-	size int64
-	ents []*TOCEntry // 1 or more reg/chunk entries
-}
-
-func (fr *fileReader) ReadAt(p []byte, off int64) (n int, err error) {
-	if off >= fr.size {
-		return 0, io.EOF
-	}
-	if off < 0 {
-		return 0, errors.New("invalid offset")
-	}
-	var i int
-	if len(fr.ents) > 1 {
-		i = sort.Search(len(fr.ents), func(i int) bool {
-			return fr.ents[i].ChunkOffset >= off
-		})
-		if i == len(fr.ents) {
-			// sort.Search returns len(fr.ents), not -1, when no entry
-			// matches; in that case off lies within the last chunk.
-			i = len(fr.ents) - 1
-		}
-	}
-	ent := fr.ents[i]
-	if ent.ChunkOffset > off {
-		if i == 0 {
-			return 0, errors.New("internal error; first chunk offset is non-zero")
-		}
-		ent = fr.ents[i-1]
-	}
-
-	// If ent is a chunk of a large file, adjust the ReadAt
-	// offset by the chunk's offset.
-	off -= ent.ChunkOffset
-
-	gzOff := ent.Offset
-	sr := io.NewSectionReader(fr.r.sr, gzOff, fr.r.sr.Size()-gzOff)
-	gz, err := gzip.NewReader(sr)
-	if err != nil {
-		return 0, fmt.Errorf("fileReader.ReadAt.gzipNewReader: %v", err)
-	}
-	if n, err := io.CopyN(ioutil.Discard, gz, off); n != off || err != nil {
-		return 0, fmt.Errorf("discard of %d bytes = %v, %v", off, n, err)
-	}
-	return io.ReadFull(gz, p)
-}
-
-// A Writer writes stargz files.
-//
-// Use NewWriter to create a new Writer.
-type Writer struct {
-	bw  *bufio.Writer
-	cw  *countWriter
-	toc *jtoc
-
-	closed        bool
-	gz            *gzip.Writer
-	lastUsername  map[int]string
-	lastGroupname map[int]string
-
-	// ChunkSize optionally controls the maximum number of bytes
-	// of data of a regular file that can be written in one gzip
-	// stream before a new gzip stream is started.
-	// Zero means to use a default, currently 4 MiB.
-	ChunkSize int
-}
-
-// currentGzipWriter writes to the current w.gz field, can change
-// throughout writing a tar entry.
-type currentGzipWriter struct{ w *Writer }
-
-func (cgw currentGzipWriter) Write(p []byte) (int, error) { return cgw.w.gz.Write(p) }
-
-func (w *Writer) chunkSize() int {
-	if w.ChunkSize <= 0 {
-		return 4 << 20
-	}
-	return w.ChunkSize
-}
-
-// NewWriter returns a new stargz writer writing to w.
-//
-// The writer must be closed to write its trailing table of contents.
-func NewWriter(w io.Writer) *Writer {
-	bw := bufio.NewWriter(w)
-	cw := &countWriter{w: bw}
-	return &Writer{
-		bw:  bw,
-		cw:  cw,
-		toc: &jtoc{Version: 1},
-	}
-}
-
-// Close writes the stargz's table of contents and flushes all the
-// buffers, returning any error.
-func (w *Writer) Close() error {
-	if w.closed {
-		return nil
-	}
-	defer func() { w.closed = true }()
-
-	if err := w.closeGz(); err != nil {
-		return err
-	}
-
-	// Write the TOC index.
-	tocOff := w.cw.n
-	w.gz, _ = gzip.NewWriterLevel(w.cw, gzip.BestCompression)
-	w.gz.Extra = []byte("stargz.toc")
-	tw := tar.NewWriter(w.gz)
-	tocJSON, err := json.MarshalIndent(w.toc, "", "\t")
-	if err != nil {
-		return err
-	}
-	if err := tw.WriteHeader(&tar.Header{
-		Typeflag: tar.TypeReg,
-		Name:     TOCTarName,
-		Size:     int64(len(tocJSON)),
-	}); err != nil {
-		return err
-	}
-	if _, err := tw.Write(tocJSON); err != nil {
-		return err
-	}
-
-	if err := tw.Close(); err != nil {
-		return err
-	}
-	if err := w.closeGz(); err != nil {
-		return err
-	}
-
-	// And a little footer with pointer to the TOC gzip stream.
-	if _, err := w.bw.Write(footerBytes(tocOff)); err != nil {
-		return err
-	}
-
-	if err := w.bw.Flush(); err != nil {
-		return err
-	}
-
-	return nil
-}
-
-func (w *Writer) closeGz() error {
-	if w.closed {
-		return errors.New("write on closed Writer")
-	}
-	if w.gz != nil {
-		if err := w.gz.Close(); err != nil {
-			return err
-		}
-		w.gz = nil
-	}
-	return nil
-}
-
-// nameIfChanged returns name, unless it was already the value of (*mp)[id],
-// in which case it returns the empty string.
-func (w *Writer) nameIfChanged(mp *map[int]string, id int, name string) string {
-	if name == "" {
-		return ""
-	}
-	if *mp == nil {
-		*mp = make(map[int]string)
-	}
-	if (*mp)[id] == name {
-		return ""
-	}
-	(*mp)[id] = name
-	return name
-}
-
-func (w *Writer) condOpenGz() {
-	if w.gz == nil {
-		w.gz, _ = gzip.NewWriterLevel(w.cw, gzip.BestCompression)
-	}
-}
-
-// AppendTar reads the tar or tar.gz file from r and appends
-// each of its contents to w.
-//
-// The input r can optionally be gzip compressed but the output will
-// always be gzip compressed.
-func (w *Writer) AppendTar(r io.Reader) error {
-	br := bufio.NewReader(r)
-	var tr *tar.Reader
-	if isGzip(br) {
-		// NewReader can't fail if isGzip returned true.
-		zr, _ := gzip.NewReader(br)
-		tr = tar.NewReader(zr)
-	} else {
-		tr = tar.NewReader(br)
-	}
-	for {
-		h, err := tr.Next()
-		if err == io.EOF {
-			break
-		}
-		if err != nil {
-			return fmt.Errorf("error reading from source tar: tar.Reader.Next: %v", err)
-		}
-		ent := &TOCEntry{
-			Name:        h.Name,
-			Mode:        h.Mode,
-			Uid:         h.Uid,
-			Gid:         h.Gid,
-			Uname:       w.nameIfChanged(&w.lastUsername, h.Uid, h.Uname),
-			Gname:       w.nameIfChanged(&w.lastGroupname, h.Gid, h.Gname),
-			ModTime3339: formatModtime(h.ModTime),
-		}
-		w.condOpenGz()
-		tw := tar.NewWriter(currentGzipWriter{w})
-		if err := tw.WriteHeader(h); err != nil {
-			return err
-		}
-		switch h.Typeflag {
-		case tar.TypeLink:
-			ent.Type = "hardlink"
-			ent.LinkName = h.Linkname
-		case tar.TypeSymlink:
-			ent.Type = "symlink"
-			ent.LinkName = h.Linkname
-		case tar.TypeDir:
-			ent.Type = "dir"
-		case tar.TypeReg:
-			ent.Type = "reg"
-			ent.Size = h.Size
-		default:
-			return fmt.Errorf("unsupported input tar entry %q", h.Typeflag)
-		}
-
-		if h.Typeflag == tar.TypeReg {
-			var written int64
-			totalSize := ent.Size // save it before we destroy ent
-			for written < totalSize {
-				if err := w.closeGz(); err != nil {
-					return err
-				}
-
-				chunkSize := int64(w.chunkSize())
-				remain := totalSize - written
-				if remain < chunkSize {
-					chunkSize = remain
-				} else {
-					ent.ChunkSize = chunkSize
-				}
-				ent.Offset = w.cw.n
-				ent.ChunkOffset = written
-
-				w.condOpenGz()
-
-				if _, err := io.CopyN(tw, tr, chunkSize); err != nil {
-					return fmt.Errorf("error copying %q: %v", h.Name, err)
-				}
-				w.toc.Entries = append(w.toc.Entries, ent)
-				written += chunkSize
-				ent = &TOCEntry{
-					Name: h.Name,
-					Type: "chunk",
-				}
-			}
-		} else {
-			w.toc.Entries = append(w.toc.Entries, ent)
-		}
-		if err := tw.Flush(); err != nil {
-			return err
-		}
-	}
-	return nil
-}
-
-// footerBytes returns the 47 byte footer.
-func footerBytes(tocOff int64) []byte {
-	buf := bytes.NewBuffer(make([]byte, 0, FooterSize))
-	gz, _ := gzip.NewWriterLevel(buf, gzip.NoCompression)
-	gz.Header.Extra = []byte(fmt.Sprintf("%016xSTARGZ", tocOff))
-	gz.Close()
-	if buf.Len() != FooterSize {
-		panic(fmt.Sprintf("footer buffer = %d, not %d", buf.Len(), FooterSize))
-	}
-	return buf.Bytes()
-}
-
-func parseFooter(p []byte) (tocOffset int64, ok bool) {
-	if len(p) != FooterSize {
-		return 0, false
-	}
-	zr, err := gzip.NewReader(bytes.NewReader(p))
-	if err != nil {
-		return 0, false
-	}
-	extra := zr.Header.Extra
-	if len(extra) != 16+len("STARGZ") {
-		return 0, false
-	}
-	if string(extra[16:]) != "STARGZ" {
-		return 0, false
-	}
-	tocOffset, err = strconv.ParseInt(string(extra[:16]), 16, 64)
-	return tocOffset, err == nil
-}
-
-func formatModtime(t time.Time) string {
-	if t.IsZero() || t.Unix() == 0 {
-		return ""
-	}
-	return t.UTC().Round(time.Second).Format(time.RFC3339)
-}
-
-// countWriter counts how many bytes have been written to its wrapped
-// io.Writer.
-type countWriter struct {
-	w io.Writer
-	n int64
-}
-
-func (cw *countWriter) Write(p []byte) (n int, err error) {
-	n, err = cw.w.Write(p)
-	cw.n += int64(n)
-	return
-}
-
-// isGzip reports whether br is positioned right before an upcoming gzip stream.
-// It does not consume any bytes from br.
-func isGzip(br *bufio.Reader) bool {
-	const (
-		gzipID1     = 0x1f
-		gzipID2     = 0x8b
-		gzipDeflate = 8
-	)
-	peek, _ := br.Peek(3)
-	return len(peek) >= 3 && peek[0] == gzipID1 && peek[1] == gzipID2 && peek[2] == gzipDeflate
-}
diff --git a/crfs/stargz/stargz_test.go b/crfs/stargz/stargz_test.go
deleted file mode 100644
index 43467d2..0000000
--- a/crfs/stargz/stargz_test.go
+++ /dev/null
@@ -1,377 +0,0 @@
-// Copyright 2019 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package stargz
-
-import (
-	"archive/tar"
-	"bytes"
-	"compress/gzip"
-	"encoding/json"
-	"errors"
-	"fmt"
-	"io"
-	"io/ioutil"
-	"reflect"
-	"sort"
-	"strings"
-	"testing"
-)
-
-// Tests 47 byte footer encoding, size, and parsing.
-func TestFooter(t *testing.T) {
-	for off := int64(0); off <= 200000; off += 1023 {
-		footer := footerBytes(off)
-		if len(footer) != FooterSize {
-			t.Fatalf("for offset %v, footer length was %d, not expected %d. got bytes: %q", off, len(footer), FooterSize, footer)
-		}
-		got, ok := parseFooter(footer)
-		if !ok {
-			t.Fatalf("failed to parse footer for offset %d, footer: %q", off, footer)
-		}
-		if got != off {
-			t.Fatalf("parseFooter(footerBytes(offset %d)) = %d; want %d", off, got, off)
-		}
-	}
-}
-
-func TestWriteAndOpen(t *testing.T) {
-	const content = "Some contents"
-	tests := []struct {
-		name      string
-		chunkSize int
-		in        []tarEntry
-		want      []stargzCheck
-		wantNumGz int // expected number of gzip streams
-	}{
-		{
-			name:      "empty",
-			in:        tarOf(),
-			wantNumGz: 2, // TOC + footer
-			want: checks(
-				numTOCEntries(0),
-			),
-		},
-		{
-			name: "1dir_1file",
-			in: tarOf(
-				dir("foo/"),
-				file("foo/bar.txt", content),
-			),
-			wantNumGz: 4, // foo dir, foo/bar.txt alone, TOC, footer
-			want: checks(
-				numTOCEntries(2),
-				hasDir("foo/"),
-				hasFileLen("foo/bar.txt", len(content)),
-				hasFileContentsRange("foo/bar.txt", 0, content),
-				hasFileContentsRange("foo/bar.txt", 1, content[1:]),
-				entryHasChildren("", "foo"),
-				entryHasChildren("foo", "bar.txt"),
-			),
-		},
-		{
-			name: "2meta_2file",
-			in: tarOf(
-				dir("bar/"),
-				dir("foo/"),
-				file("foo/bar.txt", content),
-			),
-			wantNumGz: 4, // both dirs, foo/bar.txt alone, TOC, footer
-			want: checks(
-				numTOCEntries(3),
-				hasDir("bar/"),
-				hasDir("foo/"),
-				hasFileLen("foo/bar.txt", len(content)),
-				entryHasChildren("", "bar", "foo"),
-				entryHasChildren("foo", "bar.txt"),
-			),
-		},
-		{
-			name: "symlink",
-			in: tarOf(
-				dir("foo/"),
-				symlink("foo/bar", "../../x"),
-			),
-			wantNumGz: 3, // metas + TOC + footer
-			want: checks(
-				numTOCEntries(2),
-				hasSymlink("foo/bar", "../../x"),
-				entryHasChildren("", "foo"),
-				entryHasChildren("foo", "bar"),
-			),
-		},
-		{
-			name:      "chunked_file",
-			chunkSize: 4,
-			in: tarOf(
-				dir("foo/"),
-				file("foo/big.txt", "This "+"is s"+"uch "+"a bi"+"g fi"+"le"),
-			),
-			wantNumGz: 9,
-			want: checks(
-				numTOCEntries(7), // 1 for foo dir, 6 for the foo/big.txt file
-				hasDir("foo/"),
-				hasFileLen("foo/big.txt", len("This is such a big file")),
-				hasFileContentsRange("foo/big.txt", 0, "This is such a big file"),
-				hasFileContentsRange("foo/big.txt", 1, "his is such a big file"),
-				hasFileContentsRange("foo/big.txt", 2, "is is such a big file"),
-				hasFileContentsRange("foo/big.txt", 3, "s is such a big file"),
-				hasFileContentsRange("foo/big.txt", 4, " is such a big file"),
-				hasFileContentsRange("foo/big.txt", 5, "is such a big file"),
-				hasFileContentsRange("foo/big.txt", 6, "s such a big file"),
-				hasFileContentsRange("foo/big.txt", 7, " such a big file"),
-				hasFileContentsRange("foo/big.txt", 8, "such a big file"),
-				hasFileContentsRange("foo/big.txt", 9, "uch a big file"),
-				hasFileContentsRange("foo/big.txt", 10, "ch a big file"),
-				hasFileContentsRange("foo/big.txt", 11, "h a big file"),
-				hasFileContentsRange("foo/big.txt", 12, " a big file"),
-			),
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			tr, cancel := buildTarGz(t, tt.in)
-			defer cancel()
-			var stargzBuf bytes.Buffer
-			w := NewWriter(&stargzBuf)
-			w.ChunkSize = tt.chunkSize
-			if err := w.AppendTar(tr); err != nil {
-				t.Fatalf("Append: %v", err)
-			}
-			if err := w.Close(); err != nil {
-				t.Fatalf("Writer.Close: %v", err)
-			}
-			b := stargzBuf.Bytes()
-
-			got := countGzStreams(t, b)
-			if got != tt.wantNumGz {
-				t.Errorf("number of gzip streams = %d; want %d", got, tt.wantNumGz)
-			}
-
-			r, err := Open(io.NewSectionReader(bytes.NewReader(b), 0, int64(len(b))))
-			if err != nil {
-				t.Fatalf("stargz.Open: %v", err)
-			}
-			for _, want := range tt.want {
-				want.check(t, r)
-			}
-		})
-	}
-}
-
-func countGzStreams(t *testing.T, b []byte) (numStreams int) {
-	len0 := len(b)
-	br := bytes.NewReader(b)
-	zr := new(gzip.Reader)
-	t.Logf("got gzip streams:")
-	for {
-		zoff := len0 - br.Len()
-		if err := zr.Reset(br); err != nil {
-			if err == io.EOF {
-				return
-			}
-			t.Fatalf("countGzStreams, Reset: %v", err)
-		}
-		zr.Multistream(false)
-		n, err := io.Copy(ioutil.Discard, zr)
-		if err != nil {
-			t.Fatalf("countGzStreams, Copy: %v", err)
-		}
-		var extra string
-		if len(zr.Header.Extra) > 0 {
-			extra = fmt.Sprintf("; extra=%q", zr.Header.Extra)
-		}
-		t.Logf("  [%d] at %d in stargz, uncompressed length %d%s", numStreams, zoff, n, extra)
-		numStreams++
-	}
-}
-
-type numTOCEntries int
-
-func (n numTOCEntries) check(t *testing.T, r *Reader) {
-	if r.toc == nil {
-		t.Fatal("nil TOC")
-	}
-	if got, want := len(r.toc.Entries), int(n); got != want {
-		t.Errorf("got %d TOC entries; want %d", got, want)
-	}
-	t.Logf("got TOC entries:")
-	for i, ent := range r.toc.Entries {
-		entj, _ := json.Marshal(ent)
-		t.Logf("  [%d]: %s\n", i, entj)
-	}
-	if t.Failed() {
-		t.FailNow()
-	}
-}
-
-func tarOf(s ...tarEntry) []tarEntry { return s }
-
-func checks(s ...stargzCheck) []stargzCheck { return s }
-
-type stargzCheck interface {
-	check(t *testing.T, r *Reader)
-}
-
-type stargzCheckFn func(*testing.T, *Reader)
-
-func (f stargzCheckFn) check(t *testing.T, r *Reader) { f(t, r) }
-
-func hasFileLen(file string, wantLen int) stargzCheck {
-	return stargzCheckFn(func(t *testing.T, r *Reader) {
-		for _, ent := range r.toc.Entries {
-			if ent.Name == file {
-				if ent.Type != "reg" {
-					t.Errorf("file type of %q is %q; want \"reg\"", file, ent.Type)
-				} else if ent.Size != int64(wantLen) {
-					t.Errorf("file size of %q = %d; want %d", file, ent.Size, wantLen)
-				}
-				return
-			}
-		}
-		t.Errorf("file %q not found", file)
-	})
-}
-
-func hasFileContentsRange(file string, offset int, want string) stargzCheck {
-	return stargzCheckFn(func(t *testing.T, r *Reader) {
-		f, err := r.OpenFile(file)
-		if err != nil {
-			t.Fatal(err)
-		}
-		got := make([]byte, len(want))
-		n, err := f.ReadAt(got, int64(offset))
-		if err != nil {
-			t.Fatalf("ReadAt(len %d, offset %d) = %v, %v", len(got), offset, n, err)
-		}
-		if string(got) != want {
-			t.Fatalf("ReadAt(len %d, offset %d) = %q, want %q", len(got), offset, got, want)
-		}
-	})
-}
-
-func entryHasChildren(dir string, want ...string) stargzCheck {
-	return stargzCheckFn(func(t *testing.T, r *Reader) {
-		want := append([]string(nil), want...)
-		var got []string
-		ent, ok := r.Lookup(dir)
-		if !ok {
-			t.Fatalf("didn't find TOCEntry for dir node %q", dir)
-		}
-		for baseName := range ent.children {
-			got = append(got, baseName)
-		}
-		sort.Strings(got)
-		sort.Strings(want)
-		if !reflect.DeepEqual(got, want) {
-			t.Errorf("children of %q = %q; want %q", dir, got, want)
-		}
-	})
-}
-
-func hasDir(file string) stargzCheck {
-	return stargzCheckFn(func(t *testing.T, r *Reader) {
-		for _, ent := range r.toc.Entries {
-			if ent.Name == file {
-				if ent.Type != "dir" {
-					t.Errorf("file type of %q is %q; want \"dir\"", file, ent.Type)
-				}
-				return
-			}
-		}
-		t.Errorf("directory %q not found", file)
-	})
-}
-
-func hasSymlink(file, target string) stargzCheck {
-	return stargzCheckFn(func(t *testing.T, r *Reader) {
-		for _, ent := range r.toc.Entries {
-			if ent.Name == file {
-				if ent.Type != "symlink" {
-					t.Errorf("file type of %q is %q; want \"symlink\"", file, ent.Type)
-				} else if ent.LinkName != target {
-					t.Errorf("link target of symlink %q is %q; want %q", file, ent.LinkName, target)
-				}
-				return
-			}
-		}
-		t.Errorf("symlink %q not found", file)
-	})
-}
-
-type tarEntry interface {
-	appendTar(*tar.Writer) error
-}
-
-type tarEntryFunc func(*tar.Writer) error
-
-func (f tarEntryFunc) appendTar(tw *tar.Writer) error { return f(tw) }
-
-func buildTarGz(t *testing.T, ents []tarEntry) (r io.Reader, cancel func()) {
-	pr, pw := io.Pipe()
-	go func() {
-		tw := tar.NewWriter(pw)
-		for _, ent := range ents {
-			if err := ent.appendTar(tw); err != nil {
-				t.Errorf("building input tar: %v", err)
-				pw.Close()
-				return
-			}
-		}
-		if err := tw.Close(); err != nil {
-			t.Errorf("closing write of input tar: %v", err)
-		}
-		pw.Close()
-		return
-	}()
-	return pr, func() { go pr.Close(); go pw.Close() }
-}
-
-func dir(d string) tarEntry {
-	return tarEntryFunc(func(tw *tar.Writer) error {
-		name := string(d)
-		if !strings.HasSuffix(name, "/") {
-			panic(fmt.Sprintf("missing trailing slash in dir %q ", name))
-		}
-		return tw.WriteHeader(&tar.Header{
-			Typeflag: tar.TypeDir,
-			Name:     name,
-			Mode:     0755,
-		})
-	})
-}
-
-func file(name, contents string, extraAttr ...interface{}) tarEntry {
-	return tarEntryFunc(func(tw *tar.Writer) error {
-		if len(extraAttr) > 0 {
-			return errors.New("unsupported extraAttr")
-		}
-		if strings.HasSuffix(name, "/") {
-			return fmt.Errorf("bogus trailing slash in file %q", name)
-		}
-		if err := tw.WriteHeader(&tar.Header{
-			Typeflag: tar.TypeReg,
-			Name:     name,
-			Mode:     0644,
-			Size:     int64(len(contents)),
-		}); err != nil {
-			return err
-		}
-		_, err := io.WriteString(tw, contents)
-		return err
-	})
-}
-
-func symlink(name, target string) tarEntry {
-	return tarEntryFunc(func(tw *tar.Writer) error {
-		return tw.WriteHeader(&tar.Header{
-			Typeflag: tar.TypeSymlink,
-			Name:     name,
-			Linkname: target,
-			Mode:     0644,
-		})
-	})
-}
diff --git a/crfs/stargz/stargzify/stargzify.go b/crfs/stargz/stargzify/stargzify.go
deleted file mode 100644
index 115bb9b..0000000
--- a/crfs/stargz/stargzify/stargzify.go
+++ /dev/null
@@ -1,70 +0,0 @@
-// Copyright 2019 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-// The stargzify command converts a tarball into a seekable stargz
-// tarball. The output is still a valid tarball, but has new gzip
-// streams throughout the file and a Table of Contents (TOC)
-// index at the end pointing into those streams.
-package main
-
-import (
-	"flag"
-	"log"
-	"os"
-	"strings"
-
-	"golang.org/x/build/crfs/stargz"
-)
-
-var (
-	in  = flag.String("in", "", "input file in tar or tar.gz format. Use \"-\" for stdin.")
-	out = flag.String("out", "", "output file. If empty, it's the input base + \".stargz\", or stdout if the input is stdin. Use \"-\" for stdout.")
-)
-
-func main() {
-	flag.Parse()
-	var f, fo *os.File // file in, file out
-	var err error
-	switch *in {
-	case "":
-		log.Fatal("missing required --in flag")
-	case "-":
-		f = os.Stdin
-	default:
-		f, err = os.Open(*in)
-		if err != nil {
-			log.Fatal(err)
-		}
-	}
-	defer f.Close()
-
-	if *out == "" {
-		if *in == "-" {
-			*out = "-"
-		} else {
-			base := strings.TrimSuffix(*in, ".gz")
-			base = strings.TrimSuffix(base, ".tgz")
-			base = strings.TrimSuffix(base, ".tar")
-			*out = base + ".stargz"
-		}
-	}
-	if *out == "-" {
-		fo = os.Stdout
-	} else {
-		fo, err = os.Create(*out)
-		if err != nil {
-			log.Fatal(err)
-		}
-	}
-	w := stargz.NewWriter(fo)
-	if err := w.AppendTar(f); err != nil {
-		log.Fatal(err)
-	}
-	if err := w.Close(); err != nil {
-		log.Fatal(err)
-	}
-	if err := fo.Close(); err != nil {
-		log.Fatal(err)
-	}
-}