chore(snix): s/tvix/snix/
Change-Id: Iae961416eea0a38bc57df7b736f6dda5903b0828
parent 768f053416
commit 36e4d017f5
1417 changed files with 3741 additions and 3650 deletions

snix/boot/README.md (new file, +153)
@@ -0,0 +1,153 @@
# snix/boot

This directory provides tooling to boot VMs with /nix/store provided by
virtiofs.

In the `tests/` subdirectory, there are some integration tests.

## //snix/boot:runVM

A script that spins up a `snix-store virtiofs` daemon, then starts a
cloud-hypervisor VM.

The cloud-hypervisor VM uses a (semi-)minimal kernel image with virtiofs
support, and a custom initrd (built using u-root). It supports various command
line options, so it can be used for VM tests, act as an interactive shell, or
exec a binary from a closure.

It supports the following env vars (with their defaults):
- `CH_NUM_CPUS=2` controls the number of CPUs available to the VM
- `CH_MEM_SIZE=512M` controls the memory available to the VM
- `CH_CMDLINE=` controls the kernel cmdline (which can be used to control the
  boot)
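
These defaults come from plain shell parameter expansion in the `runVM` script; a minimal sketch of that fallback behaviour (outside of Nix, so without the `''$` escaping used in `default.nix`):

```shell
# Unset the knobs first, so the fallbacks visibly kick in.
unset CH_NUM_CPUS CH_MEM_SIZE CH_CMDLINE
CH_NUM_CPUS="${CH_NUM_CPUS:-2}"
CH_MEM_SIZE="${CH_MEM_SIZE:-512M}"
CH_CMDLINE="${CH_CMDLINE:-}"
echo "cpus=$CH_NUM_CPUS mem=$CH_MEM_SIZE cmdline='$CH_CMDLINE'"
```

Exporting any of these variables before invoking `runVM` overrides the corresponding default.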

### Usage

First, ensure you have `snix-store` in `$PATH`, as that's what `run-snix-vm`
expects.

Assuming you ran `cargo build --profile=release-with-debug` before, and are in
the `snix` directory:

```
export PATH=$PATH:$PWD/target/release-with-debug
```

Now, spin up `snix-store daemon`, connecting it to some (local) backends:

```
snix-store --otlp=false daemon \
  --blob-service-addr=objectstore+file://$PWD/blobs \
  --directory-service-addr=redb://$PWD/directories.redb \
  --path-info-service-addr=redb://$PWD/pathinfo.redb &
```

Copy some data into snix-store (we use `nar-bridge` for this for now):

```
mg run //snix:nar-bridge -- --otlp=false &
rm -Rf ~/.cache/nix; nix copy --to http://localhost:9000\?compression\=none $(mg build //third_party/nixpkgs:hello)
pkill nar-bridge
```

By default, the `snix-store virtiofs` command used in the `runVM` script
connects to a running `snix-store daemon` via gRPC, in which case you want to
keep `snix-store daemon` running.

In case you want `snix-store virtiofs` to open the stores directly, kill
`snix-store daemon` too, and export the addresses from above:

```
pkill snix-store
export BLOB_SERVICE_ADDR=objectstore+file://$PWD/blobs
export DIRECTORY_SERVICE_ADDR=redb://$PWD/directories.redb
export PATH_INFO_SERVICE_ADDR=redb://$PWD/pathinfo.redb
```

#### Interactive shell

Run the VM like this:

```
CH_CMDLINE=snix.shell mg run //snix/boot:runVM --
```

You'll get dropped into an interactive shell, from which you can do things with
the store:

```
______ _ ____ _ __
/_ __/ __(_) __ / _/___ (_) /_
/ / | | / / / |/_/ / // __ \/ / __/
/ / | |/ / /> < _/ // / / / / /_
/_/ |___/_/_/|_| /___/_/ /_/_/\__/

/# ls -la /nix/store/
dr-xr-xr-x root 0   0 Jan  1 00:00 .
dr-xr-xr-x root 0 989 Jan  1 00:00 aw2fw9ag10wr9pf0qk4nk5sxi0q0bn56-glibc-2.37-8
dr-xr-xr-x root 0   3 Jan  1 00:00 jbwb8d8l28lg9z0xzl784wyb9vlbwss6-xgcc-12.3.0-libgcc
dr-xr-xr-x root 0  82 Jan  1 00:00 k8ivghpggjrq1n49xp8sj116i4sh8lia-libidn2-2.3.4
dr-xr-xr-x root 0 141 Jan  1 00:00 mdi7lvrn2mx7rfzv3fdq3v5yw8swiks6-hello-2.12.1
dr-xr-xr-x root 0   5 Jan  1 00:00 s2gi8pfjszy6rq3ydx0z1vwbbskw994i-libunistring-1.1
```

Once you exit the shell, the VM will power itself off.

#### Execute a specific binary

Run the VM like this:

```
hello_cmd=$(mg build //third_party/nixpkgs:hello)/bin/hello
CH_CMDLINE=snix.run=$hello_cmd mg run //snix/boot:runVM --
```

Observe it executing the file (and its closure) from the snix-store:
```
[    0.277486] Run /init as init process
______ _ ____ _ __
/_ __/ __(_) __ / _/___ (_) /_
/ / | | / / / |/_/ / // __ \/ / __/
/ / | |/ / /> < _/ // / / / / /_
/_/ |___/_/_/|_| /___/_/ /_/_/\__/

Hello, world!
2023/09/24 21:10:19 Nothing left to be done, powering off.
[    0.299122] ACPI: PM: Preparing to enter system sleep state S5
[    0.299422] reboot: Power down
```

#### Boot a NixOS system closure

It's also possible to boot a system closure. To do this, snix-init honors the
`init=` cmdline option, and will `switch_root` to it.

Make sure to first copy that system closure into snix-store,
using a similar `nix copy` command as above.

```
CH_CMDLINE=init=/nix/store/…-nixos-system-…/init mg run //snix/boot:runVM --
```
```
______ _ ____ _ __
/_ __/ __(_) __ / _/___ (_) /_
/ / | | / / / |/_/ / // __ \/ / __/
/ / | |/ / /> < _/ // / / / / /_
/_/ |___/_/_/|_| /___/_/ /_/_/\__/

2023/09/24 21:16:43 switch_root: moving mounts
2023/09/24 21:16:43 switch_root: Skipping "/run" as the dir does not exist
2023/09/24 21:16:43 switch_root: Changing directory
2023/09/24 21:16:43 switch_root: Moving /
2023/09/24 21:16:43 switch_root: Changing root!
2023/09/24 21:16:43 switch_root: Deleting old /
2023/09/24 21:16:43 switch_root: executing init

<<< NixOS Stage 2 >>>

[    0.322096] booting system configuration /nix/store/g657sdxinpqfcdv0162zmb8vv9b5c4c5-nixos-system-client-23.11.git.82102fc37da
running activation script...
setting up /etc...
starting systemd...
[    0.980740] systemd[1]: systemd 253.6 running in system mode (+PAM +AUDIT -SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
```

This effectively replaces NixOS Stage 1 entirely.

snix/boot/default.nix (new file, +116)
@@ -0,0 +1,116 @@
{ lib, pkgs, ... }:

rec {
  # A binary that sets up /nix/store from virtiofs, lists all store paths, and
  # powers off the machine.
  snix-init = pkgs.buildGoModule rec {
    name = "snix-init";
    src = lib.fileset.toSource {
      root = ./.;
      fileset = ./snix-init.go;
    };
    vendorHash = null;
    postPatch = "go mod init ${name}";
  };

  # A kernel with virtiofs support baked in
  # TODO: make a smaller kernel, we don't need a gazillion filesystems and
  # device drivers in it.
  kernel = pkgs.buildLinux ({ } // {
    inherit (pkgs.linuxPackages_latest.kernel) src version modDirVersion;
    autoModules = false;
    kernelPreferBuiltin = true;
    ignoreConfigErrors = true;
    kernelPatches = [ ];
    structuredExtraConfig = with pkgs.lib.kernel; {
      FUSE_FS = option yes;
      DAX_DRIVER = option yes;
      DAX = option yes;
      FS_DAX = option yes;
      VIRTIO_FS = option yes;
      VIRTIO = option yes;
      ZONE_DEVICE = option yes;
    };
  });

  # A build framework for minimal initrds
  uroot = pkgs.buildGoModule rec {
    pname = "u-root";
    version = "0.14.0";
    src = pkgs.fetchFromGitHub {
      owner = "u-root";
      repo = "u-root";
      rev = "v${version}";
      hash = "sha256-8zA3pHf45MdUcq/MA/mf0KCTxB1viHieU/oigYwIPgo=";
    };
    vendorHash = null;

    doCheck = false; # Some tests invoke /bin/bash
  };

  # Use u-root to build an initrd with our snix-init inside.
  initrd = pkgs.stdenv.mkDerivation {
    name = "initrd.cpio";
    nativeBuildInputs = [ pkgs.go ];
    # https://github.com/u-root/u-root/issues/2466
    buildCommand = ''
      mkdir -p /tmp/go/src/github.com/u-root/
      cp -R ${uroot.src} /tmp/go/src/github.com/u-root/u-root
      cd /tmp/go/src/github.com/u-root/u-root
      chmod +w .
      cp ${snix-init}/bin/snix-init snix-init

      export HOME=$(mktemp -d)
      export GOROOT="$(go env GOROOT)"

      GO111MODULE=off GOPATH=/tmp/go GOPROXY=off ${uroot}/bin/u-root -files ./snix-init -initcmd "/snix-init" -o $out
    '';
  };

  # Start a `snix-store` virtiofs daemon from $PATH, then a cloud-hypervisor
  # pointed at it.
  # Supports the following env vars (and defaults):
  # CH_NUM_CPUS=2
  # CH_MEM_SIZE=512M
  # CH_CMDLINE=""
  runVM = pkgs.writers.writeBashBin "run-snix-vm" ''
    tempdir=$(mktemp -d)

    cleanup() {
      kill $virtiofsd_pid
      if [[ -n ''${tempdir-} ]]; then
        chmod -R u+rw "$tempdir"
        rm -rf "$tempdir"
      fi
    }
    trap cleanup EXIT

    # Spin up the virtiofs daemon
    snix-store --otlp=false virtiofs -l $tempdir/snix.sock &
    virtiofsd_pid=$!

    # Wait for the socket to exist.
    until [ -e $tempdir/snix.sock ]; do sleep 0.1; done

    CH_NUM_CPUS="''${CH_NUM_CPUS:-2}"
    CH_MEM_SIZE="''${CH_MEM_SIZE:-512M}"
    CH_CMDLINE="''${CH_CMDLINE:-}"

    # Spin up cloud-hypervisor
    ${pkgs.cloud-hypervisor}/bin/cloud-hypervisor \
      --cpus boot=$CH_NUM_CPUS \
      --memory mergeable=on,shared=on,size=$CH_MEM_SIZE \
      --console null \
      --serial tty \
      --kernel ${kernel}/${pkgs.stdenv.hostPlatform.linux-kernel.target} \
      --initramfs ${initrd} \
      --cmdline "console=ttyS0 $CH_CMDLINE" \
      --fs tag=snix,socket=$tempdir/snix.sock,num_queues=''${CH_NUM_CPUS},queue_size=512
  '';

  meta.ci.targets = [
    "initrd"
    "kernel"
    "runVM"
  ];
}
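
The readiness check in `runVM` is just polling for a socket path; the same pattern can be exercised standalone, with a throwaway file standing in for `$tempdir/snix.sock` (all names here are illustrative):

```shell
# A background job "creates the socket" after a short delay; we poll for it,
# exactly like runVM waits for snix-store to create its virtiofs socket.
sock="$(mktemp -d)/snix.sock"
(sleep 0.2; touch "$sock") &
until [ -e "$sock" ]; do sleep 0.1; done
echo "socket ready"
```

In CI you would typically wrap such a loop in `timeout`, as the test harness in `tests/default.nix` does.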

snix/boot/snix-init.go (new file, +138)
@@ -0,0 +1,138 @@
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
	"syscall"
)

// run the given command, connecting std{in,err,out} with the OS ones.
func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdin = os.Stdin
	cmd.Stderr = os.Stderr
	cmd.Stdout = os.Stdout

	return cmd.Run()
}

// parse the cmdline, return a map[string]string.
func parseCmdline(cmdline string) map[string]string {
	line := strings.TrimSuffix(cmdline, "\n")
	fields := strings.Fields(line)
	out := make(map[string]string)

	for _, arg := range fields {
		kv := strings.SplitN(arg, "=", 2)
		switch len(kv) {
		case 1:
			out[kv[0]] = ""
		case 2:
			out[kv[0]] = kv[1]
		}
	}

	return out
}

// mounts the snix store from the virtiofs tag to the given destination,
// creating the destination if it doesn't exist already.
func mountSnixStore(dest string) error {
	if err := os.MkdirAll(dest, os.ModePerm); err != nil {
		return fmt.Errorf("unable to mkdir dest: %w", err)
	}
	if err := run("mount", "-t", "virtiofs", "snix", dest, "-o", "ro"); err != nil {
		return fmt.Errorf("unable to run mount: %w", err)
	}

	return nil
}

func main() {
	fmt.Print(`
______ _ ____ _ __
/_ __/ __(_) __ / _/___ (_) /_
/ / | | / / / |/_/ / // __ \/ / __/
/ / | |/ / /> < _/ // / / / / /_
/_/ |___/_/_/|_| /___/_/ /_/_/\__/

`)

	// Set PATH to "/bbin", so we can find the u-root tools
	os.Setenv("PATH", "/bbin")

	if err := run("mount", "-t", "proc", "none", "/proc"); err != nil {
		log.Printf("Failed to mount /proc: %v\n", err)
	}
	if err := run("mount", "-t", "sysfs", "none", "/sys"); err != nil {
		log.Printf("Failed to mount /sys: %v\n", err)
	}
	if err := run("mount", "-t", "devtmpfs", "devtmpfs", "/dev"); err != nil {
		log.Printf("Failed to mount /dev: %v\n", err)
	}

	cmdline, err := os.ReadFile("/proc/cmdline")
	if err != nil {
		log.Printf("Failed to read cmdline: %s\n", err)
	}
	cmdlineFields := parseCmdline(string(cmdline))

	if _, ok := cmdlineFields["snix.find"]; ok {
		// If snix.find is set, invoke find /nix/store
		if err := mountSnixStore("/nix/store"); err != nil {
			log.Printf("Failed to mount snix store: %v\n", err)
		}

		if err := run("find", "/nix/store"); err != nil {
			log.Printf("Failed to run find command: %s\n", err)
		}
	} else if _, ok := cmdlineFields["snix.shell"]; ok {
		// If snix.shell is set, mount the nix store to /nix/store directly,
		// then invoke the elvish shell
		if err := mountSnixStore("/nix/store"); err != nil {
			log.Printf("Failed to mount snix store: %v\n", err)
		}

		if err := run("elvish"); err != nil {
			log.Printf("Failed to run shell: %s\n", err)
		}
	} else if v, ok := cmdlineFields["snix.run"]; ok {
		// If snix.run is set, mount the nix store to /nix/store directly,
		// then invoke the command.
		if err := mountSnixStore("/nix/store"); err != nil {
			log.Printf("Failed to mount snix store: %v\n", err)
		}

		if err := run(v); err != nil {
			log.Printf("Failed to run command: %s\n", err)
		}
	} else if v, ok := cmdlineFields["init"]; ok {
		// If init is set, invoke the binary specified (with switch_root),
		// and prepare /fs beforehand as well.
		os.Mkdir("/fs", os.ModePerm)
		if err := run("mount", "-t", "tmpfs", "none", "/fs"); err != nil {
			log.Fatalf("Failed to mount /fs tmpfs: %s\n", err)
		}

		// Mount /fs/nix/store
		if err := mountSnixStore("/fs/nix/store"); err != nil {
			log.Fatalf("Failed to mount snix store: %v\n", err)
		}

		// Invoke switch_root, which will take care of moving /proc, /sys and /dev.
		if err := syscall.Exec("/bbin/switch_root", []string{"switch_root", "/fs", v}, []string{}); err != nil {
			log.Printf("Failed to switch root: %s\n", err)
		}
	} else {
		log.Printf("No command detected, nothing to do!")
	}

	// This is only reached in the non switch_root case.
	log.Printf("Nothing left to be done, powering off.")
	if err := run("poweroff"); err != nil {
		log.Printf("Failed to run poweroff command: %v\n", err)
	}
}
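
The `snix.*` switches handled above are ordinary `key=value` fields on the kernel cmdline. A rough shell rendition of the splitting `parseCmdline` performs (the sample cmdline is made up):

```shell
# Split a cmdline into whitespace-separated fields, then each field into a
# key and an optional value, mirroring parseCmdline in snix-init.go.
cmdline='console=ttyS0 snix.run=/bin/hello panic=-1'
for arg in $cmdline; do
  key="${arg%%=*}"          # everything before the first '='
  val=""
  [ "$key" != "$arg" ] && val="${arg#*=}"   # bare flags keep an empty value
  printf '%s -> %s\n' "$key" "$val"
done
```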

snix/boot/tests/default.nix (new file, +250)
@@ -0,0 +1,250 @@
{ depot, pkgs, lib, ... }:

let
  # Seed a snix-store with the specified path, then start a VM with the
  # snix-boot initrd.
  # Allows customizing the cmdline, which can be used to list files,
  # or specify what init should be booted.
  mkBootTest =
    { blobServiceAddr ? "memory://"
    , directoryServiceAddr ? "memory://"
    , pathInfoServiceAddr ? "memory://"

      # The path to import.
    , path

      # Whether the path should be imported as a closure.
      # If false, importPathName must be specified.
    , isClosure ? false
      # Whether to use nar-bridge to upload, rather than `snix-store copy`.
      # Using nar-bridge is currently "slower", as the `pkgs.mkBinaryCache`
      # build takes quite some time.
    , useNarBridge ? false

    , importPathName ? null

      # Commands to run before starting the snix-store daemon. Useful to
      # provide auxiliary mock services.
    , preStart ? ""

      # The cmdline to pass to the VM.
      # Defaults to snix.find, which lists all files in the store.
    , vmCmdline ? "snix.find"
      # The string we expect to find in the VM output.
      # Defaults to the value of `path` (the store path we upload).
    , assertVMOutput ? path
    }:

    assert isClosure -> importPathName == null;
    assert (!isClosure) -> importPathName != null;

    pkgs.stdenv.mkDerivation ({
      name = "run-vm";

      nativeBuildInputs = [
        depot.snix.store
        depot.snix.boot.runVM
      ] ++ lib.optionals (isClosure && useNarBridge) [
        depot.snix.nar-bridge
        pkgs.curl
        pkgs.rush-parallel
        pkgs.zstd.bin
        pkgs.nix
      ];
      buildCommand = ''
        set -eou pipefail
        touch $out
        # Ensure we can construct http clients.
        export SSL_CERT_FILE=/dev/null

        ${preStart}

        # Start the snix daemon, listening on a unix socket.
        BLOB_SERVICE_ADDR=${lib.escapeShellArg blobServiceAddr} \
        DIRECTORY_SERVICE_ADDR=${lib.escapeShellArg directoryServiceAddr} \
        PATH_INFO_SERVICE_ADDR=${lib.escapeShellArg pathInfoServiceAddr} \
          snix-store \
          --otlp=false \
          daemon -l $PWD/snix-store.sock &

        # Wait for the service to report healthy.
        timeout 22 sh -c "until ${pkgs.ip2unix}/bin/ip2unix -r out,path=$PWD/snix-store.sock ${pkgs.grpc-health-check}/bin/grpc-health-check --address 127.0.0.1 --port 8080; do sleep 1; done"

        # Export env vars so that subsequent snix-store commands will talk to
        # our snix-store daemon over the unix socket.
        export BLOB_SERVICE_ADDR=grpc+unix://$PWD/snix-store.sock
        export DIRECTORY_SERVICE_ADDR=grpc+unix://$PWD/snix-store.sock
        export PATH_INFO_SERVICE_ADDR=grpc+unix://$PWD/snix-store.sock
      '' + lib.optionalString (!isClosure) ''
        echo "Importing ${path} into snix-store with name ${importPathName}…"
        cp -R ${path} ${importPathName}
        outpath=$(snix-store import ${importPathName})

        echo "imported to $outpath"
      '' + lib.optionalString (isClosure && !useNarBridge) ''
        echo "Copying closure ${path}…"
        # This picks up the `closure` key in `$NIX_ATTRS_JSON_FILE` automatically.
        snix-store --otlp=false copy
      '' + lib.optionalString (isClosure && useNarBridge) ''
        echo "Starting nar-bridge…"
        nar-bridge \
          --otlp=false \
          -l $PWD/nar-bridge.sock &

        # Wait for nar-bridge to report healthy.
        timeout 22 sh -c "until ${pkgs.curl}/bin/curl -s --unix-socket $PWD/nar-bridge.sock http:///nix-binary-cache; do sleep 1; done"

        # Upload. We can't use nix copy --to http://…, as it wants access to the nix db.
        # However, we can use mkBinaryCache to assemble .narinfo and .nar.xz to upload,
        # and then drive a HTTP client ourselves.
        to_upload=${
          pkgs.mkBinaryCache {
            rootPaths = [ path ];
            # Implemented in https://github.com/NixOS/nixpkgs/pull/376365
            compression = "zstd";
          }
        }

        # Upload all NAR files (with some parallelism).
        # As mkBinaryCache produces them xz-compressed, unpack them on the fly.
        # nar-bridge doesn't care about the path we upload *to*, but a
        # subsequent .narinfo upload needs to refer to its contents (by narhash).
        echo -e "Uploading NARs… "
        # TODO(flokli): the extension of the nar files was changed from .nar.{compression} to .{compression}
        # https://github.com/NixOS/nixpkgs/pull/376365
        ls -d $to_upload/nar/*.zst | rush -n1 'nar_hash=$(zstdcat < {} | nix-hash --base32 --type sha256 --flat /dev/stdin);zstdcat < {} | curl -s --fail-with-body -T - --unix-socket $PWD/nar-bridge.sock http://localhost:9000/nar/''${nar_hash}.nar'
        echo "Done."

        # Upload all NARInfo files.
        # FUTUREWORK: This doesn't upload them in order, and currently relies
        # on PathInfoService not doing any checking.
        # In the future, we might want to make this behaviour configurable,
        # and disable checking here, to keep the logic simple.
        ls -d $to_upload/*.narinfo | rush 'curl -s -T - --unix-socket $PWD/nar-bridge.sock http://localhost:9000/$(basename {}) < {}'
      '' + ''
        # Invoke a VM using snix as the backing store; ensure the outpath appears in its listing.
        echo "Starting VM…"

        CH_CMDLINE="${vmCmdline}" run-snix-vm 2>&1 | tee output.txt
        grep "${assertVMOutput}" output.txt
      '';
      requiredSystemFeatures = [ "kvm" ];
      # HACK: The boot tests are sometimes flaky, and we don't want them to
      # periodically fail other builds. Have Buildkite auto-retry them 2 times
      # on failure.
      # Logs for individual failures are still available, so it won't hinder
      # debugging flakiness.
      meta.ci.buildkiteExtraStepArgs = {
        retry.automatic = true;
      };
    } // lib.optionalAttrs (isClosure && !useNarBridge) {
      __structuredAttrs = true;
      exportReferencesGraph.closure = [ path ];
    });

  testSystem = (pkgs.nixos {
    # Set some options necessary to evaluate.
    boot.loader.systemd-boot.enable = true;
    # TODO: figure out how to disable this without causing eval to fail
    fileSystems."/" = {
      device = "/dev/root";
      fsType = "tmpfs";
    };

    services.getty.helpLine = "Onwards and upwards.";
    systemd.services.do-shutdown = {
      after = [ "getty.target" ];
      description = "Shut down again";
      wantedBy = [ "multi-user.target" ];
      serviceConfig.Type = "oneshot";
      script = "/run/current-system/sw/bin/systemctl poweroff --when=+10s";
    };

    # Don't warn about stateVersion.
    system.stateVersion = "24.05";

    # Speed-up evaluation and building.
    documentation.enable = lib.mkForce false;
  }).config.system.build.toplevel;

in
depot.nix.readTree.drvTargets {
  docs-memory = (mkBootTest {
    path = ../../docs;
    importPathName = "docs";
  });
  docs-persistent = (mkBootTest {
    blobServiceAddr = "objectstore+file:///build/blobs";
    directoryServiceAddr = "redb:///build/directories.redb";
    pathInfoServiceAddr = "redb:///build/pathinfo.redb";
    path = ../../docs;
    importPathName = "docs";
  });

  closure-snix = (mkBootTest {
    blobServiceAddr = "objectstore+file:///build/blobs";
    path = depot.snix.store;
    isClosure = true;
  });

  closure-nixos = (mkBootTest {
    blobServiceAddr = "objectstore+file:///build/blobs";
    pathInfoServiceAddr = "redb:///build/pathinfo.redb";
    directoryServiceAddr = "redb:///build/directories.redb";
    path = testSystem;
    isClosure = true;
    vmCmdline = "init=${testSystem}/init panic=-1"; # reboot immediately on panic
    assertVMOutput = "Onwards and upwards.";
  });

  closure-nixos-bigtable = (mkBootTest {
    blobServiceAddr = "objectstore+file:///build/blobs";
    directoryServiceAddr = "bigtable://instance-1?project_id=project-1&table_name=directories&family_name=cf1";
    pathInfoServiceAddr = "bigtable://instance-1?project_id=project-1&table_name=pathinfos&family_name=cf1";
    path = testSystem;
    useNarBridge = true;
    preStart = ''
      ${pkgs.cbtemulator}/bin/cbtemulator -address $PWD/cbtemulator.sock &
      timeout 22 sh -c 'until [ -e $PWD/cbtemulator.sock ]; do sleep 1; done'

      export BIGTABLE_EMULATOR_HOST=unix://$PWD/cbtemulator.sock
      ${pkgs.google-cloud-bigtable-tool}/bin/cbt -instance instance-1 -project project-1 createtable directories
      ${pkgs.google-cloud-bigtable-tool}/bin/cbt -instance instance-1 -project project-1 createfamily directories cf1
      ${pkgs.google-cloud-bigtable-tool}/bin/cbt -instance instance-1 -project project-1 createtable pathinfos
      ${pkgs.google-cloud-bigtable-tool}/bin/cbt -instance instance-1 -project project-1 createfamily pathinfos cf1
    '';
    isClosure = true;
    vmCmdline = "init=${testSystem}/init panic=-1"; # reboot immediately on panic
    assertVMOutput = "Onwards and upwards.";
  });

  closure-nixos-s3 = (mkBootTest {
    blobServiceAddr = "objectstore+s3://mybucket/blobs?aws_access_key_id=myaccesskey&aws_secret_access_key=supersecret&aws_endpoint_url=http%3A%2F%2Flocalhost%3A9000&aws_allow_http=1";
    # we cannot use s3 here yet without any caching layer, as we don't allow "deeper" access to directories (non-root nodes)
    # directoryServiceAddr = "objectstore+s3://mybucket/directories?aws_access_key_id=myaccesskey&aws_secret_access_key=supersecret&endpoint=http%3A%2F%2Flocalhost%3A9000&aws_allow_http=1";
    directoryServiceAddr = "memory://";
    pathInfoServiceAddr = "memory://";
    path = testSystem;
    useNarBridge = true;
    preStart = ''
      MINIO_ACCESS_KEY=myaccesskey MINIO_SECRET_KEY=supersecret MINIO_ADDRESS=127.0.0.1:9000 ${pkgs.minio}/bin/minio server $(mktemp -d) &
      timeout 22 sh -c 'until ${pkgs.netcat}/bin/nc -z $0 $1; do sleep 1; done' localhost 9000
      mc_config_dir=$(mktemp -d)
      ${pkgs.minio-client}/bin/mc --config-dir $mc_config_dir alias set 'myminio' 'http://127.0.0.1:9000' 'myaccesskey' 'supersecret'
      ${pkgs.minio-client}/bin/mc --config-dir $mc_config_dir mb myminio/mybucket
    '';
    isClosure = true;
    vmCmdline = "init=${testSystem}/init panic=-1"; # reboot immediately on panic
    assertVMOutput = "Onwards and upwards.";
  });

  closure-nixos-nar-bridge = (mkBootTest {
    blobServiceAddr = "objectstore+file:///build/blobs";
    path = testSystem;
    useNarBridge = true;
    isClosure = true;
    vmCmdline = "init=${testSystem}/init panic=-1"; # reboot immediately on panic
    assertVMOutput = "Onwards and upwards.";
  });
}
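
The VM-output assertion at the end of `buildCommand` boils down to `tee` plus `grep`; here is the pattern in isolation, with an `echo` standing in for the VM:

```shell
# Capture the "VM" output to a file while still streaming it to the log,
# then assert the expected marker string appears (mirrors the mkBootTest check).
echo "Onwards and upwards." | tee output.txt
grep -q "Onwards and upwards." output.txt && echo "assertion passed"
```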