# Forgejo Runner Setup

How to stand up a forgejo-runner so that the workflows under `.forgejo/workflows/`, `ci.yml` (lint, pytest, JSON & link checks) and `build-iso.yml` (produces the live ISO as a downloadable artifact), run on every push to `main`.

Ready-to-use `compose.yml` and `config.yml` live in `ops/forgejo-runner/`.

## Choosing a host

| Option | Good for | Trade-off |
|---|---|---|
| Dedicated VPS | Production-ish CI that runs even when you're offline | Costs a few €/month; one more machine to maintain |
| Home server / NAS | Free; plenty of capacity | CI blocked if home network / power drops |
| Local dev machine | Quick to set up, fast runs | CI only works while the machine is on |

**Recommendation:** home server or a cheap VPS. Don't use a laptop that suspends.

## Architecture at a glance

The runner uses docker-outside-of-docker (DooD): it mounts the host's `/var/run/docker.sock` into itself and spawns job containers on the host daemon. We went back and forth on this. The tempting alternative is a docker-in-docker (DinD) sidecar for isolation, but DinD makes `iso/build.sh` fail: `build.sh` does its own nested `docker run -v …`, and a path inside a DinD-hosted job isn't visible to the host docker daemon. DooD trades some isolation for paths that line up everywhere. This runner VM is single-purpose, so that trade is fine.
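
A concrete way to see the mismatch: the source path in a nested `docker run -v SRC:DST` is resolved by whichever daemon receives the API call, not by the container that issued it. A sketch (builder image elided; only the `-v` flag matters here):

```sh
# What iso/build.sh effectively does from inside its job:
docker run --rm -v "$REPO_ROOT:/work" ...

# The daemon receiving this call resolves $REPO_ROOT on ITS OWN filesystem.
# Under DooD that's the host daemon, and the /data symlink (next) makes the
# path real on the host. Under a DinD sidecar the job's path isn't visible
# to the daemon doing the mounting, so /work comes up empty.
```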

One non-obvious piece: the runner's default internal data directory is `/data`. Host-mode jobs (see the `self-hosted` → `:host` label below) tell the host docker daemon to bind-mount `/data/.cache/act/…/hostexecutor`, which is the container's filesystem path, not the host's. The fix is to make `/data` exist on the host too, pointing at the same files, via a symlink:

```sh
sudo ln -s /home/<user>/forgejo-runner/data /data
```

This one line is what lets `-v /data/…:/work` resolve correctly.
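
A quick sanity check that both views now agree (paths assume the layout above):

```sh
readlink -f /data                                 # → /home/<user>/forgejo-runner/data
docker run --rm -v /data:/probe alpine ls /probe  # the host daemon can resolve /data
```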

## Install

On a fresh Ubuntu VM:

```sh
# Docker Engine + compose plugin (official repo)
./ops/forgejo-runner/bootstrap.sh

# Node.js on the HOST is not required — the runner container installs
# it inside itself on startup. But host tools help for debugging.
```

Copy the reference `compose.yml` and `config.yml` to `~/forgejo-runner/` and `~/forgejo-runner/data/` respectively, then create the `/data` symlink:

```sh
mkdir -p ~/forgejo-runner/data
cp ops/forgejo-runner/compose.yml ~/forgejo-runner/compose.yml
cp ops/forgejo-runner/config.yml ~/forgejo-runner/data/config.yml
sudo ln -s "$HOME/forgejo-runner/data" /data
```

## Register

1. In the Forgejo web UI: **Site Administration → Actions → Runners → Create new Runner** (or **Repo Settings → Actions → Runners** for a repo-scoped runner). Copy the registration token.

2. Register from the host by running the registration inside a one-shot container, so the resulting `.runner` file lands in the mounted `data/` directory:

   ```sh
   cd ~/forgejo-runner
   docker run --rm -v "$PWD/data:/data" code.forgejo.org/forgejo/runner:6 \
     forgejo-runner register \
       --instance https://forgejo.sourcegate.online \
       --token <TOKEN> \
       --name forge-runner-01 \
       --no-interactive
   ```

   Note: labels are configured in `config.yml`, not at registration time. The reference `config.yml` already has `labels:` populated with the three we use (`ubuntu-latest`, `docker`, `self-hosted`), each mapped to either a container image or `:host` mode; see the sketch after this list.

3. Start the daemon: `docker compose up -d`.

4. Verify in Forgejo admin → Actions → Runners that `forge-runner-01` shows as **Idle**, and that `docker logs forgejo-runner` prints `runner: forge-runner-01, ..., declared successfully` along with the installed node + docker-cli versions.
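
The label mappings, sketched in stock forgejo-runner config syntax (`ops/forgejo-runner/config.yml` is the canonical version):

```yaml
runner:
  labels:
    # CI jobs: fresh GHA-compatible container per job, on the host daemon
    - "ubuntu-latest:docker://catthehacker/ubuntu:act-latest"
    - "docker:docker://catthehacker/ubuntu:act-latest"
    # build-iso: steps run directly in the runner container, no job wrapper
    - "self-hosted:host"
```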

## Two runtime modes

The `config.yml` labels set up two job execution modes:

- **`ubuntu-latest` / `docker` → `docker://catthehacker/ubuntu:act-latest`.** The standard mode. Jobs run in a fresh `catthehacker/ubuntu:act-latest` container. Good isolation, standard GHA-compatible image. Used by `ci.yml` (ruff, pytest, JSON & link checks).

- **`self-hosted` → `:host`.** Steps execute directly in the runner container (no per-job wrapping container). Used by `build-iso.yml` because `iso/build.sh` needs `docker run -v $REPO_ROOT:/work` to hit a path the host daemon can resolve; wrapping in a job container would reintroduce the namespace mismatch.
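
How a workflow opts into each mode (illustrative job stubs; `ci.yml` and `build-iso.yml` are the canonical versions):

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest    # container mode: docker://catthehacker/ubuntu:act-latest
    steps:
      - uses: actions/checkout@v4
      - run: ruff check .     # illustrative step
  build-iso:
    runs-on: self-hosted      # host mode: steps run in the runner container itself
    steps:
      - uses: actions/checkout@v4
      - run: ./iso/build.sh   # its nested docker run resolves via the host daemon
```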

Because host-mode jobs run inside the runner container, that container needs the tools the jobs invoke: Node (for JS-based actions like `actions/checkout@v4`), Git (already in the base image), and the Docker CLI (for `iso/build.sh`). The `command:` in `compose.yml` apk-installs `nodejs` + `docker-cli` before launching the daemon, so those tools are always present after container start.
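
A sketch of the matching `compose.yml` pieces, assuming the Alpine-based runner image (`ops/forgejo-runner/compose.yml` is the canonical version):

```yaml
services:
  runner:
    image: code.forgejo.org/forgejo/runner:6
    container_name: forgejo-runner
    user: "0:0"                                    # root, so apk can install packages
    restart: unless-stopped
    volumes:
      - ./data:/data                               # config.yml + runner state
      - /var/run/docker.sock:/var/run/docker.sock  # DooD: talk to the host daemon
    command: >-
      sh -c "apk add --no-cache nodejs docker-cli &&
             forgejo-runner daemon --config /data/config.yml"
```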

## First CI run

Push a commit to `main`; the Actions tab should show:

- **CI** workflow (`ci.yml`) running lint, tests, JSON validation, markdown links. Green in ~30 s.
- **Build ISO** workflow (`build-iso.yml`) running `iso/build.sh` inside the runner container. Takes ~5 min (pacstrap + mkarchiso). The resulting `.iso` lands as a `furtka-iso` artifact on the run page, retained 14 days.

If the workflow queues forever, check (triage commands below):

- Runner shows online in Forgejo admin.
- `docker logs forgejo-runner` for errors.
- The workflow's `runs-on:` matches a label the runner advertises.
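
A few one-liners for the usual failure points (paths assume the Install layout):

```sh
docker ps --filter name=forgejo-runner            # is the daemon container up?
docker logs --tail 50 forgejo-runner              # registration / connection errors
grep -n 'labels' ~/forgejo-runner/data/config.yml # labels the runner advertises
```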

## Artifact compatibility note

Forgejo's Actions API is GHES-compatible (not full GHA), so use `actions/upload-artifact@v3`. v4+ fails with `GHESNotSupportedError` because it needs the newer `@actions/artifact` protocol, which Forgejo hasn't implemented yet.
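
In workflow terms (artifact name from build-iso.yml; the path is illustrative):

```yaml
- name: Upload ISO
  uses: actions/upload-artifact@v3   # pinned: v4+ raises GHESNotSupportedError on Forgejo
  with:
    name: furtka-iso
    path: out/*.iso                  # illustrative: wherever iso/build.sh drops the image
```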

## Security notes

- DooD gives jobs full access to the host's docker daemon: they can spawn arbitrary containers, including `--privileged` ones. Keep the runner VM dedicated to CI; don't run other user workloads on it.
- The runner container itself runs as root (`user: "0:0"`). This is acceptable because the whole VM is purpose-built, but it's a bigger footgun than the non-root default of the standard runner image.
- Registration tokens are one-shot; once a runner is live, the token can't be used to register again.
- Ubuntu's systemd-resolved stub resolver (`127.0.0.53`) sometimes leaks LAN-only DNS servers into containers that can't reach them. If container DNS fails, set explicit upstream DNS in `/etc/docker/daemon.json` (e.g. `{"dns": ["1.1.1.1", "8.8.8.8"]}`) and restart docker; see the snippet below.
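
The DNS fix, spelled out (assumes no existing `/etc/docker/daemon.json`; merge the key in by hand if one exists):

```sh
echo '{"dns": ["1.1.1.1", "8.8.8.8"]}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
docker run --rm alpine nslookup forgejo.sourcegate.online   # should now resolve
```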