# Forgejo Runner Setup

How to stand up a `forgejo-runner` so the CI workflows under [`.forgejo/workflows/`](../.forgejo/workflows/) — `ci.yml` (lint, pytest, JSON & link checks) and `build-iso.yml` (produces the live ISO as a downloadable artifact) — run on every push to `main`.

Ready-to-use `compose.yml` and `config.yml` live in [`ops/forgejo-runner/`](../ops/forgejo-runner/).

## Choosing a host

| Option | Good for | Trade-off |
|--------|----------|-----------|
| **Dedicated VPS** | Production-ish CI that runs even when you're offline | Costs a few €/month; one more machine to maintain |
| **Home server / NAS** | Free; plenty of capacity | CI blocked if home network / power drops |
| **Local dev machine** | Quick to set up, fast runs | CI only works while the machine is on |

Recommendation: **home server or a cheap VPS**. Don't use a laptop that suspends.

## Architecture at a glance

The runner uses **docker-outside-of-docker (DooD)**: it mounts the host's `/var/run/docker.sock` into itself and spawns job containers on the host daemon. We went back and forth on this — the tempting alternative is a docker-in-docker (DinD) sidecar for isolation — but DinD makes `iso/build.sh` fail: `build.sh` does its own nested `docker run -v …`, and the path inside a DinD-hosted job isn't visible to host docker. DooD trades some isolation for paths that line up everywhere. This runner VM is single-purpose, so that trade is fine.

One non-obvious piece: the runner's default internal data directory is `/data`. Host-mode jobs (see the `self-hosted:host` label below) tell host docker to bind-mount `/data/.cache/act/…/hostexecutor` — which is the container's filesystem path, not the host's. The fix is to make `/data` exist on the host too, pointing at the same files, via a symlink:

```bash
sudo ln -s "$HOME/forgejo-runner/data" /data
```

This one line is what lets `-v /data/…:/work` resolve correctly.

## Install

On a fresh Ubuntu VM:

```bash
# Docker Engine + compose plugin (official repo)
./ops/forgejo-runner/bootstrap.sh

# Node.js on the HOST is not required — the runner container installs
# it inside itself on startup. But host tools help for debugging.
```

Copy the reference `compose.yml` and `config.yml` to `~/forgejo-runner/` and `~/forgejo-runner/data/` respectively, then create the `/data` symlink:

```bash
mkdir -p ~/forgejo-runner/data
cp ops/forgejo-runner/compose.yml ~/forgejo-runner/compose.yml
cp ops/forgejo-runner/config.yml ~/forgejo-runner/data/config.yml
sudo ln -s "$HOME/forgejo-runner/data" /data
```

## Register

1. In the Forgejo web UI: **Site Administration → Actions → Runners → Create new Runner** (or **Repo Settings → Actions → Runners** for a repo-scoped runner). Copy the registration token.

2. Register from the host by running the registration inside a one-shot container, so the resulting `.runner` file lands in the mounted `data/` directory:

   ```bash
   cd ~/forgejo-runner
   docker run --rm -v "$PWD/data:/data" code.forgejo.org/forgejo/runner:6 \
     forgejo-runner register \
       --instance https://forgejo.sourcegate.online \
       --token <registration-token> \
       --name forge-runner-01 \
       --no-interactive
   ```

   Note: labels are configured in `config.yml`, not at registration time — `config.yml` has `labels:` populated with the three we use (`ubuntu-latest`, `docker`, `self-hosted`), each mapped to either a container image or `:host` mode (sketched at the end of the next section).

3. Start the daemon: `docker compose up -d`.

4. Verify in Forgejo admin → Actions → Runners that `forge-runner-01` shows as **Idle**, and that `docker logs forgejo-runner` prints `runner: forge-runner-01, ..., declared successfully` along with the installed `node` + `docker-cli` versions.

## Two runtime modes

The `config.yml` labels set up two job execution modes:

- **`ubuntu-latest` / `docker` → `docker://catthehacker/ubuntu:act-latest`.** The standard mode. Jobs run in a fresh `catthehacker/ubuntu:act-latest` container. Good isolation, standard GHA-compatible image. Used by `ci.yml` (ruff, pytest, JSON & link checks).
- **`self-hosted` → `:host`.** Steps execute *directly* in the runner container (no per-job wrapping container). Used by `build-iso.yml` because `iso/build.sh` needs `docker run -v $REPO_ROOT:/work` to hit a path host docker can resolve — wrapping in a job container reintroduces the namespace mismatch.

Because host-mode jobs run inside the runner container, that container needs the tools the jobs invoke — Node (for JS-based actions like `actions/checkout@v4`), Git (already in the base image), and the Docker CLI (for `iso/build.sh`). The `command:` in `compose.yml` apk-installs nodejs + docker-cli before launching the daemon, so those tools are always present after container start. Sketches of both files follow.
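For orientation, here is roughly what that wiring looks like in `compose.yml`. This is a sketch assembled from the behaviour described above; the exact daemon flags and apk invocation are assumptions, and the authoritative file is [`ops/forgejo-runner/compose.yml`](../ops/forgejo-runner/compose.yml).

```yaml
# Sketch only; see ops/forgejo-runner/compose.yml for the real file.
services:
  forgejo-runner:
    image: code.forgejo.org/forgejo/runner:6
    container_name: forgejo-runner
    restart: unless-stopped
    user: "0:0"                                    # root (see Security notes)
    volumes:
      - ./data:/data                               # .runner, config.yml, job caches
      - /var/run/docker.sock:/var/run/docker.sock  # DooD: jobs hit the host daemon
    # Install what host-mode jobs need, then start the daemon.
    command: >-
      sh -c "apk add --no-cache nodejs docker-cli &&
             forgejo-runner daemon --config /data/config.yml"
```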
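And the label-to-mode mapping from the bullet list above, as it might appear in `data/config.yml`. The `runner.labels` key is where forgejo-runner reads labels from, but treat the surrounding structure as a sketch and defer to [`ops/forgejo-runner/config.yml`](../ops/forgejo-runner/config.yml):

```yaml
# Sketch only; see ops/forgejo-runner/config.yml for the real file.
runner:
  labels:
    # Container mode: each job runs in a fresh act image.
    - "ubuntu-latest:docker://catthehacker/ubuntu:act-latest"
    - "docker:docker://catthehacker/ubuntu:act-latest"
    # Host mode: steps run directly in the runner container.
    - "self-hosted:host"
```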
## First CI run

Push a commit to `main` — the Actions tab should show:

- `CI` workflow (`ci.yml`) running lint, tests, JSON validation, markdown links. Green in ~30 s.
- `Build ISO` workflow (`build-iso.yml`) running `iso/build.sh` inside the runner container. Takes ~5 min (pacstrap + mkarchiso). The resulting `.iso` lands as a `furtka-iso` artifact on the run page, retained 14 days.

If the workflow queues forever, check:

- The runner shows as online in Forgejo admin.
- `docker logs forgejo-runner` for errors.
- The workflow's `runs-on:` matches a label the runner advertises.

## Artifact compatibility note

Forgejo's Actions API is GHES-compatible (not full GHA), so use `actions/upload-artifact@v3` — **v4+ fails with `GHESNotSupportedError`** because it needs the newer `@actions/artifact` protocol Forgejo hasn't implemented yet.

## Security notes

- DooD gives jobs full access to the host's docker daemon — they can spawn arbitrary containers, including `--privileged` ones. Keep the runner VM dedicated to CI; don't run other user workloads on it.
- The runner container itself runs as root (`user: "0:0"`). This is acceptable because the whole VM is purpose-built, but it's a bigger footgun than the runner image's non-root default.
- Registration tokens are one-shot; once a runner is live, the token can't be used to register again.
- Ubuntu's `systemd-resolved` stub resolver (`127.0.0.53`) sometimes hands containers LAN-only DNS servers they can't reach. If container DNS fails, set explicit upstream DNS in `/etc/docker/daemon.json` (e.g. `{"dns": ["1.1.1.1", "8.8.8.8"]}`) and restart docker, as sketched below.
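A minimal version of that DNS fix (the upstream addresses are the examples from above; pick whatever resolvers the VM can reach):

```bash
# Point the docker daemon at explicit upstream DNS, then restart it.
# Note: this overwrites any existing /etc/docker/daemon.json.
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "dns": ["1.1.1.1", "8.8.8.8"]
}
EOF
sudo systemctl restart docker
cd ~/forgejo-runner && docker compose up -d   # bring the runner back up
```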