Now that the runner uses docker-outside-of-docker, volume mounts in
`build.sh` (`docker run -v $REPO_ROOT:/work ...`) are interpreted by
host docker — so `$REPO_ROOT` must be a real host path. When the job
runs inside a job container, `$REPO_ROOT` is only valid in the job
container's filesystem namespace and host docker can't find it, hence
`bash: /work/iso/build.sh: No such file or directory`.
Fix: switch `runs-on` to `self-hosted`. Forgejo-runner exposes that
label out of the box and, with no matching container image mapping,
runs steps directly on the runner VM. Checkout writes to a real host
path; `docker run -v …` then mounts a path both the outer CLI and
host docker agree on.
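A minimal sketch of the fixed job header, assuming a `build-iso` job (step details and the checkout action version are illustrative; the `runs-on` label is the actual change):

```yaml
jobs:
  build-iso:
    runs-on: self-hosted            # no container image mapping → steps run on the runner VM
    steps:
      - uses: actions/checkout@v4   # writes the repo to a real host path
      - run: ./iso/build.sh         # inner docker run -v now mounts a path host docker can see
```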
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Forgejo-runner's `valid_volumes` already injects /var/run/docker.sock
into every job container, so the explicit `container.volumes` mount
in the workflow triggered 'Duplicate mount point' and the job never
started. Removed — DOCKER_HOST env is enough.
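For context, the runner-side setting this relies on looks roughly like this (a sketch of `data/config.yml`, assuming forgejo-runner's standard config layout):

```yaml
container:
  valid_volumes:
    - /var/run/docker.sock   # whitelisted here, so the runner handles the mount;
                             # repeating it in the workflow's container.volumes collides
```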
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The DinD setup was the wrong tool here: forgejo-runner runs on host
docker, but it spawned jobs via the DinD sidecar — meaning jobs
were isolated inside DinD's own docker namespace and couldn't reach
`docker-in-docker` by hostname, and couldn't see the
`forgejo-runner_default` network (which only exists on host docker).
Switched the runner (compose.yml + data/config.yml) to talk directly
to host docker via `/var/run/docker.sock` and added it to the host
`docker` group (GID 988) so the non-root runner user can use the
socket. `valid_volumes` now whitelists the socket so job containers
can mount it too.
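A sketch of the compose side of this change (service name assumed, unrelated settings omitted; GID 988 is this host's `docker` group as stated above):

```yaml
services:
  forgejo-runner:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # talk to host docker directly
    group_add:
      - "988"   # host docker group → the non-root runner user can use the socket
```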
Workflow now mounts /var/run/docker.sock into the job container and
points DOCKER_HOST at that unix socket. `./iso/build.sh` then runs
its inner `docker run --privileged archlinux:latest` against the
host daemon — no nested docker.
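The workflow side then looked roughly like this (job and step names are illustrative):

```yaml
jobs:
  build-iso:
    container:
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
    env:
      DOCKER_HOST: unix:///var/run/docker.sock
    steps:
      - run: ./iso/build.sh   # its docker run --privileged hits the host daemon
```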
Tradeoff: this is less isolated than DinD (jobs have full host docker
access — they could spawn arbitrary containers), but on a dedicated
single-user build VM the DooD simplification is worth it.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
forgejo-runner 6.4 filters `--network` out of `container.options`, so
the workflow-level override was silently ignored and the job kept
landing on a per-task network where `docker-in-docker` didn't resolve.
Fixed at the right level by editing the runner's `/data/config.yml`
(`container.network: "forgejo-runner_default"`) and restarting the
forgejo-runner container — every job now joins the shared network so
DOCKER_HOST=tcp://docker-in-docker:2375 just works.
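In config terms, only the one key changes (sketch; surrounding sections omitted):

```yaml
# /data/config.yml inside the forgejo-runner container
container:
  network: "forgejo-runner_default"   # every job container joins the shared network
```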
Workflow trimmed back to only what's needed: DOCKER_HOST env pin. The
default runner image (catthehacker/ubuntu:act-latest) already has the
docker CLI.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- build-iso: the job container was on a per-job docker network, so
`docker-in-docker` (the DinD sidecar hostname on
`forgejo-runner_default`) didn't resolve. Pin the container to that
shared network via `container.options: --network forgejo-runner_default`.
catthehacker/ubuntu:act-latest already has the docker CLI, so drop
the apt-get step.
- ci.yml markdown-links: Forgejo's action mirror at data.forgejo.org
  doesn't carry `lycheeverse/lychee-action`, so `uses:` was 404ing
  before the step could even run (rendering `continue-on-error` moot).
Fully-qualified GitHub URL bypasses the mirror.
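Both fixes as sketches (job and step names are illustrative, and the lychee-action version tag is an assumption):

```yaml
# .forgejo/workflows/build-iso.yml — pin the job container to the shared network
jobs:
  build-iso:
    container:
      options: --network forgejo-runner_default
---
# .forgejo/workflows/ci.yml — fully-qualified URL bypasses the data.forgejo.org mirror
jobs:
  markdown-links:
    steps:
      - uses: https://github.com/lycheeverse/lychee-action@v2   # tag illustrative
        continue-on-error: true
```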
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three things are broken on origin/main as of 6114cb2, all found in one
red CI run:
- build-iso workflow couldn't reach docker. forgejo-runner's config
sets `docker_host: tcp://docker-in-docker:2375` but that env doesn't
propagate into job containers on `runs-on: ubuntu-latest`, and the
default job image has no docker CLI. Fix: pin `DOCKER_HOST` on the
job and apt-install `docker.io` before invoking `iso/build.sh`.
- Two tests asserted on the pre-4.x archinstall schema:
`creds["root_password"]` (now `!root-password`) and
`cfg["disk_config"]["device"]` / `cfg["users"]` (users moved to
creds; disk_config is now a full `default_layout` dict). Rewrote
the tests to reflect 4.x reality and monkeypatched `build_disk_config`
since its real body imports archinstall, which isn't on CI.
- Ruff flagged one line of `PROGRESS_PHASES` at 107 chars — collapsed
the column alignment. `ruff format` pulled in a couple of cosmetic
expansions in spawn_archinstall and the tests that had been drifting.
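The workflow fix from the first bullet, sketched (step names and the checkout action are illustrative):

```yaml
jobs:
  build-iso:
    runs-on: ubuntu-latest
    env:
      DOCKER_HOST: tcp://docker-in-docker:2375   # pinned: the runner's docker_host doesn't propagate
    steps:
      - uses: actions/checkout@v4
      - run: apt-get update && apt-get install -y docker.io   # default job image has no docker CLI
      - run: ./iso/build.sh
```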
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds `.forgejo/workflows/build-iso.yml` that runs `./iso/build.sh` and
uploads the resulting ISO as a `furtka-iso` artifact (retained 14 days).
Triggers on `push: branches: [main]` and `workflow_dispatch` only —
feature branches don't pay the 15-20 min build cost. `concurrency`
cancels older runs of the same ref so only the most recent push
produces an artifact.
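Put together, the workflow is roughly this (runner label, concurrency group name, ISO output path, and action versions are assumptions, not taken from the file):

```yaml
name: build-iso
on:
  push:
    branches: [main]
  workflow_dispatch:
concurrency:
  group: build-iso-${{ github.ref }}   # group name assumed
  cancel-in-progress: true
jobs:
  build-iso:
    runs-on: ubuntu-latest             # label assumed
    steps:
      - uses: actions/checkout@v4
      - run: ./iso/build.sh
      - uses: actions/upload-artifact@v4
        with:
          name: furtka-iso
          path: iso/out/*.iso          # output path assumed
          retention-days: 14
```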
This is what Robert asked for: push change → download ISO from the
Forgejo run → test without needing a laptop to build.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>