The post-reboot page told users to log in with the username and
password — but Furtka is browser-first; users aren't meant to touch
the TTY. Show the actual URL they should open instead, plus an mDNS
fallback hint.
Also pin the header SVG to width="24" height="24" so it can never
render at full viewport size, even if CSS somehow fails to load.
Belt-and-suspenders with the reboot-delay fix.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The reboot route fired systemctl reboot in parallel with returning
the rebooting HTML. The browser's follow-up request for /static/style.css
was racing the shutdown — often the server was already gone, leaving
the page unstyled (inline SVG rendered at full viewBox size, filling
the screen). A small sleep gives the browser time to pull CSS + icons
before the network drops.
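The delayed-reboot idea can be sketched like this (a minimal illustration, not the actual app.py code — the function name, delay value, and `dry_run` flag are all assumptions for the sketch):

```python
import subprocess
import threading

REBOOT_DELAY_S = 3  # assumed value: long enough for the browser to pull CSS + icons


def schedule_reboot(delay_s: float = REBOOT_DELAY_S, dry_run: bool = False) -> threading.Timer:
    """Return the rebooting page immediately, but defer the actual
    `systemctl reboot` so in-flight asset requests can complete."""
    def _reboot() -> None:
        if not dry_run:
            subprocess.run(["systemctl", "reboot"], check=False)

    timer = threading.Timer(delay_s, _reboot)
    timer.daemon = True
    timer.start()
    return timer
```

The route handler calls this and returns the rebooting HTML right away; the timer fires after the response (and its follow-up asset fetches) have gone out.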
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
systemd-boot is UEFI-only. Hardcoding it broke the install on
BIOS/legacy hosts with HardwareIncompatibilityError in
installer._add_systemd_bootloader. Detect via /sys/firmware/efi and
fall back to GRUB for BIOS.
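The detection itself is tiny — a sketch along these lines (the returned strings are assumptions; archinstall's accepted bootloader spellings vary by version):

```python
import os


def pick_bootloader() -> str:
    """systemd-boot requires UEFI. /sys/firmware/efi only exists when the
    kernel was booted via UEFI firmware, so its absence means BIOS/legacy
    and we fall back to GRUB."""
    if os.path.isdir("/sys/firmware/efi"):
        return "Systemd-boot"
    return "Grub"
```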
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
MENU TITLE in the syslinux box now reads "Furtka" instead of
"Arch Linux", and the per-entry HELP line at the bottom now says
"Furtka Live Installer" / "install Furtka" instead of the upstream
Arch strings. Same sed-not-overlay approach we already use for the
menu entry labels.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
archinstall runs `systemctl enable` over the `services` list *before*
custom_commands, so our own unit files (written in custom_commands)
didn't exist yet at enable-time and install aborted with
"Unit furtka-welcome.service does not exist". Keep `caddy` +
`avahi-daemon` in `services` since those are packaged units present
right after pacstrap; move `furtka-welcome` + `furtka-status.timer`
to a `systemctl enable` call appended to custom_commands so they fire
after the unit files land on disk.
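A hypothetical helper showing the split (names and shape are illustrative; the real payload lives in user_configuration.json):

```python
# Packaged units are present right after pacstrap, so archinstall's own
# `systemctl enable` pass over `services` works for them. Units written
# by custom_commands don't exist yet at that point, so they get an
# explicit enable appended to custom_commands instead.
PACKAGED_UNITS = ["caddy", "avahi-daemon"]
CUSTOM_UNITS = ["furtka-welcome.service", "furtka-status.timer"]


def split_service_enables(config: dict) -> dict:
    """Return a copy of `config` with packaged units in `services` and a
    trailing `systemctl enable` for our own units in custom_commands."""
    config = dict(config)
    config["services"] = list(PACKAGED_UNITS)
    config.setdefault("custom_commands", []).append(
        "systemctl enable " + " ".join(CUSTOM_UNITS)
    )
    return config
```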
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Installs caddy + avahi + nss-mdns on the target and writes a small
landing page, live status tiles (uptime / docker version / free disk
via furtka-status.timer), and a console welcome banner — all via
archinstall's custom_commands so the payload travels with the
user_configuration.json. After reboot `http://<hostname>.local`
serves a Furtka-branded page on :80 instead of the bare Arch login.
No Authentik / no app store yet — demo shell for the real post-
install work (Robert's area).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
A lot moved since the last docs sweep. Catching everything up in one
batch so a newcomer (or future us) reading the repo isn't lied to.
**README.md roadmap:**
- Walking-skeleton live ISO: upgraded from "screens 1-3 work
end-to-end" to "install runs to completion on a VM and the installed
system logs in and runs `docker ps` without sudo".
- 26.0-alpha release: dropped the "deferred" note — its blocker
(archinstall not completing) is gone; just needs a re-tag when we
like the installer copy.
- Added an explicit "ISO-build in CI" line for the new
`.forgejo/workflows/build-iso.yml`.
- Split the old "mDNS + local CA" item: mDNS is live (hostname baked
in, avahi/nss-mdns in the image), HTTPS via local CA still open.
- Noted post-install reboot button, progress bar, archinstall 4.x
schema work, console welcome, custom_commands docker group join in
the wizard milestone bullet.
**docs/runner-setup.md:**
- Full rewrite for the docker-outside-of-docker architecture we
actually run now (was still describing the DinD sidecar setup).
- Documents the `/data` symlink on the host that makes host-mode
`-v /data/…:/work` resolve — the non-obvious piece that took the
longest to nail down today.
- Describes the two runtime modes (`ubuntu-latest:docker://…` for CI,
`self-hosted:host` for build-iso) and why each exists.
- Adds the `upload-artifact@v3` pin note — v4+ fails on Forgejo with
`GHESNotSupportedError`.
**ops/forgejo-runner/compose.yml + config.yml:**
- Compose now matches what's actually running: DooD (no DinD sidecar),
runs as root so apk can install nodejs + docker-cli at startup,
/var/run/docker.sock bind-mounted.
- Config gets the three explicit label mappings and DooD
`docker_host` + `valid_volumes`.
**.forgejo/workflows/build-iso.yml:**
- Added `paths-ignore` for docs/website/*.md so doc-only commits don't
kick off 5-min ISO rebuilds. Code + ISO overlay changes still
trigger.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Forgejo Actions only speaks the GHES-compatible @actions/artifact
protocol; upload-artifact@v4+ insists on the newer API and fails with
`GHESNotSupportedError`. Pin to v3, which uses the old protocol that
Forgejo implements.
Good news: the ISO itself built end-to-end in ~5m on the runner
(DooD + /data symlink resolved the path-mismatch). Only the upload
failed, and this pins it.
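The pin itself is one line in the workflow — a sketch (step name and artifact path are assumptions; the artifact name matches the build-iso workflow):

```yaml
# v3 speaks the GHES-compatible @actions/artifact protocol that Forgejo
# implements; v4+ insists on the newer API and dies with GHESNotSupportedError.
- name: Upload ISO artifact
  uses: actions/upload-artifact@v3   # do not bump to v4+ on Forgejo
  with:
    name: furtka-iso
    path: iso/out/*.iso
```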
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Now that the runner uses docker-outside-of-docker, volume mounts in
`build.sh` (`docker run -v $REPO_ROOT:/work ...`) are interpreted by
host docker — so `$REPO_ROOT` must be a real host path. When the job
runs inside a job container, `$REPO_ROOT` is only valid in the job
container's filesystem namespace and host docker can't find it, hence
`bash: /work/iso/build.sh: No such file or directory`.
Fix: switch `runs-on` to `self-hosted`. Forgejo-runner exposes that
label out of the box and, with no matching container image mapping,
runs steps directly on the runner VM. Checkout writes to a real host
path; `docker run -v …` then mounts a path both the outer CLI and
host docker agree on.
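In workflow terms the fix looks roughly like this (checkout version is an assumption):

```yaml
jobs:
  build-iso:
    runs-on: self-hosted          # host mode: no job container, steps run on the runner VM
    steps:
      - uses: actions/checkout@v3 # writes the repo to a real host path
      - run: ./iso/build.sh       # inner `docker run -v` now mounts paths host docker can see
```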

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Forgejo-runner's valid_volumes already injects /var/run/docker.sock
into every job container, so the explicit `container.volumes` mount
in the workflow triggered 'Duplicate mount point' and the job never
started. Removed — DOCKER_HOST env is enough.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The DinD setup was the wrong tool here: forgejo-runner runs on host
docker, but it spawned jobs via the DinD sidecar — meaning jobs
were isolated inside DinD's own docker namespace and couldn't reach
`docker-in-docker` by hostname, and couldn't see the
`forgejo-runner_default` network (which only exists on host docker).
Switched the runner (compose.yml + data/config.yml) to talk directly
to host docker via `/var/run/docker.sock` and added it to the host
`docker` group (GID 988) so the non-root runner user can use the
socket. `valid_volumes` now whitelists the socket so job containers
can mount it too.
Workflow now mounts /var/run/docker.sock into the job container and
points DOCKER_HOST at that unix socket. `./iso/build.sh` then runs
its inner `docker run --privileged archlinux:latest` against the
host daemon — no nested docker.
Tradeoff: this is less isolated than DinD (jobs have full host docker
access — they could spawn arbitrary containers), but on a dedicated
single-user build VM the DooD simplification is worth it.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
forgejo-runner 6.4 filters `--network` out of `container.options`, so
the workflow-level override was silently ignored and the job kept
landing on a per-task network where `docker-in-docker` didn't resolve.
Fixed at the right level by editing the runner's `/data/config.yml`
(`container.network: "forgejo-runner_default"`) and restarting the
forgejo-runner container — every job now joins the shared network so
DOCKER_HOST=tcp://docker-in-docker:2375 just works.
Workflow trimmed back to only what's needed: DOCKER_HOST env pin. The
default runner image (catthehacker/ubuntu:act-latest) already has the
docker CLI.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- build-iso: the job container was on a per-job docker network, so
`docker-in-docker` (the DinD sidecar hostname on
`forgejo-runner_default`) didn't resolve. Pin the container to that
shared network via `container.options: --network forgejo-runner_default`.
catthehacker/ubuntu:act-latest already has the docker CLI, so drop
the apt-get step.
- ci.yml markdown-links: forgejo's action mirror at data.forgejo.org
doesn't carry `lycheeverse/lychee-action`, so `uses:` was 404ing
before the step could even run (rendering continue-on-error moot).
Fully-qualified GitHub URL bypasses the mirror.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three things are broken on origin/main as of 6114cb2, all found in one
red CI run:
- build-iso workflow couldn't reach docker. forgejo-runner's config
sets `docker_host: tcp://docker-in-docker:2375` but that env doesn't
propagate into job containers on `runs-on: ubuntu-latest`, and the
default job image has no docker CLI. Fix: pin `DOCKER_HOST` on the
job and apt-install `docker.io` before invoking `iso/build.sh`.
- Two tests asserted on the pre-4.x archinstall schema:
`creds["root_password"]` (now `!root-password`) and
`cfg["disk_config"]["device"]` / `cfg["users"]` (users moved to
creds; disk_config is now a full `default_layout` dict). Rewrote
the tests to reflect 4.x reality and monkeypatched `build_disk_config`
since its real body imports archinstall, which isn't on CI.
- Ruff flagged one line of `PROGRESS_PHASES` at 107 chars — collapsed
the column alignment. `ruff format` pulled in a couple of cosmetic
expansions in spawn_archinstall and the tests that had been drifting.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The "Drive list includes /dev/loop0 and /dev/sr0" rough-edge bullet
claimed the filter hadn't been added yet, but it has — `drives.py`'s
`parse_lsblk_output` skips everything with `TYPE != disk`, so loop
and rom devices never reach the picker. Tested.
Replaced with a note about the remaining real footgun: on bare-metal
installs, the USB stick the user booted from is `TYPE=disk` and would
show up alongside the actual install target, so a user could pick
their boot media by mistake. Not urgent while we test in VMs (the ISO
is a CD-ROM there, already filtered), but flagged so it's visible
when bare-metal testing starts.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds `.forgejo/workflows/build-iso.yml` that runs `./iso/build.sh` and
uploads the resulting ISO as a `furtka-iso` artifact (retained 14 days).
Triggers on `push: branches: [main]` and `workflow_dispatch` only —
feature branches don't pay the 15-20 min build cost. `concurrency`
cancels older runs of the same ref so only the most recent push
produces an artifact.
This is what Robert asked for: push change → download ISO from the
Forgejo run → test without needing a laptop to build.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two user-visible polish passes on top of the walking-skeleton install:
- Console welcome: live ISO's getty no longer shows the bare Arch prompt.
`/etc/hostname` is now `proksi` so avahi advertises `proksi.local`;
a systemd oneshot (`furtka-issue.service`, runs after
network-online.target) regenerates `/etc/issue` via
`/usr/local/bin/furtka-update-issue` to show both
`http://proksi.local:5000` (preferred, via mDNS — avahi and nss-mdns
are already in `packages.extra`) and the raw IP as a fallback for
networks where mDNS is flaky. `agetty --reload` nudges the already-
running login prompt to redraw.
- /install/log now polls a JSON endpoint (`/install/log.json`) every
3 s instead of meta-refresh, so expanding the collapsed log
`<details>` doesn't get eaten by the refresh. Noscript fallback
keeps the meta-refresh for JS-off users. When the install finishes,
the Done state shows a Reboot-now button that POSTs to
`/install/reboot` (guarded server-side to only reboot once status
is "done", so a panicked click mid-pacstrap can't brick the box).
A confirm() reminds the user to pull the USB / eject the ISO first.
End-to-end tested on a Proxmox VM 2026-04-14: boot → wizard →
archinstall → Done state → Reboot now → VM came back up → login as
created user → `docker ps` worked.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two tangled changes to the install flow, batched because they're both
small and hit app.py:
1. Phase-based progress bar on /install/log. parse_install_progress()
scans the archinstall log for ordered phase markers ("Wiping
partitions", "Installing packages: ['base'", "Adding bootloader",
"Installation completed without any errors", …) and exposes
percent + user-facing phase label + status (running/done/error).
Template wraps the raw log in a collapsed <details> so the default
view stays calm; the meta-refresh stops once status is terminal.
If archinstall changes its stdout wording the bar stalls on the
last recognized phase — the install itself is unaffected.
2. Drop "docker" from the user's groups in creds and do the
`gpasswd -a <user> docker` via custom_commands instead.
archinstall creates users before pacstrapping the extras list, so
the docker group doesn't exist at user-create time —
caused the second real install to crash with
`gpasswd: group 'docker' does not exist`. custom_commands runs
at the very end, after docker is installed. Username is validated
by USERNAME_RE so no shell injection.
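The parser can be sketched like this — marker strings are the ones quoted above; the percent split per phase and the label wording are assumptions:

```python
# (marker substring, user-facing label, percent once reached)
PROGRESS_PHASES = [
    ("Wiping partitions", "Partitioning disk", 10),
    ("Installing packages: ['base'", "Installing packages", 50),
    ("Adding bootloader", "Configuring bootloader", 80),
    ("Installation completed without any errors", "Done", 100),
]


def parse_install_progress(log_text: str) -> dict:
    """Scan the archinstall log for ordered phase markers and report
    percent + label + status. If archinstall changes its stdout wording,
    the bar just stalls on the last recognized phase."""
    percent, label = 0, "Starting"
    for marker, phase_label, phase_percent in PROGRESS_PHASES:
        if marker in log_text:
            percent, label = phase_percent, phase_label
    status = "done" if percent == 100 else "running"
    return {"percent": percent, "label": label, "status": status}
```

(The real version also surfaces an "error" status; this sketch keeps only the happy path.)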
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Walking-skeleton install on a real VM surfaced two archinstall 4.x
schema breakages that the wizard hit only at runtime:
- `use_entire_disk` was removed as a `config_type`. Now builds a full
`default_layout` disk_config by calling `suggest_single_disk_layout`
(forced ext4 + no separate /home, which bypasses its interactive
prompts) and serializing the returned DeviceModification.
- Credentials keys renamed to plaintext sentinels: `!root-password`
and `!password`. Users with neither `!password` nor `enc_password`
are silently dropped by `User.parse_arguments` — which is why the
first real install booted but wouldn't log in.
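A hedged sketch of a 4.x-shaped credentials payload — only `!root-password` and `!password` are confirmed by the breakage above; the `!users` key name and `sudo` field are assumptions for illustration:

```python
def build_credentials(root_password: str, username: str, password: str) -> dict:
    """Build the user_credentials.json payload with the 4.x plaintext
    sentinel keys. A user entry lacking both `!password` and
    `enc_password` gets silently dropped by User.parse_arguments."""
    return {
        "!root-password": root_password,
        "!users": [  # assumed key name
            {
                "username": username,
                "!password": password,
                "sudo": True,
            }
        ],
    }
```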
Also rolls in Robert's UX feedback quick-wins: `(Recommended)` prefix
on the default boot entry across GRUB/syslinux/systemd-boot, and
less-jargon hints on the step-1 hostname/username fields. iso/README
loses three stale bullets that described pre-15b876c behaviour.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wires the live-ISO wizard from "shows three screens" to "actually invokes
archinstall on the chosen disk", plus first-pass styling so it stops looking
like raw <h1>/<form>.
Webinstaller flow:
- S1 form gains username/password/password2/language with server-side
validation (hostname/username regex, ≥8 char password, match check).
- /install/run writes user_configuration.json + user_credentials.json
(creds 0600) to FURTKA_STATE_DIR (default /tmp/furtka), then execs
`archinstall --config … --creds … --silent` as a backgrounded subprocess.
- /install/log renders the subprocess output via meta-refresh polling.
- FURTKA_DRY_RUN=1 short-circuits the exec for testing.
- archinstall flag names verified against `archinstall --help` in an
archlinux container before committing.
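The S1 validation could look roughly like this (the regex patterns and error strings are assumptions; the ≥8-char rule and the match check come from the commit):

```python
import re

# Patterns are illustrative; the real app.py defines its own.
HOSTNAME_RE = re.compile(r"^[a-z][a-z0-9-]{0,62}$")
USERNAME_RE = re.compile(r"^[a-z_][a-z0-9_-]{0,31}$")


def validate_step1(hostname: str, username: str, password: str, password2: str) -> list:
    """Server-side checks for the S1 form. Returns a list of error
    strings; an empty list means the form is valid."""
    errors = []
    if not HOSTNAME_RE.match(hostname):
        errors.append("invalid hostname")
    if not USERNAME_RE.match(username):
        errors.append("invalid username")
    if len(password) < 8:
        errors.append("password too short")
    if password != password2:
        errors.append("passwords do not match")
    return errors
```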
Drive list:
- drives.py now filters via `lsblk … -o NAME,SIZE,TYPE` keeping TYPE=disk,
so the live ISO's own squashfs (loop) and CD-ROM (rom) stop appearing
as install targets.
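The filter reduces to a few lines — a sketch assuming the whitespace-columns form of `lsblk -o NAME,SIZE,TYPE` output (the real drives.py may parse differently):

```python
def parse_lsblk_output(raw: str) -> list:
    """Keep only TYPE=disk rows, so the live ISO's own squashfs (loop)
    and the CD-ROM (rom) never reach the drive picker."""
    devices = []
    for line in raw.strip().splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) < 3:
            continue
        name, size, dev_type = parts[0], parts[1], parts[2]
        if dev_type != "disk":
            continue
        devices.append({"name": name, "size": size})
    return devices
```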
Boot menu:
- iso/build.sh sed-rebrands "Arch Linux install medium" →
"Furtka Live Installer" across grub/, syslinux/, and efiboot/loader/
entries. Verified zero leftovers against the current releng profile.
Styling:
- static/style.css adopts the website's design tokens (palette,
typography, gate-mark accent), with light + dark via prefers-color-scheme.
- New base.html with header (gate SVG + FURTKA·INSTALLER wordmark + step
indicator) and footer; all install templates extend it.
- Drive picker uses radio cards with score chip; overview uses a summary
table and a destructive "wipe drive" button.
Tests: 17 pass (4 new in test_app.py covering validation + config builders,
2 new in test_drives.py covering the lsblk filter). Ruff clean.
README roadmap updated to mark these done and explicitly defer the
26.0-alpha release until archinstall actually completes end-to-end on a VM.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Hugo static site with an intentionally minimal single-page copy — English
default, German under /de/ — while the project stays pre-alpha. No CMS, no
external theme, no webfonts, no external requests. System-UI sans on a
paper-white / near-black palette with a deep crimson accent; a small
wicket-gate SVG as the sole brand mark.
Hosting: nginx on forge-runner-01 serves /var/www/furtka.org; the upstream
openresty proxy terminates TLS so the VM itself only speaks plain HTTP.
Deploy is ./website/deploy.sh (rsync + remote hugo --minify). One-time VM
bootstrap in ops/nginx/setup-vm.sh.
These two cost us real time tonight — SeaBIOS failing at ldlinux.c32,
then OVMF rejecting our unsigned GRUB with "Access Denied" until we
disabled Secure Boot in the firmware setup menu. Also flagged the
silent browser-upload truncation and the two known drive-list bugs
surfaced during the first live boot.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
iso/build.sh runs mkarchiso inside a privileged archlinux container,
overlays our customizations onto Arch's stock releng profile
(systemd unit that launches Flask on 0.0.0.0:5000, the webinstaller
under /opt/furtka, extra packages for python/flask/avahi), and drops
a hybrid BIOS/UEFI ISO in iso/out/.
Verified end to end: Proxmox VM (OVMF, Secure Boot off) boots the ISO,
DHCP's onto the LAN, and serves screens 1-3 of the existing wizard at
http://<vm-ip>:5000/install/step1. This is the first point at which
Furtka is something you can run instead of something you can read about.
Two known drive-list bugs surfaced while testing (/dev/loop0 and
/dev/sr0 appear as install targets) — captured in the README roadmap.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
furtka.org registered via Strato 2026-04-13, so the working title is
retired. Python package, managed-gateway NS hostnames, and repo URLs all
follow. The CHANGELOG "Unreleased" section documents the switch so the
history is preserved at the 26.0-alpha → next-release boundary.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Bootstrap script + compose + config checked in under ops/forgejo-runner/
so standing up a second runner is a scripted, repeatable setup.
runner-setup.md corrects the register label format
(<name>:docker://<image>, not bare names) and documents the Ubuntu
systemd-resolved DNS gotcha.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mark release-process + CI work complete. Add two next-session TODOs
for Daniel: stand up the forgejo-runner (without which CI queues
forever) and publish the 26.0-alpha Forgejo Release.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- CHANGELOG.md: Keep-a-Changelog format, [26.0-alpha] entry covering
everything shipped so far (installer webapp, drive scoring, base
archinstall config, wireframes, competitor analysis, wizard flow spec)
- CONTRIBUTING.md: dev setup, conventional commit format, code style
- RELEASING.md: calendar versioning rules (YY.N-stage, no "v" prefix)
and the release workflow (bump changelog, commit, tag, push, create
Forgejo Release)
- docs/runner-setup.md: install + register a forgejo-runner so the
upcoming CI workflow actually executes
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Targeted edits reflecting findings from docs/competitors.md:
- New "Recent signals" subsection under Landscape: Umbrel license
complaints, Umbrel's 4+ year HTTPS refusal (#546), CasaOS
maintenance mode
- "Where we differentiate" bullet 4 replaced: "Arch base (rolling
release)" -> "HTTPS + AGPL from day one" — the actual counter-
positioning shots vs Umbrel per the analysis
- "Gap we're targeting" tightened to include HTTPS-by-default
- Key Decisions table: added rows for locked tech picks (Caddy,
Authentik, NS delegation, local CA) with link to wizard-flow.md
- Roadmap: marked competitor analysis + wizard flow spec complete,
reordered so bootable image is clearly the next blocker, added
Caddy/Authentik bootstrap and managed gateway infra items
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
8-screen first-boot installer spec extending Robert's 4-screen
wireframe with the YunoHost-style post-install pattern (domain,
SSL, diagnostic, confirm). Covers:
- Entry point via https://proksi.local with local CA cert install
- Screens S1-S8, each mapped to archinstall config fields or side
effects (SSL cert issuance, DNS delegation, diagnostic gates)
- Data model mapping wizard fields to user_configuration.json +
user_credentials.json
- Locked tech picks with rejected alternatives: Caddy (reverse
proxy), Authentik (SSO), NS delegation (managed gateway DNS),
local CA (HTTPS on proksi.local)
- Open questions for Robert: Backend on/off meaning, local CA vs
Tailscale ACME, UI framework choice, language list, S2 auto-setup
branch behavior
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Online research (no hands-on testing yet) covering install flow,
hardware detection, app store UX, reverse proxy/SSL, and common
user complaints for the top 3 competitors.
Key findings: device-aware install wizard and managed gateway/DNS
are both uncontested. Umbrel's PolyForm license and 4+ year refusal
of HTTPS on local UI are direct counter-positioning opportunities.
CasaOS is in maintenance mode (IceWhale moved to ZimaOS).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Untrack archinstall/user_credentials.json; ship .example file and
add a root .gitignore so real creds stay out of git
- Fix SyntaxError in webinstaller/app.py (malformed "language" entry)
- Drop import-time lshw call in hardware.py
- Consolidate driveval/ and webinstaller/hardware.py into a single
webinstaller/drives.py with a list_scored_devices() API; step 2
now renders a proper <select> with name/size/score
- Replace dependancies.txt typo with webinstaller/requirements.txt
(Flask only — psutil was imported but unused)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Daniel: test CasaOS/Umbrel/YunoHost on Proxmox.
Robert: get minimal bootable Arch image with Docker + installer webapp.
Robert's resource: awesome-docker-compose.com for later app store defaults.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CasaOS, Umbrel, Runtipi, HomeDock OS, Cosmos Server, YunoHost,
TurnKey Linux — plus analysis of where Homebase differentiates
(installer wizard, auto setup, gateway-as-a-service, Arch base).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
4-screen wizard flow: welcome/account setup, drive selection with
auto-setup, per-device purpose assignment, network/VPN config.
Accessed via http://proksi.local from a headless live USB boot.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Reflects actual project state from Robert/Daniel discussions — Arch already
running on Proxmox, webapp prototype working, and long-term Proxmox-style
business model.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>