Commit graph

67 commits

df08938d7e refactor(webinstaller): extract inline payload constants to furtka/assets/
Slice 1a of the self-update story. Every HTML/CSS/shell-script/systemd-
unit payload that used to live as a triple-quoted string constant inside
webinstaller/app.py now lives as a real file under furtka/assets/:

  furtka/assets/Caddyfile
  furtka/assets/VERSION                       (new — matches pyproject.toml)
  furtka/assets/www/{index.html, settings/index.html, style.css, status.json}
  furtka/assets/bin/{furtka-status, furtka-welcome}
  furtka/assets/systemd/furtka-{api,reconcile,status,welcome}.service
  furtka/assets/systemd/furtka-status.timer

The installer now pulls each file from disk via _read_asset(). Byte-for-
byte identical output at install time — a fresh-ISO install should land
the same files in the same places with the same contents, verified by
tests/test_webinstaller_assets.py which reconstructs each base64 blob
and asserts equality against the on-disk asset.

iso/build.sh also copies furtka/assets/ next to the webinstaller source
at /opt/furtka/assets on the live ISO so _resolve_assets_dir() finds
them with a "next to me" lookup. In dev the same function walks two
levels up to the repo copy, so pytest works without any env vars.
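
A minimal sketch of that lookup, assuming the candidate order
(function names are the ones this message introduces):

  # hypothetical sketch of the asset lookup
  from pathlib import Path

  def _resolve_assets_dir() -> Path:
      here = Path(__file__).resolve().parent
      candidates = (
          here / "assets",                           # /opt/furtka/assets on the ISO
          here.parent.parent / "furtka" / "assets",  # repo copy, two levels up (dev)
      )
      for candidate in candidates:
          if candidate.is_dir():
              return candidate
      raise FileNotFoundError("furtka assets directory not found")

  def _read_asset(relpath: str) -> str:
      return (_resolve_assets_dir() / relpath).read_text(encoding="utf-8")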

furtka-status.sh drops the /etc/furtka/version TODO — it now reads
/opt/furtka/VERSION directly, which Slice 1b will upgrade to
/opt/furtka/current/VERSION once the symlink layout lands.

_FURTKA_WRAPPER_SH (the 5-line /usr/local/bin/furtka shim) stays inline;
it's tiny and not asset-shaped.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 13:08:53 +02:00
9bfbf209b6 ops(forgejo): whitelist owner in branch protection push rule
All checks were successful
Build ISO / build-iso (push) Successful in 17m6s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 35s
CI / validate-json (push) Successful in 22s
CI / markdown-links (push) Successful in 14s
Earlier config was enable_push=false + apply_to_admins=false, which I
expected to let the repo owner push directly. Empirically it blocked
owner pushes too — apply_to_admins governs approval-rule bypass, not
push-rule bypass. Switch to enable_push=true with enable_push_whitelist
and a single entry so the owner has explicit, auditable direct-push
access while casual commits still can't land without being whitelisted
or going through a PR.
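
For reference, the shape of the resulting rule, written as a Python
dict of the JSON payload. enable_push and enable_push_whitelist are
from this message; the whitelist key name is an assumption to verify
against the Forgejo API docs:

  rule = {
      "branch_name": "main",
      "enable_push": True,
      "enable_push_whitelist": True,
      "push_whitelist_usernames": ["<owner>"],  # the single, auditable entry
  }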

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 13:02:25 +02:00
e6f52ada5c feat(furtka): per-app image updates via POST /api/apps/<name>/update
Phase 1 of updates. User clicks Update on an installed app row →
the resource manager runs `docker compose pull`, compares the
running container's image ID to the just-pulled local image ID
per service, and only runs `docker compose up -d` if something
actually changed. Response is {updated: bool, services: [{service,
from, to, tag}]} so the UI can tell the user what happened.

Deliberately small: no pinning, no background checks, no "update
all" button, no version/changelog display. The update flow doesn't
mutate the compose file — it just acts on what's already there.
Reinstall still serves as rollback.

New dockerops helpers: compose_pull, compose_image_tags (parses
`docker compose config --format json`), local_image_id (via
`docker image inspect`), running_container_image_id (via compose
ps --quiet + docker inspect). Six new tests cover the endpoint:
not installed, no changes, changes applied, service not running,
docker pull error, and the HTTP route end-to-end.
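
Roughly, the per-service compare the endpoint runs, sketched from the
helpers above (signatures are assumptions; compose_up is the slice-2
primitive):

  def update_app(name: str) -> dict:
      compose_pull(name)                               # docker compose pull
      changed = []
      for service, tag in compose_image_tags(name).items():
          running = running_container_image_id(name, service)
          local = local_image_id(tag)                  # ID of the just-pulled image
          if running is not None and running != local:
              changed.append({"service": service, "from": running,
                              "to": local, "tag": tag})
      if changed:
          compose_up(name)                             # docker compose up -d
      return {"updated": bool(changed), "services": changed}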

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:45:47 +02:00
4e4dc1001f feat(ui): /settings page + nav link on every page
Slice 5 of the on-box UI uplevel. Adds a third page at /settings/
served by Caddy from /srv/furtka/www/settings/index.html. Three
groups of content:

  - About this box (read-only): hostname, IP, Furtka version,
    kernel, RAM, Docker, uptime — all consumed from status.json
    via the same 30s refresh loop the landing uses.
  - Appearance: theme follows prefers-color-scheme, language is
    English for v1. Shown read-only.
  - Coming next: linked roadmap chips (Reboot / Shut down / Change
    hostname / Backup / User accounts / Remote access), each
    jumping to the planned section on furtka.org. Implementing any
    of these graduates it in-place.

Nav link to Settings also added to the landing page and /apps so
the three pages share one persistent navigation (Jakob's Law).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:29:43 +02:00
c7ca6bfbb1 feat(ui): landing page redesign — apps grid + roadmap placeholders
Slice 4 of the on-box UI uplevel. The landing page is now the peak-end
first impression after install: welcome + hostname chip, a "Your apps"
tile grid consuming /api/apps (with the real icon and an app-specific
primary action — fileshare gets smb://<host>.local/files, everything
else falls back to Manage →), the existing system-status tiles kept
intact, and a subtle "Coming next" row of text-only links that jump to
the planned-features section on furtka.org. No dead tiles.

The status script now also writes ip_primary, kernel, ram_total and a
furtka_version read from /etc/furtka/version (TODO: pin that file at
install time; for now it reports "dev"). The settings page will
consume those in slice 5.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:27:56 +02:00
358444839c feat(fileshare): new icon — folder with network-broadcast motif
Replaces the generic stroke-only folder with a flat two-layer
folder (depth via a 28%-opacity back plate) plus two signal arcs
and a node dot radiating from the top-right corner. currentColor
throughout so the .app-icon tint applies cleanly in both themes.

Manifest version bumped to 0.1.1 so the resource manager sees the
change as a distinct release.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:25:04 +02:00
e8ed224eea feat(ui): inline app icons into /api/apps JSON, render on /apps
Slice 2 of the on-box UI uplevel. The resource-manager API already
returned the icon filename in each manifest summary, but the /apps
page never rendered it — and there was no endpoint to fetch the
file either. This inlines the SVG content directly into the JSON
response (one round-trip, Doherty Threshold) and injects it into
each app card's new icon slot on the left.

_read_icon_svg defends against the obvious SVG-XSS vectors (script
tags, on* handlers, javascript: URLs) and rejects anything over
16 KB. The trust model stays what it was — bundled apps are built
into the ISO, the install API has no auth — but the filter keeps
accidents from becoming exploits if an icon gets swapped upstream.
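
A sketch of that filter; the exact patterns and return convention are
assumptions:

  # hypothetical sketch of the SVG sanity filter
  import re
  from pathlib import Path

  _MAX_ICON_BYTES = 16 * 1024
  _FORBIDDEN = re.compile(r"<\s*script|\bon\w+\s*=|javascript:", re.IGNORECASE)

  def _read_icon_svg(path: Path) -> str | None:
      try:
          raw = path.read_bytes()
      except OSError:
          return None                     # missing icon: caller falls back
      if len(raw) > _MAX_ICON_BYTES:
          return None
      text = raw.decode("utf-8", errors="replace")
      if _FORBIDDEN.search(text):
          return None                     # script tags, on* handlers, javascript:
      return text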

/apps now shows a generic folder fallback for any app without a
parseable icon.svg; slice 3 ships the real fileshare artwork.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:23:41 +02:00
a6878f5d23 feat(ui): shared /style.css + top nav across landing and /apps
Slice 1 of the on-box UI uplevel. Consolidates the two duplicated
stylesheets (landing's webinstaller/app.py and /apps's inline block
in furtka/api.py) into one sheet served by Caddy at /style.css.
Expands the token set (spacing, radii, shadows, focus ring, warn-fg,
accent-soft, card-hover), adds a prefers-color-scheme light theme,
and introduces shared primitives for later slices: .nav, .chip,
.card, .kv, .coming, .grid-apps, .app-tile, .app-icon.

Also adds a persistent top nav (Home / Apps) to both pages — Jakob's
Law, so users always have a way back — and collapses the /apps "Last
action" log behind a details disclosure so it stops dominating the
page. Format fallout on drives.py picked up by ruff.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:19:54 +02:00
0f5e6bb950 ops(forgejo): apply-branch-protection script + main-branch rule
Codifies the branch protection applied to main on 2026-04-16: no
direct pushes, required checks = CI / {lint,test,validate-json}*,
zero approvals (2-person team), admin bypass left on for emergencies.

Script is idempotent (create-or-patch) and reads its token from
$FORGEJO_TOKEN or the local git remote URL as a fallback, so a
clean re-run just reconciles the rule with branch-protection.json.
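
The create-or-patch shape, sketched in Python for illustration (the
script's actual language and exact routes may differ; the
branch_protections endpoints are the Gitea/Forgejo API as I recall
it, and token_from_remote() is a hypothetical stand-in for the
remote-URL fallback):

  import json, os, urllib.error, urllib.request

  def apply_rule(base: str, repo: str, rule: dict) -> None:
      token = os.environ.get("FORGEJO_TOKEN") or token_from_remote()
      headers = {"Authorization": f"token {token}",
                 "Content-Type": "application/json"}
      body = json.dumps(rule).encode()
      one = f"{base}/api/v1/repos/{repo}/branch_protections/{rule['branch_name']}"
      try:   # rule exists: patch it into the desired state
          urllib.request.urlopen(urllib.request.Request(
              one, data=body, headers=headers, method="PATCH"))
      except urllib.error.HTTPError as err:
          if err.code != 404:
              raise
          urllib.request.urlopen(urllib.request.Request(   # rule absent: create
              f"{base}/api/v1/repos/{repo}/branch_protections",
              data=body, headers=headers, method="POST"))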

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:02:10 +02:00
8498dd576f fix(furtka): rename "Einstellungen" button to "Settings"
Leftover German string from prototyping — the rest of the apps UI is
English, so it stood out as a mixed-language bug during 2026-04-16
VM testing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:02:03 +02:00
3b61931936 feat(webinstaller): plain-English drive picker on step 2
Replace the numeric "score N" pill with a Recommended badge on the
auto-selected drive plus size/type/health chips. The score itself
stays as the sort key; users just never see the raw number.

Why: Robert's 2026-04-14 wizard UX direction — less jargon, explain
technical terms, recommend defaults. A bare "score 35" gave users no
reason why one drive was picked.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:01:57 +02:00
70001f54fd docs(resource-manager): document settings schema + new endpoints
All checks were successful
CI / lint (push) Successful in 28s
CI / test (push) Successful in 34s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 12s
Reflects commit 61c7ee2 — manifest gains `settings` + `description_long`,
API gains `GET/POST /api/apps/<name>/settings`, install/reinstall accepts
a `settings` object. Drops the stale "in-UI .env editor" from the
out-of-scope list since that's what just shipped.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 13:07:23 +02:00
61c7ee232c feat(furtka): in-browser app settings + ISO recovery-path fixes
Some checks are pending
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Successful in 16m54s
End-to-end VM test today (2026-04-15) validated the resource manager
golden path but exposed five things blocking "your dad could use this":
no way to configure an app without SSH+editor, no openssh, no nano,
keyboard stuck on US, and a samba healthcheck that cried wolf.

Resource-manager side:
- Manifest schema gains optional `settings` list (name/label/
  description/type/required/default) and `description_long`; the
  shape is sketched after this list.
- Bundled-app install opens a form rendered from the manifest;
  submit carries values to `POST /api/apps/install` which writes
  them into the new app's `.env` before the placeholder check runs.
- Installed apps grow an "Einstellungen" button that merges a
  partial settings dict into the existing `.env` (unsubmitted
  password fields = keep current), then reconciles to restart.
- New endpoints: `GET/POST /api/apps/<name>/settings`. Passwords
  are never returned to the client.
- Fileshare manifest declares its SMB_USER/SMB_PASSWORD settings
  in German with help text.
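
The manifest shape referenced above, as a Python-literal fragment
(field names are from this message; the concrete values and type
strings are illustrative only):

  manifest_fragment = {
      "description_long": "Share a folder over SMB on your local network.",
      "settings": [
          {"name": "SMB_USER", "label": "Benutzername", "type": "string",
           "required": True, "default": "furtka",
           "description": "Account used to connect to the share."},
          {"name": "SMB_PASSWORD", "label": "Passwort", "type": "password",
           "required": True,
           "description": "Never returned to the client by the API."},
      ],
  }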

ISO side (so the next build is actually usable on the TTY):
- Add `openssh` to the package list + `sshd` to enabled services.
  `archinstall: true` in 4.x did not install openssh-server.
- Add `nano` — `vim` was the only editor pitched at users, which
  is brutal for first-timers (and was missing anyway).
- Keyboard layout follows the installer language (`de→de`, `pl→pl`,
  `en→us`) instead of hardcoded `us`. A German user couldn't type
  `/` or `-` at the console, making even `sudo nano` painful.
- Disable the dperson/samba healthcheck in the compose override —
  it timed out on every probe while the share itself worked fine.

19 new tests (manifest parsing + settings-merge + two new API
endpoints over live HTTP); 94 total, format + lint clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 13:00:02 +02:00
0af2134b7e docs(website): plain-language landing page with shipped/planned sections
All checks were successful
CI / lint (push) Successful in 26s
CI / test (push) Successful in 32s
CI / validate-json (push) Successful in 25s
CI / markdown-links (push) Successful in 12s
Speak to non-technical visitors: drop "x86", swap "domain" for "name on
the network", and list concrete upcoming apps (photos, files, smart home,
game streaming, media) so the page says something real instead of just
"it's early".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 12:24:58 +02:00
a90582a3a3 docs: refresh resource-manager.md to reflect shipped v1
All checks were successful
CI / lint (push) Successful in 26s
CI / test (push) Successful in 32s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 12s
Open-questions section is gone — all seven were answered live in
session and are now codified in the furtka/ package. Doc now
describes the actual contract (manifest schema, lifecycle, code
map) instead of a planning scaffold. Out-of-scope list is preserved
so future contributors don't propose things that were deliberately
deferred.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 10:31:28 +02:00
c6ed7a8159 feat(furtka): web UI + HTTP API for app install/remove
Some checks are pending
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Successful in 16m52s
Adds the management UI Daniel asked for end-of-session. Goes beyond
the original MVP scope (plan punted UI to v2) but the architecture
already supports it cleanly: stdlib http.server only, no new deps.

- furtka.api: minimal HTTP server. GET / serves a self-contained
  HTML page (dark-mode card list, vanilla JS, no build step). GET
  /api/apps + /api/bundled return JSON. POST /api/apps/{install,
  remove} accept {"name": "..."} and call the same installer +
  reconciler the CLI uses, so the placeholder-secret refusal and
  per-app reconcile isolation flow through unchanged. The handler
  shape is sketched after this list.
- furtka.cli: new `furtka serve` subcommand. Imports api lazily so
  `furtka app list` / `reconcile` startup stays zero-cost.
- webinstaller: new furtka-api.service (Type=simple, restart on
  failure, after reconcile). Caddyfile gets two new handle blocks
  to reverse-proxy /api and /apps to localhost:7000. Landing page's
  "App store coming soon" tile becomes a real "Manage installed apps
  →" link to /apps.
- Bound to 127.0.0.1 by default; Caddy makes it LAN-reachable. The
  UI shouts a "no auth, anyone on your LAN can install/remove" warning
  at the top — Authentik integration is the proper fix later.
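
The handler shape, sketched (list_installed/list_bundled are the
helpers named in the tests note below; dispatch details are
assumptions):

  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer

  class FurtkaHandler(BaseHTTPRequestHandler):
      def _json(self, obj, status=200):
          body = json.dumps(obj).encode()
          self.send_response(status)
          self.send_header("Content-Type", "application/json")
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

      def do_GET(self):
          if self.path == "/api/apps":
              self._json(list_installed())      # same helpers the CLI uses
          elif self.path == "/api/bundled":
              self._json(list_bundled())
          else:
              self.send_error(404)              # GET / (the HTML page) elided

      def do_POST(self):
          size = int(self.headers.get("Content-Length", 0))
          payload = json.loads(self.rfile.read(size) or b"{}")
          # dispatch /api/apps/install and /api/apps/remove on payload["name"]
          self._json({"ok": True, "name": payload.get("name")})

  HTTPServer(("127.0.0.1", 7000), FurtkaHandler).serve_forever()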

UX wrinkle worth noting: a placeholder-rejected install leaves the
app in /var/lib/furtka/apps/<name>/ (so the user can edit .env in
place). To re-trigger after editing, the Installed list now shows
both Reinstall and Remove buttons.

10 new tests: helper functions (list_installed, list_bundled with
hide-already-installed), install/remove endpoints with the no_docker
fixture, and two real-socket urllib smoke tests that boot the actual
HTTPServer on an ephemeral port and round-trip GET / + POST.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 10:23:46 +02:00
ff68dd5ae6 fix(furtka): audit follow-ups — placeholder secrets, isolate reconcile, .env perms
Addresses the four issues raised in the slice-3 audit before pushing.

#1 (critical) — refuse to finish install when .env still contains
placeholder secrets like "changeme". Without this, `furtka app install
fileshare` would happily start an SMB server with a publicly-known
password — the kind of default that ends up screenshotted on Hacker
News. PLACEHOLDER_SECRETS lives in installer.py; new tests cover
placeholder rejection, post-edit retry, and quoted values.
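
The check itself is a small scan over the parsed .env; a sketch (set
members beyond "changeme" are assumptions):

  PLACEHOLDER_SECRETS = {"changeme", "change-me", "password"}  # illustrative

  def placeholder_keys(env_text: str) -> list[str]:
      offending = []
      for line in env_text.splitlines():
          if line.lstrip().startswith("#") or "=" not in line:
              continue
          key, _, value = line.partition("=")
          if value.strip().strip("'\"").lower() in PLACEHOLDER_SECRETS:
              offending.append(key.strip())   # quoted values covered too
      return offending                        # non-empty: refuse to finish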

#3 — reconciler now catches DockerError / FileNotFoundError / OSError
per-app instead of letting a single broken app abort the whole
boot-scan. Errors get surfaced as Action(kind="error", …) and
has_errors() drives the CLI exit code so systemd still shows red,
but the other apps actually got reconciled.

#4 — chmod 0600 on .env after install so app secrets aren't world-
readable on multi-user boxes. Done before the placeholder check so
even the half-installed state is safe.

#5 — load_manifest() got an optional expected_name. The scanner
passes the folder name (filesystem source-of-truth contract);
installer leaves it None so `furtka app install /tmp/some-fork/`
works regardless of what the source folder is named.

#2 — TODO comment on dperson/samba:latest. Switching to a digest
needs a verified upstream release; left for the test-day pin.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 10:17:00 +02:00
9f4e514d8a feat(furtka): ship resource manager + fileshare app on the ISO — slice 3
Closes the loop end-to-end. The ISO build now bundles the furtka/
package and the apps/ tree as a tarball; webinstaller hands it to
archinstall via custom_commands; the installed system gets the
`furtka` CLI, a boot-scan systemd unit, and the fileshare app
ready to install.

- iso/build.sh: stages furtka/ + apps/ into a tmpdir, drops
  __pycache__, tarballs into airootfs/opt/furtka-resource-manager.tar.gz.
- webinstaller/app.py: _resource_manager_commands() reads the staged
  payload at request-time, base64-encodes it into a single untar
  command, and writes /usr/local/bin/furtka (PYTHONPATH wrapper, no
  pip needed) + furtka-reconcile.service. Python is pacstrapped so
  the wrapper has an interpreter; the packaging is sketched after
  this list.
- Graceful degradation: dev box / CI without an ISO build has no
  payload tarball, so those commands are skipped (logs a warning).
  Tests cover both branches.
- furtka-reconcile.service is conditionally enabled only if the unit
  file actually landed — keeps the systemctl enable line green when
  the payload was absent.
- apps/fileshare/: first real Furtka app. dperson/samba on host
  network, single named volume, .env.example with placeholder creds.
  Manifest matches the schema locked in slice 1.
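
A sketch of that request-time packaging (the tarball path is from
this message; the command shape and untar target are assumptions):

  import base64, logging
  from pathlib import Path

  log = logging.getLogger(__name__)
  PAYLOAD = Path("/opt/furtka-resource-manager.tar.gz")

  def _resource_manager_commands() -> list[str]:
      if not PAYLOAD.exists():              # dev box / CI without an ISO build
          log.warning("resource-manager payload missing; skipping")
          return []
      blob = base64.b64encode(PAYLOAD.read_bytes()).decode()
      return [
          f"echo {blob} | base64 -d | tar -xz -C /opt",
          # plus writing /usr/local/bin/furtka and furtka-reconcile.service
      ]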

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 10:06:01 +02:00
7b96a25f5b feat(furtka): reconciler + install/remove — slice 2
Fills in the act-on-it half of the resource manager. Reconciler walks
the scanner output and brings docker into the desired state: ensures
each manifest-declared volume exists (idempotent), then runs
docker compose up -d for the project. install/remove on the CLI work
end-to-end against a real /var/lib/furtka/apps/ tree.

- furtka.dockerops: thin subprocess wrapper. Volume + compose
  primitives that other modules call. `_run` raises DockerError with
  the actual stderr so failures are diagnosable (sketched after this
  list).
- furtka.reconciler: builds an ordered Action list (volumes then
  compose_up per app), executes unless dry-run. Broken manifests
  produce a "skip" action, the rest of the apps still get reconciled.
- furtka.installer: copy-from-source with two non-obvious rules —
  user .env is preserved across upgrade installs, and a missing .env
  is bootstrapped from .env.example so compose has values to
  substitute on first install. Bundled-app lookup falls back to
  /opt/furtka/apps/<name>/ when the source arg isn't a path.
- furtka.cli: app install/remove wired up. remove() ignores compose
  down failures so a botched compose doesn't trap users with an
  un-removable folder.
- 15 new tests using monkeypatch'd dockerops so the suite still runs
  without docker installed. Covers reconcile dry-run, multi-volume
  apps, broken-manifest skip behavior, .env preservation, bundled-name
  resolution, and remove edge cases.
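
The wrapper and two primitives, sketched (real signatures may
differ):

  import subprocess

  class DockerError(RuntimeError):
      pass

  def _run(args: list[str], cwd: str | None = None) -> str:
      proc = subprocess.run(args, cwd=cwd, capture_output=True, text=True)
      if proc.returncode != 0:
          # carry docker's own stderr so failures are diagnosable
          raise DockerError(f"{' '.join(args)}: {proc.stderr.strip()}")
      return proc.stdout

  def ensure_volume(name: str) -> None:
      _run(["docker", "volume", "create", name])   # no-op if it already exists

  def compose_up(app_dir: str) -> None:
      _run(["docker", "compose", "up", "-d"], cwd=app_dir)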

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 10:02:00 +02:00
cfc4c0b9c1 feat(furtka): resource-manager skeleton — manifest, scanner, CLI
Slice 1 of the Resource Manager (see docs/resource-manager.md +
plan in ~/.claude/plans/stateful-juggling-pike.md). Lays down the
read-only half: a JSON manifest schema with namespacing, a scanner
that walks /var/lib/furtka/apps/, and a `furtka` CLI with
`app list` and `reconcile --dry-run`. Reconciler / volume creation
/ docker compose calls land in the next slice.

- furtka.manifest: dataclass + load_manifest with required-field +
  type validation. volume_name() injects the furtka_<app>_<vol>
  namespace so apps can each declare a "data" volume without
  colliding (sketched after this list).
- furtka.scanner: tolerant — broken manifest = ScanResult with error,
  not an exception. Lets reconcile log + skip rather than abort.
- furtka.cli: text + --json output. argparse with `app list` and
  `reconcile --dry-run`. main() returns int for clean exit codes.
- furtka.paths: FURTKA_APPS_DIR env override so tests don't need root.
- 19 new tests covering valid manifests, every validation branch,
  scanner edge cases (missing root, broken manifest, sort order), and
  the CLI subcommands.
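
The namespacing rule, sketched:

  def volume_name(app: str, declared: str) -> str:
      # "data" declared by app "fileshare" becomes "furtka_fileshare_data"
      return f"furtka_{app}_{declared}"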

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:59:41 +02:00
28e82bfccb fix(webinstaller): point users at http://<hostname>.local after reboot
All checks were successful
Build ISO / build-iso (push) Successful in 16m54s
CI / lint (push) Successful in 25s
CI / test (push) Successful in 32s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 13s
The post-reboot page told users to log in with the username and
password — but Furtka is browser-first; users aren't meant to touch
the TTY. Show the actual URL they should open instead, plus an mDNS
fallback hint.

Also pin the header SVG to width="24" height="24" so it can never
render at full viewport size, even if CSS somehow fails to load.
Belt-and-suspenders with the reboot-delay fix.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:27:49 +02:00
8b00873da2 fix(webinstaller): delay reboot so the rebooting page can fetch CSS
The reboot route fired systemctl reboot in parallel with returning
the rebooting HTML. The browser's follow-up request for /static/style.css
was racing the shutdown — often the server was already gone, leaving
the page unstyled (inline SVG rendered at full viewBox size, filling
the screen). A small sleep gives the browser time to pull CSS + icons
before the network drops.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:24:05 +02:00
1d145f7f0c fix: pick bootloader based on firmware (BIOS → GRUB, UEFI → systemd-boot)
Some checks failed
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Has been cancelled
systemd-boot is UEFI-only. Hardcoding it broke the install on
BIOS/legacy hosts with HardwareIncompatibilityError in
installer._add_systemd_bootloader. Detect via /sys/firmware/efi and
fall back to GRUB for BIOS.
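
The detection, sketched (the exact bootloader strings archinstall
accepts are assumptions):

  import os

  def pick_bootloader() -> str:
      # UEFI firmware exposes /sys/firmware/efi; absent on BIOS/legacy boots
      return "Systemd-boot" if os.path.isdir("/sys/firmware/efi") else "Grub"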

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:11:58 +02:00
54dd88d4c6 iso: brand syslinux menu header and BIOS help text
Some checks failed
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Has been cancelled
MENU TITLE in the syslinux box now reads "Furtka" instead of
"Arch Linux", and the per-entry HELP line at the bottom speaks of
"Furtka Live Installer" / "install Furtka" instead of the upstream
Arch strings. Same sed-not-overlay approach we already use for the
menu entry labels.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:10:20 +02:00
3909ee781b style: ruff format webinstaller/app.py
All checks were successful
Build ISO / build-iso (push) Successful in 20m45s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 32s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 13s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 08:46:58 +02:00
8c56c036cb fix: enable Furtka units inside custom_commands, not services list
Some checks failed
Build ISO / build-iso (push) Successful in 16m44s
CI / lint (push) Failing after 25s
CI / test (push) Successful in 36s
CI / validate-json (push) Successful in 22s
CI / markdown-links (push) Successful in 29s
archinstall runs `systemctl enable` over the `services` list *before*
custom_commands, so our own unit files (written in custom_commands)
didn't exist yet at enable-time and install aborted with
"Unit furtka-welcome.service does not exist". Keep `caddy` +
`avahi-daemon` in `services` since those are packaged units present
right after pacstrap; move `furtka-welcome` + `furtka-status.timer`
to a `systemctl enable` call appended to custom_commands so they fire
after the unit files land on disk.
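
The resulting split in user_configuration.json, sketched as a Python
dict (unit names are from this message; surrounding keys elided):

  config_fragment = {
      "services": ["caddy", "avahi-daemon"],  # packaged units, exist post-pacstrap
      "custom_commands": [
          # ... commands that write the furtka unit files come first ...
          "systemctl enable furtka-welcome.service furtka-status.timer",
      ],
  }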

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 20:34:34 +02:00
8ed1d82fd3 feat: post-install bootstrap — land in Furtka after reboot
Some checks failed
Build ISO / build-iso (push) Successful in 16m47s
CI / lint (push) Failing after 32s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 13s
Installs caddy + avahi + nss-mdns on the target and writes a small
landing page, live status tiles (uptime / docker version / free disk
via furtka-status.timer), and a console welcome banner — all via
archinstall's custom_commands so the payload travels with the
user_configuration.json. After reboot `http://<hostname>.local`
serves a Furtka-branded page on :80 instead of the bare Arch login.

No Authentik / no app store yet — demo shell for the real post-
install work (Robert's area).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 19:51:50 +02:00
dfdbdd69aa docs: sync README roadmap, runner-setup, and ops/ to today's reality
All checks were successful
Build ISO / build-iso (push) Successful in 17m13s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 32s
CI / validate-json (push) Successful in 22s
CI / markdown-links (push) Successful in 13s
A lot moved since the last docs sweep. Catching everything up in one
batch so a newcomer (or future us) reading the repo isn't lied to.

**README.md roadmap:**
- Walking-skeleton live ISO: upgraded from "screens 1-3 work
  end-to-end" to "install runs to completion on a VM and the installed
  system logs in and runs `docker ps` without sudo".
- 26.0-alpha release: dropped the "deferred" note — its blocker
  (archinstall not completing) is gone; just needs a re-tag when we
  like the installer copy.
- Added an explicit "ISO-build in CI" line for the new
  `.forgejo/workflows/build-iso.yml`.
- Split the old "mDNS + local CA" item: mDNS is live (hostname baked
  in, avahi/nss-mdns in the image), HTTPS via local CA still open.
- Noted post-install reboot button, progress bar, archinstall 4.x
  schema work, console welcome, custom_commands docker group join in
  the wizard milestone bullet.

**docs/runner-setup.md:**
- Full rewrite for the docker-outside-of-docker architecture we
  actually run now (was still describing the DinD sidecar setup).
- Documents the `/data` symlink on the host that makes host-mode
  `-v /data/…:/work` resolve — the non-obvious piece that took the
  longest to nail down today.
- Describes the two runtime modes (`ubuntu-latest:docker://…` for CI,
  `self-hosted:host` for build-iso) and why each exists.
- Adds the `upload-artifact@v3` pin note — v4+ fails on Forgejo with
  `GHESNotSupportedError`.

**ops/forgejo-runner/compose.yml + config.yml:**
- Compose now matches what's actually running: DooD (no DinD sidecar),
  runs as root so apk can install nodejs + docker-cli at startup,
  /var/run/docker.sock bind-mounted.
- Config gets the three explicit label mappings and DooD
  `docker_host` + `valid_volumes`.

**.forgejo/workflows/build-iso.yml:**
- Added `paths-ignore` for docs/website/*.md so doc-only commits don't
  kick off 5-min ISO rebuilds. Code + ISO overlay changes still
  trigger.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 19:28:33 +02:00
05ef50f74e ci: pin upload-artifact to v3 — v4+ unsupported on forgejo
All checks were successful
Build ISO / build-iso (push) Successful in 17m29s
CI / lint (push) Successful in 25s
CI / test (push) Successful in 32s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 12s
Forgejo Actions only speaks the GHES-compatible @actions/artifact
protocol; upload-artifact@v4+ insists on the newer API and fails with
`GHESNotSupportedError`. Pin to v3, which uses the old protocol that
Forgejo implements.

Good news: the ISO itself built end-to-end in ~5m on the runner
(DooD + /data symlink resolved the path-mismatch). Only the upload
failed, and this pins it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 19:10:16 +02:00
e27c98c927 ci: retrigger build-iso with /data path unified across container+host
Some checks failed
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Failing after 5m50s
CI / lint (push) Successful in 29s
CI / test (push) Has been cancelled
2026-04-14 19:03:37 +02:00
fb7a503df9 ci: retrigger build-iso with matching container+host workspace paths
Some checks failed
Build ISO / build-iso (push) Failing after 4s
CI / lint (push) Successful in 25s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 12s
2026-04-14 19:00:10 +02:00
cb646776f7 ci: retrigger build-iso with docker-cli-enabled runner
Some checks failed
Build ISO / build-iso (push) Failing after 4s
CI / lint (push) Successful in 25s
CI / test (push) Successful in 32s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 13s
2026-04-14 18:58:03 +02:00
944a4fe220 ci: retrigger build-iso with nodejs-enabled runner
Some checks failed
CI / test (push) Waiting to run
CI / markdown-links (push) Waiting to run
Build ISO / build-iso (push) Failing after 4s
CI / lint (push) Successful in 28s
CI / validate-json (push) Has been cancelled
2026-04-14 18:56:50 +02:00
a2f079fcf2 ci: retrigger build-iso now that node is on the runner host
Some checks failed
Build ISO / build-iso (push) Failing after 2s
CI / lint (push) Successful in 29s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Successful in 12s
CI / test (push) Has been cancelled
2026-04-14 18:54:14 +02:00
cb0ffc217f ci: retrigger build-iso now that runner has self-hosted:host label mapping
Some checks failed
Build ISO / build-iso (push) Failing after 2s
CI / lint (push) Successful in 25s
CI / test (push) Successful in 32s
CI / validate-json (push) Successful in 25s
CI / markdown-links (push) Has been cancelled
2026-04-14 18:52:34 +02:00
e9e8bd3319 ci: run build-iso on the runner host (DooD path fix)
Some checks failed
CI / test (push) Waiting to run
Build ISO / build-iso (push) Failing after 6s
CI / lint (push) Successful in 25s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Has been cancelled
Now that the runner uses docker-outside-of-docker, volume mounts in
`build.sh` (`docker run -v $REPO_ROOT:/work ...`) are interpreted by
host docker — so `$REPO_ROOT` must be a real host path. When the job
runs inside a job container, `$REPO_ROOT` is only valid in the job
container's filesystem namespace and host docker can't find it, hence
`bash: /work/iso/build.sh: No such file or directory`.

Fix: switch `runs-on` to `self-hosted`. Forgejo-runner exposes that
label out of the box and, with no matching container image mapping,
runs steps directly on the runner VM. Checkout writes to a real host
path; `docker run -v …` then mounts a path both the outer CLI and
host docker agree on.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 18:50:47 +02:00
a6cccc67c1 ci: drop duplicate docker.sock mount in build-iso
Some checks failed
Build ISO / build-iso (push) Failing after 6s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Has been cancelled
Forgejo-runner's valid_volumes already injects /var/run/docker.sock
into every job container, so the explicit `container.volumes` mount
in the workflow triggered 'Duplicate mount point' and the job never
started. Removed — DOCKER_HOST env is enough.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 18:49:12 +02:00
0f0308bf68 ci: switch build-iso to docker-outside-of-docker
Some checks failed
Build ISO / build-iso (push) Failing after 46s
CI / lint (push) Successful in 25s
CI / test (push) Successful in 32s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 14s
The DinD setup was the wrong tool here: forgejo-runner runs on host
docker, but it spawned jobs via the DinD sidecar — meaning jobs
were isolated inside DinD's own docker namespace and couldn't reach
`docker-in-docker` by hostname, and couldn't see the
`forgejo-runner_default` network (which only exists on host docker).

Switched the runner (compose.yml + data/config.yml) to talk directly
to host docker via `/var/run/docker.sock` and added it to the host
`docker` group (GID 988) so the non-root runner user can use the
socket. `valid_volumes` now whitelists the socket so job containers
can mount it too.

Workflow now mounts /var/run/docker.sock into the job container and
points DOCKER_HOST at that unix socket. `./iso/build.sh` then runs
its inner `docker run --privileged archlinux:latest` against the
host daemon — no nested docker.

Tradeoff: this is less isolated than DinD (jobs have full host docker
access — they could spawn arbitrary containers), but on a dedicated
single-user build VM the DooD simplification is worth it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 18:45:32 +02:00
4c5a00a0e0 ci: drop ineffective container.options override for build-iso
Some checks failed
Build ISO / build-iso (push) Failing after 1s
CI / lint (push) Failing after 1s
CI / test (push) Failing after 1s
CI / validate-json (push) Failing after 1s
CI / markdown-links (push) Failing after 1s
forgejo-runner 6.4 filters `--network` out of `container.options`, so
the workflow-level override was silently ignored and the job kept
landing on a per-task network where `docker-in-docker` didn't resolve.
Fixed at the right level by editing the runner's `/data/config.yml`
(`container.network: "forgejo-runner_default"`) and restarting the
forgejo-runner container — every job now joins the shared network so
DOCKER_HOST=tcp://docker-in-docker:2375 just works.

Workflow trimmed back to only what's needed: DOCKER_HOST env pin. The
default runner image (catthehacker/ubuntu:act-latest) already has the
docker CLI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 18:40:16 +02:00
ba36bb4741 ci: attach build-iso job to DinD network, pin lychee-action source
Some checks failed
Build ISO / build-iso (push) Failing after 5s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 44s
CI / markdown-links (push) Failing after 1s
CI / validate-json (push) Failing after 10m34s
- build-iso: the job container was on a per-job docker network, so
  `docker-in-docker` (the DinD sidecar hostname on
  `forgejo-runner_default`) didn't resolve. Pin the container to that
  shared network via `container.options: --network forgejo-runner_default`.
  catthehacker/ubuntu:act-latest already has the docker CLI, so drop
  the apt-get step.

- ci.yml markdown-links: forgejo's action mirror at data.forgejo.org
  doesn't carry `lycheeverse/lychee-action`, so `uses:` was 404ing
  before the step could even run (rendering continue-on-error moot).
  Fully-qualified GitHub URL bypasses the mirror.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 18:37:54 +02:00
a777efd4c0 ci: green the pipeline — tests match 4.x schema, build-iso hits DinD, lint clean
Some checks failed
Build ISO / build-iso (push) Failing after 20s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 31s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Failing after 2s
Three things are broken on origin/main as of 6114cb2, all found in one
red CI run:

- build-iso workflow couldn't reach docker. forgejo-runner's config
  sets `docker_host: tcp://docker-in-docker:2375` but that env doesn't
  propagate into job containers on `runs-on: ubuntu-latest`, and the
  default job image has no docker CLI. Fix: pin `DOCKER_HOST` on the
  job and apt-install `docker.io` before invoking `iso/build.sh`.

- Two tests asserted on the pre-4.x archinstall schema:
  `creds["root_password"]` (now `!root-password`) and
  `cfg["disk_config"]["device"]` / `cfg["users"]` (users moved to
  creds; disk_config is now a full `default_layout` dict). Rewrote
  the tests to reflect 4.x reality and monkeypatched `build_disk_config`
  since its real body imports archinstall, which isn't on CI.

- Ruff flagged one line of `PROGRESS_PHASES` at 107 chars — collapsed
  the column alignment. `ruff format` pulled in a couple of cosmetic
  expansions in spawn_archinstall and the tests that had been drifting.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 18:29:42 +02:00
9d8fd34043 docs: reflect reality on drive filtering in iso/README
The "Drive list includes /dev/loop0 and /dev/sr0" rough-edge bullet
claimed the filter hadn't been added yet, but it has — `drives.py`'s
`parse_lsblk_output` skips everything with `TYPE != disk`, so loop
and rom devices never reach the picker. Tested.

Replaced with a note about the remaining real footgun: on bare-metal
installs, the USB stick the user booted from is `TYPE=disk` and would
show up alongside the actual install target, so a user could pick
their boot media by mistake. Not urgent while we test in VMs (the ISO
is a CD-ROM there, already filtered), but flagged so it's visible
when bare-metal testing starts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 18:18:24 +02:00
6114cb2f27 ci: build the live ISO on push-to-main and publish as artifact
Some checks failed
Build ISO / build-iso (push) Failing after 19s
CI / lint (push) Failing after 27s
CI / test (push) Failing after 41s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Failing after 2s
Adds `.forgejo/workflows/build-iso.yml` that runs `./iso/build.sh` and
uploads the resulting ISO as a `furtka-iso` artifact (retained 14 days).
Triggers on `push: branches: [main]` and `workflow_dispatch` only —
feature branches don't pay the 15-20 min build cost. `concurrency`
cancels older runs of the same ref so only the most recent push
produces an artifact.

This is what Robert asked for: push change → download ISO from the
Forgejo run → test without needing a laptop to build.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 18:13:15 +02:00
7442dbe47e feat: console welcome with proksi.local + post-install reboot flow
Some checks failed
CI / lint (push) Failing after 28s
CI / test (push) Failing after 32s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Failing after 2s
Two user-visible polish passes on top of the walking-skeleton install:

- Console welcome: live ISO's getty no longer shows the bare Arch prompt.
  `/etc/hostname` is now `proksi` so avahi advertises `proksi.local`;
  a systemd oneshot (`furtka-issue.service`, runs after
  network-online.target) regenerates `/etc/issue` via
  `/usr/local/bin/furtka-update-issue` to show both
  `http://proksi.local:5000` (preferred, via mDNS — avahi and nss-mdns
  are already in `packages.extra`) and the raw IP as a fallback for
  networks where mDNS is flaky. `agetty --reload` nudges the already-
  running login prompt to redraw.

- /install/log now polls a JSON endpoint (`/install/log.json`) every
  3 s instead of meta-refresh, so expanding the collapsed log
  `<details>` doesn't get eaten by the refresh. Noscript fallback
  keeps the meta-refresh for JS-off users. When the install finishes,
  the Done state shows a Reboot-now button that POSTs to
  `/install/reboot` (guarded server-side to only reboot once status
  is "done", so a panicked click mid-pacstrap can't brick the box).
  A confirm() reminds the user to pull the USB / eject the ISO first.
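
The reboot guard, sketched as a Flask route (the webinstaller is
Flask; install_status() is a hypothetical stand-in for however the
parsed log status is read):

  import subprocess
  from flask import abort

  @app.post("/install/reboot")          # app: the existing Flask instance
  def install_reboot():
      if install_status() != "done":    # hypothetical status accessor
          abort(409, "install not finished")
      subprocess.Popen(["systemctl", "reboot"])
      return "", 202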

End-to-end tested on a Proxmox VM 2026-04-14: boot → wizard →
archinstall → Done state → Reboot now → VM came back up → login as
created user → `docker ps` worked.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 18:08:59 +02:00
3a259beb98 feat: install progress bar + fix docker group creation order
Two tangled changes to the install flow, batched because they're both
small and hit app.py:

1. Phase-based progress bar on /install/log. parse_install_progress()
   scans the archinstall log for ordered phase markers ("Wiping
   partitions", "Installing packages: ['base'", "Adding bootloader",
   "Installation completed without any errors", …) and exposes
   percent + user-facing phase label + status (running/done/error).
   Template wraps the raw log in a collapsed <details> so the default
   view stays calm; the meta-refresh stops once status is terminal.
   If archinstall changes its stdout wording the bar stalls on the
   last recognized phase — the install itself is unaffected. The
   parser's shape is sketched after item 2.

2. Drop "docker" from the user's groups in creds and do the
   `gpasswd -a <user> docker` via custom_commands instead.
   archinstall creates users before pacstrapping the extras list, so
   the docker group doesn't exist at user-create time — which caused
   the second real install to crash with
   `gpasswd: group 'docker' does not exist`. custom_commands runs
   at the very end, after docker is installed. Username is validated
   by USERNAME_RE so no shell injection.
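
The parser's shape, sketched (markers are from item 1; the labels,
percent math, and error handling are assumptions):

  PROGRESS_PHASES = [
      ("Wiping partitions", "Preparing disk"),
      ("Installing packages: ['base'", "Installing packages"),
      ("Adding bootloader", "Installing bootloader"),
      ("Installation completed without any errors", "Done"),
  ]

  def parse_install_progress(log_text: str) -> dict:
      reached = -1
      for i, (marker, _label) in enumerate(PROGRESS_PHASES):
          if marker in log_text:
              reached = i        # stalls here if archinstall rewords its output
      if reached < 0:
          return {"percent": 0, "phase": "Starting", "status": "running"}
      done = reached == len(PROGRESS_PHASES) - 1
      return {"percent": (reached + 1) * 100 // len(PROGRESS_PHASES),
              "phase": PROGRESS_PHASES[reached][1],
              "status": "done" if done else "running"}  # error marker elided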

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 17:07:57 +02:00
51cdf460d9 fix: wire webinstaller to archinstall 4.x config schema
Walking-skeleton install on a real VM surfaced two archinstall 4.x
schema breakages that the wizard hit only at runtime:

- `use_entire_disk` was removed as a `config_type`. Now builds a full
  `default_layout` disk_config by calling `suggest_single_disk_layout`
  (forced ext4 + no separate /home, which bypasses its interactive
  prompts) and serializing the returned DeviceModification.
- Credentials keys renamed to plaintext sentinels: `!root-password`
  and `!password`. Users with neither `!password` nor `enc_password`
  are silently dropped by `User.parse_arguments` — which is why the
  first real install booted but wouldn't log in.
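
For reference, the 4.x credentials shape this implies (the sentinel
keys are from this message; the top-level list key and the sudo
field are assumptions):

  creds = {
      "!root-password": "<plaintext>",
      "users": [
          {"username": "furtka", "!password": "<plaintext>", "sudo": True},
      ],
  }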

Also rolls in Robert's UX feedback quick-wins: `(Recommended)` prefix
on the default boot entry across GRUB/syslinux/systemd-boot, and
less-jargon hints on the step-1 hostname/username fields. iso/README
loses three stale bullets that described pre-15b876c behaviour.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 17:00:39 +02:00
15b876c70a feat: webinstaller writes archinstall config + execs install, styled
Some checks failed
CI / lint (push) Failing after 25s
CI / test (push) Successful in 31s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Failing after 2s
Wires the live-ISO wizard from "shows three screens" to "actually invokes
archinstall on the chosen disk", plus first-pass styling so it stops looking
like raw <h1>/<form>.

Webinstaller flow:
- S1 form gains username/password/password2/language with server-side
  validation (hostname/username regex, ≥8 char password, match check).
- /install/run writes user_configuration.json + user_credentials.json
  (creds 0600) to FURTKA_STATE_DIR (default /tmp/furtka), then execs
  `archinstall --config … --creds … --silent` as a backgrounded subprocess.
- /install/log renders the subprocess output via meta-refresh polling.
- FURTKA_DRY_RUN=1 short-circuits the exec for testing.
- archinstall flag names verified against `archinstall --help` in an
  archlinux container before committing.

Drive list:
- drives.py now filters via `lsblk … -o NAME,SIZE,TYPE` keeping TYPE=disk,
  so the live ISO's own squashfs (loop) and CD-ROM (rom) stop appearing
  as install targets.

Boot menu:
- iso/build.sh sed-rebrands "Arch Linux install medium" →
  "Furtka Live Installer" across grub/, syslinux/, and efiboot/loader/
  entries. Verified zero leftovers against the current releng profile.

Styling:
- static/style.css adopts the website's design tokens (palette,
  typography, gate-mark accent), with light + dark via prefers-color-scheme.
- New base.html with header (gate SVG + FURTKA·INSTALLER wordmark + step
  indicator) and footer; all install templates extend it.
- Drive picker uses radio cards with score chip; overview uses a summary
  table and a destructive "wipe drive" button.

Tests: 17 pass (4 new in test_app.py covering validation + config builders,
2 new in test_drives.py covering the lsblk filter). Ruff clean.

README roadmap updated to mark these done and explicitly defer the
26.0-alpha release until archinstall actually completes end-to-end on a VM.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 10:54:49 +02:00
defd2eda06 feat: publish public website at furtka.org
Some checks failed
CI / lint (push) Successful in 24s
CI / test (push) Successful in 32s
CI / validate-json (push) Successful in 23s
CI / markdown-links (push) Failing after 2s
Hugo static site with an intentionally minimal single-page copy — English
default, German under /de/ — while the project stays pre-alpha. No CMS, no
external theme, no webfonts, no external requests. System-UI sans on a
paper-white / near-black palette with a deep crimson accent; a small
wicket-gate SVG as the sole brand mark.

Hosting: nginx on forge-runner-01 serves /var/www/furtka.org; the upstream
openresty proxy terminates TLS so the VM itself only speaks plain HTTP.
Deploy is ./website/deploy.sh (rsync + remote hugo --minify). One-time VM
bootstrap in ops/nginx/setup-vm.sh.
2026-04-14 10:27:51 +02:00
7f15543f1c docs: capture UEFI + Secure Boot gotchas in iso/README.md
Some checks failed
CI / lint (push) Successful in 42s
CI / test (push) Successful in 47s
CI / validate-json (push) Successful in 38s
CI / markdown-links (push) Failing after 2s
These two cost us real time tonight — SeaBIOS failing at ldlinux.c32,
then OVMF rejecting our unsigned GRUB with "Access Denied" until we
disabled Secure Boot in the firmware setup menu. Also flagged the
silent browser-upload truncation and the two known drive-list bugs
surfaced during the first live boot.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 23:57:54 +02:00
a535debf2e feat: walking-skeleton live ISO that boots into the Flask wizard
Some checks are pending
CI / lint (push) Waiting to run
CI / test (push) Waiting to run
CI / validate-json (push) Waiting to run
CI / markdown-links (push) Waiting to run
iso/build.sh runs mkarchiso inside a privileged archlinux container,
overlays our customizations onto Arch's stock releng profile
(systemd unit that launches Flask on 0.0.0.0:5000, the webinstaller
under /opt/furtka, extra packages for python/flask/avahi), and drops
a hybrid BIOS/UEFI ISO in iso/out/.

Verified end to end: Proxmox VM (OVMF, Secure Boot off) boots the ISO,
DHCP's onto the LAN, and serves screens 1-3 of the existing wizard at
http://<vm-ip>:5000/install/step1. This is the first point at which
Furtka is something you can run instead of something you can read about.

Two known drive-list bugs surfaced while testing (/dev/loop0 and
/dev/sr0 appear as install targets) — captured in the README roadmap.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 23:55:58 +02:00