Slice 1 of the on-box UI uplevel. Consolidates the two duplicated
stylesheets (the landing page's in webinstaller/app.py and /apps'
inline block in furtka/api.py) into one sheet served by Caddy at
/style.css.
Expands the token set (spacing, radii, shadows, focus ring, warn-fg,
accent-soft, card-hover), adds a prefers-color-scheme light theme,
and introduces shared primitives for later slices: .nav, .chip,
.card, .kv, .coming, .grid-apps, .app-tile, .app-icon.
Also adds a persistent top nav (Home / Apps) to both pages — Jakob's
Law, so users always have a way back — and collapses the /apps "Last
action" log behind a details disclosure so it stops dominating the
page. Format fallout on drives.py picked up by ruff.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Codifies the branch protection applied to main on 2026-04-16: no
direct pushes, required checks = CI / {lint,test,validate-json}*,
zero approvals (2-person team), admin bypass left on for emergencies.
Script is idempotent (create-or-patch) and reads its token from
$FORGEJO_TOKEN or the local git remote URL as a fallback, so a
clean re-run just reconciles the rule with branch-protection.json.
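The create-or-patch shape, sketched against Forgejo's Gitea-style
branch_protections API — host, payload field names, and the 404
convention are assumptions; branch-protection.json stays the source
of truth:
```python
import json
import os
import urllib.error
import urllib.request

API = "https://forge.example.org/api/v1/repos/furtka/furtka"  # hypothetical
RULE = {  # field names per the Gitea-style API -- verify before reuse
    "branch_name": "main",
    "enable_push": False,                    # no direct pushes
    "enable_status_check": True,
    "status_check_contexts": ["CI / lint*", "CI / test*", "CI / validate-json*"],
    "required_approvals": 0,                 # 2-person team
}

def call(method: str, url: str) -> int:
    req = urllib.request.Request(
        url,
        data=json.dumps(RULE).encode(),
        method=method,
        headers={
            "Authorization": f"token {os.environ['FORGEJO_TOKEN']}",
            "Content-Type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

# Create-or-patch: PATCH the existing rule, POST it on 404.
if call("PATCH", f"{API}/branch_protections/main") == 404:
    call("POST", f"{API}/branch_protections")
```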
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Leftover German string from prototyping — the rest of the apps UI is
English, so it stood out as a mixed-language bug during 2026-04-16
VM testing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the numeric "score N" pill with a Recommended badge on the
auto-selected drive, plus size/type/health chips. The score itself
stays as the sort key; users just never see the raw number.
Why: Robert's 2026-04-14 wizard UX direction — less jargon, explain
technical terms, recommend defaults. A bare "score 35" gave users no
hint as to why one drive was picked.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reflects commit 61c7ee2 — manifest gains `settings` + `description_long`,
API gains `GET/POST /api/apps/<name>/settings`, install/reinstall accepts
a `settings` object. Drops the stale "in-UI .env editor" from the
out-of-scope list since that's what just shipped.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
End-to-end VM test today (2026-04-15) validated the resource manager
golden path but exposed four things blocking "dein-Vater-tauglich"
("your dad could use it"): no way to configure an app without
SSH+editor, no openssh, no nano, keyboard stuck on US, and a samba
healthcheck that cried wolf.
Resource-manager side:
- Manifest schema gains optional `settings` list (name/label/
description/type/required/default) and `description_long`.
- Bundled-app install opens a form rendered from the manifest;
submit carries values to `POST /api/apps/install` which writes
them into the new app's `.env` before the placeholder check runs.
- Installed apps grow an "Einstellungen" (Settings) button that
  merges a partial settings dict into the existing `.env` (an
  unsubmitted password field keeps its current value), then
  reconciles to restart; the merge rule is sketched after this list.
- New endpoints: `GET/POST /api/apps/<name>/settings`. Passwords
are never returned to the client.
- Fileshare manifest declares its SMB_USER/SMB_PASSWORD settings
in German with help text.
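A minimal sketch of that merge rule, assuming a flat KEY=VALUE `.env`
(helper name and argument shape are hypothetical; `env_path` is a
pathlib.Path):
```python
def merge_settings(env_path, submitted, password_keys):
    """Merge a partial settings dict into an existing .env.

    Empty/omitted password fields keep the current value; every
    other submitted key overwrites. Illustrates the rule only.
    """
    current = {}
    for line in env_path.read_text().splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition("=")
            current[key.strip()] = value.strip()
    for key, value in submitted.items():
        if key in password_keys and not value:
            continue  # unsubmitted password -> keep current value
        current[key] = value
    env_path.write_text("".join(f"{k}={v}\n" for k, v in current.items()))
```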
ISO side (so the next build is actually usable on the TTY):
- Add `openssh` to the package list + `sshd` to enabled services.
`archinstall: true` in 4.x did not install openssh-server.
- Add `nano` — `vim` was the only editor pitched at users, which
is brutal for first-timers (and was missing anyway).
- Keyboard layout follows the installer language (`de→de`, `pl→pl`,
`en→us`) instead of hardcoded `us`. A German user couldn't type
`/` or `-` at the console, making even `sudo nano` painful.
- Disable the dperson/samba healthcheck in the compose override —
it timed out on every probe while the share itself worked fine.
19 new tests (manifest parsing + settings-merge + two new API
endpoints over live HTTP); 94 total, format + lint clean.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Speak to non-technical visitors: drop "x86", swap "domain" for "name on
the network", and list concrete upcoming apps (photos, files, smart home,
game streaming, media) so the page says something real instead of just
"it's early".
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Open-questions section is gone — all seven were answered live in
session and are now codified in the furtka/ package. Doc now
describes the actual contract (manifest schema, lifecycle, code
map) instead of a planning scaffold. Out-of-scope list is preserved
so future contributors don't propose things that were deliberately
deferred.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the management UI Daniel asked for end-of-session. Goes beyond
the original MVP scope (plan punted UI to v2) but the architecture
already supports it cleanly: stdlib http.server only, no new deps.
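A sketch of the endpoint wiring on stdlib http.server — handler shape
and the install hook are assumptions; list_installed/list_bundled are
the real helper names, and the bullets below describe the real module:
```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def list_installed():  # stands in for the real helper of the same name
    return []

def install_app(name: str) -> None:  # hypothetical shared-installer hook
    ...

class Handler(BaseHTTPRequestHandler):
    def _json(self, code, payload):
        body = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path == "/api/apps":
            self._json(200, list_installed())
        else:
            self._json(404, {"error": "not found"})

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        req = json.loads(self.rfile.read(length) or b"{}")
        if self.path == "/api/apps/install":
            install_app(req["name"])  # same installer path as the CLI
            self._json(200, {"ok": True})
        else:
            self._json(404, {"error": "not found"})

# Bound to loopback; Caddy reverse-proxies /api + /apps from the LAN.
HTTPServer(("127.0.0.1", 7000), Handler).serve_forever()
```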
- furtka.api: minimal HTTP server. GET / serves a self-contained
HTML page (dark-mode card list, vanilla JS, no build step). GET
/api/apps + /api/bundled return JSON. POST /api/apps/{install,
remove} accept {"name": "..."} and call the same installer +
reconciler the CLI uses, so the placeholder-secret refusal and
per-app reconcile isolation flow through unchanged.
- furtka.cli: new `furtka serve` subcommand. Imports api lazily so
`furtka app list` / `reconcile` startup stays zero-cost.
- webinstaller: new furtka-api.service (Type=simple, restart on
failure, after reconcile). Caddyfile gets two new handle blocks
to reverse-proxy /api and /apps to localhost:7000. Landing page's
"App store coming soon" tile becomes a real "Manage installed apps
→" link to /apps.
- Bound to 127.0.0.1 by default; Caddy makes it LAN-reachable. The
UI shouts a "no auth, anyone on your LAN can install/remove" warning
at the top — Authentik integration is the proper fix later.
UX wrinkle worth noting: a placeholder-rejected install leaves the
app in /var/lib/furtka/apps/<name>/ (so the user can edit .env in
place). To re-trigger after editing, the Installed list now shows
both Reinstall and Remove buttons.
10 new tests: helper functions (list_installed, list_bundled with
hide-already-installed), install/remove endpoints with the no_docker
fixture, and two real-socket urllib smoke tests that boot the actual
HTTPServer on an ephemeral port and round-trip GET / + POST.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Addresses the four issues raised in the slice-3 audit before pushing.
#1 (critical) — refuse to finish install when .env still contains
placeholder secrets like "changeme". Without this, `furtka app install
fileshare` would happily start an SMB server with a publicly-known
password — the kind of default that ends up screenshotted on Hacker
News. PLACEHOLDER_SECRETS lives in installer.py; new tests cover
placeholder rejection, post-edit retry, and quoted values.
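The refusal rule, sketched — PLACEHOLDER_SECRETS is the real constant
name from this commit; the parsing details and the set's contents are
assumptions:
```python
PLACEHOLDER_SECRETS = {"changeme", "password", "secret"}  # illustrative set

def check_env_placeholders(env_text: str) -> list[str]:
    """Return env keys whose value is still a known placeholder."""
    offenders = []
    for line in env_text.splitlines():
        if "=" not in line or line.lstrip().startswith("#"):
            continue
        key, _, value = line.partition("=")
        value = value.strip().strip("'\"")  # tests cover quoted values
        if value.lower() in PLACEHOLDER_SECRETS:
            offenders.append(key.strip())
    return offenders
```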
#3 — reconciler now catches DockerError / FileNotFoundError / OSError
per-app instead of letting a single broken app abort the whole
boot-scan. Errors get surfaced as Action(kind="error", …) and
has_errors() drives the CLI exit code so systemd still shows red,
but the other apps actually got reconciled.
#4 — chmod 0600 on .env after install so app secrets aren't world-
readable on multi-user boxes. Done before the placeholder check so
even the half-installed state is safe.
#5 — load_manifest() got an optional expected_name. The scanner
passes the folder name (filesystem source-of-truth contract);
installer leaves it None so `furtka app install /tmp/some-fork/`
works regardless of what the source folder is named.
#2 — TODO comment on dperson/samba:latest. Switching to a digest
needs a verified upstream release; left for the test-day pin.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closes the loop end-to-end. The ISO build now bundles the furtka/
package and the apps/ tree as a tarball; webinstaller hands it to
archinstall via custom_commands; the installed system gets the
`furtka` CLI, a boot-scan systemd unit, and the fileshare app
ready to install.
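In sketch form — the function name and payload path come from the
bullets below; the untar destination and exact commands are
assumptions, and the real code also writes the wrapper + unit file:
```python
import base64
from pathlib import Path

# Staged by build.sh into airootfs/opt/, so it lands here on the live ISO.
PAYLOAD = Path("/opt/furtka-resource-manager.tar.gz")

def _resource_manager_commands() -> list[str]:
    if not PAYLOAD.exists():  # dev box / CI without an ISO build
        return []             # graceful degradation; a warning is logged
    b64 = base64.b64encode(PAYLOAD.read_bytes()).decode()
    return [
        # single untar command, no pip needed on the target
        f"echo {b64} | base64 -d | tar -xz -C /opt",
        "chmod +x /usr/local/bin/furtka",
    ]
```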
- iso/build.sh: stages furtka/ + apps/ into a tmpdir, drops
__pycache__, tarballs into airootfs/opt/furtka-resource-manager.tar.gz.
- webinstaller/app.py: _resource_manager_commands() reads the staged
payload at request-time, base64-encodes it into a single untar
command, and writes /usr/local/bin/furtka (PYTHONPATH wrapper, no
pip needed) + furtka-reconcile.service. Python pacstrapped so the
wrapper has an interpreter.
- Graceful degradation: dev box / CI without an ISO build has no
payload tarball, so those commands are skipped (logs a warning).
Tests cover both branches.
- furtka-reconcile.service is conditionally enabled only if the unit
file actually landed — keeps the systemctl enable line green when
the payload was absent.
- apps/fileshare/: first real Furtka app. dperson/samba on host
network, single named volume, .env.example with placeholder creds.
Manifest matches the schema locked in slice 1.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fills in the act-on-it half of the resource manager. Reconciler walks
the scanner output and brings docker into the desired state: ensures
each manifest-declared volume exists (idempotent), then runs
docker compose up -d for the project. install/remove on the CLI work
end-to-end against a real /var/lib/furtka/apps/ tree.
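A sketch of those primitives — details are assumptions; the bullets
below describe the real modules:
```python
import subprocess

class DockerError(RuntimeError):
    pass

def _run(args: list[str], cwd: str | None = None) -> str:
    """Thin subprocess wrapper; surface the actual stderr so
    failures are diagnosable."""
    proc = subprocess.run(args, cwd=cwd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise DockerError(proc.stderr.strip())
    return proc.stdout

def ensure_volume(name: str) -> None:
    # `docker volume create` is itself idempotent: it succeeds (and
    # changes nothing) when the volume already exists.
    _run(["docker", "volume", "create", name])

def compose_up(app_dir: str) -> None:
    _run(["docker", "compose", "up", "-d"], cwd=app_dir)
```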
- furtka.dockerops: thin subprocess wrapper. Volume + compose
primitives that other modules call. `_run` raises DockerError with
the actual stderr so failures are diagnosable.
- furtka.reconciler: builds an ordered Action list (volumes then
compose_up per app), executes unless dry-run. Broken manifests
produce a "skip" action, the rest of the apps still get reconciled.
- furtka.installer: copy-from-source with two non-obvious rules —
user .env is preserved across upgrade installs, and a missing .env
is bootstrapped from .env.example so compose has values to
substitute on first install. Bundled-app lookup falls back to
/opt/furtka/apps/<name>/ when the source arg isn't a path.
- furtka.cli: app install/remove wired up. remove() ignores compose
down failures so a botched compose doesn't trap users with an
un-removable folder.
- 15 new tests using monkeypatch'd dockerops so the suite still runs
without docker installed. Covers reconcile dry-run, multi-volume
apps, broken-manifest skip behavior, .env preservation, bundled-name
resolution, and remove edge cases.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Slice 1 of the Resource Manager (see docs/resource-manager.md +
plan in ~/.claude/plans/stateful-juggling-pike.md). Lays down the
read-only half: a JSON manifest schema with namespacing, a scanner
that walks /var/lib/furtka/apps/, and a `furtka` CLI with
`app list` and `reconcile --dry-run`. Reconciler / volume creation
/ docker compose calls land in the next slice.
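The namespacing rule in sketch form (signature assumed from
volume_name() below):
```python
def volume_name(app_name: str, declared: str) -> str:
    """Inject the furtka_<app>_<vol> namespace so two apps can each
    declare a "data" volume without colliding."""
    return f"furtka_{app_name}_{declared}"

assert volume_name("fileshare", "data") == "furtka_fileshare_data"
```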
- furtka.manifest: dataclass + load_manifest with required-field +
type validation. volume_name() injects the furtka_<app>_<vol>
namespace so apps can each declare a "data" volume without colliding.
- furtka.scanner: tolerant — broken manifest = ScanResult with error,
not an exception. Lets reconcile log + skip rather than abort.
- furtka.cli: text + --json output. argparse with `app list` and
`reconcile --dry-run`. main() returns int for clean exit codes.
- furtka.paths: FURTKA_APPS_DIR env override so tests don't need root.
- 19 new tests covering valid manifests, every validation branch,
scanner edge cases (missing root, broken manifest, sort order), and
the CLI subcommands.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The post-reboot page told users to log in with the username and
password — but Furtka is browser-first; users aren't meant to touch
the TTY. Show the actual URL they should open instead, plus an mDNS
fallback hint.
Also pin the header SVG to width="24" height="24" so it can never
render at full viewport size, even if CSS somehow fails to load.
Belt-and-suspenders with the reboot-delay fix.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The reboot route fired systemctl reboot in parallel with returning
the rebooting HTML. The browser's follow-up request for /static/style.css
was racing the shutdown — often the server was already gone, leaving
the page unstyled (inline SVG rendered at full viewBox size, filling
the screen). A small sleep gives the browser time to pull CSS + icons
before the network drops.
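A minimal sketch of the fix, assuming the existing Flask app — delay
value and template name are illustrative, and the done-status guard
from the original route is elided:
```python
import subprocess
import threading

from flask import render_template

@app.post("/install/reboot")  # `app` is the existing Flask instance
def install_reboot():
    # Return the page first; fire the reboot a few seconds later so
    # the browser can still fetch /static/style.css and the icons.
    threading.Timer(3.0, lambda: subprocess.run(["systemctl", "reboot"])).start()
    return render_template("rebooting.html")
```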
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
systemd-boot is UEFI-only. Hardcoding it broke the install on
BIOS/legacy hosts with HardwareIncompatibilityError in
installer._add_systemd_bootloader. Detect via /sys/firmware/efi and
fall back to GRUB for BIOS.
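The probe is one line; a sketch (helper name hypothetical):
```python
from pathlib import Path

def pick_bootloader() -> str:
    # /sys/firmware/efi only exists when the kernel booted via UEFI.
    return "systemd-boot" if Path("/sys/firmware/efi").exists() else "grub"
```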
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
MENU TITLE in the syslinux box now reads "Furtka" instead of
"Arch Linux", and the per-entry HELP line at the bottom speaks of
"Furtka Live Installer" / "install Furtka" instead of the upstream
Arch strings. Same sed-not-overlay approach we already use for the
menu entry labels.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
archinstall runs `systemctl enable` over the `services` list *before*
custom_commands, so our own unit files (written in custom_commands)
didn't exist yet at enable-time and install aborted with
"Unit furtka-welcome.service does not exist". Keep `caddy` +
`avahi-daemon` in `services` since those are packaged units present
right after pacstrap; move `furtka-welcome` + `furtka-status.timer`
to a `systemctl enable` call appended to custom_commands so they fire
after the unit files land on disk.
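The split, in user_configuration.json terms (shown as a Python dict
for brevity; command list abridged):
```python
config = {
    # packaged units -- exist right after pacstrap, safe to enable early
    "services": ["caddy", "avahi-daemon"],
    "custom_commands": [
        # ... commands that write the furtka-* unit files ...
        "systemctl enable furtka-welcome.service furtka-status.timer",
    ],
}
```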
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Installs caddy + avahi + nss-mdns on the target and writes a small
landing page, live status tiles (uptime / docker version / free disk
via furtka-status.timer), and a console welcome banner — all via
archinstall's custom_commands so the payload travels with the
user_configuration.json. After reboot `http://<hostname>.local`
serves a Furtka-branded page on :80 instead of the bare Arch login.
No Authentik / no app store yet — demo shell for the real post-
install work (Robert's area).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
A lot moved since the last docs sweep. Catching everything up in one
batch so a newcomer (or future us) reading the repo isn't lied to.
**README.md roadmap:**
- Walking-skeleton live ISO: upgraded from "screens 1-3 work
end-to-end" to "install runs to completion on a VM and the installed
system logs in and runs `docker ps` without sudo".
- 26.0-alpha release: dropped the "deferred" note — its blocker
(archinstall not completing) is gone; just needs a re-tag when we
like the installer copy.
- Added an explicit "ISO-build in CI" line for the new
`.forgejo/workflows/build-iso.yml`.
- Split the old "mDNS + local CA" item: mDNS is live (hostname baked
in, avahi/nss-mdns in the image), HTTPS via local CA still open.
- Noted post-install reboot button, progress bar, archinstall 4.x
schema work, console welcome, custom_commands docker group join in
the wizard milestone bullet.
**docs/runner-setup.md:**
- Full rewrite for the docker-outside-of-docker architecture we
actually run now (was still describing the DinD sidecar setup).
- Documents the `/data` symlink on the host that makes host-mode
`-v /data/…:/work` resolve — the non-obvious piece that took the
longest to nail down today.
- Describes the two runtime modes (`ubuntu-latest:docker://…` for CI,
`self-hosted:host` for build-iso) and why each exists.
- Adds the `upload-artifact@v3` pin note — v4+ fails on Forgejo with
`GHESNotSupportedError`.
**ops/forgejo-runner/compose.yml + config.yml:**
- Compose now matches what's actually running: DooD (no DinD sidecar),
runs as root so apk can install nodejs + docker-cli at startup,
/var/run/docker.sock bind-mounted.
- Config gets the three explicit label mappings and DooD
`docker_host` + `valid_volumes`.
**.forgejo/workflows/build-iso.yml:**
- Added `paths-ignore` for docs/website/*.md so doc-only commits don't
kick off 5-min ISO rebuilds. Code + ISO overlay changes still
trigger.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Forgejo Actions only speaks the GHES-compatible @actions/artifact
protocol; upload-artifact@v4+ insists on the newer API and fails with
`GHESNotSupportedError`. Pin to v3, which uses the old protocol that
Forgejo implements.
Good news: the ISO itself built end-to-end in ~5m on the runner
(DooD + /data symlink resolved the path-mismatch). Only the upload
failed, and this pins it.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Now that the runner uses docker-outside-of-docker, volume mounts in
`build.sh` (`docker run -v $REPO_ROOT:/work ...`) are interpreted by
host docker — so `$REPO_ROOT` must be a real host path. When the job
runs inside a job container, `$REPO_ROOT` is only valid in the job
container's filesystem namespace and host docker can't find it, hence
`bash: /work/iso/build.sh: No such file or directory`.
Fix: switch `runs-on` to `self-hosted`. Forgejo-runner exposes that
label out of the box and, with no matching container image mapping,
runs steps directly on the runner VM. Checkout writes to a real host
path; `docker run -v …` then mounts a path both the outer CLI and
host docker agree on.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Forgejo-runner's valid_volumes already injects /var/run/docker.sock
into every job container, so the explicit `container.volumes` mount
in the workflow triggered 'Duplicate mount point' and the job never
started. Removed — DOCKER_HOST env is enough.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The DinD setup was the wrong tool here: forgejo-runner runs on host
docker, but it spawned jobs via the DinD sidecar — meaning jobs
were isolated inside DinD's own docker namespace and couldn't reach
`docker-in-docker` by hostname, and couldn't see the
`forgejo-runner_default` network (which only exists on host docker).
Switched the runner (compose.yml + data/config.yml) to talk directly
to host docker via `/var/run/docker.sock` and added the runner user
to the host `docker` group (GID 988) so it can use the socket without
running as root. `valid_volumes` now whitelists the socket so job containers
can mount it too.
Workflow now mounts /var/run/docker.sock into the job container and
points DOCKER_HOST at that unix socket. `./iso/build.sh` then runs
its inner `docker run --privileged archlinux:latest` against the
host daemon — no nested docker.
Tradeoff: this is less isolated than DinD (jobs have full host docker
access — they could spawn arbitrary containers), but on a dedicated
single-user build VM the DooD simplification is worth it.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
forgejo-runner 6.4 filters `--network` out of `container.options`, so
the workflow-level override was silently ignored and the job kept
landing on a per-task network where `docker-in-docker` didn't resolve.
Fixed at the right level by editing the runner's `/data/config.yml`
(`container.network: "forgejo-runner_default"`) and restarting the
forgejo-runner container — every job now joins the shared network so
DOCKER_HOST=tcp://docker-in-docker:2375 just works.
Workflow trimmed back to only what's needed: DOCKER_HOST env pin. The
default runner image (catthehacker/ubuntu:act-latest) already has the
docker CLI.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- build-iso: the job container was on a per-job docker network, so
`docker-in-docker` (the DinD sidecar hostname on
`forgejo-runner_default`) didn't resolve. Pin the container to that
shared network via `container.options: --network forgejo-runner_default`.
catthehacker/ubuntu:act-latest already has the docker CLI, so drop
the apt-get step.
- ci.yml markdown-links: forgejo's action mirror at data.forgejo.org
doesn't carry `lycheeverse/lychee-action`, so `uses:` was 404ing
before the step could even run (rendering continue-on-error moot).
Fully-qualified GitHub URL bypasses the mirror.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three things are broken on origin/main as of 6114cb2, all found in one
red CI run:
- build-iso workflow couldn't reach docker. forgejo-runner's config
sets `docker_host: tcp://docker-in-docker:2375` but that env doesn't
propagate into job containers on `runs-on: ubuntu-latest`, and the
default job image has no docker CLI. Fix: pin `DOCKER_HOST` on the
job and apt-install `docker.io` before invoking `iso/build.sh`.
- Two tests asserted on the pre-4.x archinstall schema:
`creds["root_password"]` (now `!root-password`) and
`cfg["disk_config"]["device"]` / `cfg["users"]` (users moved to
creds; disk_config is now a full `default_layout` dict). Rewrote
the tests to reflect 4.x reality and monkeypatched `build_disk_config`
since its real body imports archinstall, which isn't on CI; the stub
pattern is sketched below.
- Ruff flagged one line of `PROGRESS_PHASES` at 107 chars — collapsed
the column alignment. `ruff format` pulled in a couple of cosmetic
expansions in spawn_archinstall and the tests that had been drifting.
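The stub pattern — build_disk_config is the real name; the module
path, caller, and canned dict shape are hypothetical:
```python
def test_config_builder_without_archinstall(monkeypatch):
    import webinstaller.app as app  # module path assumed

    # build_disk_config's real body imports archinstall (absent on
    # CI), so stub it with a canned 4.x-shaped default_layout dict.
    monkeypatch.setattr(
        app,
        "build_disk_config",
        lambda device: {"config_type": "default_layout", "device_modifications": []},
    )
    cfg = app.build_user_configuration(drive="/dev/vda")  # hypothetical caller
    assert cfg["disk_config"]["config_type"] == "default_layout"
```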
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The "Drive list includes /dev/loop0 and /dev/sr0" rough-edge bullet
claimed the filter hadn't been added yet, but it has — `drives.py`'s
`parse_lsblk_output` skips everything with `TYPE != disk`, so loop
and rom devices never reach the picker. Tested.
Replaced with a note about the remaining real footgun: on bare-metal
installs, the USB stick the user booted from is `TYPE=disk` and would
show up alongside the actual install target, so a user could pick
their boot media by mistake. Not urgent while we test in VMs (the ISO
is a CD-ROM there, already filtered), but flagged so it's visible
when bare-metal testing starts.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds `.forgejo/workflows/build-iso.yml` that runs `./iso/build.sh` and
uploads the resulting ISO as a `furtka-iso` artifact (retained 14 days).
Triggers on `push: branches: [main]` and `workflow_dispatch` only —
feature branches don't pay the 15-20 min build cost. `concurrency`
cancels older runs of the same ref so only the most recent push
produces an artifact.
This is what Robert asked for: push change → download ISO from the
Forgejo run → test without needing a laptop to build.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two user-visible polish passes on top of the walking-skeleton install:
- Console welcome: live ISO's getty no longer shows the bare Arch prompt.
`/etc/hostname` is now `proksi` so avahi advertises `proksi.local`;
a systemd oneshot (`furtka-issue.service`, runs after
network-online.target) regenerates `/etc/issue` via
`/usr/local/bin/furtka-update-issue` to show both
`http://proksi.local:5000` (preferred, via mDNS — avahi and nss-mdns
are already in `packages.extra`) and the raw IP as a fallback for
networks where mDNS is flaky. `agetty --reload` nudges the already-
running login prompt to redraw.
- /install/log now polls a JSON endpoint (`/install/log.json`) every
3 s instead of meta-refresh, so expanding the collapsed log
`<details>` doesn't get eaten by the refresh. Noscript fallback
keeps the meta-refresh for JS-off users. When the install finishes,
the Done state shows a Reboot-now button that POSTs to
`/install/reboot` (guarded server-side to only reboot once status
is "done", so a panicked click mid-pacstrap can't brick the box;
guard sketched after this list).
A confirm() reminds the user to pull the USB / eject the ISO first.
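A sketch of the guard, assuming the existing Flask app — the status
helper and template name are hypothetical:
```python
import subprocess

from flask import render_template

@app.post("/install/reboot")  # `app` is the existing Flask instance
def install_reboot():
    # Refuse unless archinstall has finished: a panicked click
    # mid-pacstrap must not reboot a half-written system.
    if install_status() != "done":  # hypothetical status helper
        return "install still running", 409
    subprocess.run(["systemctl", "reboot"])
    return render_template("rebooting.html")
```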
End-to-end tested on a Proxmox VM 2026-04-14: boot → wizard →
archinstall → Done state → Reboot now → VM came back up → login as
created user → `docker ps` worked.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two tangled changes to the install flow, batched because they're both
small and hit app.py:
1. Phase-based progress bar on /install/log. parse_install_progress()
scans the archinstall log for ordered phase markers ("Wiping
partitions", "Installing packages: ['base'", "Adding bootloader",
"Installation completed without any errors", …) and exposes
percent + user-facing phase label + status (running/done/error).
Template wraps the raw log in a collapsed <details> so the default
view stays calm; the meta-refresh stops once status is terminal.
If archinstall changes its stdout wording the bar stalls on the
last recognized phase — the install itself is unaffected. The scan
is sketched after this list.
2. Drop "docker" from the user's groups in creds and do the
`gpasswd -a <user> docker` via custom_commands instead.
archinstall creates users before pacstrapping the extras list, so
the docker group doesn't exist at user-create time —
caused the second real install to crash with
`gpasswd: group 'docker' does not exist`. custom_commands runs
at the very end, after docker is installed. Username is validated
by USERNAME_RE so no shell injection.
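The phase scan, sketched — PROGRESS_PHASES and the markers come from
this message; the user-facing labels, percent math, and error
detection are assumptions:
```python
PROGRESS_PHASES = [  # (log marker, user-facing label); list abridged
    ("Wiping partitions", "Partitioning disk"),
    ("Installing packages: ['base'", "Installing base system"),
    ("Adding bootloader", "Installing bootloader"),
    ("Installation completed without any errors", "Done"),
]

def parse_install_progress(log_text: str) -> tuple[int, str, str]:
    reached = -1
    for i, (marker, _) in enumerate(PROGRESS_PHASES):
        if marker in log_text:
            reached = i  # furthest recognized phase wins...
    if reached < 0:
        return 0, "Starting", "running"
    # ...so changed archinstall wording stalls the bar, never breaks it.
    done = reached == len(PROGRESS_PHASES) - 1
    percent = (reached + 1) * 100 // len(PROGRESS_PHASES)
    return percent, PROGRESS_PHASES[reached][1], "done" if done else "running"
```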
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Walking-skeleton install on a real VM surfaced two archinstall 4.x
schema breakages that the wizard hit only at runtime:
- `use_entire_disk` was removed as a `config_type`. Now builds a full
`default_layout` disk_config by calling `suggest_single_disk_layout`
(forced ext4 + no separate /home, which bypasses its interactive
prompts) and serializing the returned DeviceModification.
- Credentials keys renamed to plaintext sentinels: `!root-password`
and `!password`. Users with neither `!password` nor `enc_password`
are silently dropped by `User.parse_arguments` — which is why the
first real install booted but wouldn't log in.
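The 4.x credentials shape as described above, sketched — values are
illustrative, and the top-level "users" spelling plus the "sudo" key
are assumptions to verify against the archinstall 4.x source:
```python
creds = {
    "!root-password": "s3cret",
    "users": [
        {
            "username": "daniel",   # illustrative
            "!password": "s3cret",  # omit this AND enc_password and
            "sudo": True,           # User.parse_arguments drops the user
        }
    ],
}
```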
Also rolls in Robert's UX feedback quick-wins: `(Recommended)` prefix
on the default boot entry across GRUB/syslinux/systemd-boot, and
less-jargon hints on the step-1 hostname/username fields. iso/README
loses three stale bullets that described pre-15b876c behaviour.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wires the live-ISO wizard from "shows three screens" to "actually invokes
archinstall on the chosen disk", plus first-pass styling so it stops looking
like raw <h1>/<form>.
Webinstaller flow:
- S1 form gains username/password/password2/language with server-side
validation (hostname/username regex, ≥8 char password, match check).
- /install/run writes user_configuration.json + user_credentials.json
(creds 0600) to FURTKA_STATE_DIR (default /tmp/furtka), then execs
`archinstall --config … --creds … --silent` as a backgrounded subprocess.
- /install/log renders the subprocess output via meta-refresh polling.
- FURTKA_DRY_RUN=1 short-circuits the exec for testing.
- archinstall flag names verified against `archinstall --help` in an
archlinux container before committing.
Drive list:
- drives.py now filters via `lsblk … -o NAME,SIZE,TYPE` keeping TYPE=disk,
so the live ISO's own squashfs (loop) and CD-ROM (rom) stop appearing
as install targets.
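The filter in sketch form — parse_lsblk_output is the real name;
header handling and return shape are assumptions:
```python
def parse_lsblk_output(output: str) -> list[dict[str, str]]:
    disks = []
    for line in output.strip().splitlines():
        parts = line.split(maxsplit=2)
        if len(parts) != 3 or parts == ["NAME", "SIZE", "TYPE"]:
            continue  # skip blanks and the header row
        name, size, dev_type = parts
        if dev_type == "disk":  # drops loop (squashfs) and rom (CD-ROM)
            disks.append({"name": name, "size": size})
    return disks
```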
Boot menu:
- iso/build.sh sed-rebrands "Arch Linux install medium" →
"Furtka Live Installer" across grub/, syslinux/, and efiboot/loader/
entries. Verified zero leftovers against the current releng profile.
Styling:
- static/style.css adopts the website's design tokens (palette,
typography, gate-mark accent), with light + dark via prefers-color-scheme.
- New base.html with header (gate SVG + FURTKA·INSTALLER wordmark + step
indicator) and footer; all install templates extend it.
- Drive picker uses radio cards with score chip; overview uses a summary
table and a destructive "wipe drive" button.
Tests: 17 pass (4 new in test_app.py covering validation + config builders,
2 new in test_drives.py covering the lsblk filter). Ruff clean.
README roadmap updated to mark these done and explicitly defer the
26.0-alpha release until archinstall actually completes end-to-end on a VM.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Hugo static site with an intentionally minimal single-page copy — English
default, German under /de/ — while the project stays pre-alpha. No CMS, no
external theme, no webfonts, no external requests. System-UI sans on a
paper-white / near-black palette with a deep crimson accent; a small
wicket-gate SVG as the sole brand mark.
Hosting: nginx on forge-runner-01 serves /var/www/furtka.org; the upstream
openresty proxy terminates TLS so the VM itself only speaks plain HTTP.
Deploy is ./website/deploy.sh (rsync + remote hugo --minify). One-time VM
bootstrap in ops/nginx/setup-vm.sh.
These two gotchas cost us real time tonight — SeaBIOS failing at ldlinux.c32,
then OVMF rejecting our unsigned GRUB with "Access Denied" until we
disabled Secure Boot in the firmware setup menu. Also flagged the
silent browser-upload truncation and the two known drive-list bugs
surfaced during the first live boot.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
iso/build.sh runs mkarchiso inside a privileged archlinux container,
overlays our customizations onto Arch's stock releng profile
(systemd unit that launches Flask on 0.0.0.0:5000, the webinstaller
under /opt/furtka, extra packages for python/flask/avahi), and drops
a hybrid BIOS/UEFI ISO in iso/out/.
Verified end to end: Proxmox VM (OVMF, Secure Boot off) boots the ISO,
DHCP's onto the LAN, and serves screens 1-3 of the existing wizard at
http://<vm-ip>:5000/install/step1. This is the first point at which
Furtka is something you can run instead of something you can read about.
Two known drive-list bugs surfaced while testing (/dev/loop0 and
/dev/sr0 appear as install targets) — captured in the README roadmap.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
furtka.org registered via Strato 2026-04-13, so the working title is
retired. Python package, managed-gateway NS hostnames, and repo URLs all
follow. The CHANGELOG "Unreleased" section documents the switch so the
history is preserved at the 26.0-alpha → next-release boundary.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Bootstrap script + compose + config checked in under ops/forgejo-runner/
so a second runner is a scripted setup. runner-setup.md corrects the
register label format (<name>:docker://<image>, not bare names) and
documents the Ubuntu systemd-resolved DNS gotcha.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mark release-process + CI work complete. Add two next-session TODOs
for Daniel: stand up the forgejo-runner (without which CI queues
forever) and publish the 26.0-alpha Forgejo Release.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- CHANGELOG.md: Keep-a-Changelog format, [26.0-alpha] entry covering
everything shipped so far (installer webapp, drive scoring, base
archinstall config, wireframes, competitor analysis, wizard flow spec)
- CONTRIBUTING.md: dev setup, conventional commit format, code style
- RELEASING.md: calendar versioning rules (YY.N-stage, no "v" prefix)
and the release workflow (bump changelog, commit, tag, push, create
Forgejo Release)
- docs/runner-setup.md: install + register a forgejo-runner so the
upcoming CI workflow actually executes
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Targeted edits reflecting findings from docs/competitors.md:
- New "Recent signals" subsection under Landscape: Umbrel license
complaints, Umbrel's 4+ year HTTPS refusal (#546), CasaOS
maintenance mode
- "Where we differentiate" bullet 4 replaced: "Arch base (rolling
release)" -> "HTTPS + AGPL from day one" — the actual counter-
positioning shots vs Umbrel per the analysis
- "Gap we're targeting" tightened to include HTTPS-by-default
- Key Decisions table: added rows for locked tech picks (Caddy,
Authentik, NS delegation, local CA) with link to wizard-flow.md
- Roadmap: marked competitor analysis + wizard flow spec complete,
reordered so bootable image is clearly the next blocker, added
Caddy/Authentik bootstrap and managed gateway infra items
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
8-screen first-boot installer spec extending Robert's 4-screen
wireframe with the YunoHost-style post-install pattern (domain,
SSL, diagnostic, confirm). Covers:
- Entry point via https://proksi.local with local CA cert install
- Screens S1-S8, each mapped to archinstall config fields or side
effects (SSL cert issuance, DNS delegation, diagnostic gates)
- Data model mapping wizard fields to user_configuration.json +
user_credentials.json
- Locked tech picks with rejected alternatives: Caddy (reverse
proxy), Authentik (SSO), NS delegation (managed gateway DNS),
local CA (HTTPS on proksi.local)
- Open questions for Robert: Backend on/off meaning, local CA vs
Tailscale ACME, UI framework choice, language list, S2 auto-setup
branch behavior
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>