
Resource Manager

The layer between Furtka apps and the underlying system (disk, Docker, network). Apps don't touch Docker or the filesystem directly — they declare what they need in a manifest and the Resource Manager provisions, runs, and tracks them.

Status: v1 shipped 2026-04-15 in commits cfc4c0b..c6ed7a8. First consumer is the LAN fileshare app at apps/fileshare/. Web UI at http://<host>.local/apps. Not yet validated on a real VM with Docker — that's the first test of the next session.

For the conversation that produced these decisions and the live Q&A, see ~/.claude/plans/stateful-juggling-pike.md.


Anatomy of an app

Every Furtka app is a directory containing exactly:

manifest.json       # required — the contract
docker-compose.yaml # required — container lifecycle
.env.example        # optional — bootstraps .env on first install
.env                # optional — user-edited secrets (preserved across upgrades)
icon.svg            # optional but referenced in the manifest

The directory name is the app name. The manifest's name field must match it once installed (the scanner enforces this). When you install from an arbitrary source path, the manifest's name decides where it lands — so furtka app install /tmp/some-fork/ works regardless of what the source folder is called.

Manifest schema

JSON, all fields required:

{
  "name": "fileshare",
  "display_name": "Network Files",
  "version": "0.1.0",
  "description": "SMB share for LAN devices",
  "volumes": ["files"],
  "ports": [445, 139],
  "icon": "icon.svg"
}
  • name — machine id, must equal the install folder name.
  • display_name — shown in the UI.
  • version — free-form string (semver expected, not enforced).
  • volumes — list of short names. Furtka creates each as furtka_<app>_<vol> (collision-free across apps). Compose files MUST reference the namespaced name as external: true.
  • ports — informational for the UI. Compose owns the actual port binding.
  • icon — relative path inside the app folder.
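The namespacing rule can be sketched as a one-liner (illustrative — the real helper lives in furtka/manifest.py and its exact name here is an assumption):

```python
def volume_name(app: str, vol: str) -> str:
    """Namespace a short volume name as furtka_<app>_<vol> so apps can't collide."""
    return f"furtka_{app}_{vol}"

# The fileshare manifest's "files" volume becomes "furtka_fileshare_files";
# the app's compose file must declare that full name with external: true.
```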

Lifecycle

Discovery: boot-scan

furtka-reconcile.service (oneshot, after docker.service) runs furtka reconcile at every boot. The reconciler walks /var/lib/furtka/apps/*, validates each manifest, ensures every declared volume exists, then docker compose up -d per app. Filesystem is the only source of truth — no separate index, no DB.

A failed reconcile of one app does not abort the others. The CLI exits non-zero if any app errored, so systemd marks the unit red, but the healthy apps still come up.
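The error-isolation contract can be modeled like this (a simplified sketch, not the actual furtka/reconciler.py code — the real loop also validates manifests and creates missing volumes):

```python
def reconcile(apps, bring_up):
    """Apply bring_up to every app; collect failures instead of aborting.

    Returns a list of (app, error) pairs. A non-empty list maps to a
    non-zero CLI exit, so systemd marks furtka-reconcile.service failed,
    but every healthy app has already been brought up by then.
    """
    failed = []
    for app in apps:
        try:
            bring_up(app)               # validate manifest, ensure volumes, compose up -d
        except Exception as exc:
            failed.append((app, exc))   # record the failure and keep going
    return failed

started = []
def fake_up(app):
    if app == "broken":
        raise RuntimeError("bad manifest")
    started.append(app)

errors = reconcile(["a", "broken", "b"], fake_up)
# "a" and "b" still came up; only "broken" is reported.
```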

Install

  • furtka app install <path> — install from a local folder.
  • furtka app install <name> — falls back to /opt/furtka/apps/<name>/ (apps bundled with the ISO).
  • Web UI: click Install on a card under "Available to install".

The installer copies files into /var/lib/furtka/apps/<name>/, preserves any existing user .env, bootstraps .env from .env.example on first install, and chmod 0600 on .env.
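The .env handling reduces to this (an illustrative sketch of the stated behavior; the shipped logic is in furtka/installer.py):

```python
import shutil
from pathlib import Path

def bootstrap_env(app_dir: Path) -> None:
    """Create .env from .env.example on first install only, then lock it down."""
    env = app_dir / ".env"
    example = app_dir / ".env.example"
    if not env.exists() and example.exists():
        shutil.copy(example, env)   # never overwrite a user-edited .env
    if env.exists():
        env.chmod(0o600)            # secrets readable by the owner only
```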

Placeholder secrets are refused. If .env ends up containing values listed in furtka.installer.PLACEHOLDER_SECRETS (currently {"changeme"}), install raises InstallError and the reconciler is not run. Files are left in place so the user can vim the .env and re-run install.
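A minimal model of that check (the real set and parsing live in furtka/installer.py; this sketch only handles simple KEY=value lines):

```python
PLACEHOLDER_SECRETS = {"changeme"}   # mirrors furtka.installer.PLACEHOLDER_SECRETS

class InstallError(Exception):
    pass

def refuse_placeholders(env_text: str) -> None:
    """Raise InstallError if any value in the .env is a known placeholder."""
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        _, value = line.split("=", 1)
        if value.strip().strip('"').lower() in PLACEHOLDER_SECRETS:
            raise InstallError(f"placeholder secret left in .env: {line!r}")
```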

Remove

  • furtka app remove <name> — docker compose down, then delete the app folder.
  • Web UI: Remove button.

Volumes are NEVER deleted; a reinstall recovers the data. Run docker volume rm furtka_<app>_<vol> manually if you really want to wipe it.
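The remove semantics can be sketched like so (simplified; the real code spans furtka/cli.py and furtka/dockerops.py, and the injectable run parameter is purely for illustration):

```python
import shutil
import subprocess
from pathlib import Path

def remove_app(name: str, apps_dir: Path, run=subprocess.run) -> None:
    """Stop the app's containers and delete its folder. Volumes survive."""
    app_dir = apps_dir / name
    run(["docker", "compose", "down"], cwd=app_dir, check=True)
    shutil.rmtree(app_dir)   # deliberately NO `docker volume rm` — data survives
```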


Backend

furtka/api.py runs as furtka-api.service — Python stdlib http.server (no Flask), bound to 127.0.0.1:7000. Caddy reverse-proxies /api/* and /apps* from :80.

Endpoints:

  • GET / and /apps — self-contained HTML UI.
  • GET /api/apps — installed apps as JSON.
  • GET /api/bundled — apps available in /opt/furtka/apps/ that aren't installed.
  • POST /api/apps/install {"name": "..."} — install/reinstall.
  • POST /api/apps/remove {"name": "..."} — remove (folder, not volume).
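Installing over the local API could look like this (a urllib sketch against the endpoints listed above; the response body's shape is an assumption):

```python
import json
import urllib.request

def install_request(name: str, base: str = "http://127.0.0.1:7000") -> urllib.request.Request:
    """Build the POST /api/apps/install request the backend expects."""
    return urllib.request.Request(
        f"{base}/api/apps/install",
        data=json.dumps({"name": name}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# urllib.request.urlopen(install_request("fileshare"))  # needs the live service
```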

The UI has no authentication. It shouts a warning about that at the top of the page. Authentik integration is the proper fix later.


Out of scope for v1

These are deliberate omissions, not forgotten work. Adding any of them is a discussed design change.

  • SQL database — filesystem is authoritative, full stop.
  • Volume sharing between apps (would be the first DB use case).
  • Auth on the web UI.
  • TLS on .local (separate problem — see commit history around mDNS for the reasoning).
  • Catalog repo — install <name> only resolves bundled apps, no network catalog.
  • Auto-updates of installed apps.
  • In-UI .env editor — re-install after editing currently needs SSH.

Code map

  • furtka/manifest.py — JSON schema validation, Manifest dataclass, namespacing helper
  • furtka/scanner.py — walks /var/lib/furtka/apps/, returns ScanResults (broken manifests = error, not exception)
  • furtka/reconciler.py — drives the per-app loop; isolates errors so one broken app doesn't block others
  • furtka/installer.py — copy-from-source, .env bootstrap + 0600, placeholder rejection
  • furtka/dockerops.py — docker volume + docker compose subprocess wrappers
  • furtka/api.py — HTTP server + HTML UI
  • furtka/cli.py — furtka app list/install/remove, furtka reconcile, furtka serve
  • apps/fileshare/ — first consumer: SMB share via dperson/samba
  • iso/build.sh — tarballs furtka/ + apps/ into the live ISO at build time
  • webinstaller/app.py — _resource_manager_commands() + new systemd units (reconcile + api) for archinstall custom_commands