docs: refresh resource-manager.md to reflect shipped v1

Open-questions section is gone — all seven were answered live in
session and are now codified in the furtka/ package. Doc now
describes the actual contract (manifest schema, lifecycle, code
map) instead of a planning scaffold. Out-of-scope list is preserved
so future contributors don't propose things that were deliberately
deferred.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Daniel Maksymilian Syrnicki 2026-04-15 10:31:28 +02:00
parent c6ed7a8159
commit a90582a3a3


# Resource Manager
The layer between Furtka apps and the underlying system (disk, Docker, network). Apps don't touch Docker or the filesystem directly — they declare what they need in a manifest and the Resource Manager provisions, runs, and tracks them.
**Status:** v1 shipped 2026-04-15 in commits `cfc4c0b..c6ed7a8`. First consumer is the LAN fileshare app at `apps/fileshare/`. Web UI at `http://<host>.local/apps`. Not yet validated on a real VM with Docker — that's the first test of the next session.
For the conversation that produced these decisions, and the live Q&A, see `~/.claude/plans/stateful-juggling-pike.md`.
---
## Anatomy of an app
Every Furtka app is a directory containing exactly:
```
/var/lib/furtka/apps/fileshare/
    manifest.json         # required — the contract
    docker-compose.yaml   # required — container lifecycle
    .env.example          # optional — bootstraps .env on first install
    .env                  # optional — user-edited secrets (preserved across upgrades)
    icon.svg              # optional but referenced in the manifest
```
The directory **name** is the app name. The manifest's `name` field must match it once installed (the scanner enforces this). When you install from an arbitrary source path the manifest's name decides where it lands — so `furtka app install /tmp/some-fork/` works regardless of what the source folder is called.
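The name-match rule is small enough to sketch. A minimal illustration — the real check lives in `furtka/scanner.py` per the code map, and the helper name here is hypothetical:

```python
import json
from pathlib import Path

def check_name(app_dir: Path) -> None:
    """Enforce the rule: the manifest's `name` must equal the install
    folder's name. Illustrative sketch, not the shipped scanner code."""
    manifest = json.loads((app_dir / "manifest.json").read_text())
    if manifest["name"] != app_dir.name:
        raise ValueError(
            f"manifest name {manifest['name']!r} != folder {app_dir.name!r}"
        )
```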
### Manifest schema
JSON, all fields required:
```json
{
  "name": "fileshare",
  "display_name": "Network Files",
  "version": "0.1.0",
  "description": "SMB share for LAN devices",
  "volumes": ["files"],
  "ports": [445, 139],
  "icon": "icon.svg"
}
```
- `name` — machine id, must equal the install folder name.
- `display_name` — shown in the UI.
- `version` — free-form string (semver expected, not enforced).
- `volumes` — list of short names. Furtka creates each as `furtka_<app>_<vol>` (collision-free across apps). Compose files MUST reference the namespaced volume name and declare it with `external: true`.
- `ports` — informational for the UI. Compose owns the actual port binding.
- `icon` — relative path inside the app folder.
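The volume-namespacing rule is mechanical — a one-line sketch (assuming a helper along the lines of what `furtka/manifest.py` provides; the function name here is illustrative):

```python
def namespaced_volume(app: str, volume: str) -> str:
    """Map a manifest's short volume name to its Docker volume name,
    following the furtka_<app>_<vol> convention described above."""
    return f"furtka_{app}_{volume}"

namespaced_volume("fileshare", "files")  # → "furtka_fileshare_files"
```

Prefixing with the app name is what makes two apps declaring a volume called `files` collision-free.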
---
## Lifecycle
### Discovery: boot-scan
`furtka-reconcile.service` (oneshot, after `docker.service`) runs `furtka reconcile` at every boot. The reconciler walks `/var/lib/furtka/apps/*`, validates each manifest, ensures every declared volume exists, then `docker compose up -d` per app. Filesystem is the only source of truth — no separate index, no DB.
A failed reconcile of one app does not abort the others. The CLI exits non-zero if any app errored, so systemd marks the unit red, but the healthy apps still come up.
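The error-isolation behaviour looks roughly like this — a sketch only, the real loop is `furtka/reconciler.py`, and the manifest/volume steps are elided:

```python
import subprocess
from pathlib import Path

def reconcile(apps_root: Path = Path("/var/lib/furtka/apps")) -> int:
    """Per-app loop: one broken app must not block the others.
    Returns non-zero if any app failed, so systemd marks the unit
    red while the healthy apps still come up."""
    failed = 0
    for app_dir in sorted(p for p in apps_root.iterdir() if p.is_dir()):
        try:
            # ... validate manifest, ensure declared volumes exist ...
            subprocess.run(
                ["docker", "compose", "up", "-d"],
                cwd=app_dir, check=True,
            )
        except Exception as exc:
            print(f"reconcile: {app_dir.name} failed: {exc}")
            failed += 1
    return 1 if failed else 0
```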
### Install
- `furtka app install <path>` — install from a local folder.
- `furtka app install <name>` — falls back to `/opt/furtka/apps/<name>/` (apps bundled with the ISO).
- Web UI: click Install on a card under "Available to install".
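The path-vs-name resolution above can be sketched in a few lines (the helper name is hypothetical; the fallback directory is the bundled-apps path from the list):

```python
from pathlib import Path

BUNDLED = Path("/opt/furtka/apps")

def resolve_source(arg: str) -> Path:
    """`furtka app install <path-or-name>`: an existing path wins,
    otherwise fall back to the apps bundled with the ISO. Sketch only."""
    p = Path(arg)
    return p if p.exists() else BUNDLED / arg
```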
The installer copies files into `/var/lib/furtka/apps/<name>/`, preserves any existing user `.env`, bootstraps `.env` from `.env.example` on first install, and `chmod 0600` on `.env`.
**Placeholder secrets are refused.** If `.env` ends up containing values listed in `furtka.installer.PLACEHOLDER_SECRETS` (currently `{"changeme"}`), install raises `InstallError` and the reconciler is not run. Files are left in place so the user can vim the `.env` and re-run install.
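The refusal check is simple; a sketch that assumes plain `KEY=VALUE` lines (the shipped parser in `furtka/installer.py` may differ — `PLACEHOLDER_SECRETS` and `InstallError` are the names from the doc above):

```python
PLACEHOLDER_SECRETS = {"changeme"}  # mirrors furtka.installer.PLACEHOLDER_SECRETS

class InstallError(Exception):
    pass

def check_env(env_text: str) -> None:
    """Raise InstallError if any value in the .env is a known placeholder."""
    for line in env_text.splitlines():
        if "=" not in line or line.lstrip().startswith("#"):
            continue  # skip comments and non-assignments
        _, value = line.split("=", 1)
        if value.strip().strip('"') in PLACEHOLDER_SECRETS:
            raise InstallError(f"placeholder secret in .env: {line.strip()}")
```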
### Remove
- `furtka app remove <name>` — `docker compose down`, then delete the app folder.
- Web UI: Remove button.
**Volumes are NEVER deleted.** Reinstall recovers the data. Manual `docker volume rm furtka_<app>_<vol>` if you really want to wipe.
---
## Backend
`furtka/api.py` runs as `furtka-api.service` — Python stdlib `http.server` (no Flask), bound to `127.0.0.1:7000`. Caddy reverse-proxies `/api/*` and `/apps*` from `:80`.
Endpoints:
- `GET /` and `/apps` — self-contained HTML UI.
- `GET /api/apps` — installed apps as JSON.
- `GET /api/bundled` — apps available in `/opt/furtka/apps/` that aren't installed.
- `POST /api/apps/install` `{"name": "..."}` — install/reinstall.
- `POST /api/apps/remove` `{"name": "..."}` — remove (folder, not volume).
The UI has **no authentication**. It shouts the warning at the top. Authentik integration is the proper fix later.
---
## Out of scope for v1
These are deliberate omissions, not forgotten work. Adding any of them is a discussed design change.
- SQL database — filesystem is authoritative, full stop.
- Volume sharing between apps (would be the first DB use case).
- Auth on the web UI.
- TLS on `.local` (separate problem — see commit history around mDNS for the reasoning).
- Catalog repo — `install <name>` only resolves bundled apps, no network catalog.
- Auto-updates of installed apps.
- In-UI `.env` editor — re-install after editing currently needs SSH.
---
## Code map
| File | Purpose |
| --- | --- |
| `furtka/manifest.py` | JSON schema validation, `Manifest` dataclass, namespacing helper |
| `furtka/scanner.py` | Walks `/var/lib/furtka/apps/`, returns `ScanResult`s (broken manifests = error, not exception) |
| `furtka/reconciler.py` | Drives the per-app loop; isolates errors so one broken app doesn't block others |
| `furtka/installer.py` | Copy-from-source, `.env` bootstrap + 0600, placeholder rejection |
| `furtka/dockerops.py` | `docker volume` + `docker compose` subprocess wrappers |
| `furtka/api.py` | HTTP server + HTML UI |
| `furtka/cli.py` | `furtka app list/install/remove`, `furtka reconcile`, `furtka serve` |
| `apps/fileshare/` | First consumer — SMB share via `dperson/samba` |
| `iso/build.sh` | Tarballs `furtka/` + `apps/` into the live ISO at build time |
| `webinstaller/app.py` | `_resource_manager_commands()` + new systemd units (reconcile + api) for archinstall custom_commands |