feat(apps): app-to-app dependencies with install + start hooks
Some checks failed
Build ISO / build-iso (push) Successful in 21m39s
CI / lint (push) Failing after 28s
CI / test (push) Successful in 1m29s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 14s
Release / release (push) Successful in 12m2s

Manifests gain an optional `requires` array. Each entry points at
another app and may declare `on_install` + `on_start` hook scripts
that live in the *provider's* folder and run inside its container
via `docker compose exec`. Hook stdout (KEY=VALUE + optional
FURTKA_JSON: sentinel) gets merged into the consumer's .env; the
placeholder-secret check re-runs over the merged file. Provider apps
that aren't installed get auto-installed first (topo order, cycle
detection, explicit UI confirm). Removing an app is blocked while
other installed apps require it. Reconcile now visits apps in
dependency order so consumers' on_start hooks fire against already-up
providers; per-app error isolation skips just the offending consumer's
compose_up.

Release 26.17-alpha.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Daniel Maksymilian Syrnicki 2026-05-11 19:39:10 +02:00
parent 863ffa9737
commit 8e1f817d85
19 changed files with 2140 additions and 52 deletions


@@ -7,6 +7,76 @@ This project uses calendar versioning: `YY.N-stage` (e.g. `26.0-alpha` = 2026, r
## [Unreleased]
## [26.17-alpha] - 2026-05-11
### Added
- **App-to-app dependencies.** Manifests gain an optional `requires`
array; each entry names a provider app plus two optional hook scripts
that live in the *provider's* folder. `on_install` runs once via
`docker compose exec` against the provider's running container while
the consumer is being installed (use case: `mosquitto_passwd` a new
MQTT user for the consumer). `on_start` runs every boot during
reconcile, before the consumer's container starts (use case: make
sure the user still exists after a Mosquitto wipe). Hook stdout
parses as `KEY=VALUE` lines and optional `FURTKA_JSON: {…}` sentinel
lines, both validated against the existing `SETTING_NAME` regex; the
values get merged into the consumer's `.env` (hook wins on conflict)
and the placeholder-secret check runs again over the merged file so
a hook returning `MQTT_PASS=changeme` is refused the same way an
unedited `.env.example` is.
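  For reference, the hook-output contract can be sketched roughly like so
  (a minimal stand-in: `parse_hook_output`, `merge_env`, and the simplified
  `SETTING_NAME` regex here are illustrative, not the shipped helpers):

  ```python
  import json
  import re

  # Simplified stand-in for the manifest's SETTING_NAME regex (assumption).
  SETTING_NAME = re.compile(r"^[A-Z][A-Z0-9_]*$")

  def parse_hook_output(stdout: str) -> dict[str, str]:
      """Parse KEY=VALUE lines plus optional `FURTKA_JSON: {...}` sentinel lines."""
      values: dict[str, str] = {}
      for line in stdout.splitlines():
          line = line.strip()
          if not line:
              continue
          if line.startswith("FURTKA_JSON:"):
              # Everything after the sentinel is a JSON object of settings.
              blob = json.loads(line[len("FURTKA_JSON:"):])
              for k, v in blob.items():
                  if SETTING_NAME.match(k):
                      values[k] = str(v)
              continue
          if "=" in line:
              k, _, v = line.partition("=")
              if SETTING_NAME.match(k):
                  values[k] = v
      return values

  def merge_env(existing: dict[str, str], hook: dict[str, str]) -> dict[str, str]:
      """Merge hook output into the consumer's .env values; hook wins on conflict."""
      return {**existing, **hook}
  ```

  Non-matching lines (log noise, lowercase keys) simply fall through, which is
  why a hook can be chatty on stdout without corrupting the consumer's `.env`.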
- **`POST /api/apps/install/plan`.** New read-only endpoint that
returns the topo-sorted install order for a target app plus per-app
summaries (display_name, version, has_settings, installed flag). The
catalog UI calls this before opening the settings dialog so it can
show a confirm modal — "Installing zigbee2mqtt also installs
Mosquitto" — before anything mutates. Circular dependencies surface
as `400 {error: "circular dependency: A -> B -> A"}`; missing
providers as `400 {error: "required app 'X' not found …"}`.
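  A plan response might look like this (hypothetical values; field names taken
  from the entry above), and the UI's "needs confirm?" decision is a one-line
  filter over `to_install`:

  ```python
  # Hypothetical example body for POST /api/apps/install/plan.
  plan = {
      "target": "zigbee2mqtt",
      "install_order": ["mosquitto", "zigbee2mqtt"],
      "already_installed": [],
      "to_install": ["mosquitto", "zigbee2mqtt"],
      "summaries": [
          {"name": "mosquitto", "display_name": "Mosquitto", "installed": False},
          {"name": "zigbee2mqtt", "display_name": "Zigbee2MQTT", "installed": False},
      ],
  }

  # Same filter the catalog UI applies before deciding whether to show
  # the dependency-confirm modal.
  transitive = [n for n in plan["to_install"] if n != plan["target"]]
  needs_confirm = bool(transitive)
  ```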
- **`/var/lib/furtka/install-plan.json`** (overridable via
`FURTKA_INSTALL_PLAN`). The HTTP install endpoint writes this before
it spawns the systemd-run background job so the runner knows the
full chain to pull → create volumes → fire hooks → `compose up`,
in plan order. The runner consumes the file after reading it so a stale
plan from a previous install can't accidentally steer the next one.
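  The consume-once behaviour can be sketched as (hypothetical helper names;
  the real implementation lives in `install_runner`):

  ```python
  import json
  from pathlib import Path

  def write_plan(path: Path, target: str, to_install: list[str]) -> None:
      """Write the handoff file the background runner will pick up."""
      path.parent.mkdir(parents=True, exist_ok=True)
      path.write_text(json.dumps({"target": target, "to_install": to_install}))

  def consume_plan(path: Path) -> dict:
      """Read the plan, then unlink it so a stale file can't steer a later install."""
      data = json.loads(path.read_text())
      path.unlink()  # consume: the next install must write its own plan
      return data
  ```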
### Changed
- **`furtka reconcile` now visits apps in dependency order, not
alphabetical.** Topo-sort over `requires` puts providers before
consumers so a consumer's `on_start` hook can talk to an already-up
provider. Within a tier, ties stay alphabetical so boot logs are
still deterministic across reboots. Apps with unresolvable `requires`
(missing provider) are visited last; the per-app error-isolation in
reconcile then catches them without aborting the whole sweep.
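  The ordering rule (providers first, alphabetical ties within a tier) is
  Kahn's algorithm with a sorted ready queue; a self-contained sketch over a
  toy graph, not the module's real API:

  ```python
  def topo_alpha(requires: dict[str, set[str]]) -> list[str]:
      """Providers before consumers; alphabetical tie-break within a tier."""
      pending = {app: set(deps) for app, deps in requires.items()}
      consumers: dict[str, list[str]] = {app: [] for app in pending}
      for app, deps in pending.items():
          for dep in deps:
              consumers[dep].append(app)
      # Seed with apps that have no providers, alphabetically.
      ready = sorted(app for app, deps in pending.items() if not deps)
      out: list[str] = []
      while ready:
          app = ready.pop(0)  # alphabetically smallest first
          out.append(app)
          for c in consumers[app]:
              pending[c].discard(app)
              if not pending[c]:
                  ready.append(c)
                  ready.sort()  # keep ties deterministic
      return out
  ```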
- **`POST /api/apps/install` requires `confirm_dependencies: true`**
when installing a named app would pull in transitive providers.
Without the flag, the endpoint returns `409` plus the full plan body
so the UI can render the confirm dialog without a second round-trip.
Lone-target installs (no transitive deps) keep the existing
one-click flow — no UX change for `fileshare`-style standalone apps.
- **`furtka app install <name>` and the web UI now install transitive
dependencies automatically.** `furtka app install /path/to/dir`
stays as today (single-app, dev/test workflow).
- **`compose_exec` and `compose_exec_script` helpers** in
`furtka/dockerops.py`. Both pass `-T` (no TTY) so they work from the
install runner and from reconcile; both raise `DockerError` on
non-zero exit or timeout. `compose_exec_script` streams the script
body via stdin to `sh -s` so hooks don't need to be baked into the
provider's container image.
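  The `sh -s` stdin trick works the same without docker; a minimal local
  demonstration with plain `sh` (hypothetical wrapper name, mirroring what
  `compose_exec_script` does through `docker compose exec -T <service> sh -s`):

  ```python
  import subprocess

  def run_script_via_stdin(script: bytes, timeout: float = 10.0) -> str:
      """Pipe a script body to `sh -s`; nothing has to exist on disk inside
      the target environment, the shell reads the program from stdin."""
      proc = subprocess.run(
          ["sh", "-s"],
          input=script,
          capture_output=True,
          timeout=timeout,
      )
      if proc.returncode != 0:
          raise RuntimeError(proc.stderr.decode("utf-8", "replace"))
      return proc.stdout.decode("utf-8", "replace")
  ```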
### Notes
- Hook target service: v1 auto-picks the *first* service in the
provider's compose config. Works for Mosquitto, Postgres, Redis.
Multi-service providers (Authentik server+worker) will need an
optional `service` field on the requirement entry; deferred until a
real case lands.
- Hook timeouts: `on_install` 60 s, `on_start` 30 s. Hardcoded for
v1 — revisit if a DB seed legitimately needs longer.
- Removing an app is now blocked (`409 {dependents: […]}` from the
API, exit 2 from the CLI) when other installed apps require it.
## [26.16-alpha] - 2026-05-10
### Added


@@ -322,6 +322,16 @@ details.log-details[open] > summary { color: var(--fg); }
}
.modal .error.show,
.login-wrap .error.show { display: block; }
.modal .dep-list {
margin: 0 0 1rem;
padding: 0.75rem 1rem 0.75rem 1.75rem;
background: var(--bg);
border: 1px solid var(--border);
border-radius: var(--r-sm);
font-size: 0.9rem;
line-height: 1.4;
}
.modal .dep-list li { margin: 0.15rem 0; }
/* Login + first-run setup page. Shares .wrap's max-width so the form
sits in the same column the rest of the app uses, just without the


@@ -21,7 +21,7 @@ import time
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer
from furtka import auth, deps, dockerops, install_runner, installer, reconciler, sources
from furtka.manifest import ManifestError, load_manifest
from furtka.paths import apps_dir, static_www_dir
from furtka.scanner import scan
@@ -152,7 +152,10 @@ const modal = {
error: document.getElementById('modal-error'),
submit: document.getElementById('modal-submit'),
cancel: document.getElementById('modal-cancel'),
// current: { name, action: 'install' | 'edit', confirmDeps?: bool }
// confirmDeps == true means the dependency-confirm step was already passed,
// so submitModal sends confirm_dependencies:true to the API.
current: null,
};
modal.cancel.addEventListener('click', () => closeModal());
@@ -167,7 +170,7 @@ function closeModal() {
modal.current = null;
}
async function openSettingsDialog(name, action, opts = {}) {
const r = await fetch(`/api/apps/${encodeURIComponent(name)}/settings`);
if (!r.ok) {
document.getElementById('log').textContent =
@@ -175,7 +178,7 @@ async function openSettingsDialog(name, action) {
return;
}
const data = await r.json();
modal.current = { name, action, confirmDeps: !!opts.confirmDeps };
modal.title.textContent = data.display_name || data.name;
modal.long.textContent = data.description_long || data.description || '';
modal.long.style.display = modal.long.textContent ? '' : 'none';
@@ -221,10 +224,30 @@ modal.submit.addEventListener('click', submitModal);
const INSTALL_STAGE_LABELS = {
'pulling_image': 'Image wird heruntergeladen…',
'creating_volumes': 'Speicherbereiche werden erstellt…',
'running_hooks': 'Verknüpfungen werden eingerichtet…',
'starting_container': 'Container wird gestartet…',
'done': 'Fertig',
};
// Confirm dialog for transitive dependencies: "Installing X also installs
// Y, Z; proceed?". Reuses the existing modal CSS/structure; the submit
// button on confirm reopens openSettingsDialog with confirmDeps=true.
async function openDependencyConfirmDialog(name, plan) {
modal.current = { name, action: 'install-confirm-deps' };
modal.title.textContent = `Install ${name}: dependencies required`;
const transitive = plan.summaries.filter(s => s.name !== name && !s.installed);
const long = transitive.length === 1
? `Installing ${name} requires ${transitive[0].display_name || transitive[0].name}. It will be installed and configured automatically.`
: `Installing ${name} requires ${transitive.length} additional apps. They will be installed and configured automatically.`;
modal.long.textContent = long;
modal.long.style.display = '';
modal.form.innerHTML = '<ul class="dep-list">'
+ transitive.map(s => `<li><strong>${esc(s.display_name || s.name)}</strong>${s.description ? ': ' + esc(s.description) : ''}</li>`).join('')
+ '</ul>';
modal.submit.textContent = 'Install all and continue';
modal.backdrop.classList.add('open');
}
async function pollInstallStatus(original) {
// Two-minute ceiling: Jellyfin over a slow DSL line can take ~90s
// just on the image pull. Beyond that something's stuck — the
@@ -261,7 +284,15 @@ async function pollInstallStatus(original) {
async function submitModal() {
if (!modal.current) return;
const { name, action, confirmDeps } = modal.current;
// Two-step dance: when this is the dep-confirm step, transition straight
// into the settings dialog with confirmDeps flagged so the actual install
// POST carries `confirm_dependencies: true`.
if (action === 'install-confirm-deps') {
closeModal();
await openSettingsDialog(name, 'install', { confirmDeps: true });
return;
}
const values = {};
for (const input of modal.form.querySelectorAll('input')) {
// In edit mode, skip password fields left blank; server keeps existing.
@@ -276,7 +307,9 @@ async function submitModal() {
const url = action === 'install'
? '/api/apps/install'
: `/api/apps/${encodeURIComponent(name)}/settings`;
const body = action === 'install'
? { name, settings: values, ...(confirmDeps ? { confirm_dependencies: true } : {}) }
: { settings: values };
const r = await fetch(url, {
method: 'POST',
headers: {'Content-Type': 'application/json'},
@@ -363,8 +396,37 @@ async function refresh() {
}
async function handleButton(op, name, btn) {
if (op === 'install') {
// Check dependencies before opening the settings form. If the target
// has transitive providers we need to install, show the confirm modal
// first; the user then proceeds to the regular settings form, which
// POSTs with confirm_dependencies:true.
try {
const r = await fetch('/api/apps/install/plan', {
method: 'POST',
headers: {'Content-Type': 'application/json'},
body: JSON.stringify({name}),
});
const plan = await r.json();
if (!r.ok) {
document.getElementById('log').textContent =
`[install ${name}] HTTP ${r.status}\\n` + JSON.stringify(plan, null, 2);
return;
}
const transitive = (plan.to_install || []).filter(n => n !== name);
if (transitive.length > 0) {
openDependencyConfirmDialog(name, plan);
return;
}
} catch (e) {
// Network blip: fall through to the legacy single-app path; the API
// would reject again if deps actually were required.
}
openSettingsDialog(name, 'install');
return;
}
if (op === 'edit') {
openSettingsDialog(name, 'edit');
return;
}
// Reinstall + update + remove are direct actions, no form.
@@ -390,6 +452,9 @@ async function handleButton(op, name, btn) {
? ` updated ${data.services.length} service(s)`
: ' — already up to date';
}
if (op === 'remove' && r.status === 409 && data.dependents) {
header += ` blocked by: ${data.dependents.join(', ')}`;
}
document.getElementById('log').textContent = header + '\\n' + JSON.stringify(data, null, 2);
// Reinstall dispatches an async install the same way the modal does;
// follow the background job on the button label until terminal.
@@ -596,6 +661,10 @@ def _manifest_summary(m, app_dir=None):
# Optional template URL with `{host}` placeholder; frontend
# substitutes against location.hostname at render time.
"open_url": m.open_url,
"requires": [
{"app": req.app, "on_install": req.on_install, "on_start": req.on_start}
for req in m.requires
],
}
@@ -695,7 +764,41 @@ def _do_get_settings(name):
_INSTALL_TERMINAL_STAGES = frozenset({"done", "error"})
def _do_install_plan(name):
"""Compute a dependency plan for installing `name`.
Read-only: no FS mutation, doesn't take the install lock. The UI calls
this before opening the settings dialog so it can show a "this will also
install X, Y" confirm step when the target has transitive deps.
"""
try:
plan = deps.plan_install(name)
except deps.DependencyError as e:
return 400, {"error": str(e)}
summaries: list[dict] = []
for app_name in plan.install_order:
m, _values, installed = _load_manifest_for(app_name)
if m is None:
return 400, {"error": f"could not load manifest for {app_name!r}"}
# Per-entry source folder for icon resolution — installed wins.
if installed:
src_dir = apps_dir() / app_name
else:
resolved = sources.resolve_app_name(app_name)
src_dir = resolved.path if resolved is not None else None
summary = _manifest_summary(m, src_dir)
summary["installed"] = installed
summaries.append(summary)
return 200, {
"target": plan.target,
"install_order": list(plan.install_order),
"already_installed": sorted(plan.already_installed),
"to_install": list(plan.to_install),
"summaries": summaries,
}
def _do_install(name, settings=None, confirm_dependencies=False):
"""Kick off an app install. Synchronous sync-phase + async docker-phase. """Kick off an app install. Synchronous sync-phase + async docker-phase.
Fast parts run inline so validation failures come back as immediate Fast parts run inline so validation failures come back as immediate
@ -722,6 +825,25 @@ def _do_install(name, settings=None):
) )
} }
# Resolve dependencies before taking the lock so a 4xx (cycle, missing
# provider) comes back fast without ever touching install state.
try:
plan = deps.plan_install(name)
except deps.DependencyError as e:
return 400, {"error": str(e)}
# If installing `name` pulls in transitive providers, require an explicit
# `confirm_dependencies: true` so the UI gets a chance to show the user
# what's about to land. The 409 body carries the plan so the UI can
# render the confirm modal without a second `/plan` round-trip.
transitive = [n for n in plan.to_install if n != name]
if transitive and not confirm_dependencies:
status, body = _do_install_plan(name)
if status != 200:
return status, body
body["error"] = "additional apps required — set confirm_dependencies: true"
return 409, body
# Fast-fail if another install is already in flight. Lock lives under
# /run/ so a previous reboot clears it automatically.
try:
@@ -730,13 +852,32 @@ def _do_install(name, settings=None):
return 409, {"error": str(e)}
try:
try:
if plan.to_install:
targets = installer.install_plan(plan, settings_target=settings)
target = targets[-1]
else:
# Reinstall of an already-installed target with no new
# transitive providers — re-run the single-app path so an
# edited `.env` lands correctly.
src = installer.resolve_source(name)
target = installer.install_from(src, settings=settings)
except installer.InstallError as e:
return 400, {"error": str(e)}
# Write the plan file the background runner reads. Even single-app
# installs go through it so install_runner has one code path.
install_runner.plan_path().parent.mkdir(parents=True, exist_ok=True)
install_runner.plan_path().write_text(
json.dumps(
{
"target": name,
"to_install": list(plan.to_install) if plan.to_install else [name],
}
)
)
# Initial state so the UI has something to show between this # Initial state so the UI has something to show between this
# response and the background job's first write. # response and the background job's first write.
install_runner.write_state("pulling_image", app=name) first_app = plan.to_install[0] if plan.to_install else name
install_runner.write_state("pulling_image", app=first_app, target=name)
finally:
# Release the lock so the background job can re-acquire it.
fh.close()
@@ -801,6 +942,15 @@ def _do_remove(name):
target = apps_dir() / name
if not target.exists():
return 404, {"error": f"{name!r} is not installed"}
dependents = deps.dependents_of(name)
if dependents:
return 409, {
"error": (
f"{name!r} is required by: {', '.join(dependents)}. "
"Remove those first."
),
"dependents": list(dependents),
}
compose_warning = None
try:
dockerops.compose_down(target, name)
@@ -1369,11 +1519,14 @@ class _Handler(BaseHTTPRequestHandler):
if not isinstance(name, str) or not name:
return self._json(400, {"error": "missing or empty 'name' field"})
if self.path == "/api/apps/install": if self.path == "/api/apps/install/plan":
status, body = _do_install_plan(name)
elif self.path == "/api/apps/install":
settings = _parse_settings_body(payload)
if settings is False:
return self._json(400, {"error": "'settings' must be an object"})
confirm = bool(payload.get("confirm_dependencies", False))
status, body = _do_install(name, settings=settings, confirm_dependencies=confirm)
elif self.path == "/api/apps/remove": elif self.path == "/api/apps/remove":
status, body = _do_remove(name) status, body = _do_remove(name)
else: else:


@@ -1,8 +1,9 @@
import argparse
import json
import sys
from pathlib import Path
from furtka import deps, dockerops, installer, reconciler
from furtka.paths import apps_dir
from furtka.scanner import scan
@@ -37,6 +38,14 @@ def _cmd_app_list(args: argparse.Namespace) -> int:
}
for s in r.manifest.settings
],
"requires": [
{
"app": req.app,
"on_install": req.on_install,
"on_start": req.on_start,
}
for req in r.manifest.requires
],
}
if r.manifest
else None,
@@ -58,13 +67,35 @@ def _cmd_app_list(args: argparse.Namespace) -> int:
def _cmd_app_install(args: argparse.Namespace) -> int:
# If the user passed a path (or a path-ish thing), bypass dep resolution —
# local paths are dev/test workflows where the caller knows what they want.
# Catalog/bundled name installs go through plan_install() so transitive
# `requires` are pulled in.
src_path = Path(args.source)
is_path = src_path.is_dir() or "/" in args.source or args.source.startswith(".")
try:
if is_path:
src = installer.resolve_source(args.source)
target = installer.install_from(src)
print(f"installed {target.name} to {target}")
else:
try:
plan = deps.plan_install(args.source)
except deps.DependencyError as e:
print(f"error: {e}", file=sys.stderr)
return 2
if not plan.to_install:
# Target is already installed — re-run as a single-app install
# to refresh files (matches reinstall semantics).
target_path = installer.install_from(installer.resolve_source(args.source))
print(f"reinstalled {target_path.name} to {target_path}")
else:
targets = installer.install_plan(plan)
for t in targets:
print(f"installed {t.name} to {t}")
except installer.InstallError as e:
print(f"error: {e}", file=sys.stderr)
return 2
actions = reconciler.reconcile(apps_dir())
for a in actions:
print(f" {a.describe()}")
@@ -94,6 +125,14 @@ def _cmd_app_remove(args: argparse.Namespace) -> int:
if not target.exists():
print(f"error: {args.name!r} is not installed", file=sys.stderr)
return 1
dependents = deps.dependents_of(args.name)
if dependents:
print(
f"error: {args.name!r} is required by: {', '.join(dependents)}. "
"Remove those first.",
file=sys.stderr,
)
return 2
try:
dockerops.compose_down(target, args.name)
except dockerops.DockerError as e:

furtka/deps.py Normal file

@@ -0,0 +1,238 @@
"""App-to-app dependency planning.
A manifest may declare ``requires: [{"app": "<name>", "on_install": ...,
"on_start": ...}]``. This module turns that graph into:
- ``plan_install(name)`` topo-sorted install order so providers come up
before consumers, with cycle detection. Read-only over the catalog +
installed tree; the installer is the one that mutates.
- ``dependents_of(name)`` installed apps that name ``<name>`` in their
``requires``. Used by the remove guard to block "rip out mosquitto"
while zigbee2mqtt is still installed.
- ``installed_topo_order(scan_results)`` re-order a list of installed
apps so reconcile's per-boot sweep visits providers before consumers
(so a consumer's ``on_start`` hook runs against an already-up provider).
- ``provider_exec_service(provider_dir, project)`` pick the compose
service to ``docker compose exec`` into when firing a hook. v1: first
service in the provider's compose config.
"""
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
from furtka import dockerops, sources
from furtka.manifest import Manifest, ManifestError, load_manifest
from furtka.paths import apps_dir
from furtka.scanner import ScanResult, scan
class DependencyError(RuntimeError):
pass
@dataclass(frozen=True)
class DepPlan:
target: str
install_order: tuple[str, ...] # topo: providers first, target last
already_installed: frozenset[str]
to_install: tuple[str, ...] # install_order minus already_installed
def _load_any(name: str) -> Manifest | None:
"""Load `<name>`'s manifest — prefer installed, fall back to catalog/bundled.
Returns None if the app exists nowhere we can see. Caller decides how
loud to be about that `plan_install` raises, `dependents_of` just
skips entries it can't parse so reconcile keeps working.
"""
installed = apps_dir() / name / "manifest.json"
if installed.is_file():
try:
return load_manifest(installed, expected_name=name)
except ManifestError:
return None
src = sources.resolve_app_name(name)
if src is None:
return None
try:
return load_manifest(src.path / "manifest.json")
except ManifestError:
return None
def _installed_names() -> frozenset[str]:
return frozenset(r.manifest.name for r in scan(apps_dir()) if r.ok)
def plan_install(name: str) -> DepPlan:
"""Build a topo-sorted install plan for `name`.
Walks the dependency graph via the catalog/bundled+installed manifests,
detects cycles, and returns the order plus which entries are already
installed (those get skipped at install time but stay in `install_order`
for sequencing the `on_install` hooks correctly).
"""
WHITE, GRAY, BLACK = 0, 1, 2
color: dict[str, int] = {}
order: list[str] = []
stack_chain: list[str] = []
# Iterative post-order DFS with a per-frame iterator over children.
# Cycle detection uses GRAY-on-GRAY (Tarjan-style) so a chain through
# several apps still surfaces the full path in the error message.
def visit(start: str) -> None:
if color.get(start, WHITE) == BLACK:
return
# Each frame: (name, manifest, iterator over sorted requires)
m = _load_any(start)
if m is None:
raise DependencyError(
f"required app {start!r} not found in installed apps, "
"catalog, or bundled apps"
)
# Sort requires alphabetically for deterministic install order.
children = iter(sorted(r.app for r in m.requires))
stack: list[tuple[str, Manifest, "object"]] = [(start, m, children)] # noqa: UP037
color[start] = GRAY
stack_chain.append(start)
while stack:
cur_name, cur_m, it = stack[-1]
child = next(it, None)
if child is None:
# All children processed — emit and pop.
color[cur_name] = BLACK
order.append(cur_name)
stack.pop()
stack_chain.pop()
continue
c = color.get(child, WHITE)
if c == BLACK:
continue
if c == GRAY:
# Cycle — find the back-edge target in the chain and report.
idx = stack_chain.index(child)
cycle = " -> ".join(stack_chain[idx:] + [child])
raise DependencyError(f"circular dependency: {cycle}")
# WHITE — descend.
child_m = _load_any(child)
if child_m is None:
raise DependencyError(
f"required app {child!r} (needed by {cur_name!r}) "
"not found in installed apps, catalog, or bundled apps"
)
color[child] = GRAY
stack_chain.append(child)
stack.append((child, child_m, iter(sorted(r.app for r in child_m.requires))))
visit(name)
installed = _installed_names()
to_install = tuple(n for n in order if n not in installed)
return DepPlan(
target=name,
install_order=tuple(order),
already_installed=frozenset(n for n in order if n in installed),
to_install=to_install,
)
def dependents_of(name: str) -> tuple[str, ...]:
"""Names of installed apps that declare `<name>` in their `requires`.
Used by the remove guard. Result is sorted alphabetically so error
messages read in a stable order.
"""
out: list[str] = []
for r in scan(apps_dir()):
if not r.ok:
continue
if any(req.app == name for req in r.manifest.requires):
out.append(r.manifest.name)
out.sort()
return tuple(out)
def installed_topo_order(results: list[ScanResult]) -> list[ScanResult]:
"""Re-order installed apps so providers come before consumers.
Apps whose `requires` point at uninstalled providers (or that contain
cycles) are emitted at the tail in their original order; reconcile
already isolates per-app failure so we don't want to abort the whole
sweep on a misconfigured manifest. Ties within a tier stay alphabetical
(the scanner already returns alphabetical), matching the deterministic
boot order users rely on.
"""
ok = [r for r in results if r.ok]
bad = [r for r in results if not r.ok]
by_name = {r.manifest.name: r for r in ok}
# Kahn's algorithm against the installed subgraph only. Edges from
# consumer -> provider; we want providers first, so build the indegree
# over consumers ("how many of MY providers are still pending").
pending_providers: dict[str, set[str]] = {}
consumers_of: dict[str, list[str]] = {n: [] for n in by_name}
for r in ok:
deps = {req.app for req in r.manifest.requires if req.app in by_name}
pending_providers[r.manifest.name] = deps
for dep in deps:
consumers_of[dep].append(r.manifest.name)
# Seed with anything that has no installed providers, alphabetical.
ready = sorted(n for n, deps in pending_providers.items() if not deps)
ordered: list[str] = []
while ready:
# Pop the alphabetically-smallest so ties stay deterministic.
n = ready.pop(0)
ordered.append(n)
for consumer in consumers_of[n]:
pending_providers[consumer].discard(n)
if not pending_providers[consumer]:
# Insert in sorted position.
_insort(ready, consumer)
# Anything left has unresolved providers (missing or cyclic) — append
# in scanner order so reconcile still tries them and gets a clean
# per-app error.
leftover = [n for n in by_name if n not in set(ordered)]
leftover_set = set(leftover)
leftover_in_scan_order = [r.manifest.name for r in ok if r.manifest.name in leftover_set]
out = [by_name[n] for n in ordered]
out.extend(by_name[n] for n in leftover_in_scan_order)
out.extend(bad) # broken manifests already had their place in `results`; append last
return out
def _insort(seq: list[str], value: str) -> None:
"""Insert `value` into the sorted list `seq` (keeping it sorted)."""
lo, hi = 0, len(seq)
while lo < hi:
mid = (lo + hi) // 2
if seq[mid] < value:
lo = mid + 1
else:
hi = mid
seq.insert(lo, value)
def provider_exec_service(provider_dir: Path, project: str) -> str:
"""Pick the compose service name to `docker compose exec` into for a hook.
v1: first service in the provider's compose file. Works for the apps we
actually have (Mosquitto, Postgres, Redis all single-service). When a
multi-service provider (Authentik etc.) lands, the deferred follow-up is
to add an explicit `service` field on the Requirement entry.
Falls back to the project name if compose config can't be read — that's
a desperate guess but better than crashing, and the resulting exec error
will be surfaced cleanly as a DockerError to the caller.
"""
try:
cfg = dockerops.compose_image_tags(provider_dir, project)
except dockerops.DockerError:
return project
if not cfg:
return project
return next(iter(cfg.keys()))


@@ -60,6 +60,89 @@ def compose_pull(app_dir: Path, project: str) -> None:
_run([*_compose_args(app_dir, project), "pull"], cwd=app_dir)
def compose_exec(
app_dir: Path,
project: str,
service: str,
argv: list[str],
*,
env: dict[str, str] | None = None,
timeout: float | None = None,
) -> str:
"""`docker compose exec -T <service> <argv...>`. Returns captured stdout.
`-T` disables TTY allocation; that's required when the caller is a
non-interactive parent (the install background job, the reconcile
service). Without it, docker exits with "the input device is not a TTY".
"""
cmd = [*_compose_args(app_dir, project), "exec", "-T"]
for k, v in (env or {}).items():
cmd.extend(["--env", f"{k}={v}"])
cmd.append(service)
cmd.extend(argv)
try:
proc = subprocess.run(
cmd,
cwd=app_dir,
check=False,
capture_output=True,
text=True,
timeout=timeout,
)
except subprocess.TimeoutExpired as e:
raise DockerError(f"compose exec {service}: timed out after {timeout}s") from e
if proc.returncode != 0:
msg = proc.stderr.strip() or proc.stdout.strip()
raise DockerError(f"compose exec {service} exited {proc.returncode}: {msg}")
return proc.stdout
def compose_exec_script(
app_dir: Path,
project: str,
service: str,
script_path: Path,
*,
env: dict[str, str] | None = None,
timeout: float | None = None,
) -> str:
"""Run a host-side script inside the compose container via `sh -s`.
The script's bytes are streamed on stdin, so it doesn't need to be
copied into the image. Used by the app-dependency feature to run a
provider's hook scripts (e.g. "create an MQTT user for the consumer")
when a consumer is being installed or every time it starts.
Returns the script's stdout as text (UTF-8, replace-on-error). Raises
DockerError on non-zero exit or timeout, mirroring `compose_exec`.
"""
body = Path(script_path).read_bytes()
cmd = [*_compose_args(app_dir, project), "exec", "-T"]
for k, v in (env or {}).items():
cmd.extend(["--env", f"{k}={v}"])
cmd.extend([service, "sh", "-s"])
try:
proc = subprocess.run(
cmd,
cwd=app_dir,
check=False,
input=body,
capture_output=True,
timeout=timeout,
)
except subprocess.TimeoutExpired as e:
raise DockerError(
f"compose exec {service}: hook {script_path.name} timed out after {timeout}s"
) from e
if proc.returncode != 0:
err = (proc.stderr or proc.stdout or b"").decode("utf-8", "replace").strip()
raise DockerError(
f"compose exec {service} hook {script_path.name} exited "
f"{proc.returncode}: {err}"
)
return proc.stdout.decode("utf-8", "replace")
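The stdin-streaming trick that `compose_exec_script` relies on is ordinary subprocess plumbing; a docker-free sketch of the same `sh -s` pattern (the env var value is a hypothetical consumer name):

```python
import os
import subprocess

# Hypothetical hook body: one KEY=VALUE line derived from the consumer's name.
script = b'echo "MQTT_USER=${FURTKA_CONSUMER_APP}"\n'

proc = subprocess.run(
    ["sh", "-s"],                         # shell reads its program from stdin
    input=script,                         # script bytes streamed, never copied to disk
    env={**os.environ, "FURTKA_CONSUMER_APP": "zigbee2mqtt"},
    capture_output=True,
    timeout=10,
)
assert proc.returncode == 0
print(proc.stdout.decode())
```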
def compose_image_tags(app_dir: Path, project: str) -> dict[str, str]:
"""Return {service_name: image_tag} as declared in the compose file.


@ -8,23 +8,31 @@ This module mirrors the exact same shape as ``furtka.catalog`` and
``furtka.updater`` so the UI can poll an install just like it polls a
catalog sync or a self-update. The split is:
- ``furtka.api._do_install`` runs synchronously: resolve source(s), copy
the app folder(s), write .env. Those are fast, and their failures
deserve an immediate 4xx so the install modal can surface them in-line.
- After that the API writes an initial state file (stage
"pulling_image") and dispatches ``systemd-run --unit=furtka-install-
<name>`` to run ``furtka app install-bg <name>`` in the background.
That CLI subcommand is what calls ``run_install()`` here; it does the
docker-facing phases and writes state transitions as it goes.
If the API also wrote a plan file at ``/var/lib/furtka/install-plan.json``
(because the target had transitive dependencies), the runner iterates
through every app in ``to_install``: pulling, creating volumes, firing
``on_install`` hooks against already-up providers, then ``compose up``,
so providers are ready before consumers' hooks try to talk to them. The
state file's ``target`` field carries the original user-chosen app name
so the UI can show "Installing mosquitto (required by zigbee2mqtt)".
State file schema (``/var/lib/furtka/install-state.json``):
{
"stage": "pulling_image" | "creating_volumes"
| "running_hooks" | "starting_container" | "done" | "error",
"updated_at": "2026-04-21T17:30:45+0200",
"app": "mosquitto", // app currently being processed
"target": "zigbee2mqtt", // original target (== app for single-app installs)
"version": "1.0.0", // added at "done"
"error": "details..." // added at "error"
}
@ -40,16 +48,21 @@ from __future__ import annotations
import fcntl
import json
import os
import re
import time
from pathlib import Path
from furtka import deps, dockerops, installer
from furtka.manifest import SETTING_NAME_RE, Manifest, load_manifest
from furtka.paths import apps_dir
_INSTALL_STATE = Path(os.environ.get("FURTKA_INSTALL_STATE", "/var/lib/furtka/install-state.json"))
_INSTALL_PLAN = Path(os.environ.get("FURTKA_INSTALL_PLAN", "/var/lib/furtka/install-plan.json"))
_LOCK_PATH = Path(os.environ.get("FURTKA_INSTALL_LOCK", "/run/furtka/install.lock"))
_ON_INSTALL_TIMEOUT_SECONDS = 60.0
_FURTKA_JSON_RE = re.compile(r"^FURTKA_JSON:\s*(.*)$")
class InstallRunnerError(RuntimeError):
"""Any failure in the background install flow that should surface to the caller."""
@ -59,6 +72,10 @@ def state_path() -> Path:
return _INSTALL_STATE
def plan_path() -> Path:
return _INSTALL_PLAN
def lock_path() -> Path:
return _LOCK_PATH
@ -79,6 +96,35 @@ def read_state() -> dict:
return {}
def _read_plan(target: str) -> dict:
"""Load the install plan if the API wrote one; otherwise the single-app fallback.
The plan file is consumed once (removed after read) so a stale plan from
a previous install can't accidentally steer this run. If the file is
missing/unparseable we synthesize a one-element plan from the target arg
so the old single-app behaviour still works (CLI invocations, smoke tests).
"""
try:
raw = plan_path().read_text()
except (FileNotFoundError, OSError):
return {"target": target, "to_install": [target]}
try:
data = json.loads(raw)
except json.JSONDecodeError:
return {"target": target, "to_install": [target]}
finally:
try:
plan_path().unlink()
except OSError:
pass
if not isinstance(data, dict):
return {"target": target, "to_install": [target]}
return {
"target": data.get("target", target),
"to_install": data.get("to_install") or [target],
}
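The consume-once semantics isolate cleanly: unlink in `finally` so the plan disappears even when it fails to parse. A minimal sketch of the same pattern (the fallback dict shape mirrors the one above):

```python
import json
import tempfile
from pathlib import Path

def read_plan_once(path: Path, target: str) -> dict:
    """Read an install plan, deleting the file even on a parse failure."""
    fallback = {"target": target, "to_install": [target]}
    try:
        raw = path.read_text()
    except OSError:
        return fallback                    # no plan file: single-app behaviour
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return fallback
    finally:
        path.unlink(missing_ok=True)       # runs on success AND on parse failure
    if not isinstance(data, dict):
        return fallback
    return {"target": data.get("target", target),
            "to_install": data.get("to_install") or [target]}

p = Path(tempfile.mkdtemp()) / "install-plan.json"
p.write_text('{"target": "zigbee2mqtt", "to_install": ["mosquitto", "zigbee2mqtt"]}')
plan = read_plan_once(p, "zigbee2mqtt")
print(plan, p.exists())
```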
def acquire_lock():
path = lock_path()
path.parent.mkdir(parents=True, exist_ok=True)
@ -91,31 +137,174 @@ def acquire_lock():
return fh
def _parse_hook_output(text: str) -> dict[str, str]:
"""Extract KEY=VALUE pairs from hook stdout plus any FURTKA_JSON: {...} line.
KEY=VALUE keys must match the manifest's SETTING_NAME regex (UPPER_SNAKE_CASE)
so a misbehaving hook can't smuggle arbitrary content (shell noise, comments,
lowercase names) into the consumer's .env file.
The FURTKA_JSON sentinel is opt-in for hooks that need to return structured
data later (e.g. a list of generated certificates). Only string values are
accepted; non-string values raise so a hook can't smuggle non-env content
into the .env file. JSON values overlay KEY=VALUE values.
"""
out: dict[str, str] = {}
# First pass: skip FURTKA_JSON lines for KEY=VALUE extraction.
kv_lines = [
line for line in text.splitlines() if not _FURTKA_JSON_RE.match(line.strip())
]
kv = installer.parse_env_text("\n".join(kv_lines))
for key, value in kv.items():
if not SETTING_NAME_RE.match(key):
raise InstallRunnerError(
f"hook returned invalid env-var name {key!r} "
"(must be UPPER_SNAKE_CASE, e.g. MQTT_USER)"
)
out[key] = value
# Second pass: pick up FURTKA_JSON sentinels.
for raw in text.splitlines():
m = _FURTKA_JSON_RE.match(raw.strip())
if not m:
continue
try:
payload = json.loads(m.group(1))
except json.JSONDecodeError as e:
raise InstallRunnerError(
f"hook returned invalid FURTKA_JSON payload: {e}"
) from e
if not isinstance(payload, dict):
raise InstallRunnerError(
"hook FURTKA_JSON payload must be an object of KEY=VALUE strings"
)
for key, value in payload.items():
if not isinstance(key, str) or not SETTING_NAME_RE.match(key):
raise InstallRunnerError(
f"hook FURTKA_JSON key {key!r} must be UPPER_SNAKE_CASE"
)
if not isinstance(value, str):
raise InstallRunnerError(
f"hook FURTKA_JSON value for {key!r} must be a string"
)
out[key] = value
return out
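The contract in miniature: KEY=VALUE lines plus an optional FURTKA_JSON sentinel, with JSON values overlaying KEY=VALUE. A standalone sketch of the parsing rules (not the function above; the sketch drops invalid keys where the real code raises):

```python
import json
import re

SETTING_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$")   # same shape as the manifest's
FURTKA_JSON_RE = re.compile(r"^FURTKA_JSON:\s*(.*)$")

def parse_hook_stdout(text: str) -> dict[str, str]:
    kv: dict[str, str] = {}
    js: dict[str, str] = {}
    for raw in text.splitlines():
        line = raw.strip()
        m = FURTKA_JSON_RE.match(line)
        if m:
            js.update(json.loads(m.group(1)))      # real code also validates types
        elif "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            if SETTING_NAME_RE.match(key):
                kv[key] = value
    return {**kv, **js}                            # JSON values overlay KEY=VALUE

stdout = 'MQTT_USER=zigbee2mqtt\nMQTT_PASS=plain\nFURTKA_JSON: {"MQTT_PASS": "from-json"}\n'
print(parse_hook_stdout(stdout))
```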
def _merge_hook_output_into_env(env_path: Path, hook_stdout: str) -> None:
"""Overlay hook-returned keys onto an app's `.env`. Hook wins on conflict.
Re-runs the placeholder-secret check so a hook returning literal "changeme"
is refused the same way an unedited .env.example is. Re-chmods to 0600 so
even an interrupted run leaves the file root-only.
"""
overlay = _parse_hook_output(hook_stdout)
if not overlay:
return
existing = installer.read_env_values(env_path)
merged: dict[str, str] = {}
merged.update(existing)
merged.update(overlay) # hook wins
installer.write_env(env_path, merged)
env_path.chmod(0o600)
bad = installer._placeholder_keys(env_path)
if bad:
raise InstallRunnerError(
f"{env_path}: hook returned placeholder values for {', '.join(bad)}"
)
def _fire_install_hooks(consumer: Manifest, consumer_dir: Path) -> None:
"""Run each `on_install` hook against the corresponding provider's container.
The provider must already be running (its `compose up` ran earlier in the
same plan). Hook stdout is parsed via `_parse_hook_output` and merged into
the consumer's `.env` before its own `compose up` fires.
"""
for req in consumer.requires:
if not req.on_install:
continue
provider_dir = apps_dir() / req.app
provider_manifest_path = provider_dir / "manifest.json"
if not provider_manifest_path.is_file():
raise InstallRunnerError(
f"{consumer.name}: required app {req.app!r} is not installed"
)
# Validate provider manifest loads (matches the contract the rest of
# the system relies on: never trust a provider folder with a busted
# manifest).
load_manifest(provider_manifest_path, expected_name=req.app)
hook_abs = provider_dir / req.on_install
if not hook_abs.is_file():
raise InstallRunnerError(
f"{consumer.name}: on_install hook "
f"{req.on_install!r} missing in provider {req.app}"
)
service = deps.provider_exec_service(provider_dir, req.app)
stdout = dockerops.compose_exec_script(
provider_dir,
req.app,
service,
hook_abs,
env={
"FURTKA_CONSUMER_APP": consumer.name,
"FURTKA_CONSUMER_VERSION": consumer.version,
},
timeout=_ON_INSTALL_TIMEOUT_SECONDS,
)
_merge_hook_output_into_env(consumer_dir / ".env", stdout)
def run_install(name: str) -> None:
"""Docker-facing phases of the install: pull → volumes → hooks → compose up.
Called by the ``furtka app install-bg <name>`` CLI subcommand from the
systemd-run spawned by the API. Assumes the API has already run
``installer.install_from()`` for every app in the plan, so each app folder,
`.env`, and manifest are on disk under ``apps_dir() / <name>``.
If ``/var/lib/furtka/install-plan.json`` exists, every app in its
``to_install`` is processed in order (providers before consumers). Each
provider is fully up before the consumer's ``on_install`` hooks fire,
so a hook can ``mosquitto_passwd``/``createuser`` against a live broker/DB.
Every phase transition is written to the state file for the UI to poll.
On exception the state flips to ``"error"`` with the message, then the
exception is re-raised so the CLI exits non-zero and journald gets a
traceback. Per-app failure aborts the rest of the plan: a half-installed
consumer whose provider is fine is recoverable by retrying.
"""
with acquire_lock():
plan = _read_plan(name)
target = plan["target"]
to_install = list(plan["to_install"])
try:
last_manifest = None
for app_name in to_install:
target_dir = apps_dir() / app_name
m = load_manifest(target_dir / "manifest.json", expected_name=app_name)
last_manifest = m
write_state("pulling_image", app=app_name, target=target)
dockerops.compose_pull(target_dir, app_name)
write_state("creating_volumes", app=app_name, target=target)
for short in m.volumes:
dockerops.ensure_volume(m.volume_name(short))
if m.requires:
write_state("running_hooks", app=app_name, target=target)
_fire_install_hooks(m, target_dir)
write_state("starting_container", app=app_name, target=target)
dockerops.compose_up(target_dir, app_name)
# Terminal state carries the original target's name + version so
# the UI's poll loop ("is install of <target> done yet?") still
# works unchanged.
if last_manifest is not None and last_manifest.name == target:
write_state("done", app=target, target=target, version=last_manifest.version)
else:
# Fallback: target wasn't last in the plan (shouldn't happen for
# a well-formed plan, but don't crash on the terminal write).
write_state("done", app=target, target=target)
except Exception as e:
current = read_state().get("app", target)
write_state("error", app=current, target=target, error=str(e))
raise


@ -233,10 +233,15 @@ def install_from(src: Path, settings: dict[str, str] | None = None) -> Path:
return target
def parse_env_text(text: str) -> dict[str, str]:
"""Parse KEY=VALUE lines from a string into a dict. Unquotes quoted values.
Reusable by anything that needs the same lenient .env parsing logic
without reading a file, e.g. hook script stdout merged into an app's
.env during install (see install_runner._fire_install_hooks).
"""
out: dict[str, str] = {}
for raw in text.splitlines():
line = raw.strip()
if not line or line.startswith("#") or "=" not in line:
continue
@ -250,6 +255,11 @@ def _read_env(env_path: Path) -> dict[str, str]:
return out
def _read_env(env_path: Path) -> dict[str, str]:
"""Parse a simple KEY=VALUE .env into a dict. Unquotes quoted values."""
return parse_env_text(env_path.read_text())
def read_env_values(env_path: Path) -> dict[str, str]:
"""Public wrapper — returns {} if the file doesn't exist."""
if not env_path.exists():
@ -307,6 +317,28 @@ def update_env(name: str, settings: dict[str, str]) -> Path:
return target
def install_plan(plan, settings_target: dict[str, str] | None = None) -> list[Path]:
"""Run the synchronous install phase for every app in `plan.to_install`.
Each name is resolved via `resolve_source()` and copied via `install_from`
in plan order, so providers land before consumers. Only the target app
receives user-supplied settings; transitive providers install from their
catalog/bundled `.env.example` and rely on the placeholder-secret check
to refuse if anyone shipped a "changeme" default.
No rollback on partial failure. Re-running install is the recovery path;
stopping providers a user may already rely on for other apps is more
destructive than a partial state. Returns the list of target folders in
install order.
"""
targets: list[Path] = []
for name in plan.to_install:
src = resolve_source(name)
settings = settings_target if name == plan.target else None
targets.append(install_from(src, settings=settings))
return targets
def remove(name: str) -> Path:
"""Delete /var/lib/furtka/apps/<name>/. Volumes are NOT touched.


@ -15,6 +15,7 @@ REQUIRED_FIELDS = (
VALID_SETTING_TYPES = frozenset({"text", "password", "number", "path"})
SETTING_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$")
APP_NAME_RE = re.compile(r"^[a-z][a-z0-9_-]*$")
class ManifestError(Exception):
@ -31,6 +32,18 @@ class Setting:
default: str | None default: str | None
@dataclass(frozen=True)
class Requirement:
app: str # name of the required app — must resolve in installed/catalog/bundled
# Hook paths are relative to the PROVIDER's app folder (not the consumer's).
# Resolved at hook-fire time, not manifest-load time — the provider may not
# be installed yet when this manifest is parsed.
# on_install: script run via `docker compose exec` on the provider during install.
on_install: str | None
# on_start: script run on every boot before the consumer starts (must be idempotent).
on_start: str | None
@dataclass(frozen=True)
class Manifest:
name: str
@ -48,6 +61,7 @@ class Manifest:
# furtka.local, a raw IP, a future reverse-proxy hostname. Apps with
# no frontend (CLI-only, background workers) leave this empty.
open_url: str = ""
requires: tuple[Requirement, ...] = field(default_factory=tuple)
def volume_name(self, short: str) -> str:
# Namespace volume names so two apps can each declare e.g. "data"
@ -98,6 +112,53 @@ def _parse_settings(raw: object, manifest_path: Path) -> tuple[Setting, ...]:
return tuple(out)
def _validate_hook_path(value: object, manifest_path: Path, where: str) -> str | None:
if value is None:
return None
if not isinstance(value, str) or not value:
raise ManifestError(f"{manifest_path}: {where} must be a non-empty string if set")
if value.startswith("/"):
raise ManifestError(f"{manifest_path}: {where} must be relative (no leading /)")
parts = value.replace("\\", "/").split("/")
if any(p == ".." for p in parts):
raise ManifestError(f"{manifest_path}: {where} must not contain '..'")
return value
def _parse_requires(
raw: object, manifest_path: Path, self_name: str
) -> tuple[Requirement, ...]:
if raw is None:
return ()
if not isinstance(raw, list):
raise ManifestError(f"{manifest_path}: requires must be a list")
out: list[Requirement] = []
seen: set[str] = set()
for i, item in enumerate(raw):
if not isinstance(item, dict):
raise ManifestError(f"{manifest_path}: requires[{i}] must be an object")
app = item.get("app")
if not isinstance(app, str) or not app or not APP_NAME_RE.match(app):
raise ManifestError(
f"{manifest_path}: requires[{i}].app must be a non-empty lowercase app name"
)
if app == self_name:
raise ManifestError(
f"{manifest_path}: requires[{i}].app {app!r} is a self-reference"
)
if app in seen:
raise ManifestError(f"{manifest_path}: requires has duplicate app {app!r}")
seen.add(app)
on_install = _validate_hook_path(
item.get("on_install"), manifest_path, f"requires[{app}].on_install"
)
on_start = _validate_hook_path(
item.get("on_start"), manifest_path, f"requires[{app}].on_start"
)
out.append(Requirement(app=app, on_install=on_install, on_start=on_start))
return tuple(out)
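Concretely, a consumer manifest exercising the new field might look like the fragment below. Hook filenames are hypothetical, and the snippet is trimmed to the relevant field; a real manifest also carries the other required fields:

```json
{
  "name": "zigbee2mqtt",
  "requires": [
    {
      "app": "mosquitto",
      "on_install": "hooks/add-consumer-user.sh",
      "on_start": "hooks/ensure-consumer-user.sh"
    }
  ]
}
```

Both hook paths resolve inside mosquitto's app folder at fire time, and either may be omitted.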
def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
"""Parse and validate a manifest.json.
@ -132,6 +193,7 @@ def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
raise ManifestError(f"{path}: ports must be a list of integers")
settings = _parse_settings(raw.get("settings"), path)
requires = _parse_requires(raw.get("requires"), path, name)
open_url_raw = raw.get("open_url", "")
if not isinstance(open_url_raw, str):
@ -148,4 +210,5 @@ def load_manifest(path: Path, expected_name: str | None = None) -> Manifest:
description_long=str(raw.get("description_long", "")),
settings=settings,
open_url=open_url_raw,
requires=requires,
)


@ -1,13 +1,16 @@
from dataclasses import dataclass
from pathlib import Path
from furtka import deps, dockerops
from furtka.manifest import ManifestError, load_manifest
from furtka.scanner import scan
_ON_START_TIMEOUT_SECONDS = 30.0
@dataclass(frozen=True)
class Action:
kind: str # "ensure_volume" | "compose_up" | "hook" | "skip" | "error"
target: str
detail: str = ""
@ -20,13 +23,20 @@ class Action:
def reconcile(apps_root: Path, dry_run: bool = False) -> list[Action]:
"""Walk the apps tree and bring docker into the desired state.
Apps are visited in dependency order (providers before consumers) so a
consumer's `on_start` hook runs against an already-up provider. Within a
tier, order stays alphabetical for deterministic boot logs. Apps with
unresolvable `requires` (missing provider, broken manifest, cycle) are
visited last; reconcile's per-app isolation still kicks in if they fail.
Failures during one app's reconcile (Docker errors, missing binary, ...) are
captured as Action(kind='error', ...) and do NOT abort the whole sweep; the
other apps still get reconciled. Callers inspect the returned actions to
decide overall success.
"""
actions: list[Action] = []
results = scan(apps_root)
for result in deps.installed_topo_order(results):
if not result.ok:
actions.append(Action("skip", result.path.name, result.error or ""))
continue
@ -37,6 +47,33 @@ def reconcile(apps_root: Path, dry_run: bool = False) -> list[Action]:
actions.append(Action("ensure_volume", full))
if not dry_run:
dockerops.ensure_volume(full)
hook_failed = False
for req in m.requires:
if not req.on_start:
continue
hook_label = f"{m.name}:{req.app}:on_start"
actions.append(Action("hook", hook_label, req.on_start))
if dry_run:
continue
try:
_fire_on_start_hook(m, req, apps_root)
except (
dockerops.DockerError,
FileNotFoundError,
OSError,
ManifestError,
) as e:
actions.append(
Action("error", m.name, f"on_start({req.app}): {e}")
)
hook_failed = True
break
if hook_failed:
# Skip compose_up: starting a consumer blindly when its provider's
# contract didn't get re-established (e.g. missing MQTT user) is
# worse than not starting it. The provider stays up and other apps
# in the sweep keep going.
continue
actions.append(Action("compose_up", m.name))
if not dry_run:
dockerops.compose_up(result.path, m.name)
@ -48,5 +85,40 @@ def reconcile(apps_root: Path, dry_run: bool = False) -> list[Action]:
return actions
def _fire_on_start_hook(consumer, req, apps_root: Path) -> None:
"""Run a single `on_start` hook against the provider's running container.
Reconciler-local helper kept narrow on purpose so reconcile's main loop
stays scannable. Errors propagate; the caller decorates with the per-app
Action("error", ...) and skips compose_up for this consumer.
"""
provider_dir = apps_root / req.app
provider_manifest_path = provider_dir / "manifest.json"
if not provider_manifest_path.is_file():
raise FileNotFoundError(
f"required app {req.app!r} is not installed"
)
# Validate provider manifest loads (otherwise scanner would have skipped
# it and we'd still try to exec — fail loud here instead).
load_manifest(provider_manifest_path, expected_name=req.app)
hook_abs = provider_dir / req.on_start
if not hook_abs.is_file():
raise FileNotFoundError(
f"on_start hook {req.on_start!r} missing in provider {req.app}"
)
service = deps.provider_exec_service(provider_dir, req.app)
dockerops.compose_exec_script(
provider_dir,
req.app,
service,
hook_abs,
env={
"FURTKA_CONSUMER_APP": consumer.name,
"FURTKA_CONSUMER_VERSION": consumer.version,
},
timeout=_ON_START_TIMEOUT_SECONDS,
)
def has_errors(actions: list[Action]) -> bool:
return any(a.kind == "error" for a in actions)


@ -1,6 +1,6 @@
[project]
name = "furtka"
version = "26.17-alpha"
description = "Open-source home server OS — simple enough for everyone."
requires-python = ">=3.11"
readme = "README.md"


@ -40,6 +40,7 @@ def fake_dirs(tmp_path, monkeypatch):
# /run/furtka/install.lock by default — redirect into tmp_path so
# test code doesn't need root.
monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
monkeypatch.setenv("FURTKA_INSTALL_PLAN", str(tmp_path / "install-plan.json"))
monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
# install_runner caches env vars at import time, so reload it to
# pick up the tmp-path env vars this fixture just set.
@ -258,6 +259,144 @@ def test_remove_endpoint_happy_path(fake_dirs, no_docker, no_systemd_run):
assert not (apps / "fileshare").exists()
# --- Dependency plan + confirm flow ----------------------------------------
def test_install_plan_endpoint_lone_app(fake_dirs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
status, body = api._do_install_plan("fileshare")
assert status == 200
assert body["target"] == "fileshare"
assert body["to_install"] == ["fileshare"]
assert body["install_order"] == ["fileshare"]
def test_install_plan_endpoint_with_provider(fake_dirs):
_, bundled = fake_dirs
_write_bundled(
bundled,
"mosquitto",
manifest=dict(VALID_MANIFEST, name="mosquitto"),
env_example="A=real",
)
consumer = dict(
VALID_MANIFEST,
name="zigbee2mqtt",
requires=[{"app": "mosquitto"}],
)
_write_bundled(bundled, "zigbee2mqtt", manifest=consumer, env_example="A=real")
status, body = api._do_install_plan("zigbee2mqtt")
assert status == 200
assert body["install_order"] == ["mosquitto", "zigbee2mqtt"]
assert body["to_install"] == ["mosquitto", "zigbee2mqtt"]
assert body["already_installed"] == []
assert [s["name"] for s in body["summaries"]] == ["mosquitto", "zigbee2mqtt"]
def test_install_plan_endpoint_rejects_cycle(fake_dirs):
_, bundled = fake_dirs
_write_bundled(bundled, "a", manifest=dict(VALID_MANIFEST, name="a", requires=[{"app": "b"}]))
_write_bundled(bundled, "b", manifest=dict(VALID_MANIFEST, name="b", requires=[{"app": "a"}]))
status, body = api._do_install_plan("a")
assert status == 400
assert "circular" in body["error"]
def test_install_endpoint_without_confirm_returns_409_for_transitive(fake_dirs):
_, bundled = fake_dirs
_write_bundled(
bundled,
"mosquitto",
manifest=dict(VALID_MANIFEST, name="mosquitto"),
env_example="A=real",
)
consumer = dict(
VALID_MANIFEST, name="zigbee2mqtt", requires=[{"app": "mosquitto"}]
)
_write_bundled(bundled, "zigbee2mqtt", manifest=consumer, env_example="A=real")
status, body = api._do_install("zigbee2mqtt")
assert status == 409
assert "additional apps required" in body["error"]
assert body["to_install"] == ["mosquitto", "zigbee2mqtt"]
def test_install_endpoint_with_confirm_dispatches_plan(fake_dirs, no_docker, no_systemd_run):
apps, bundled = fake_dirs
_write_bundled(
bundled,
"mosquitto",
manifest=dict(VALID_MANIFEST, name="mosquitto"),
env_example="A=real",
)
consumer = dict(
VALID_MANIFEST, name="zigbee2mqtt", requires=[{"app": "mosquitto"}]
)
_write_bundled(bundled, "zigbee2mqtt", manifest=consumer, env_example="A=real")
status, body = api._do_install("zigbee2mqtt", confirm_dependencies=True)
assert status == 202
# Both apps installed in plan order.
assert (apps / "mosquitto").exists()
assert (apps / "zigbee2mqtt").exists()
# Plan file written for the background runner.
import json as _json
plan = _json.loads(api.install_runner.plan_path().read_text())
assert plan["target"] == "zigbee2mqtt"
assert plan["to_install"] == ["mosquitto", "zigbee2mqtt"]
def test_install_endpoint_lone_target_skips_confirm(fake_dirs, no_docker, no_systemd_run):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
status, body = api._do_install("fileshare")
# No transitive deps, so no 409 — straight to 202.
assert status == 202
def test_remove_blocked_when_other_app_depends(fake_dirs, no_docker, no_systemd_run):
apps, bundled = fake_dirs
_write_bundled(
bundled,
"mosquitto",
manifest=dict(VALID_MANIFEST, name="mosquitto"),
env_example="A=real",
)
consumer = dict(
VALID_MANIFEST, name="zigbee2mqtt", requires=[{"app": "mosquitto"}]
)
_write_bundled(bundled, "zigbee2mqtt", manifest=consumer, env_example="A=real")
api._do_install("zigbee2mqtt", confirm_dependencies=True)
status, body = api._do_remove("mosquitto")
assert status == 409
assert body["dependents"] == ["zigbee2mqtt"]
assert "required by" in body["error"]
# mosquitto is still installed — guard refused to remove it.
assert (apps / "mosquitto").exists()
def test_remove_succeeds_when_dependent_first_removed(fake_dirs, no_docker, no_systemd_run):
apps, bundled = fake_dirs
_write_bundled(
bundled,
"mosquitto",
manifest=dict(VALID_MANIFEST, name="mosquitto"),
env_example="A=real",
)
consumer = dict(
VALID_MANIFEST, name="zigbee2mqtt", requires=[{"app": "mosquitto"}]
)
_write_bundled(bundled, "zigbee2mqtt", manifest=consumer, env_example="A=real")
api._do_install("zigbee2mqtt", confirm_dependencies=True)
# Remove consumer first — should succeed.
status, _ = api._do_remove("zigbee2mqtt")
assert status == 200
# Now mosquitto has no dependents, so remove succeeds too.
status, _ = api._do_remove("mosquitto")
assert status == 200
assert not (apps / "mosquitto").exists()
def _request(port, path, cookie=None, method="GET", body=None):
    headers = {}
    if cookie is not None:


@@ -103,3 +103,88 @@ def test_app_install_bg_returns_1_on_failure(tmp_path, monkeypatch, capsys):
    err = capsys.readouterr().err
    assert "install-bg failed" in err
    assert "compose pull failed" in err
# --- Dependency-aware install + remove ---------------------------------------
def _write_manifest(root, name, **overrides):
app = root / name
app.mkdir(parents=True, exist_ok=True)
payload = {
"name": name,
"display_name": name,
"version": "0.1.0",
"description": "x",
"volumes": [],
"ports": [],
"icon": "icon.svg",
**overrides,
}
(app / "manifest.json").write_text(json.dumps(payload))
(app / "docker-compose.yaml").write_text("services: {}\n")
return app
def test_app_remove_blocked_by_dependent(tmp_path, monkeypatch, capsys):
_set_env(monkeypatch, tmp_path)
_write_manifest(tmp_path, "mosquitto")
_write_manifest(tmp_path, "zigbee2mqtt", requires=[{"app": "mosquitto"}])
rc = main(["app", "remove", "mosquitto"])
assert rc == 2
err = capsys.readouterr().err
assert "required by: zigbee2mqtt" in err
def test_app_remove_unblocked_when_no_dependents(tmp_path, monkeypatch):
_set_env(monkeypatch, tmp_path)
_write_manifest(tmp_path, "mosquitto")
from furtka import dockerops
monkeypatch.setattr(dockerops, "compose_down", lambda *a, **k: None)
rc = main(["app", "remove", "mosquitto"])
assert rc == 0
assert not (tmp_path / "mosquitto").exists()
def test_app_install_uses_plan_for_named_install(tmp_path, monkeypatch, capsys):
"""Named install pulls in dependencies via plan_install."""
_set_env(monkeypatch, tmp_path)
bundled = tmp_path / "bundled"
bundled.mkdir()
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
# No catalog dir — bundled-only.
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "catalog"))
_write_manifest(bundled, "mosquitto")
_write_manifest(bundled, "zigbee2mqtt", requires=[{"app": "mosquitto"}])
from furtka import installer, reconciler
# Stub install_from so we don't actually copy files / mess with placeholders.
install_calls: list[str] = []
def fake_install_from(src, settings=None):
install_calls.append(src.name)
return tmp_path / src.name
monkeypatch.setattr(installer, "install_from", fake_install_from)
monkeypatch.setattr(reconciler, "reconcile", lambda *a, **k: [])
rc = main(["app", "install", "zigbee2mqtt"])
assert rc == 0
# Provider installed before consumer.
assert install_calls == ["mosquitto", "zigbee2mqtt"]
def test_app_install_named_with_cycle_exits_2(tmp_path, monkeypatch, capsys):
_set_env(monkeypatch, tmp_path)
bundled = tmp_path / "bundled"
bundled.mkdir()
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "catalog"))
_write_manifest(bundled, "a", requires=[{"app": "b"}])
_write_manifest(bundled, "b", requires=[{"app": "a"}])
rc = main(["app", "install", "a"])
assert rc == 2
err = capsys.readouterr().err
assert "circular" in err.lower()

tests/test_deps.py

@@ -0,0 +1,185 @@
import json
import pytest
from furtka import deps
BASE_MANIFEST = {
"display_name": "X",
"version": "0.1.0",
"description": "x",
"volumes": [],
"ports": [],
"icon": "icon.svg",
}
@pytest.fixture
def apps_root(tmp_path, monkeypatch):
"""Three roots: installed, catalog, bundled. Each set up empty by default."""
installed = tmp_path / "installed"
catalog = tmp_path / "catalog" / "apps"
bundled = tmp_path / "bundled"
for p in (installed, catalog, bundled):
p.mkdir(parents=True)
monkeypatch.setenv("FURTKA_APPS_DIR", str(installed))
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(tmp_path / "catalog"))
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
return {"installed": installed, "catalog": catalog, "bundled": bundled}
def _write_manifest(root, name, **overrides):
app = root / name
app.mkdir(parents=True, exist_ok=True)
payload = dict(BASE_MANIFEST, name=name, **overrides)
(app / "manifest.json").write_text(json.dumps(payload))
return app
def test_plan_install_no_deps(apps_root):
_write_manifest(apps_root["catalog"], "alone")
plan = deps.plan_install("alone")
assert plan.target == "alone"
assert plan.install_order == ("alone",)
assert plan.to_install == ("alone",)
assert plan.already_installed == frozenset()
def test_plan_install_linear_chain(apps_root):
# A requires B, B requires C — all in catalog, none installed yet.
_write_manifest(apps_root["catalog"], "c")
_write_manifest(apps_root["catalog"], "b", requires=[{"app": "c"}])
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}])
plan = deps.plan_install("a")
assert plan.install_order == ("c", "b", "a")
assert plan.to_install == ("c", "b", "a")
def test_plan_install_diamond(apps_root):
# A requires B and C; B requires D; C requires D. D must appear once,
# before B and C, which come before A.
_write_manifest(apps_root["catalog"], "d")
_write_manifest(apps_root["catalog"], "b", requires=[{"app": "d"}])
_write_manifest(apps_root["catalog"], "c", requires=[{"app": "d"}])
_write_manifest(
apps_root["catalog"], "a", requires=[{"app": "b"}, {"app": "c"}]
)
plan = deps.plan_install("a")
order = plan.install_order
# D first, A last, B and C in between (deterministically alphabetical).
assert order[0] == "d"
assert order[-1] == "a"
assert set(order[1:-1]) == {"b", "c"}
assert order.count("d") == 1
def test_plan_install_already_installed_provider(apps_root):
_write_manifest(apps_root["installed"], "b") # provider already installed
_write_manifest(apps_root["catalog"], "b")
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}])
plan = deps.plan_install("a")
assert plan.install_order == ("b", "a")
assert plan.to_install == ("a",)
assert plan.already_installed == frozenset({"b"})
def test_plan_install_cycle_two_node(apps_root):
# Manifest validator already rejects self-reference at load time, but
# mutual references (A -> B -> A) only show up at plan time.
_write_manifest(apps_root["catalog"], "b", requires=[{"app": "a"}])
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}])
with pytest.raises(deps.DependencyError, match="circular"):
deps.plan_install("a")
def test_plan_install_cycle_three_node(apps_root):
_write_manifest(apps_root["catalog"], "c", requires=[{"app": "a"}])
_write_manifest(apps_root["catalog"], "b", requires=[{"app": "c"}])
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}])
with pytest.raises(deps.DependencyError, match="a -> b -> c -> a"):
deps.plan_install("a")
def test_plan_install_missing_provider(apps_root):
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "ghost"}])
with pytest.raises(deps.DependencyError, match="ghost"):
deps.plan_install("a")
def test_plan_install_prefers_installed_over_catalog(apps_root):
# If a provider exists in both installed and catalog, we resolve via
# installed (so we read the actual on-disk manifest the user has).
_write_manifest(apps_root["installed"], "b")
_write_manifest(apps_root["catalog"], "b", requires=[{"app": "extra"}])
_write_manifest(apps_root["catalog"], "a", requires=[{"app": "b"}])
plan = deps.plan_install("a")
# The installed manifest has no requires, so "extra" is NOT pulled in.
assert plan.install_order == ("b", "a")
def test_dependents_of_empty(apps_root):
assert deps.dependents_of("anything") == ()
def test_dependents_of_finds_consumers(apps_root):
_write_manifest(apps_root["installed"], "x")
_write_manifest(apps_root["installed"], "a", requires=[{"app": "x"}])
_write_manifest(apps_root["installed"], "b", requires=[{"app": "x"}])
_write_manifest(apps_root["installed"], "unrelated")
assert deps.dependents_of("x") == ("a", "b")
assert deps.dependents_of("unrelated") == ()
def test_installed_topo_order_preserves_alpha_when_independent(apps_root):
from furtka.scanner import scan
_write_manifest(apps_root["installed"], "alpha")
_write_manifest(apps_root["installed"], "bravo")
_write_manifest(apps_root["installed"], "charlie")
ordered = deps.installed_topo_order(scan(apps_root["installed"]))
assert [r.manifest.name for r in ordered] == ["alpha", "bravo", "charlie"]
def test_installed_topo_order_puts_providers_first(apps_root):
from furtka.scanner import scan
    # consumer=alpha requires provider=zulu, so naive alphabetical order
    # would start alpha first; topo order must flip them so the provider
    # comes up before its consumer.
_write_manifest(apps_root["installed"], "zulu")
_write_manifest(apps_root["installed"], "alpha", requires=[{"app": "zulu"}])
ordered = deps.installed_topo_order(scan(apps_root["installed"]))
names = [r.manifest.name for r in ordered]
assert names == ["zulu", "alpha"]
def test_installed_topo_order_missing_provider_tails_app(apps_root):
from furtka.scanner import scan
_write_manifest(apps_root["installed"], "good")
_write_manifest(apps_root["installed"], "needy", requires=[{"app": "ghost"}])
ordered = deps.installed_topo_order(scan(apps_root["installed"]))
names = [r.manifest.name for r in ordered]
# `good` first (no deps), `needy` last (unresolved).
assert names == ["good", "needy"]
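The ordering these tests pin down (providers before consumers, alphabetical among independents, apps with unresolved providers tailed) is a Kahn-style topological sort with a min-heap tie-break. The function below is a hypothetical sketch of that contract, not the real `deps.installed_topo_order`:

```python
import heapq

def topo_order(requires: dict[str, list[str]]) -> list[str]:
    """Order app names so providers precede consumers.

    `requires` maps app name -> list of provider names. Independent
    apps keep alphabetical order; apps whose providers are unknown
    are appended at the end, unordered among themselves only by name.
    """
    names = sorted(requires)
    known = set(names)
    # Apps pointing at a provider that isn't installed get tailed.
    unresolved = [n for n in names if any(p not in known for p in requires[n])]
    resolved = [n for n in names if n not in unresolved]

    indeg = {n: sum(1 for p in requires[n] if p in resolved) for n in resolved}
    consumers: dict[str, list[str]] = {n: [] for n in resolved}
    for n in resolved:
        for p in requires[n]:
            if p in consumers:
                consumers[p].append(n)

    ready = [n for n in resolved if indeg[n] == 0]
    heapq.heapify(ready)  # min-heap preserves the alphabetical tie-break
    out: list[str] = []
    while ready:
        n = heapq.heappop(ready)
        out.append(n)
        for c in consumers[n]:
            indeg[c] -= 1
            if indeg[c] == 0:
                heapq.heappush(ready, c)
    return out + unresolved
```

With this shape, `{"zulu": [], "alpha": ["zulu"]}` yields `["zulu", "alpha"]` and a consumer of a missing `ghost` provider sorts last, matching the assertions above.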
def test_provider_exec_service_picks_first_service(apps_root, monkeypatch):
from furtka import dockerops
monkeypatch.setattr(
dockerops,
"compose_image_tags",
lambda app_dir, project: {"server": "img:1", "worker": "img:2"},
)
assert deps.provider_exec_service(apps_root["installed"] / "x", "x") == "server"
def test_provider_exec_service_falls_back_to_project_on_docker_error(apps_root, monkeypatch):
from furtka import dockerops
def boom(app_dir, project):
raise dockerops.DockerError("docker not running")
monkeypatch.setattr(dockerops, "compose_image_tags", boom)
assert deps.provider_exec_service(apps_root["installed"] / "x", "myproj") == "myproj"
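These two tests fix the service-selection rule: pick a deterministic (sorted-first) service from `compose_image_tags`, and fall back to the project name when Docker is unreachable. A self-contained sketch, with `image_tags` and `docker_error` as hypothetical injection points standing in for the `dockerops` module:

```python
def provider_exec_service(app_dir, project, *, image_tags, docker_error):
    """Pick the compose service to `docker compose exec` a hook in."""
    try:
        services = image_tags(app_dir, project)
    except docker_error:
        # Docker not running: fall back to the project name and let the
        # real `docker compose exec` surface any error later.
        return project
    if not services:
        return project
    # Deterministic pick: first service name in sorted order.
    return sorted(services)[0]
```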

tests/test_dockerops.py

@@ -0,0 +1,118 @@
import subprocess
import pytest
from furtka import dockerops
class FakeProc:
def __init__(self, stdout=b"", stderr=b"", returncode=0):
self.stdout = stdout
self.stderr = stderr
self.returncode = returncode
def test_compose_exec_builds_command(tmp_path, monkeypatch):
recorded = {}
def fake_run(cmd, **kwargs):
recorded["cmd"] = cmd
recorded["kwargs"] = kwargs
return FakeProc(stdout="ok\n", returncode=0)
monkeypatch.setattr(subprocess, "run", fake_run)
out = dockerops.compose_exec(tmp_path, "myproj", "svc", ["echo", "hi"])
assert out == "ok\n"
cmd = recorded["cmd"]
# docker compose --project-name myproj --file <path>/docker-compose.yaml exec -T svc echo hi
assert cmd[0] == "docker"
assert cmd[1] == "compose"
assert "--project-name" in cmd and "myproj" in cmd
assert "exec" in cmd
assert "-T" in cmd
# -T must come before the service name
assert cmd.index("-T") < cmd.index("svc")
# argv appended after service
assert cmd[-2:] == ["echo", "hi"]
def test_compose_exec_propagates_env(tmp_path, monkeypatch):
recorded = {}
def fake_run(cmd, **kwargs):
recorded["cmd"] = cmd
return FakeProc()
monkeypatch.setattr(subprocess, "run", fake_run)
dockerops.compose_exec(
tmp_path, "p", "s", ["true"], env={"A": "1", "B": "two"}
)
cmd = recorded["cmd"]
# `--env A=1 --env B=two` should appear before the service name.
s_idx = cmd.index("s")
env_args = cmd[:s_idx]
assert env_args.count("--env") == 2
assert "A=1" in env_args
assert "B=two" in env_args
def test_compose_exec_raises_on_nonzero(tmp_path, monkeypatch):
def fake_run(cmd, **kwargs):
return FakeProc(stdout="", stderr="boom", returncode=2)
monkeypatch.setattr(subprocess, "run", fake_run)
with pytest.raises(dockerops.DockerError, match="exited 2"):
dockerops.compose_exec(tmp_path, "p", "s", ["fail"])
def test_compose_exec_raises_on_timeout(tmp_path, monkeypatch):
def fake_run(cmd, **kwargs):
raise subprocess.TimeoutExpired(cmd, timeout=kwargs.get("timeout"))
monkeypatch.setattr(subprocess, "run", fake_run)
with pytest.raises(dockerops.DockerError, match="timed out"):
dockerops.compose_exec(tmp_path, "p", "s", ["sleep", "9999"], timeout=1)
def test_compose_exec_script_streams_via_stdin(tmp_path, monkeypatch):
script = tmp_path / "hook.sh"
body = b"#!/bin/sh\necho hello\n"
script.write_bytes(body)
recorded = {}
def fake_run(cmd, **kwargs):
recorded["cmd"] = cmd
recorded["input"] = kwargs["input"]
return FakeProc(stdout=b"hello\n", returncode=0)
monkeypatch.setattr(subprocess, "run", fake_run)
out = dockerops.compose_exec_script(tmp_path, "p", "s", script)
assert out == "hello\n"
# exec ... s sh -s (script body comes in on stdin)
cmd = recorded["cmd"]
assert cmd[-3:] == ["s", "sh", "-s"]
assert recorded["input"] == body
def test_compose_exec_script_raises_on_nonzero(tmp_path, monkeypatch):
script = tmp_path / "fail.sh"
script.write_bytes(b"exit 1\n")
def fake_run(cmd, **kwargs):
return FakeProc(stdout=b"", stderr=b"hook says no", returncode=1)
monkeypatch.setattr(subprocess, "run", fake_run)
with pytest.raises(dockerops.DockerError, match="hook fail.sh exited 1"):
dockerops.compose_exec_script(tmp_path, "p", "s", script)
def test_compose_exec_script_raises_on_timeout(tmp_path, monkeypatch):
script = tmp_path / "slow.sh"
script.write_bytes(b"sleep 10\n")
def fake_run(cmd, **kwargs):
raise subprocess.TimeoutExpired(cmd, timeout=kwargs.get("timeout"))
monkeypatch.setattr(subprocess, "run", fake_run)
with pytest.raises(dockerops.DockerError, match="hook slow.sh timed out"):
dockerops.compose_exec_script(tmp_path, "p", "s", script, timeout=1)


@@ -21,6 +21,7 @@ def runner(tmp_path, monkeypatch):
    apps.mkdir()
    monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
    monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
    monkeypatch.setenv("FURTKA_INSTALL_PLAN", str(tmp_path / "install-plan.json"))
    monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
    import importlib
@@ -33,7 +34,7 @@ def runner(tmp_path, monkeypatch):
    return r
def _write_installed_app(apps_dir: Path, name: str = "fileshare", **overrides):
    app = apps_dir / name
    app.mkdir()
    manifest = {
@@ -44,6 +45,7 @@ def _write_installed_app(apps_dir: Path, name: str = "fileshare"):
        "volumes": ["files"],
        "ports": [445],
        "icon": "icon.svg",
        **overrides,
    }
    (app / "manifest.json").write_text(json.dumps(manifest))
    (app / "docker-compose.yaml").write_text("services: {}\n")
@@ -175,3 +177,310 @@ def test_run_install_releases_lock_after_error(runner, monkeypatch):
    fh = runner.acquire_lock()
    fh.close()
# --- plan-aware multi-app installs -------------------------------------------
def _write_plan(plan_path: Path, target: str, to_install: list[str]) -> None:
plan_path.write_text(json.dumps({"target": target, "to_install": to_install}))
def _stub_docker_ops(monkeypatch, calls: list):
import furtka.dockerops as dockerops
def _pull(app_dir, project):
calls.append(("pull", project))
def _vol(name):
calls.append(("vol", name))
def _up(app_dir, project):
calls.append(("up", project))
monkeypatch.setattr(dockerops, "compose_pull", _pull)
monkeypatch.setattr(dockerops, "ensure_volume", _vol)
monkeypatch.setattr(dockerops, "compose_up", _up)
def test_run_install_iterates_plan_order(runner, monkeypatch):
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
_write_installed_app(
apps_dir(),
"zigbee2mqtt",
requires=[{"app": "mosquitto"}],
)
_write_plan(runner.plan_path(), "zigbee2mqtt", ["mosquitto", "zigbee2mqtt"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
runner.run_install("zigbee2mqtt")
# mosquitto fully reconciled before zigbee2mqtt starts.
assert [c for c in calls if c[0] == "pull"] == [("pull", "mosquitto"), ("pull", "zigbee2mqtt")]
assert [c for c in calls if c[0] == "up"] == [("up", "mosquitto"), ("up", "zigbee2mqtt")]
s = runner.read_state()
assert s["stage"] == "done"
assert s["target"] == "zigbee2mqtt"
assert s["app"] == "zigbee2mqtt"
def test_run_install_fires_on_install_hook_against_provider(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
mosq = _write_installed_app(apps_dir(), "mosquitto")
# Provider ships a hook script.
(mosq / "hooks").mkdir()
hook = mosq / "hooks" / "create-user.sh"
hook.write_bytes(b"#!/bin/sh\necho MQTT_USER=z2m\necho MQTT_PASS=hunter2\n")
consumer = _write_installed_app(
apps_dir(),
"zigbee2mqtt",
requires=[{"app": "mosquitto", "on_install": "hooks/create-user.sh"}],
)
# Consumer's .env starts empty.
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "zigbee2mqtt", ["mosquitto", "zigbee2mqtt"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
captured = {}
def fake_exec_script(app_dir, project, service, script_path, *, env, timeout):
captured["app_dir"] = app_dir
captured["project"] = project
captured["service"] = service
captured["script_path"] = script_path
captured["env"] = env
captured["timeout"] = timeout
return "MQTT_USER=z2m\nMQTT_PASS=hunter2\n"
# Tell the provider_exec_service helper to pick a deterministic service.
monkeypatch.setattr(
dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "eclipse-mosquitto:2"}
)
monkeypatch.setattr(dockerops, "compose_exec_script", fake_exec_script)
runner.run_install("zigbee2mqtt")
# Hook was called against the provider, with the consumer's name + version
# in env, and the timeout we expect.
assert captured["project"] == "mosquitto"
assert captured["service"] == "mosquitto"
assert captured["script_path"] == hook
assert captured["env"] == {
"FURTKA_CONSUMER_APP": "zigbee2mqtt",
"FURTKA_CONSUMER_VERSION": "0.1.0",
}
assert captured["timeout"] == 60.0
# Consumer's .env now has the hook output.
env_text = (consumer / ".env").read_text()
assert "MQTT_USER=z2m" in env_text
assert "MQTT_PASS=hunter2" in env_text
# Mode 0600.
assert (consumer / ".env").stat().st_mode & 0o777 == 0o600
def test_run_install_hook_furtka_json_sentinel(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
consumer = _write_installed_app(
apps_dir(),
"z2m",
requires=[{"app": "mosquitto", "on_install": "hooks/x.sh"}],
)
(apps_dir() / "mosquitto" / "hooks").mkdir()
(apps_dir() / "mosquitto" / "hooks" / "x.sh").write_bytes(b"")
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "z2m", ["mosquitto", "z2m"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
# Hook output mixes plain KEY=VALUE and a FURTKA_JSON sentinel. JSON
# wins on conflict (overlays plain).
monkeypatch.setattr(
dockerops,
"compose_exec_script",
lambda *a, **k: 'MQTT_USER=oldval\nFURTKA_JSON: {"MQTT_USER": "newval", "TOKEN": "abc"}\n',
)
runner.run_install("z2m")
env_text = (consumer / ".env").read_text()
assert "MQTT_USER=newval" in env_text # JSON overlay wins
assert "TOKEN=abc" in env_text
def test_run_install_hook_rejects_bad_key_name(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
consumer = _write_installed_app(
apps_dir(),
"z2m",
requires=[{"app": "mosquitto", "on_install": "hooks/x.sh"}],
)
(apps_dir() / "mosquitto" / "hooks").mkdir()
(apps_dir() / "mosquitto" / "hooks" / "x.sh").write_bytes(b"")
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "z2m", ["mosquitto", "z2m"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
monkeypatch.setattr(
dockerops, "compose_exec_script", lambda *a, **k: "lowercase_key=oops\n"
)
with pytest.raises(runner.InstallRunnerError, match="UPPER_SNAKE_CASE"):
runner.run_install("z2m")
s = runner.read_state()
assert s["stage"] == "error"
# Consumer's compose_up was never called because the hook failed.
assert not any(c[0] == "up" and c[1] == "z2m" for c in calls)
def test_run_install_hook_rejects_placeholder_value(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
consumer = _write_installed_app(
apps_dir(),
"z2m",
requires=[{"app": "mosquitto", "on_install": "hooks/x.sh"}],
)
(apps_dir() / "mosquitto" / "hooks").mkdir()
(apps_dir() / "mosquitto" / "hooks" / "x.sh").write_bytes(b"")
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "z2m", ["mosquitto", "z2m"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
monkeypatch.setattr(
dockerops, "compose_exec_script", lambda *a, **k: "MQTT_PASS=changeme\n"
)
with pytest.raises(runner.InstallRunnerError, match="placeholder"):
runner.run_install("z2m")
def test_run_install_hook_failure_skips_consumer_compose_up(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
consumer = _write_installed_app(
apps_dir(),
"z2m",
requires=[{"app": "mosquitto", "on_install": "hooks/x.sh"}],
)
(apps_dir() / "mosquitto" / "hooks").mkdir()
(apps_dir() / "mosquitto" / "hooks" / "x.sh").write_bytes(b"")
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "z2m", ["mosquitto", "z2m"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
def boom(*a, **k):
raise dockerops.DockerError("hook returned 1: connection refused")
monkeypatch.setattr(dockerops, "compose_exec_script", boom)
with pytest.raises(dockerops.DockerError):
runner.run_install("z2m")
s = runner.read_state()
assert s["stage"] == "error"
assert s["target"] == "z2m"
# The provider's compose_up DID run earlier in the plan.
assert ("up", "mosquitto") in calls
# But the consumer's never did.
assert ("up", "z2m") not in calls
def test_run_install_missing_provider_hook_file_raises(runner, monkeypatch):
import furtka.dockerops as dockerops
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "mosquitto")
consumer = _write_installed_app(
apps_dir(),
"z2m",
requires=[{"app": "mosquitto", "on_install": "hooks/missing.sh"}],
)
(consumer / ".env").write_text("")
_write_plan(runner.plan_path(), "z2m", ["mosquitto", "z2m"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
with pytest.raises(runner.InstallRunnerError, match="missing in provider"):
runner.run_install("z2m")
def test_run_install_plan_file_is_consumed_after_read(runner, monkeypatch):
"""After a run, the plan file is removed so a stale plan can't steer the next run."""
from furtka.paths import apps_dir
_write_installed_app(apps_dir(), "fileshare")
_write_plan(runner.plan_path(), "fileshare", ["fileshare"])
calls: list = []
_stub_docker_ops(monkeypatch, calls)
runner.run_install("fileshare")
assert not runner.plan_path().exists()
# --- _parse_hook_output (unit) -----------------------------------------------
def test_parse_hook_output_kv_only(runner):
out = runner._parse_hook_output("MQTT_USER=z2m\nMQTT_PASS=hunter2\n")
assert out == {"MQTT_USER": "z2m", "MQTT_PASS": "hunter2"}
def test_parse_hook_output_rejects_lowercase_key(runner):
with pytest.raises(runner.InstallRunnerError, match="UPPER_SNAKE_CASE"):
runner._parse_hook_output("lowercase=oops\n")
def test_parse_hook_output_furtka_json(runner):
out = runner._parse_hook_output(
'FURTKA_JSON: {"FOO": "bar", "BAZ": "qux"}\n'
)
assert out == {"FOO": "bar", "BAZ": "qux"}
def test_parse_hook_output_furtka_json_rejects_non_string(runner):
with pytest.raises(runner.InstallRunnerError, match="must be a string"):
runner._parse_hook_output('FURTKA_JSON: {"FOO": 42}\n')
def test_parse_hook_output_furtka_json_rejects_bad_payload(runner):
with pytest.raises(runner.InstallRunnerError, match="must be an object"):
runner._parse_hook_output('FURTKA_JSON: ["not", "a", "dict"]\n')
def test_parse_hook_output_furtka_json_invalid_json(runner):
with pytest.raises(runner.InstallRunnerError, match="invalid FURTKA_JSON"):
runner._parse_hook_output("FURTKA_JSON: {not json}\n")
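Taken together, these unit tests specify a small grammar for hook stdout: UPPER_SNAKE_CASE `KEY=VALUE` lines plus an optional `FURTKA_JSON:` sentinel carrying a JSON object of string values that overlays the plain pairs. A sketch consistent with that contract (`HookError` stands in for `InstallRunnerError`; the key regex is an assumption):

```python
import json
import re

_KEY_RE = re.compile(r"^[A-Z][A-Z0-9_]*$")  # assumed UPPER_SNAKE_CASE rule

class HookError(Exception):
    """Stand-in for install_runner.InstallRunnerError."""

def parse_hook_output(text: str) -> dict[str, str]:
    plain: dict[str, str] = {}
    overlay: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("FURTKA_JSON:"):
            try:
                payload = json.loads(line[len("FURTKA_JSON:"):])
            except json.JSONDecodeError as exc:
                raise HookError(f"invalid FURTKA_JSON: {exc}") from exc
            if not isinstance(payload, dict):
                raise HookError("FURTKA_JSON payload must be an object")
            for key, value in payload.items():
                if not isinstance(value, str):
                    raise HookError(f"value for {key} must be a string")
                if not _KEY_RE.match(key):
                    raise HookError(f"key {key!r} must be UPPER_SNAKE_CASE")
                overlay[key] = value
            continue
        key, sep, value = line.partition("=")
        if not sep or not _KEY_RE.match(key):
            raise HookError(f"key {key!r} must be UPPER_SNAKE_CASE")
        plain[key] = value
    return {**plain, **overlay}  # JSON overlay wins on conflict
```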


@@ -355,3 +355,85 @@ def test_update_env_rejects_invalid_path(tmp_path, fake_dirs):
    # Then try to update to a bad path.
    with pytest.raises(installer.InstallError, match="does not exist"):
        installer.update_env("jellyfin", {"MEDIA_PATH": str(tmp_path / "ghost")})
# --- parse_env_text ----------------------------------------------------------
def test_parse_env_text_basic():
from furtka.installer import parse_env_text
out = parse_env_text("A=1\nB=two\n#comment\n\nC=three=four\n")
assert out == {"A": "1", "B": "two", "C": "three=four"}
def test_parse_env_text_handles_quoted_values():
from furtka.installer import parse_env_text
out = parse_env_text('A="has space"\nB=\'plain\'\nC="quote \\"inside\\""\n')
assert out == {"A": "has space", "B": "plain", "C": 'quote "inside"'}
def test_parse_env_text_ignores_malformed_lines():
from furtka.installer import parse_env_text
out = parse_env_text("no-equals-sign\n=missing-key\nGOOD=ok\n")
assert out == {"GOOD": "ok", "": "missing-key"}
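These three cases outline a lenient dotenv parser: comments and blank lines skipped, lines without `=` dropped, later `=` signs kept in the value, quotes (with `\"` escapes) stripped, and an empty key tolerated. A hypothetical sketch matching those cases, not the real `installer.parse_env_text`:

```python
def parse_env_text(text: str) -> dict[str, str]:
    """Parse dotenv-style text into a dict, leniently."""
    out: dict[str, str] = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # blank line or comment
        key, sep, value = line.partition("=")
        if not sep:
            continue  # malformed: no '=' at all
        key = key.strip()
        value = value.strip()
        # Strip matching single or double quotes; unescape \" inside "".
        if len(value) >= 2 and value[0] == value[-1] and value[0] in "\"'":
            quote = value[0]
            value = value[1:-1]
            if quote == '"':
                value = value.replace('\\"', '"')
        out[key] = value
    return out
```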
# --- install_plan driver -----------------------------------------------------
def test_install_plan_calls_install_from_in_order(tmp_path, fake_dirs, monkeypatch):
from furtka.deps import DepPlan
calls: list[tuple[str, dict | None]] = []
def fake_resolve(name):
return tmp_path / "src" / name
def fake_install_from(src, settings=None):
calls.append((src.name, settings))
return apps_dir() / src.name
monkeypatch.setattr(installer, "resolve_source", fake_resolve)
monkeypatch.setattr(installer, "install_from", fake_install_from)
plan = DepPlan(
target="a",
install_order=("c", "b", "a"),
already_installed=frozenset(),
to_install=("c", "b", "a"),
)
out = installer.install_plan(plan, settings_target={"K": "v"})
assert [name for name, _ in calls] == ["c", "b", "a"]
# Only the target receives settings.
assert calls[0] == ("c", None)
assert calls[1] == ("b", None)
assert calls[2] == ("a", {"K": "v"})
assert [p.name for p in out] == ["c", "b", "a"]
def test_install_plan_skips_already_installed(tmp_path, fake_dirs, monkeypatch):
from furtka.deps import DepPlan
calls: list[str] = []
def fake_resolve(name):
return tmp_path / "src" / name
def fake_install_from(src, settings=None):
calls.append(src.name)
return apps_dir() / src.name
monkeypatch.setattr(installer, "resolve_source", fake_resolve)
monkeypatch.setattr(installer, "install_from", fake_install_from)
plan = DepPlan(
target="a",
install_order=("b", "a"),
already_installed=frozenset({"b"}),
to_install=("a",),
)
installer.install_plan(plan)
assert calls == ["a"]


@@ -191,3 +191,104 @@ def test_settings_non_list_rejected(tmp_path):
    path = _write_app(tmp_path, "fileshare", bad)
    with pytest.raises(ManifestError, match="settings must be a list"):
        load_manifest(path)
def test_requires_optional_default_empty(tmp_path):
path = _write_app(tmp_path, "fileshare", VALID_MANIFEST)
m = load_manifest(path)
assert m.requires == ()
def test_requires_parsed_full_entry(tmp_path):
payload = dict(
VALID_MANIFEST,
name="zigbee2mqtt",
requires=[
{
"app": "mosquitto",
"on_install": "hooks/create-user.sh",
"on_start": "hooks/ensure-user.sh",
}
],
)
path = _write_app(tmp_path, "zigbee2mqtt", payload)
m = load_manifest(path)
assert len(m.requires) == 1
r = m.requires[0]
assert r.app == "mosquitto"
assert r.on_install == "hooks/create-user.sh"
assert r.on_start == "hooks/ensure-user.sh"
def test_requires_app_only_no_hooks(tmp_path):
payload = dict(VALID_MANIFEST, name="z2m", requires=[{"app": "mosquitto"}])
path = _write_app(tmp_path, "z2m", payload)
m = load_manifest(path)
assert m.requires[0].app == "mosquitto"
assert m.requires[0].on_install is None
assert m.requires[0].on_start is None
def test_requires_rejects_self_reference(tmp_path):
payload = dict(VALID_MANIFEST, requires=[{"app": "fileshare"}])
path = _write_app(tmp_path, "fileshare", payload)
with pytest.raises(ManifestError, match="self-reference"):
load_manifest(path)
def test_requires_rejects_duplicate_app(tmp_path):
payload = dict(
VALID_MANIFEST,
name="z2m",
requires=[{"app": "mosquitto"}, {"app": "mosquitto"}],
)
path = _write_app(tmp_path, "z2m", payload)
with pytest.raises(ManifestError, match="duplicate"):
load_manifest(path)
def test_requires_rejects_traversal_hook_path(tmp_path):
payload = dict(
VALID_MANIFEST,
name="z2m",
requires=[{"app": "mosquitto", "on_install": "../../etc/passwd"}],
)
path = _write_app(tmp_path, "z2m", payload)
with pytest.raises(ManifestError, match=r"must not contain '\.\.'"):
load_manifest(path)
def test_requires_rejects_absolute_hook_path(tmp_path):
payload = dict(
VALID_MANIFEST,
name="z2m",
requires=[{"app": "mosquitto", "on_start": "/tmp/hook.sh"}],
)
path = _write_app(tmp_path, "z2m", payload)
with pytest.raises(ManifestError, match="must be relative"):
load_manifest(path)
def test_requires_non_list_rejected(tmp_path):
payload = dict(VALID_MANIFEST, requires={"app": "mosquitto"})
path = _write_app(tmp_path, "fileshare", payload)
with pytest.raises(ManifestError, match="requires must be a list"):
load_manifest(path)
def test_requires_rejects_invalid_app_name(tmp_path):
payload = dict(VALID_MANIFEST, requires=[{"app": "Bad-Name!"}])
path = _write_app(tmp_path, "fileshare", payload)
with pytest.raises(ManifestError, match="lowercase app name"):
load_manifest(path)
def test_requires_rejects_empty_hook_string(tmp_path):
payload = dict(
VALID_MANIFEST,
name="z2m",
requires=[{"app": "mosquitto", "on_install": ""}],
)
path = _write_app(tmp_path, "z2m", payload)
with pytest.raises(ManifestError, match="non-empty string"):
load_manifest(path)


@@ -133,3 +133,123 @@ def test_reconcile_isolates_missing_docker_binary(tmp_path, monkeypatch):
    error = next(a for a in actions if a.kind == "error")
    assert error.target == "fileshare"
    assert "docker" in error.detail


# --- Topo ordering + on_start hooks ----------------------------------------
PROVIDER_MANIFEST = dict(
VALID_MANIFEST,
name="mosquitto",
volumes=["data"],
)

CONSUMER_MANIFEST = dict(
VALID_MANIFEST,
name="zigbee2mqtt",
volumes=["state"],
requires=[
{
"app": "mosquitto",
"on_start": "hooks/ensure-user.sh",
}
],
)


def test_reconcile_topo_orders_providers_before_consumers(tmp_path, fake_docker, monkeypatch):
# The default consumer name ("zigbee2mqtt") sorts alphabetically AFTER
# "mosquitto", so a plain alphabetical walk would pass by accident. Rename
# the consumer to an alpha-first name so the dependency edge is what's
# load-bearing.
consumer = dict(CONSUMER_MANIFEST, name="alpha", requires=[{"app": "mosquitto"}])
_make_app(tmp_path, "mosquitto", PROVIDER_MANIFEST)
_make_app(tmp_path, "alpha", consumer)
reconciler.reconcile(tmp_path)
up_order = [project for _, project in fake_docker["compose_up"]]
assert up_order == ["mosquitto", "alpha"]


def test_reconcile_fires_on_start_before_compose_up(tmp_path, fake_docker, monkeypatch):
provider = _make_app(tmp_path, "mosquitto", PROVIDER_MANIFEST)
(provider / "hooks").mkdir()
(provider / "hooks" / "ensure-user.sh").write_bytes(b"#!/bin/sh\necho ok\n")
_make_app(tmp_path, "zigbee2mqtt", CONSUMER_MANIFEST)
hook_calls: list[str] = []
def fake_exec_script(app_dir, project, service, script_path, *, env, timeout):
hook_calls.append(f"{project}:{script_path.name}")
return ""
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
monkeypatch.setattr(dockerops, "compose_exec_script", fake_exec_script)
actions = reconciler.reconcile(tmp_path)
# Hook fired against mosquitto exactly once.
assert hook_calls == ["mosquitto:ensure-user.sh"]
# Hook action appears before consumer's compose_up.
kinds = [(a.kind, a.target) for a in actions]
hook_idx = kinds.index(("hook", "zigbee2mqtt:mosquitto:on_start"))
up_idx = kinds.index(("compose_up", "zigbee2mqtt"))
assert hook_idx < up_idx
# And the provider's compose_up happened first.
assert fake_docker["compose_up"][0][1] == "mosquitto"


def test_reconcile_on_start_failure_skips_consumer_compose_up(tmp_path, fake_docker, monkeypatch):
provider = _make_app(tmp_path, "mosquitto", PROVIDER_MANIFEST)
(provider / "hooks").mkdir()
(provider / "hooks" / "ensure-user.sh").write_bytes(b"")
_make_app(tmp_path, "zigbee2mqtt", CONSUMER_MANIFEST)
# Unrelated third app: must still come up despite the consumer's hook fail.
_make_app(tmp_path, "lonely", dict(VALID_MANIFEST, name="lonely", volumes=["data"]))
def boom(*a, **k):
raise dockerops.DockerError("hook returned 1")
monkeypatch.setattr(dockerops, "compose_image_tags", lambda a, p: {"mosquitto": "img"})
monkeypatch.setattr(dockerops, "compose_exec_script", boom)
actions = reconciler.reconcile(tmp_path)
assert reconciler.has_errors(actions)
error_actions = [a for a in actions if a.kind == "error"]
assert len(error_actions) == 1
assert error_actions[0].target == "zigbee2mqtt"
assert "on_start(mosquitto)" in error_actions[0].detail
# Provider AND unrelated app came up; consumer did NOT.
up_projects = {p for _, p in fake_docker["compose_up"]}
assert "mosquitto" in up_projects
assert "lonely" in up_projects
assert "zigbee2mqtt" not in up_projects


def test_reconcile_dry_run_emits_hook_action_without_executing(tmp_path, fake_docker, monkeypatch):
provider = _make_app(tmp_path, "mosquitto", PROVIDER_MANIFEST)
(provider / "hooks").mkdir()
(provider / "hooks" / "ensure-user.sh").write_bytes(b"")
_make_app(tmp_path, "zigbee2mqtt", CONSUMER_MANIFEST)
called = []
monkeypatch.setattr(
dockerops, "compose_exec_script", lambda *a, **k: called.append(1) or ""
)
actions = reconciler.reconcile(tmp_path, dry_run=True)
assert called == []
hook_actions = [a for a in actions if a.kind == "hook"]
assert any(a.target == "zigbee2mqtt:mosquitto:on_start" for a in hook_actions)


def test_reconcile_missing_provider_still_isolated(tmp_path, fake_docker, monkeypatch):
"""Consumer requires an app that isn't installed — per-app error, others continue."""
_make_app(tmp_path, "zigbee2mqtt", CONSUMER_MANIFEST)
_make_app(tmp_path, "lonely", dict(VALID_MANIFEST, name="lonely", volumes=["data"]))
actions = reconciler.reconcile(tmp_path)
assert reconciler.has_errors(actions)
errors = [a for a in actions if a.kind == "error"]
assert len(errors) == 1
assert errors[0].target == "zigbee2mqtt"
# `lonely` still got its compose_up.
assert any(p == "lonely" for _, p in fake_docker["compose_up"])