feat(install): async background install with progress polling
All checks were successful
Build ISO / build-iso (push) Successful in 17m24s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 43s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 16s
Release / release (push) Successful in 11m34s

POST /api/apps/install now returns 202 Accepted after the synchronous
pre-validation (resolve source, copy files, write .env, check for
placeholder secrets, validate path-type settings). The docker-facing
phases (compose pull → ensure volumes → compose up) are dispatched as
a background systemd-run unit (furtka-install-<app>) that writes stage
transitions to /var/lib/furtka/install-state.json. The UI polls
GET /api/apps/install/status every 1.5s and re-labels the modal
submit button — "Image wird heruntergeladen…" →
"Speicherbereiche werden erstellt…" → "Container wird gestartet…" —
instead of sitting dead on "Installing…" for 30+ seconds on large
images like Jellyfin.

Mirrors the exact shape of /api/catalog/sync/apply and
/api/furtka/update/apply: same fcntl lock, same atomic state-file
writes, same terminal-state poll loop ("done" | "error"). New CLI
subcommand `furtka app install-bg <name>` is what systemd-run invokes;
it's hidden from --help because regular CLI users still want the
synchronous `furtka app install <name>`.
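The fcntl lock the message references can be sketched in isolation; a minimal non-blocking flock pattern (POSIX-only, names illustrative, not the shipped furtka code):

```python
import fcntl
import tempfile
from pathlib import Path

def try_acquire(path: Path):
    """Non-blocking exclusive flock; raises instead of queueing."""
    path.parent.mkdir(parents=True, exist_ok=True)
    fh = path.open("w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        fh.close()
        raise RuntimeError("another install is already in progress")
    return fh  # lock is held until fh.close()

lock = Path(tempfile.mkdtemp()) / "install.lock"
first = try_acquire(lock)
try:
    try_acquire(lock)  # second open file description: conflicts
except RuntimeError as e:
    print(e)
first.close()
```

Because flock is tied to the open file description, a second `open()` of the same path conflicts even within one process, and closing the handle (or a reboot clearing tmpfs) releases it.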

Reinstall button on the app list polls too — after dispatch, its text
reflects the background stage until terminal, matching the modal
flow.
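The terminal-state poll loop shared by the modal and the Reinstall button can be sketched roughly like this (a Python stand-in for the browser-side fetch loop; `poll_until_terminal` and the injected `fetch_state` are illustrative names, not project API):

```python
import time

def poll_until_terminal(fetch_state, interval=0.01, timeout=1.0):
    """Poll until stage is "done" or "error"; return the last state seen."""
    deadline = time.monotonic() + timeout
    state = {}
    while time.monotonic() < deadline:
        state = fetch_state() or {}
        if state.get("stage") in ("done", "error"):
            return state
        time.sleep(interval)
    return state  # timed out; the caller decides how to surface that

# Simulated stage sequence, in the order the background job writes it:
stages = iter([
    {"stage": "pulling_image"},
    {"stage": "creating_volumes"},
    {"stage": "starting_container"},
    {"stage": "done", "version": "1.0.0"},
])
print(poll_until_terminal(lambda: next(stages))["stage"])  # done
```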

Tests:
- tests/test_install_runner.py (new, 9 cases): state roundtrip, lock
  contention, happy-path phase ordering, error writes on pull/up
  failure, lock release on both terminal outcomes.
- tests/test_api.py: new no_systemd_run fixture stubs subprocess.run;
  existing install tests adapted to 202 response; new tests for 409
  lock contention and the status endpoint.
- tests/test_cli.py: install-bg dispatches correctly and returns 1
  on failure with journald-friendly stderr.

256 tests pass, ruff check + format clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Daniel Maksymilian Syrnicki 2026-04-21 15:50:49 +02:00
parent 470823b347
commit f3cd9e963c
8 changed files with 605 additions and 29 deletions


@@ -7,6 +7,27 @@ This project uses calendar versioning: `YY.N-stage` (e.g. `26.0-alpha` = 2026, r
## [Unreleased]
## [26.12-alpha] - 2026-04-21
### Changed
- **App install goes async with live progress.** `POST /api/apps/install`
now returns `202 Accepted` after the synchronous pre-validation
(resolve source, copy files, write `.env`, placeholder and path
checks). The handler dispatches the actual Docker part (`compose
pull` → volumes → `compose up`) as a `systemd-run
--unit=furtka-install-<app>` background job that writes its phase to
`/var/lib/furtka/install-state.json`. New
`GET /api/apps/install/status` for UI polling. The install modal now
shows live "Image wird heruntergeladen…" →
"Speicherbereiche werden erstellt…" → "Container wird gestartet…"
instead of ~30 seconds of dead "Installing…". The pattern exactly
mirrors `/api/catalog/sync/apply` and `/api/furtka/update/apply`.
New CLI subcommand `furtka app install-bg <name>` (internal, invoked
by the API); `furtka app install` stays synchronous for terminal users.
The Reinstall button in the app list also polls the install status
and mirrors the phase in its button text.
## [26.11-alpha] - 2026-04-21
### Added
@@ -222,7 +243,8 @@ First tagged snapshot. Pre-alpha — the installer does not yet boot, but the de
- **Containers:** Docker + Compose
- **License:** AGPL-3.0
[Unreleased]: https://forgejo.sourcegate.online/daniel/furtka/compare/26.12-alpha...HEAD
[26.12-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.12-alpha
[26.11-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.11-alpha
[26.10-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.10-alpha
[26.9-alpha]: https://forgejo.sourcegate.online/daniel/furtka/releases/tag/26.9-alpha
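The state-file writes the 26.12-alpha entry describes use the usual write-temp-then-rename trick, so a polling reader never sees a half-written JSON document; a minimal standalone sketch (paths illustrative):

```python
import json
import tempfile
from pathlib import Path

def atomic_write_json(path: Path, payload: dict) -> None:
    """Write JSON via a temp file + rename so readers never see a partial file."""
    path.parent.mkdir(parents=True, exist_ok=True)
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(payload, indent=2))
    tmp.replace(path)  # atomic on POSIX: old or new content, never half of each

state = Path(tempfile.mkdtemp()) / "install-state.json"
atomic_write_json(state, {"stage": "pulling_image", "app": "jellyfin"})
print(json.loads(state.read_text())["stage"])  # pulling_image
```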


@@ -21,7 +21,7 @@ import time
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer
from furtka import auth, dockerops, install_runner, installer, reconciler, sources
from furtka.manifest import ManifestError, load_manifest
from furtka.paths import apps_dir
from furtka.scanner import scan
@@ -214,6 +214,51 @@ async function openSettingsDialog(name, action) {
modal.submit.addEventListener('click', submitModal);
// Install progress phases written by the background job's state file.
// Mirrors furtka/install_runner.py stage strings. Unknown stages fall
// back to a neutral "Installing…" so a future phase rename doesn't
// leave the modal button blank.
const INSTALL_STAGE_LABELS = {
'pulling_image': 'Image wird heruntergeladen…',
'creating_volumes': 'Speicherbereiche werden erstellt…',
'starting_container': 'Container wird gestartet…',
'done': 'Fertig',
};
async function pollInstallStatus(original) {
// Two-minute ceiling: Jellyfin over a slow DSL line can take ~90s
// just on the image pull. Beyond that something's stuck — the
// background job is still running in systemd, but the UI gives up
// on the modal and lets the user close it.
const deadline = Date.now() + 120000;
while (Date.now() < deadline) {
await new Promise(res => setTimeout(res, 1500));
let s = {};
try {
s = await fetch('/api/apps/install/status').then(r => r.json());
} catch (e) { /* transient; keep polling */ }
const stage = s.stage || '';
modal.submit.textContent = INSTALL_STAGE_LABELS[stage] || 'Installing…';
if (stage === 'done') {
closeModal();
await refresh();
return;
}
if (stage === 'error') {
modal.error.textContent = s.error || 'Install failed';
modal.error.classList.add('show');
modal.submit.disabled = false;
modal.submit.textContent = original;
return;
}
}
// Timed out waiting for a terminal state; don't lie to the user.
modal.error.textContent = 'Installation is taking longer than expected. Check /settings for the background job status.';
modal.error.classList.add('show');
modal.submit.disabled = false;
modal.submit.textContent = original;
}
async function submitModal() {
if (!modal.current) return;
const { name, action } = modal.current;
@@ -247,6 +292,13 @@ async function submitModal() {
modal.submit.textContent = original;
return;
}
// Install dispatched a background job; poll until terminal. The
// edit path stays synchronous (settings updates are fast: env write
// + reconcile, no image pull).
if (action === 'install' && r.status === 202) {
await pollInstallStatus(original);
return;
}
closeModal();
await refresh();
} catch (e) {
@@ -339,10 +391,24 @@ async function handleButton(op, name, btn) {
: ' — already up to date';
}
document.getElementById('log').textContent = header + '\\n' + JSON.stringify(data, null, 2);
// Reinstall dispatches an async install the same way the modal does;
// follow the background job on the button label until terminal.
if (op === 'reinstall' && r.status === 202) {
const deadline = Date.now() + 120000;
while (Date.now() < deadline) {
await new Promise(res => setTimeout(res, 1500));
let s = {};
try { s = await fetch('/api/apps/install/status').then(r => r.json()); } catch (e) {}
const stage = s.stage || '';
btn.textContent = INSTALL_STAGE_LABELS[stage] || 'Reinstalling…';
if (stage === 'done' || stage === 'error') break;
}
}
} catch (e) {
document.getElementById('log').textContent = `[${op} ${name}] network error: ${e.message}`;
}
btn.textContent = original;
btn.disabled = false;
await refresh(); await refresh();
} }
@@ -627,18 +693,66 @@ def _do_get_settings(name):
def _do_install(name, settings=None):
"""Kick off an app install: synchronous pre-phase + async docker phase.
Fast parts run inline so validation failures come back as immediate
4xx (bad path, placeholder secret, unknown app, etc.). The slow
`docker compose pull` then `compose up` are dispatched as a
background systemd-run unit that writes phase transitions to
/var/lib/furtka/install-state.json for the UI to poll.
"""
import subprocess
# Fast-fail if another install is already in flight. Lock lives under
# /run/ so a previous reboot clears it automatically.
try:
fh = install_runner.acquire_lock()
except install_runner.InstallRunnerError as e:
return 409, {"error": str(e)}
try:
try:
src = installer.resolve_source(name)
target = installer.install_from(src, settings=settings)
except installer.InstallError as e:
return 400, {"error": str(e)}
# Initial state so the UI has something to show between this
# response and the background job's first write.
install_runner.write_state("pulling_image", app=name)
finally:
# Release the lock so the background job can re-acquire it.
fh.close()
unit = f"furtka-install-{name}"
try:
subprocess.run(
[
"systemd-run",
f"--unit={unit}",
"--no-block",
"--collect",
"/usr/local/bin/furtka",
"app",
"install-bg",
name,
],
check=True,
capture_output=True,
text=True,
)
except FileNotFoundError:
install_runner.write_state("error", app=name, error="systemd-run not available")
return 502, {"error": "systemd-run not available"}
except subprocess.CalledProcessError as e:
err = (e.stderr or e.stdout or "").strip()
install_runner.write_state("error", app=name, error=f"dispatch failed: {err}")
return 502, {"error": f"systemd-run failed: {err}"}
return 202, {"status": "dispatched", "unit": unit, "installed": str(target)}
def _do_install_status():
"""Return the current install-state.json contents (or {})."""
return 200, install_runner.read_state()
def _do_update_settings(name, settings):
@@ -1100,6 +1214,9 @@ class _Handler(BaseHTTPRequestHandler):
if self.path == "/api/catalog/status":
status, body = _do_catalog_status()
return self._json(status, body)
if self.path == "/api/apps/install/status":
status, body = _do_install_status()
return self._json(status, body)
# /api/apps/<name>/settings
if self.path.startswith("/api/apps/") and self.path.endswith("/settings"):
name = self.path[len("/api/apps/") : -len("/settings")]


@@ -71,6 +71,24 @@ def _cmd_app_install(args: argparse.Namespace) -> int:
return 1 if reconciler.has_errors(actions) else 0
def _cmd_app_install_bg(args: argparse.Namespace) -> int:
"""Docker-facing phases of an install — called by the API via systemd-run.
Internal subcommand; normal CLI users want `app install` (synchronous).
This exists to separate the slow docker pull/up from the synchronous
validation the API does inline, so the UI can poll a state file.
"""
from furtka import install_runner
try:
install_runner.run_install(args.name)
except Exception as e:
# run_install already wrote state="error"; echo for journald.
print(f"install-bg failed: {e}", file=sys.stderr)
return 1
return 0
def _cmd_app_remove(args: argparse.Namespace) -> int:
target = apps_dir() / args.name
if not target.exists():
@@ -237,6 +255,15 @@ def build_parser() -> argparse.ArgumentParser:
)
app_install.set_defaults(func=_cmd_app_install)
# Internal — called by the HTTP API via systemd-run. Deliberately omitted
# from the help listing; regular CLI users want `app install` above.
app_install_bg = app_sub.add_parser(
"install-bg",
help=argparse.SUPPRESS,
)
app_install_bg.add_argument("name", help="Installed app folder name")
app_install_bg.set_defaults(func=_cmd_app_install_bg)
app_remove = app_sub.add_parser("remove", help="Stop and uninstall an app (keeps volumes)")
app_remove.add_argument("name", help="App name (folder name under /var/lib/furtka/apps/)")
app_remove.set_defaults(func=_cmd_app_remove)

furtka/install_runner.py (new file, 121 lines)

@@ -0,0 +1,121 @@
"""Background job for app installs — progress-visible via state file.
The slow part of installing an app is `docker compose pull` on a large
image (Jellyfin ~500 MB); without progress feedback, the UI modal sits
dead on "Installing…" for 30+ seconds and the user wonders if it hung.
This module mirrors the exact same shape as ``furtka.catalog`` and
``furtka.updater`` so the UI can poll an install just like it polls a
catalog sync or a self-update. The split is:
- ``furtka.api._do_install`` runs synchronously: resolve source, copy
the app folder, write .env, validate path settings + placeholders.
Those are fast, and their failures deserve an immediate 4xx so the
install modal can surface them in-line.
- After that the API writes an initial state file (stage
"pulling_image") and dispatches ``systemd-run --unit=furtka-install-
<name>`` to run ``furtka app install-bg <name>`` in the background.
That CLI subcommand is what calls ``run_install()`` here; it does the
docker-facing phases and writes state transitions as it goes.
State file schema (``/var/lib/furtka/install-state.json``):
{
"stage": "pulling_image" | "creating_volumes"
| "starting_container" | "done" | "error",
"updated_at": "2026-04-21T17:30:45+0200",
"app": "jellyfin",
"version": "1.0.0", // added at "done"
"error": "details..." // added at "error"
}
Lock: ``/run/furtka/install.lock`` (tmpfs, reboot-safe). Global, not
per-app; two parallel installs are not a v1 use-case and the lock
keeps the state-file representation simple (one in-flight install at
a time).
"""
from __future__ import annotations
import fcntl
import json
import os
import time
from pathlib import Path
from furtka import dockerops
from furtka.manifest import load_manifest
from furtka.paths import apps_dir
_INSTALL_STATE = Path(os.environ.get("FURTKA_INSTALL_STATE", "/var/lib/furtka/install-state.json"))
_LOCK_PATH = Path(os.environ.get("FURTKA_INSTALL_LOCK", "/run/furtka/install.lock"))
class InstallRunnerError(RuntimeError):
"""Any failure in the background install flow that should surface to the caller."""
def state_path() -> Path:
return _INSTALL_STATE
def lock_path() -> Path:
return _LOCK_PATH
def write_state(stage: str, **extra) -> None:
"""Atomic JSON state write — same shape as catalog/update-state."""
state_path().parent.mkdir(parents=True, exist_ok=True)
tmp = state_path().with_suffix(".tmp")
payload = {"stage": stage, "updated_at": time.strftime("%Y-%m-%dT%H:%M:%S%z"), **extra}
tmp.write_text(json.dumps(payload, indent=2))
tmp.replace(state_path())
def read_state() -> dict:
try:
return json.loads(state_path().read_text())
except (FileNotFoundError, json.JSONDecodeError):
return {}
def acquire_lock():
path = lock_path()
path.parent.mkdir(parents=True, exist_ok=True)
fh = path.open("w")
try:
fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError as e:
fh.close()
raise InstallRunnerError("another install is already in progress") from e
return fh
def run_install(name: str) -> None:
"""Docker-facing phases of the install: pull → volumes → compose up.
Called by the ``furtka app install-bg <name>`` CLI subcommand from the
systemd-run spawned by the API. Assumes the API has already run
``installer.install_from()``, so the app folder, .env, and manifest
are on disk at ``apps_dir() / <name>``.
Every phase transition is written to the state file for the UI to
poll. On exception the state flips to ``"error"`` with the message,
then the exception is re-raised so the CLI exits non-zero and
journald has a traceback.
"""
with acquire_lock():
target = apps_dir() / name
manifest = load_manifest(target / "manifest.json", expected_name=name)
try:
write_state("pulling_image", app=name)
dockerops.compose_pull(target, name)
write_state("creating_volumes", app=name)
for short in manifest.volumes:
dockerops.ensure_volume(manifest.volume_name(short))
write_state("starting_container", app=name)
dockerops.compose_up(target, name)
write_state("done", app=name, version=manifest.version)
except Exception as e:
write_state("error", app=name, error=str(e))
raise


@@ -1,6 +1,6 @@
[project]
name = "furtka"
version = "26.12-alpha"
description = "Open-source home server OS — simple enough for everyone."
requires-python = ">=3.11"
readme = "README.md"


@@ -30,6 +30,18 @@ def fake_dirs(tmp_path, monkeypatch):
monkeypatch.setenv("FURTKA_BUNDLED_APPS_DIR", str(bundled))
monkeypatch.setenv("FURTKA_CATALOG_DIR", str(catalog))
monkeypatch.setenv("FURTKA_USERS_FILE", str(users_file))
# install_runner writes to /var/lib/furtka/install-state.json and
# /run/furtka/install.lock by default — redirect into tmp_path so
# test code doesn't need root.
monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))
# install_runner caches env vars at import time, so reload it to
# pick up the tmp-path env vars this fixture just set.
import importlib
from furtka import install_runner
importlib.reload(install_runner)
# Scrub any sessions that leaked from a prior test — the SESSIONS
# store is module-level.
auth.SESSIONS.clear()
@@ -53,6 +65,29 @@ def no_docker(monkeypatch):
monkeypatch.setattr(dockerops, "compose_down", lambda app_dir, project: None)
@pytest.fixture
def no_systemd_run(monkeypatch):
"""Stub the systemd-run dispatch in _do_install so tests don't need it.
The install endpoint now spawns a background systemd-run unit to do
the docker-facing phases. Tests that exercise the install path only
care that the sync pre-phase succeeded and the dispatch was
attempted with the right args they shouldn't actually fire up
systemd. subprocess.run gets monkeypatched to return a fake success
CompletedProcess, and the call args get captured for assertions.
"""
import subprocess
calls = []
def fake_run(cmd, check=False, capture_output=False, text=False, **kwargs):
calls.append(cmd)
return subprocess.CompletedProcess(cmd, 0, stdout="", stderr="")
monkeypatch.setattr(subprocess, "run", fake_run)
return calls
def _write_bundled(bundled, name, manifest=None, env_example=None):
app = bundled / name
app.mkdir()
@@ -145,7 +180,7 @@ def test_list_available_inlines_icon_svg(fake_dirs):
assert entry["icon_svg"] == _SIMPLE_SVG
def test_list_installed_inlines_icon_svg(fake_dirs, no_docker, no_systemd_run):
apps, bundled = fake_dirs
app = _write_bundled(bundled, "fileshare", env_example="A=real")
_write_icon(app, _SIMPLE_SVG)
@@ -154,12 +189,15 @@ def test_list_installed_inlines_icon_svg(fake_dirs, no_docker):
assert entry["icon_svg"] == _SIMPLE_SVG
def test_list_available_hides_already_installed(fake_dirs, no_docker, no_systemd_run):
apps, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
status, _ = api._do_install("fileshare")
assert status == 202  # async dispatch
# Now bundled should NOT include fileshare anymore — the app folder
# exists on disk (install_from finished synchronously before the
# dispatch), which is what _list_available uses for the "installed"
# check.
assert api._list_available() == []
# But installed list should.
installed = api._list_installed()
@@ -202,7 +240,7 @@ def test_remove_endpoint_unknown(fake_dirs, no_docker):
assert status == 404
def test_remove_endpoint_happy_path(fake_dirs, no_docker, no_systemd_run):
apps, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -562,13 +600,13 @@ def test_get_settings_not_found(fake_dirs):
assert status == 404
def test_install_with_settings_writes_env_via_api(fake_dirs, no_docker, no_systemd_run):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
status, body = api._do_install(
"fileshare", settings={"SMB_USER": "alice", "SMB_PASSWORD": "s3cret"}
)
assert status == 202, body
apps, _ = fake_dirs
env = (apps / "fileshare" / ".env").read_text()
assert "SMB_USER=alice" in env
@@ -583,7 +621,7 @@ def test_install_with_settings_rejects_empty_required_via_api(fake_dirs, no_dock
assert "SMB_PASSWORD" in body["error"]
def test_update_settings_merges(fake_dirs, no_docker, no_systemd_run):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
api._do_install("fileshare", settings={"SMB_USER": "alice", "SMB_PASSWORD": "original"})
@@ -665,7 +703,7 @@ def test_update_not_installed(fake_dirs):
assert "not installed" in body["error"]
def test_update_no_changes(fake_dirs, no_docker, no_systemd_run, update_docker_stubs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -678,7 +716,7 @@ def test_update_no_changes(fake_dirs, no_docker, update_docker_stubs):
assert update_docker_stubs["up_called"] == 0
def test_update_changes_applied(fake_dirs, no_docker, no_systemd_run, update_docker_stubs):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -698,7 +736,9 @@ def test_update_changes_applied(fake_dirs, no_docker, update_docker_stubs):
assert update_docker_stubs["up_called"] == 1
def test_update_skips_services_not_running(
fake_dirs, no_docker, no_systemd_run, update_docker_stubs
):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -712,7 +752,9 @@ def test_update_skips_services_not_running(fake_dirs, no_docker, update_docker_s
assert update_docker_stubs["up_called"] == 0
def test_update_returns_502_on_pull_error(
fake_dirs, no_docker, no_systemd_run, update_docker_stubs
):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -823,7 +865,9 @@ def test_furtka_update_status_endpoint(stub_furtka_updater):
assert stub_furtka_updater["status_called"] == 1
def test_http_post_update_route(
fake_dirs, no_docker, no_systemd_run, update_docker_stubs, admin_session
):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
api._do_install("fileshare")
@@ -851,7 +895,7 @@ def test_http_post_update_route(fake_dirs, no_docker, update_docker_stubs, admin
server.server_close()
def test_http_post_install_with_settings(fake_dirs, no_docker, no_systemd_run, admin_session):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", manifest=SETTINGS_MANIFEST)
server = api.HTTPServer(("127.0.0.1", 0), api._Handler)
@@ -870,14 +914,50 @@ def test_http_post_install_with_settings(fake_dirs, no_docker, admin_session):
},
)
with urllib.request.urlopen(req) as r:
# Async: 202 Accepted + dispatched background job.
assert r.status == 202
body = json.loads(r.read())
assert body["status"] == "dispatched"
assert body["unit"] == "furtka-install-fileshare"
# Sync phase wrote the .env before dispatch.
apps, _ = fake_dirs
assert "SMB_PASSWORD=s3cret" in (apps / "fileshare" / ".env").read_text()
# And systemd-run was called exactly once with the expected cmd.
assert len(no_systemd_run) == 1
assert no_systemd_run[0][:4] == [
"systemd-run",
"--unit=furtka-install-fileshare",
"--no-block",
"--collect",
]
assert no_systemd_run[0][-3:] == ["app", "install-bg", "fileshare"]
finally:
server.shutdown()
server.server_close()
def test_do_install_returns_409_when_locked(fake_dirs, no_docker, no_systemd_run):
_, bundled = fake_dirs
_write_bundled(bundled, "fileshare", env_example="A=real")
# Hold the install lock so _do_install fast-fails.
fh = api.install_runner.acquire_lock()
try:
status, body = api._do_install("fileshare")
assert status == 409
assert "in progress" in body["error"]
finally:
fh.close()
def test_do_install_status_returns_state(fake_dirs):
# Write state directly, then GET it via the status handler.
api.install_runner.write_state("pulling_image", app="jellyfin")
status, body = api._do_install_status()
assert status == 200
assert body["stage"] == "pulling_image"
assert body["app"] == "jellyfin"
# --- Catalog endpoints ------------------------------------------------------


@@ -71,3 +71,35 @@ def test_reconcile_dry_run_empty(tmp_path, monkeypatch, capsys):
assert rc == 0
out = capsys.readouterr().out
assert "0 actions" in out
def test_app_install_bg_dispatches_to_runner(tmp_path, monkeypatch):
"""CLI `app install-bg <name>` must call install_runner.run_install(name).
This is the entry point the HTTP API fires via systemd-run; regression
here would leave the UI hanging at "pulling_image…" forever because
the background never transitions state.
"""
_set_env(monkeypatch, tmp_path)
from furtka import install_runner
called = []
monkeypatch.setattr(install_runner, "run_install", lambda name: called.append(name))
rc = main(["app", "install-bg", "fileshare"])
assert rc == 0
assert called == ["fileshare"]
def test_app_install_bg_returns_1_on_failure(tmp_path, monkeypatch, capsys):
_set_env(monkeypatch, tmp_path)
from furtka import install_runner
def boom(name):
raise RuntimeError("compose pull failed")
monkeypatch.setattr(install_runner, "run_install", boom)
rc = main(["app", "install-bg", "fileshare"])
assert rc == 1
err = capsys.readouterr().err
assert "install-bg failed" in err
assert "compose pull failed" in err


@@ -0,0 +1,177 @@
"""Tests for the background app-install runner.

Same shape as test_catalog.py / test_updater.py: the fixture reloads the
module with env-overridden paths, and dockerops calls are stubbed so nothing
touches a real daemon. Asserts that state transitions happen in the right
order and that exceptions flip the state to "error" with the message before
re-raising.
"""
from __future__ import annotations

import json
from pathlib import Path

import pytest


@pytest.fixture
def runner(tmp_path, monkeypatch):
    apps = tmp_path / "apps"
    apps.mkdir()
    monkeypatch.setenv("FURTKA_APPS_DIR", str(apps))
    monkeypatch.setenv("FURTKA_INSTALL_STATE", str(tmp_path / "install-state.json"))
    monkeypatch.setenv("FURTKA_INSTALL_LOCK", str(tmp_path / "install.lock"))

    import importlib

    from furtka import install_runner as r
    from furtka import paths as p

    importlib.reload(p)
    importlib.reload(r)
    return r


def _write_installed_app(apps_dir: Path, name: str = "fileshare"):
    app = apps_dir / name
    app.mkdir()
    manifest = {
        "name": name,
        "display_name": "Fileshare",
        "version": "0.1.0",
        "description": "Test fixture",
        "volumes": ["files"],
        "ports": [445],
        "icon": "icon.svg",
    }
    (app / "manifest.json").write_text(json.dumps(manifest))
    (app / "docker-compose.yaml").write_text("services: {}\n")
    return app


def test_write_and_read_state_round_trip(runner):
    runner.write_state("pulling_image", app="jellyfin")
    s = runner.read_state()
    assert s["stage"] == "pulling_image"
    assert s["app"] == "jellyfin"
    assert "updated_at" in s


def test_read_state_returns_empty_when_missing(runner):
    assert runner.read_state() == {}


def test_read_state_returns_empty_on_junk(runner):
    runner.state_path().parent.mkdir(parents=True, exist_ok=True)
    runner.state_path().write_text("{not json")
    assert runner.read_state() == {}


def test_acquire_lock_prevents_concurrent_runs(runner):
    held = runner.acquire_lock()
    try:
        with pytest.raises(runner.InstallRunnerError, match="in progress"):
            runner.acquire_lock()
    finally:
        held.close()


def test_run_install_happy_path(runner, monkeypatch):
    import furtka.dockerops as dockerops
    from furtka.paths import apps_dir

    _write_installed_app(apps_dir(), "fileshare")
    calls = []
    monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: calls.append(("pull", a)))
    monkeypatch.setattr(dockerops, "ensure_volume", lambda name: calls.append(("vol", name)))
    monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: calls.append(("up", a)))

    runner.run_install("fileshare")

    # Ordering: pull first, then volumes, then up.
    assert [c[0] for c in calls] == ["pull", "vol", "up"]
    # Exactly the namespaced volume name got created.
    assert calls[1] == ("vol", "furtka_fileshare_files")
    # Final state is "done" with the manifest version.
    s = runner.read_state()
    assert s["stage"] == "done"
    assert s["app"] == "fileshare"
    assert s["version"] == "0.1.0"


def test_run_install_writes_error_on_pull_failure(runner, monkeypatch):
    import furtka.dockerops as dockerops
    from furtka.paths import apps_dir

    _write_installed_app(apps_dir(), "fileshare")

    def boom(*a, **k):
        raise dockerops.DockerError("pull failed: registry unreachable")

    monkeypatch.setattr(dockerops, "compose_pull", boom)
    monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
    monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)

    with pytest.raises(dockerops.DockerError):
        runner.run_install("fileshare")

    s = runner.read_state()
    assert s["stage"] == "error"
    assert s["app"] == "fileshare"
    assert "registry unreachable" in s["error"]


def test_run_install_writes_error_on_up_failure(runner, monkeypatch):
    import furtka.dockerops as dockerops
    from furtka.paths import apps_dir

    _write_installed_app(apps_dir(), "fileshare")
    monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
    monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)

    def boom(*a, **k):
        raise dockerops.DockerError("compose up: container refused to start")

    monkeypatch.setattr(dockerops, "compose_up", boom)

    with pytest.raises(dockerops.DockerError):
        runner.run_install("fileshare")

    s = runner.read_state()
    assert s["stage"] == "error"
    assert "refused to start" in s["error"]


def test_run_install_releases_lock_after_done(runner, monkeypatch):
    import furtka.dockerops as dockerops
    from furtka.paths import apps_dir

    _write_installed_app(apps_dir(), "fileshare")
    monkeypatch.setattr(dockerops, "compose_pull", lambda *a, **k: None)
    monkeypatch.setattr(dockerops, "ensure_volume", lambda name: None)
    monkeypatch.setattr(dockerops, "compose_up", lambda *a, **k: None)

    runner.run_install("fileshare")

    # Lock released: a fresh acquire must succeed.
    fh = runner.acquire_lock()
    fh.close()


def test_run_install_releases_lock_after_error(runner, monkeypatch):
    import furtka.dockerops as dockerops
    from furtka.paths import apps_dir

    _write_installed_app(apps_dir(), "fileshare")
    monkeypatch.setattr(
        dockerops, "compose_pull", lambda *a, **k: (_ for _ in ()).throw(dockerops.DockerError("x"))
    )

    with pytest.raises(dockerops.DockerError):
        runner.run_install("fileshare")

    fh = runner.acquire_lock()
    fh.close()
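The runner tests above exercise three helpers: atomic state writes, junk-tolerant state reads, and a mutual-exclusion lock. A sketch of what they plausibly look like (an assumption-laden illustration: the names `write_state`, `read_state`, `acquire_lock`, and `InstallRunnerError` match the tests, but the `os.replace`-based atomicity and `fcntl.flock` details are guesses, and paths are parameters here where the real module resolves them from the `FURTKA_*` env vars):

```python
# Hypothetical sketch of the state/lock helpers under test.
import fcntl
import json
import os
import time
from pathlib import Path


class InstallRunnerError(RuntimeError):
    """Raised when another install already holds the lock."""


def write_state(path: Path, stage: str, **extra) -> None:
    # Write to a sibling temp file, then rename into place: os.replace is
    # atomic on POSIX, so pollers never observe a half-written JSON file.
    path.parent.mkdir(parents=True, exist_ok=True)
    payload = {"stage": stage, "updated_at": time.time(), **extra}
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(payload))
    os.replace(tmp, path)


def read_state(path: Path) -> dict:
    # A missing or corrupt state file reads as empty; it never raises,
    # so the status endpoint can always answer.
    try:
        return json.loads(path.read_text())
    except (OSError, ValueError):
        return {}


def acquire_lock(path: Path):
    # Non-blocking exclusive flock; the caller closes the handle to
    # release. flock binds to the open file description, so a second
    # open()+flock fails even within the same process.
    path.parent.mkdir(parents=True, exist_ok=True)
    fh = open(path, "w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        fh.close()
        raise InstallRunnerError("an install is already in progress")
    return fh
```

The junk-tolerant `read_state` is what makes the 1.5 s UI poll safe: even if it races a writer or finds leftover garbage, it degrades to `{}` rather than a 500.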