Commit graph

8 commits

caa8609908 Merge pull request 'release-26.4-alpha' (#7) from release-26.4-alpha into main
Some checks failed
Build ISO / build-iso (push) Successful in 26m22s
Deploy site / deploy (push) Successful in 3s
CI / lint (push) Successful in 26s
CI / test (push) Successful in 1m37s
CI / markdown-links (push) Successful in 33s
Release / release (push) Successful in 6s
CI / validate-json (push) Failing after 14m0s
Reviewed-on: #7
2026-04-18 14:29:19 +02:00
522ea06cd0 fix(smoke): bump smoke-VM RAM to 8 GiB + make cores/memory configurable
All checks were successful
CI / lint (pull_request) Successful in 1m10s
CI / test (pull_request) Successful in 2m17s
CI / validate-json (pull_request) Successful in 1m5s
CI / markdown-links (pull_request) Successful in 41s
pollux (192.168.178.165) wedged at the network level during an
end-to-end install test today — mkinitcpio on a 4 GiB smoke VM +
the cached 1.5 GB ISO + a busy runner container pushed the host into
OOM, taking pveproxy and the SSH path down with it. Recovered by
physical reset.

Smoke VM now defaults to 8192 MiB / 2 vCPU, configurable via
PVE_TEST_VM_MEMORY / PVE_TEST_VM_CORES. Host has 64 GiB, so one
smoke VM at 8 GiB is well within headroom.
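The override pattern can be sketched as follows (the two variable names come from the commit message; the surrounding script is a minimal stand-in, not the project's actual smoke script):

```shell
# Default smoke-VM sizing, overridable from the CI environment.
# ${VAR:-default} keeps a caller-provided value and only falls back
# when the variable is unset or empty.
VM_MEMORY="${PVE_TEST_VM_MEMORY:-8192}"   # MiB
VM_CORES="${PVE_TEST_VM_CORES:-2}"        # vCPUs

echo "memory=${VM_MEMORY} cores=${VM_CORES}"
```

Setting `PVE_TEST_VM_MEMORY=4096` in the runner environment would then shrink the VM without touching the script.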

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:28:29 +02:00
f4f7d853ba chore(smoke): surface PVE response body on API failure
Some checks failed
CI / lint (pull_request) Successful in 1m3s
CI / test (pull_request) Successful in 1m23s
CI / markdown-links (pull_request) Has been cancelled
CI / validate-json (pull_request) Has been cancelled
api() was swallowing Proxmox's error body because callers pipe its
output to /dev/null. With a bare "curl: (22) 403" in the log we can't
tell which permission is missing. Now we capture the response body,
print it to stderr on failure, and only emit it to stdout on success.

No behaviour change on the happy path.
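The capture-and-surface pattern described above can be sketched like this (the real api() wraps curl against the PVE API; this stand-in takes an arbitrary command so the sketch stays self-contained):

```shell
# Buffer a command's output; on failure replay it on stderr so it
# survives callers that redirect stdout to /dev/null, and only emit
# it on stdout when the command succeeds (happy path unchanged).
api() {
  local body rc
  body="$("$@")"; rc=$?
  if [ "$rc" -ne 0 ]; then
    printf '%s\n' "$body" >&2   # error body still reaches the log
    return "$rc"
  fi
  printf '%s\n' "$body"         # success: stdout, as before
}

api echo '{"data":"ok"}'
```

A caller doing `api … >/dev/null` now still sees the Proxmox error body on stderr instead of a bare exit code.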

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:06:09 +02:00
afbb8d59f9 fix(smoke): reuse existing PVE-side ISO instead of delete+re-upload
Some checks failed
CI / markdown-links (pull_request) Waiting to run
CI / lint (pull_request) Successful in 1m5s
CI / validate-json (pull_request) Has been cancelled
CI / test (pull_request) Has been cancelled
The delete branch required Datastore.Allocate (or was hitting a
privilege-separated token ACL edge case) and produced 403s on re-runs
against the same commit SHA. Since the ISO bytes are reproducible for
a given SHA — furtka-<sha>.iso is content-addressed — we can just
reuse whatever is already in PVE storage instead of cycling it.

Fixes the "runs-on-same-sha" re-dispatch case without needing any extra
PVE permission, and shaves ~2 min off repeated smoke runs.
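The reuse logic can be sketched as below; `iso_exists` and `upload_iso` are hypothetical stand-ins for the real PVE API calls (querying the storage content list, then uploading), not the project's actual helpers:

```shell
iso_exists() {
  # Real version: query the node's storage content list via the PVE
  # API and look for the volid. Stubbed here as "not present".
  false
}
upload_iso() { echo "uploading $1"; }

ensure_iso() {
  local iso="$1"
  if iso_exists "$iso"; then
    # furtka-<sha>.iso is content-addressed: same SHA, same bytes,
    # so the copy already in PVE storage is safe to reuse as-is.
    echo "reusing ${iso}"
  else
    upload_iso "$iso"
  fi
}

ensure_iso furtka-abc123.iso
```

Skipping the delete branch entirely is what removes the Datastore.Allocate requirement on re-runs.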

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:59:42 +02:00
d499907613 feat(ci): auto-boot every main-ISO in smoke VM on .165 Proxmox
Some checks failed
Build ISO / smoke-vm (push) Blocked by required conditions
Build ISO / build-iso (push) Successful in 24m28s
CI / test (push) Successful in 3m1s
CI / validate-json (push) Successful in 55s
CI / markdown-links (push) Successful in 37s
CI / lint (push) Failing after 13m19s
After build-iso, a new smoke-vm job uploads the freshly built ISO to
the test Proxmox at 192.168.178.165 via PVE API token, boots it in a
fresh VM (VMID range 9000-9099, MAC derived from commit SHA so the
runner can find the DHCP IP by scanning the LAN), and curls :5000 to
confirm the webinstaller answers HTTP 200. Last 5 smoke VMs + their
ISOs are kept for post-mortem; older ones are purged. The smoke job
runs with continue-on-error so a VM-side flake doesn't mark the ISO
build red.
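One plausible construction of the SHA-derived MAC (the commit message doesn't show the exact scheme, so the octet layout here is an assumption; the 02: prefix marks a locally administered unicast address, which keeps the generated MAC out of vendor OUI space):

```shell
# Derive a deterministic MAC from the commit SHA so the runner can
# match the VM's DHCP lease when scanning the LAN: take the first
# five hex bytes of the SHA and prefix a locally-administered octet.
sha="$(printf '%s' "${COMMIT_SHA:-d499907613}" | tr 'A-F' 'a-f')"
mac="02:$(printf '%s' "$sha" | cut -c1-10 | sed 's/../&:/g; s/:$//')"
echo "$mac"
```

Same SHA in, same MAC out, so re-runs of a commit always look for the same lease.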

Shortens the feedback loop on ISO regressions from "next manual VM
test session" (days) to "next push" (minutes) — the 2026-04-15/16 VM
sessions each found real boot-time bugs that unit tests missed.

Docs at docs/smoke-vm.md. Requires Forgejo secrets PVE_TEST_HOST and
PVE_TEST_TOKEN (dedicated smoke@pve!ci PVE token, privilege-separated).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:41:44 +02:00
b4c65f46bf fix(release): drop jq dependency, use python3 for JSON assembly
All checks were successful
Build ISO / build-iso (push) Successful in 17m30s
CI / lint (push) Successful in 25s
CI / test (push) Successful in 33s
CI / validate-json (push) Successful in 24s
CI / markdown-links (push) Successful in 12s
Release / release (push) Successful in 6s
The 26.2-alpha release workflow hung for 15+ minutes on
"apt-get install -y jq" — the runner's apt mirror was unreachable
(or very slow), and the whole publish stalled.

jq was only used for two tiny things: building the release-create
POST body and reading the release id from the response. Both are
one-liners in Python, which is guaranteed-present on the Forgejo
Actions ubuntu-latest runner image. Replaced both uses; removed
the apt-get step from release.yml entirely. Slow mirrors no
longer block tagged releases.
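The two replaced jq uses can be sketched with python3 one-liners like these (field names follow the Forgejo v1 release API; the exact payload in release.yml may differ):

```shell
tag="v26.4-alpha"

# 1) Build the release-create POST body (previously jq -n '{...}').
body="$(python3 -c 'import json, sys; print(json.dumps({"tag_name": sys.argv[1], "name": sys.argv[1]}))' "$tag")"
echo "$body"

# 2) Read the release id out of the API response (previously jq .id).
response='{"id": 42, "tag_name": "v26.4-alpha"}'
release_id="$(printf '%s' "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin)["id"])')"
echo "$release_id"
```

Both depend only on the interpreter already baked into the runner image, so no apt-get round-trip is needed.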

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 17:05:21 +02:00
c080764c7e fix(furtka): move assets/ to repo top level so Caddy + systemd find it
All checks were successful
Build ISO / build-iso (push) Successful in 17m5s
CI / lint (push) Successful in 27s
CI / test (push) Successful in 40s
CI / validate-json (push) Successful in 25s
CI / markdown-links (push) Successful in 12s
Root cause of today's 403 on a fresh install: assets/ lived inside the
Python package at furtka/assets/, so the resource-manager tarball
extracted to /opt/furtka/versions/<ver>/furtka/assets/. But Caddyfile
has `root * /opt/furtka/current/assets/www`, systemd units point at
/opt/furtka/current/assets/bin/furtka-status, and the install-time
`systemctl link /opt/furtka/current/assets/systemd/*.service` expected
the top-level layout. All three found nothing:

- Caddy → 403 Forbidden (empty/missing document root)
- systemctl link → silent no-op, nothing ever linked into
  /etc/systemd/system/
- furtka-api.service + furtka-reconcile.service → "inactive" because
  they were never registered

Nothing in the Python package ever imported furtka.assets — these are
shell scripts, HTML/CSS, systemd units, and a Caddyfile, which is
config data, not package data. Promoting assets/ to the repo root
matches how it's referenced everywhere downstream and eliminates the
path mismatch.

Changes:
- git mv furtka/assets assets
- iso/build.sh: tarball-staging step now also `cp -a "$REPO_ROOT/assets"`
  so the tarball ships ./assets at its root, and the live-ISO copy
  reads from $REPO_ROOT/assets instead of $REPO_ROOT/furtka/assets.
- scripts/build-release-tarball.sh: same for release tarballs.
- webinstaller/app.py: _resolve_assets_dir's dev fallback walks one
  level up to REPO_ROOT/assets/.
- tests/test_webinstaller_assets.py: ASSETS constant updated.
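The staging step from the first two bullets can be sketched as follows (paths here are throwaway tmpdirs standing in for the repo and the tarball staging dir; the real logic lives in iso/build.sh and scripts/build-release-tarball.sh):

```shell
REPO_ROOT="$(mktemp -d)"; STAGE="$(mktemp -d)"
mkdir -p "$REPO_ROOT/assets/www"
echo '<h1>furtka</h1>' > "$REPO_ROOT/assets/www/index.html"

# assets/ now lives at the repo top level, so copying it into the
# staging dir puts ./assets at the tarball root, which is the layout
# /opt/furtka/current/assets/... expects after install.
cp -a "$REPO_ROOT/assets" "$STAGE/"

ls "$STAGE"
```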

Tests still green (150/150) because the move was purely
filesystem-level — no code imports changed. The next ISO build will
land assets at the path everything downstream expects.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 15:26:10 +02:00
f0acc4427e feat(furtka): release CI + `furtka update` / `furtka rollback` CLI
Slice 2 of the self-update story. Tagging a release on main now
produces a downloadable self-update payload on the Forgejo releases
page, and a running box can pull it down, verify it, atomically swap
to the new version, and health-check the result.

New pieces:

- scripts/build-release-tarball.sh <version> — packages the furtka/
  package + bundled apps/ + a root-level VERSION file as
  dist/furtka-<version>.tar.gz, plus a .sha256 sidecar and a
  release.json metadata blob.
- scripts/publish-release.sh <version> — uses the Forgejo v1 API to
  create a release (body pulled from the CHANGELOG section for this
  tag, pre-release auto-flagged on -alpha/-beta/-rc) and upload the
  three assets sequentially. Needs $FORGEJO_TOKEN.
- .forgejo/workflows/release.yml — tag-triggered, runs both scripts
  with the new $FORGEJO_RELEASE_TOKEN repo secret.
- furtka/updater.py — check_update, prepare_update, apply_update,
  run_update, rollback. Atomic symlink swap, sha256 verify (TOCTOU-
  safe: re-hashes on-disk file), health-check post-restart with
  auto-rollback on failure, stage-by-stage progress persisted to
  /var/lib/furtka/update-state.json so the UI can poll independent
  of the (restarting) API process. Path overrides via FURTKA_ROOT /
  FURTKA_STATE_DIR / FURTKA_LOCK_PATH so tests pin a tmpdir.
- furtka/cli.py — `furtka update [--check] [--json]` and
  `furtka rollback`.
- tests/test_updater.py — 15 tests: version compare, sha256 verify,
  tarball extract (including traversal refusal), lockfile, apply
  happy + rollback paths, rollback CLI, check_update with stubbed
  Forgejo.
- iso/build.sh — writes VERSION at the tarball root so the install
  path matches the self-update path (previously assumed only the
  release script did this).

RELEASING.md now points at the automated flow — no more manually
clicking "Create release" on the Forgejo UI.
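The atomic symlink swap at the heart of furtka/updater.py can be sketched in shell like this (the real implementation is Python; the directory layout here is a tmpdir stand-in for /opt/furtka):

```shell
FURTKA_ROOT="$(mktemp -d)"
mkdir -p "$FURTKA_ROOT/versions/26.3" "$FURTKA_ROOT/versions/26.4"
ln -s "$FURTKA_ROOT/versions/26.3" "$FURTKA_ROOT/current"

# Swap "current" atomically: build the new symlink beside the old one,
# then rename(2) over it with GNU mv -T. Readers always see either the
# old target or the new one, never a missing link.
ln -sfn "$FURTKA_ROOT/versions/26.4" "$FURTKA_ROOT/current.tmp"
mv -T "$FURTKA_ROOT/current.tmp" "$FURTKA_ROOT/current"

readlink "$FURTKA_ROOT/current"
```

Rollback is the same swap pointed back at the previous versions/ entry, which is why the health-check can undo a bad update cheaply.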

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 13:30:45 +02:00