Podman Update Service
Please use my code as a guide until I figure out what is going on.
My bet is that the label is missing from a lot of containers I don't want on the update list.
My second bet is that I have not enabled the remaining Podman service that I shoved in my notes somewhere, and I mean somewhere.
Found it: bet number 2. Although that is not the direct problem.
Each has a distinct role in Podman’s systemd integration:
1. podman-auto-update.service
- Purpose: Runs Podman’s auto-update mechanism.
- This checks containers that were created with `--label io.containers.autoupdate=registry` (or `=image`), automatically pulls newer images, and restarts containers if updates are available (see the sketch below).
- Status: Disabled by default (you don't want every container auto-updating unless explicitly configured). It's paired with the timer below.
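A minimal sketch of the label in use, assuming the beszel-agent image from the stacks below is published as docker.io/henrygd/beszel-agent:

```bash
# Fully qualified image name, so Podman knows which registry to poll.
podman run -d --name beszel-agent \
  --label io.containers.autoupdate=registry \
  docker.io/henrygd/beszel-agent:latest

# Preview what an update pass would do without changing anything.
podman auto-update --dry-run
```

Note that the dry run will surface the PODMAN_SYSTEMD_UNIT complaint described under Problem below if the container is not systemd-managed.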
2. podman-clean-transient.service
- Purpose: Cleans up transient units created by Podman.
- When you run `podman run --rm ...` or use systemd transient units, Podman may generate temporary systemd service files. This unit ensures those don't accumulate and clutter your system.
- Status: Disabled by default but can be triggered when needed.
```bash
# Enable the clean function so that it runs on boot - there is no timer unit,
# so the cron entry (added idempotently below) covers scheduled runs.
systemctl enable --now podman-clean-transient.service

# Add the cron entry only if it is not already present.
crontab -l 2>/dev/null | grep -qF "0 23 * * * /usr/bin/systemctl start podman-clean-transient.service" \
  || (crontab -l 2>/dev/null; echo "0 23 * * * /usr/bin/systemctl start podman-clean-transient.service") | crontab -

# Restart whichever cron daemon this distro ships.
systemctl restart cron.service 2>/dev/null || systemctl restart crond.service 2>/dev/null \
  || { echo "Cron restart failed" >&2; exit 1; }
```
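Quick checks that both halves of the above took effect:

```bash
# Unit enabled for boot, and exactly one cron entry present.
systemctl is-enabled podman-clean-transient.service
crontab -l | grep -c "podman-clean-transient"
```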
3. podman-kube@.service
- Purpose: Template unit for running Kubernetes YAML manifests with Podman.
- You can run `systemctl start podman-kube@<escaped-path>.service`, where the instance name is the systemd-escaped path to a Kubernetes YAML file, and it will translate that YAML into Podman pods/containers (see the sketch below).
- Status: Disabled by default, since it's only useful if you're deploying via Kubernetes YAML.
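A sketch of the invocation, assuming a manifest at /opt/kube/mydeployment.yaml (the path is illustrative):

```bash
# The instance name is the systemd-escaped path to the YAML file.
escaped=$(systemd-escape /opt/kube/mydeployment.yaml)
systemctl start "podman-kube@${escaped}.service"
```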
4. podman-restart.service
- Purpose: Handles automatic restart of containers that were created with restart policies (`--restart=always`, etc.).
- Podman itself doesn't daemonize like Docker, so this service ensures containers with restart policies are properly restarted after a reboot (see the sketch below).
- Status: Disabled by default but enabled when you use restart policies.
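A sketch of the pairing, with nginx standing in for a real workload:

```bash
# The policy is recorded on the container; podman-restart re-applies it at boot.
podman run -d --name web --restart=always docker.io/library/nginx:latest
systemctl enable --now podman-restart.service
```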
5. podman.service
- Purpose: The main Podman systemd service.
- Provides the Podman REST API (when socket activation is used) and can be used to run Podman in a daemon-like mode.
- Status: Enabled — meaning it will start at boot.
6. podman.socket
- Purpose: The systemd socket unit for Podman’s REST API.
- When a client connects to
/run/podman/podman.sock, systemd activatespodman.serviceautomatically. - This is how Podman achieves “daemonless” operation: the service only runs when the socket is hit.
- Status: Enabled — so the socket is listening and will spawn Podman on demand.
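A sketch of the activation round trip, using the `libpod/info` endpoint from Podman's REST API:

```bash
# Make sure the socket is listening, then hit the API; systemd spawns
# podman.service on the first request.
systemctl enable --now podman.socket
curl --unix-socket /run/podman/podman.sock http://d/v4.0.0/libpod/info
```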
7. podman-auto-update.timer
- Purpose: The systemd timer that triggers `podman-auto-update.service` on a schedule (daily by default).
- If enabled, it periodically checks for updated images and restarts containers accordingly (see the sketch below).
- Status: Disabled by default — you’d enable this if you want automated container updates.
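Enabling it is a one-liner, plus a check on when the next run fires:

```bash
systemctl enable --now podman-auto-update.timer
systemctl list-timers podman-auto-update.timer --no-pager
```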
Problem
The label that was added to the stacks:

```yaml
labels:
  - com.docker.compose.project=basestack
  - com.docker.compose.service=beszel-agent
  - io.containers.autoupdate=registry
  - dockflare.enable=false
```

does not work in the way that I logically thought it would.
- The auto-update service scans all containers with `io.containers.autoupdate` set.
- If a container has that label but wasn't created via `podman generate systemd` (or a Quadlet unit), it won't have the `PODMAN_SYSTEMD_UNIT` label.
- The service then complains: "no PODMAN_SYSTEMD_UNIT label found" (the sketch below shows how to confirm which containers are affected).
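A quick check along these lines (`beszel-agent` is the service named in the stack above):

```bash
# List every container carrying the autoupdate label...
podman ps -a --filter label=io.containers.autoupdate --format '{{.Names}}'

# ...then check whether a given one also has the unit label the service wants.
# An empty result reproduces the complaint above.
podman inspect beszel-agent \
  --format '{{index .Config.Labels "PODMAN_SYSTEMD_UNIT"}}'
```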
Solution
- Remove the label from each service stack where it exists
- Go back to a manual update mechanism
- Redeploy Watchtower on the Podman servers; Docker servers on Engine v29 or later are a problem
- Look for a branched and maintained version of watchtower
- Write a script that can be deployed via cron (yuck)
- Switch to systemd for all containers - this gets complicated very quickly (see the sketch after this list).
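For the last option, a minimal sketch via `podman generate systemd` (rootless user session assumed); the generated unit carries the `PODMAN_SYSTEMD_UNIT` label that auto-update insists on:

```bash
# Generate a unit file for an existing container, install it for the user,
# and hand ownership of the container lifecycle to systemd.
podman generate systemd --new --files --name beszel-agent
mkdir -p ~/.config/systemd/user
mv container-beszel-agent.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-beszel-agent.service
```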
Direct answer: yes, there are active forks of Watchtower that continue development after Docker v29 broke compatibility. The two most notable are Marrrrrrrrry/watchtower and nickfedor/watchtower, both maintained and updated to work with Docker 29+.
Why forks are needed
- The original containrrr/watchtower project has gone stale, with no commits for ~3 years.
- Docker v29 introduced API changes (the minimum supported API version is now 1.44), which broke compatibility with the old Watchtower client.
- Community members stepped in to maintain forks or alternatives.
Active Forks & Alternatives
| Project | Status | Key Features | Notes |
|---|---|---|---|
| Marrrrrrrrry/watchtower | Active (last updated ~3 weeks ago) | - Dependencies upgraded<br>- Refactored codebase<br>- Published Docker images<br>- Compliance with Go standards | Maintainer promises quarterly updates, urgent fixes faster |
| nickfedor/watchtower | Active | - Drop-in replacement for containrrr/watchtower<br>- Works with Docker v29 without config changes | Reported by the community as stable and reliable |
| What’s Up Docker (WUD) | Alternative project (not a fork) | - UI for monitoring container updates<br>- Notifications when new images are available<br>- Schedule-based checks | Designed as a modern replacement for Watchtower |
Recommendation for your environment
Given your focus on fleet-wide automation, compliance overlays, and idempotent scheduling, I’d suggest:
- nickfedor/watchtower if you want a minimal-change drop-in replacement (just swap the image in your Compose/Podman stack; see the sketch below).
- Marrrrrrrrry/watchtower if you want a more actively refactored codebase with dependency upgrades and Go compliance.
- WUD if you’d like a UI-driven monitoring tool - nope
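For the drop-in route, a sketch on a Podman host; the ghcr.io image path and the socket mount are assumptions to verify against the fork's README:

```bash
# Watchtower expects a Docker socket, so mount Podman's socket in its place.
systemctl enable --now podman.socket
podman run -d --name watchtower \
  -v /run/podman/podman.sock:/var/run/docker.sock \
  ghcr.io/nickfedor/watchtower:latest
```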
Lots to learn about Podman.
#enoughsaid