Security Discussion for Software Updates

Hey there,

Please feel free to delete this thread if it is too basic.

I am asking myself: given critical compromises of software, Docker images, and dependencies, like the recent axios hack or the xz-utils compromise, is updating to tags like latest as soon as possible (e.g. with dockhand/tugtainer on a daily basis) always the best option?

What best practices do you follow for updates?

Are there also “stable” tags for Docker images, i.e. tags that are dynamically updated but not to the very latest version?

What about Linux OS (Proxmox, Debian…)?

I am looking for best practices somewhere in the middle between “enterprise-grade security, I have to manually approve and update everything” (maybe not suitable for long-term self-hosting?) and “comfort/efficiency”.

Thanks guys!

This is your forum; no one can delete any post here, unless it is about pirated content.

No. Updating to :latest (or equivalent “bleeding edge” tags) as aggressively as possible, even daily with auto-updaters like dockhand or tugtainer (don’t use Watchtower, it is no longer maintained), is not the best option for most self-hosting setups. It trades one risk (outdated, vulnerable software) for another (supply-chain attacks and breaking changes). The recent axios npm compromise (malicious versions pushed through a hijacked maintainer account) and the xz-utils backdoor that lingered in some Debian-based Docker images are perfect examples: automated “grab the newest” can pull poisoned packages before they are yanked.

Why “latest” is risky in practice

  • Mutability: the :latest tag is just “whatever was last pushed.” Publishers can (and do) change it, sometimes with breaking changes or (rarely) malice. Docker’s own documentation strongly advises against it in production or long-running setups.
  • No reproducibility: it is hard to roll back or to know exactly what is running.
  • Supply-chain exposure: as seen with xz-utils (still present in some old images) and axios, malicious updates can hit fast. Daily pulls amplify this.
  • Breaking changes: many images don’t guarantee backward compatibility on every push.

Community consensus (Docker docs, OWASP, GitHub, etc.): pin your images. Use versioned or “stable” tags instead.

Best practices: the comfortable middle ground for self-hosting

You are looking for the spot between “enterprise change-control board” and “set it and forget it.” Here is what I (and security folks) actually do:

  1. Pin to stable/semantic tags (never :latest)

    • Use things like nginx:1.27-alpine, node:lts-alpine, postgres:16-alpine, or whatever the maintainer calls the “current stable” branch.
    • Many official images also offer :stable, :lts, or major-version tags that get security backports without major feature churn. These are dynamically updated (security patches flow in) but far safer than :latest.
    • For maximum safety: pin to a digest (SHA256) once you know a version is good. Docker Compose supports image: nginx:1.27-alpine@sha256:abc123....
    • Rebuild your own images regularly with --pull so base layers stay fresh.
  2. Monitor, don’t auto-update blindly

    • Diun (Docker Image Update Notifier) is the gold standard in homelabs right now. It watches your running containers (or specific ones via labels) and sends you notifications (Discord, Telegram, email, Gotify, etc.) when updates are available. No auto-restart, full control.
      • Super lightweight, actively maintained, works great with Docker Compose.
      • You get a daily/weekly digest → you review changelogs/CVEs → you run docker compose pull && docker compose up -d.
    • If your compose files live in Git (highly recommended), Renovate (or Dependabot) is even better: it opens PRs automatically for updates, can auto-merge patch/minor versions, and you just merge the safe ones. Perfect middle ground.
  3. Scan before you deploy


    • Run Trivy (or Docker Scout, Snyk, Grype) on images as part of your workflow. Many people add a simple cron or CI step: trivy image myimage:tag.
  4. For your apps’ internal dependencies (npm, pip, etc.)
    • Use lockfiles (package-lock.json, requirements.txt, etc.) and let Renovate/Dependabot propose updates. Don’t let npm install pull whatever is newest at runtime.
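To make the digest pinning from point 1 concrete, here is a minimal Compose sketch. The digest shown is a placeholder, not a real one; substitute the digest of an image you have actually verified:

```yaml
services:
  web:
    # The tag is for humans; the digest guarantees you run exactly this
    # build until you deliberately change it, even if the tag is re-pushed.
    image: nginx:1.27-alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000
    restart: unless-stopped
```

You can read the digest of an image you have already pulled with docker inspect --format '{{index .RepoDigests 0}}' nginx:1.27-alpine.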

What about Linux / Proxmox / Debian?

Same logic, just tuned for the host:

  • Proxmox host:

    • Do not enable full unattended apt upgrade or dist-upgrade blindly — it can break clustering or PVE packages.
    • Best middle-ground: Install unattended-upgrades and configure it only for security updates (Debian-Security + Proxmox repos). Many guides exist for this exact setup.
    • Or just run updates manually via the web GUI (or apt update && apt full-upgrade) every 1–2 weeks + reboot when needed. It’s quick and you stay in control.
    • Never pull from testing or unstable branches (that’s how xz-utils slipped into some Docker images).
  • debian-based VM/LXC:

    • unattended-upgrades configured for security-only is perfectly fine and recommended. It keeps the attack surface small without major breakage.
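As a sketch of the security-only unattended-upgrades setup described above (the origin pattern below is the stock Debian one; the Proxmox repos have their own origin/label, so check apt-cache policy before adding them):

```
// /etc/apt/apt.conf.d/20auto-upgrades -- turn the periodic runs on
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades -- security origins only
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};
// Never auto-reboot a hypervisor; reboot manually during your update window
Unattended-Upgrade::Automatic-Reboot "false";
```

After editing, a dry run with unattended-upgrade --dry-run --debug shows exactly what would be upgraded.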

Quick “middle-ground” workflow most people settle on

  • Docker Compose files in a private Git repo.
  • Renovate (self-hosted or GitHub App) watches the repo and opens PRs.
  • Diun runs on the host and pings you via Discord/Telegram for anything Renovate missed.
  • Weekly 15-minute “update day”: review PRs, run Trivy, merge, docker compose up -d.
  • Critical/edge-case containers get stricter pinning + manual review.
  • Everything else (media servers, *arr stack, etc.) can be a bit more aggressive.

This gives you enterprise-grade security (scanning, pinning, review) with self-hosting comfort (mostly automated notifications + one-click merges). No daily fire drills, no “oops, I just pulled a backdoored image.”

If you share your typical stack or how your compose files are managed, or DM me, I can share my config. :slight_smile:


Here are practical Diun configuration examples tailored for self-hosting (Docker Compose + local Docker provider). These are pulled straight from the official docs and reflect the most common homelab setups.

Diun is super flexible — you can configure it entirely with environment variables (easiest) or a diun.yml file (cleaner for complex setups). Both approaches are equivalent.

1. Minimal Docker Compose (Environment Variables Only) — Recommended Starting Point

This watches all running containers by default every 6 hours and uses the Docker socket.

name: diun

services:
  diun:
    image: crazymax/diun:latest
    container_name: diun
    command: serve
    volumes:
      - "./diun-data:/data"          # Persistent database
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    environment:
      - TZ=Asia/Kolkata              # Change to your timezone
      - LOG_LEVEL=info
      - LOG_JSON=false
      
      # Watch settings
      - DIUN_WATCH_WORKERS=20
      - DIUN_WATCH_SCHEDULE=0 */6 * * *   # Every 6 hours
      - DIUN_WATCH_JITTER=30s
      - DIUN_WATCH_FIRSTCHECKNOTIF=false  # Don't notify on first run
      - DIUN_WATCH_RUNONSTARTUP=true
      
      # Docker provider
      - DIUN_PROVIDERS_DOCKER=true
      - DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=true   # Watch ALL containers
      
      # Optional: Only watch containers that have the label (safer)
      # - DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=false
    restart: unless-stopped

How to use:

  • Create a folder (e.g. ~/diun/), drop this as docker-compose.yml, run docker compose up -d.
  • Diun will automatically monitor any container that has the label diun.enable=true (or all if WATCHBYDEFAULT=true).

2. Full Example with Config File + Notifications (Most Popular Setup)

Many people prefer this for readability and adding multiple notifiers.

docker-compose.yml

name: diun

services:
  diun:
    image: crazymax/diun:latest
    container_name: diun
    command: serve
    volumes:
      - "./diun-data:/data"
      - "./diun.yml:/diun.yml:ro"           # Your config
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    environment:
      - TZ=Asia/Kolkata
      - LOG_LEVEL=info
      - LOG_JSON=false
    restart: unless-stopped

diun.yml (in the same folder)

db:
  path: diun.db

watch:
  workers: 20
  schedule: "0 */6 * * *"     # Every 6 hours (or "0 0 * * *" for daily at midnight)
  jitter: 30s
  firstCheckNotif: false
  runOnStartup: true

defaults:
  watchRepo: false            # Only watch the exact tag, not the whole repo
  notifyOn:
    - new
    - update

providers:
  docker:
    watchByDefault: false     # Safer: only containers with label diun.enable=true
    # watchStopped: true      # Uncomment to also check stopped containers

# Example notifications - pick ONE or more
notif:
  # Gotify (very popular in self-hosting)
  gotify:
    endpoint: http://gotify.yourdomain.com
    token: your-gotify-token-here
    priority: 5
    timeout: 10s

  # OR Telegram
  # telegram:
  #   token: 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11
  #   chatIDs:
  #     - 123456789
  #   timeout: 10s

  # OR Discord
  # discord:
  #   webhookURL: https://discord.com/api/webhooks/...
  #   timeout: 10s

  # OR ntfy.sh (simple, no account needed)
  # ntfy:
  #   endpoint: https://ntfy.sh
  #   topic: my-diun-alerts
  #   priority: 3

3. How to Tell Diun Which Containers to Watch (Labels)

Add these labels to any service in your other docker-compose files:

services:
  your-app:
    image: linuxserver/sonarr:latest
    labels:
      - diun.enable=true              # Required
      - diun.watch_repo=false         # Optional overrides
      - diun.notify_on=new,update

Pro tip: If you set watchByDefault: true, you don’t even need the label on every container.
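Conversely, if you do run with watchByDefault: true, my understanding of Diun’s label handling is that you can opt individual containers out:

```yaml
services:
  some-throwaway:
    image: alpine:3.20
    labels:
      - diun.enable=false   # excluded even though watchByDefault is true
```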

Quick Tips for Self-Hosters

  • Data persistence: The /data volume stores the database so Diun remembers what it has already seen.
  • Schedule: 0 */6 * * * = every 6 hours. Use 0 0 * * * for once per day at midnight.
  • Security: Run Diun with the read-only socket mount (:ro).
  • File provider alternative: If you prefer a static list instead of labels, use the file provider and point to a images.yml with explicit image names.
  • Test it: After starting, check logs with docker logs diun — you should see it scanning.

1. Diun with Multiple Notifiers (Gotify + Telegram + Discord + ntfy)

You can define as many notifiers as you want under the notif: section. Diun sends the same update message to all of them.

diun.yml (full example)

db:
  path: diun.db

watch:
  workers: 20
  schedule: "0 */6 * * *"     # every 6 hours
  jitter: 30s
  firstCheckNotif: false
  runOnStartup: true

defaults:
  watchRepo: false
  notifyOn:
    - new
    - update

providers:
  docker:
    watchByDefault: false     # only containers with diun.enable=true label

# ── MULTIPLE NOTIFIERS ─────────────────────────────────────
notif:
  gotify:
    endpoint: http://gotify.yourdomain.com
    token: A1234567890abcdef
    priority: 5
    timeout: 10s

  telegram:
    token: 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11
    chatIDs:
      - 123456789          # your personal chat
      - -987654321         # optional group chat (negative ID)
    timeout: 10s

  discord:
    webhookURL: https://discord.com/api/webhooks/...
    timeout: 10s

  ntfy:
    endpoint: https://ntfy.sh
    topic: my-diun-alerts
    priority: 3
    tags: ["docker", "update"]
    timeout: 10s

Just mount this file as before. Diun will fire notifications to all four services whenever an update is found.

2. File Provider Setup (Static Image List – No Docker Labels Needed)

Perfect if you want a central static list instead of (or in addition to) labels on every container.

docker-compose.yml (same as before, just change providers)

services:
  diun:
    image: crazymax/diun:latest
    volumes:
      - "./diun-data:/data"
      - "./diun.yml:/diun.yml:ro"
      - "./images:/images:ro"               # ← folder for static list
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    # ... environment and restart same

diun.yml

# ... db, watch, defaults same as above ...

providers:
  file:
    directory: /images                # Diun will read every *.yml in this folder

# Optional: you can also put images directly here (newer Diun versions support it)
# images:
#   - name: nginx:1.27-alpine
#   - name: postgres:16-alpine

Create a folder ./images/ next to diun.yml and drop one or more files like this:

images/my-stack.yml

- name: linuxserver/sonarr:develop
  watchRepo: true
  notifyOn: [new, update]
- name: postgres:16-alpine
- name: your-private-registry.com/app:1.2.3
  regopt: myprivate               # see section 3 below

This is great for servers where you don’t want to touch every compose file.

3. Registry Auth for Private Registries (regopts)

Add this top-level section to diun.yml:

regopts:
  - name: "myprivate"                  # reference name
    username: myuser
    password: supersecret
    timeout: 30s
    insecureTLS: false                 # set true for self-signed certs

  - name: "docker.io"                  # optional: override Docker Hub rate limits
    selector: image
    username: yourdockerhubuser
    password: yourdockerhubpat

  - name: "ghcr.io"                    # GitHub Container Registry example
    username: yourgithubuser
    passwordFile: /run/secrets/ghcr_pat   # safer than plain text

Then reference it in any image (File provider or Docker label):

  • In File provider (see above): regopt: myprivate
  • In Docker label on a container:
    labels:
      - diun.enable=true
      - diun.regopt=myprivate
    

Alternative (even simpler): Just mount your existing Docker credentials:

volumes:
  - "/root/.docker/config.json:/root/.docker/config.json:ro"

Diun will automatically use them.

4. How to Combine Diun with Renovate (Best Self-Hosting Workflow)

This is the sweet spot most people run today:

  • Renovate = Git-based updater (opens PRs for your docker-compose.yml files)
  • Diun = Runtime notifier (tells you when new images are available, even if Renovate hasn’t merged yet)

Typical setup:

  1. Put all your compose files in a private Git repo (Gitea, GitHub, etc.).

  2. Install Renovate (self-hosted or GitHub App) with a config like:

    {
      "extends": ["config:recommended"],
      "docker-compose": {
        "fileMatch": ["(^|/)(?:docker-)?compose[^/]*\\.ya?ml$"]
      },
      "packageRules": [
        {
          "matchUpdateTypes": ["patch", "minor"],
          "automerge": true,
          "automergeType": "branch"
        }
      ]
    }
    

    Renovate will automatically bump image tags (e.g. nginx:1.26 → nginx:1.27) and create PRs.

  3. Run Diun exactly as above (Docker provider + labels or File provider).

  4. Optional: Add a label in your compose files so Diun ignores Renovate-managed images if you want:

    labels:
      - diun.enable=true
      - diun.watch_repo=false   # only exact tag, not whole repo
    

Workflow you’ll actually use:

  • Renovate opens a PR every few days → you review/merge (or auto-merge patch versions).
  • Diun pings you on Gotify/Telegram/etc. the moment a newer image appears (great for critical security updates).
  • You git pull && docker compose up -d once a week.

This gives you audit trail + automatic PRs + instant notifications with almost zero manual work.



Thanks mate, you don’t disappoint :wink:

I have homework to do :smiley:


I’d like to add something for anyone who wants more control and more work (Suffering-as-a-Service). Some projects provide the Dockerfile used to create the image. You can pull it, build the image yourself, and push it to an internal registry; that way the registry can scan it daily (I use Harbor with Trivy) and tell you whether any dependency the application uses is currently vulnerable. False positives exist, so it is not a silver bullet; Google is your best friend when you discover something.
As I said, this is a lot of work, but with a workflow in place it is doable.


Watch out for bloated images. As @po_tato said, building your own image is the best defense; if you can’t do that, anything above 250 MB is a must-scan.
