Someone recommended it for keeping my containers up to date automatically. I checked out the repo and it seems too good to be true. It just updates your containers when a new image is available and everything just works out of the box? I’m a bit scared of just leaving it alone in case it might break something. The fact that it doesn’t come with a GUI also scares me a bit.
Does anyone here use it and can recommend it? Any horror stories?
I have my Proxmox box in production and I installed Watchtower before; it broke 4 of my containers with bad updates, so I stopped using Watchtower…
I would like a service that just notifies me about new Docker image updates without doing any of the updating itself.
You can set up notifications so you know which containers were updated recently. If a container stops working, just revert to the previous image.
And configure when Watchtower should run updates. I set mine to update at 8pm, so if something breaks, I still have a few hours before bedtime to fix it.
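For reference, here's roughly what that looks like as a run command, assuming the containrrr/watchtower image; WATCHTOWER_SCHEDULE takes a cron expression with a seconds field (here 20:00 every day), and WATCHTOWER_CLEANUP removes the superseded images afterwards:

```bash
# Run Watchtower once a day at 20:00 and clean up old images after updating
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_SCHEDULE="0 0 20 * * *" \
  -e WATCHTOWER_CLEANUP=true \
  containrrr/watchtower
```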
Watchtower itself works great, it doesn’t need a GUI for what it does.
But updating containers in general, either manually or automatically, always carries a risk of something breaking due to the new update.
One thing you can do is make sure you’re not using `:latest` tags in your compose files, and instead pin major versions like `postgres:13`.
And of course make sure you have backups going back multiple points in time in case something does break, and test those backups!
Used Watchtower on my Synology for a while and it worked well. No issues in that time.
Now I’ve moved to a NUC and am more experienced with Docker and understand a lot more of it (though by no means a professional), and I would say I wouldn’t use Watchtower anymore. I can definitely see it messing up a config, and I’d prefer not to deal with the headache of troubleshooting something without knowing it was caused by an auto-update. If I had the time, I might tag the apps I’m happy to auto-update, but for now I prefer the higher availability.
I don’t know if maybe I’m using Watchtower wrong, but I don’t like how it behaves by default. It’s always messing up my container names, not removing old containers and just spinning up new ones, etc., and there’s no interface, so I can’t view jobs that ran overnight or see what’s queued or in progress. It just does its own thing and I really don’t like that. I’ve installed and uninstalled it a few times, and it’s been uninstalled for a while now. I just redeploy my Portainer stacks and pull down the latest image manually when I want to upgrade my containers. At least I get some control.
I just had a strange issue with Watchtower where it somehow failed to update itself. And it left a running but unhealthy duplicate of itself. Just restarting the old container fixed it. But I guess that’s a risk?
Yes, there are risks:
- First, updates can break things. Already explained here.
- Second, exposing Docker socket to Watchtower means you have to trust it ultimately. Any vulnerability in WT can lead to whole system compromise.
Personally, I use DIUN. It just sends me notifications about available updates, and I update things manually later. My system is pretty well isolated from the outside world, so there’s no need to hurry.
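In case it’s useful, a rough DIUN setup along those lines (notification-only, it never touches the containers); the schedule is a placeholder and the notifier settings are left out:

```bash
# Watch running containers for new image tags every 6 hours, notify only
docker run -d --name diun \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e DIUN_WATCH_SCHEDULE="0 */6 * * *" \
  -e DIUN_PROVIDERS_DOCKER=true \
  crazymax/diun
# plus DIUN_NOTIF_* variables for your notifier of choice (see the diun docs)
```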
On a VPS, I would prefer a different approach though. If you want a highly available system, then you should perform updates with a custom-made script, where you can handle update issues yourself. Otherwise Watchtower is good.
Curious how a custom script to perform the update would be different than watchtower doing it? Is an automated update not an automated update regardless of what triggers it?
If you have a script like this any chance you could open source it?
My script looks like this: https://gist.github.com/dgalli1/010fb978bae509dda43a1f31145a530f
It’s meant to update docker-compose.yml files.
If you want zero downtime, you can scale the service to two replicas and, if everything succeeds, just kill the old container. (You need a load-balancing reverse proxy like Caddy.)
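Not the script from the gist, just a rough sketch of that scale-up-then-retire idea; the service name "app" and the fixed sleep standing in for a proper health check are placeholders:

```bash
#!/usr/bin/env bash
set -euo pipefail
SERVICE=app

# Remember the currently running container for this service
OLD=$(docker compose ps -q "$SERVICE")

# Pull the new image and start a second replica without recreating the old one
docker compose pull "$SERVICE"
docker compose up -d --no-recreate --scale "$SERVICE=2" "$SERVICE"

# Give the new replica time to become healthy and start receiving traffic
sleep 30

# Retire the old replica and settle back to a single instance
docker rm -f "$OLD"
docker compose up -d --no-recreate --scale "$SERVICE=1" "$SERVICE"
```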
Great script! The only thing I can recommend is adding a `docker image prune -af` command after all compose files have had new images pulled and are up… unless you want old images taking up hard drive space or have a valid reason for keeping them.
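Something like this loop captures the idea (the /opt/stacks path is just an example layout):

```bash
# Pull and restart every compose stack, then reclaim space from old images
for dir in /opt/stacks/*/; do
  (cd "$dir" && docker compose pull && docker compose up -d)
done
docker image prune -af
```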
You understand that at the point where they open-source and publish it, it would essentially be Watchtower, right? The point of having a custom-made script is that you can customize it to your specific needs; if it’s a generalized tool, then just use Watchtower.
I use it, but only on containers where I can configure it not to do major updates. Sadly, most images don’t have the tags needed for this 😢
Normally fine but if you want to be more careful about what is being pushed to your server you can use something like diun to get notifications and run updates manually.
Personally I love dockcheck, which I think is by a guy on this sub. I tend to just run that every now and again and be done with it, unless I’m notified of a pressing update, though I do still have a couple of things I don’t care too much about that auto-update with Watchtower.
I’m happily in the diun + dockcheck camp too; they both don’t get enough love.
Yeah I used it, it broke paperless for me. I uninstalled it.
The latest version isn’t always the best version. In a home lab or home network, this is rarely a big problem, but in a production environment, I wouldn’t recommend it.
For example, some software pushes out updates that can (and sometimes will) break your setup.
Of course nobody pushes out something like that on purpose to mess with users. But mistakes happen all the time. And even if they don’t, some version upgrades require the user to take manual steps; when these are ignored and something like Watchtower just blindly upgrades, setups can and very likely will break.
IMO it’s not worth the very short amount of time saved by automatic updates versus the amount of time it costs to fix such a mess when it occurs.
For example, NPM (Nginx Proxy Manager) had an update months ago that broke many users’ setups. They did warn about this in the release notes, but I remember people here on the sub saying, “Well damn, I used Watchtower and it updated NPM overnight, and I woke up and nothing worked anymore; it took me hours to figure out the reason and fix it.”
https://github.com/NginxProxyManager/nginx-proxy-manager/releases/tag/v2.10.0
Running an outdated version of a container (including DBs!) that has known vulnerabilities which are very easy to exploit, including by bots, is so much worse than the risk of a container breaking after an update. Just monitor your server properly and you’ll be good.
I’ve been using watchtower for more than a year on all my containers and no issues so far. I have read many warnings against automating the updates, but it has never broken anything in my case. I’m talking about 3 VMs (on Proxmox) and 2 Synology boxes. 5 instances of watchtower keeping a total of 84 containers updated.
Nonetheless I try to play it on the safe side and make daily backups in case something breaks. I’ve had a couple of containers breaking (nothing related to watchtower, AFAIK) and I have recovered easily restoring the latest backup.
I’ve been using Watchtower for approximately 2 years on about 20 containers. I had one issue where a container would not start after an update. The error message said I had an unsupported entry in the app’s configuration file. I looked up that app’s changelog and found that the option had been removed and replaced by something else. I had to change one line in the configuration. Not really a problem for me.
Though I decided to exclude my home automation container and my Kasm container (my gateway into my network, a bit like Guacamole). Those could cause problems if they went offline unexpectedly.
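For anyone wanting to do the same, exclusion is normally done with Watchtower’s enable label; a quick sketch with docker run (the image name is a placeholder, and in a compose file the same label goes under the service’s labels: key):

```bash
# Watchtower will skip containers carrying this label
docker run -d \
  --label com.centurylinklabs.watchtower.enable=false \
  my-home-automation-image  # placeholder image name
```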
Thank you. What do you use for container backups and restoring?
Don’t back up the actual container; back up the volumes and the docker-compose file.
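For example, a throwaway container can archive a named volume (the volume name "paperless_data" here is just a placeholder), and the compose file can be copied alongside it:

```bash
# Archive the volume's contents into a dated tarball in the current directory
docker run --rm \
  -v paperless_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf "/backup/paperless_data-$(date +%F).tar.gz" -C /data .

# Keep the compose file with the volume backup so the stack can be rebuilt
cp docker-compose.yml "docker-compose-$(date +%F).yml.bak"
```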
- 3 VMs in Proxmox hosting 70 containers get backed up every day with Proxmox Backup Server (a VM on my primary NAS) to an NFS-mounted folder on my primary NAS.
- The primary NAS (with 7 containers) gets backed up with snapshot replication to my secondary NAS every day.
- The secondary NAS (with 7 containers) gets backed up with snapshot replication to my primary NAS every day.
- And once a month I back up my primary NAS (not the whole thing, only the important folders) to a USB drive that I store at a friend’s house.
Please take such advice with a large grain of salt. OP’s experience is very much not the norm. Especially for more complex apps like Jellyfin or Nextcloud, it’s almost guaranteed you’ll break them if you just update blindly.