In my home lab I have containers spread across a few devices (mainly a server and a NAS). The Docker containers run on the server but keep their main storage on the NAS, and I use named cifs volumes to expose the SMB shares to the containers. So far so good.
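For reference, the volume definitions look roughly like this (hostnames, share names, and credentials are placeholders):

    volumes:
      media:
        driver: local
        driver_opts:
          type: cifs
          o: "addr=nas.lan,username=media,password=secret,vers=3.0"
          device: "//nas.lan/media"

    services:
      jellyfin:
        image: jellyfin/jellyfin
        volumes:
          - media:/media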
The problem occurs during partial outages, e.g. the NAS installs an update and needs to reboot, or there is a network problem between the server and the NAS. Even after the NAS comes back, the containers on the server are left in a bad state because the cifs mount never gets re-established. And for most of them (e.g. Jellyfin), it isn't worth having the container running at all if the network share it points to is unavailable.
I'm wondering whether there is a general practice for handling this kind of dependency short of Kubernetes. If not, it seems like I need a mechanism that stops the containers (or maybe the whole Compose stack) when the share becomes unavailable and restarts them when it comes back, i.e. something akin to deunhealth but keyed on network shares rather than container health [1]. I could probably write such a tool pretty easily, but it seems like something that might already exist.
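If nothing like that exists, I'm picturing roughly the following; the NAS hostname, the TCP-port-445 reachability check, and the compose file path are just stand-ins for my setup:

    #!/usr/bin/env python3
    """Stop a compose stack while the SMB host is unreachable and bring it
    back up once the host answers again."""
    import socket
    import subprocess
    import time

    NAS_HOST = "nas.lan"                                    # placeholder NAS hostname
    SMB_PORT = 445                                          # standard SMB port
    COMPOSE_FILE = "/opt/stacks/media/docker-compose.yml"   # placeholder stack path
    CHECK_INTERVAL = 30                                     # seconds between checks

    def share_is_up() -> bool:
        """True if the NAS accepts TCP connections on the SMB port."""
        try:
            with socket.create_connection((NAS_HOST, SMB_PORT), timeout=5):
                return True
        except OSError:
            return False

    def compose(*args: str) -> None:
        """Run `docker compose -f COMPOSE_FILE <args...>`."""
        subprocess.run(["docker", "compose", "-f", COMPOSE_FILE, *args], check=False)

    def main() -> None:
        was_up = share_is_up()
        while True:
            time.sleep(CHECK_INTERVAL)
            is_up = share_is_up()
            if is_up and not was_up:
                # Share came back: recreate the containers so the cifs
                # volumes are mounted freshly.
                compose("up", "-d", "--force-recreate")
            elif was_up and not is_up:
                # Share went away: stop the stack instead of letting the
                # containers flail against a dead mount.
                compose("stop")
            was_up = is_up

    if __name__ == "__main__":
        main()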
[1] One could imagine folding the share check into the container's health check and using deunhealth, but that wouldn't be right for this situation: when the share is unavailable the container should be stopped, not forced into a restart loop.
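For concreteness, that would mean a healthcheck along these lines (the in-container mount path is a placeholder), and deunhealth would then keep restarting the container for as long as the share is down:

    services:
      jellyfin:
        healthcheck:
          test: ["CMD-SHELL", "test -d /media/library"]
          interval: 30s
          timeout: 10s
          retries: 3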