  • Yeah, you are right, a custom bridge network can do DNS resolution with container names. I just saw in a video from Lawrence Systems that he exposed the socket, and somewhere else I saw that container names were used for the proxy hosts in NPM. Since the default bridge doesn’t do DNS resolution, I assumed that is why some people expose the socket.

    I just checked again and apparently he created the compose file with ChatGPT, which added the socket: https://forums.lawrencesystems.com/t/nginx-proxy-manager-docker/24147/6 I always considered him to be one of the more trustworthy and security-conscious people out there, but this makes me question his authority. At least he corrected the mistake, so everyone who actually uses his compose file now doesn’t expose the socket.
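
    For reference, this is roughly what I mean by the custom network approach: NPM and the app share a user-defined bridge, NPM reaches the app by container name, and no socket is mounted anywhere. The image names, ports and volumes are just my guess at a typical setup, not taken from his corrected file.

      services:
        npm:
          image: jc21/nginx-proxy-manager:latest
          ports:
            - "80:80"       # HTTP
            - "443:443"     # HTTPS
            - "81:81"       # admin UI
          volumes:
            - ./data:/app/data
            - ./letsencrypt:/etc/letsencrypt
          networks:
            - proxy
        whoami:
          image: traefik/whoami      # stand-in backend, could be any web app
          networks:
            - proxy
      networks:
        proxy:
          driver: bridge             # user-defined bridge has built-in DNS, so the
                                     # proxy host in NPM can simply be "whoami", port 80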


  • Thanks for the write-up and sorry for the late reply. I guess I wouldn’t have gotten very far without exposing the Docker socket. Nextcloud was actually one of the services on my list that I wanted to try out, but I haven’t looked at the compose file yet. It makes sense why it is needed by the AIO image. Interestingly, it uses a Docker socket proxy, presumably to mitigate some of the security risks that come with exposing the socket, just like another comment in this thread already mentioned.

    However, since I don’t know much about Kubernetes, I can’t really tell if it improves anything or if the privileges are just shifted, e.g. from the container having socket access to the Kubernetes orchestration layer having socket access. But it does look interesting, and maybe it is not a bad idea to look into it even early on in my self-hosting and container adventure.

    Even though I said otherwise in another comment, I think I have also seen socket access in Nginx Proxy Manager in some examples now. I don’t really know the advantages other than being able to use the container names for your proxy hosts instead of IP and port. I have also seen it in a monitoring setup, where I think Prometheus has access to the socket to track different Docker/container statistics.
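
    Just to write down the socket proxy pattern as I understand it: the proxy is the only thing that touches the real socket, and other containers talk to it over an internal network with limited permissions. I’m using tecnativa/docker-socket-proxy purely as an illustration, not claiming that is the exact image the AIO setup ships, and the consumer service is a placeholder.

      services:
        socket-proxy:
          image: tecnativa/docker-socket-proxy
          environment:
            CONTAINERS: 1            # allow read-only access to container info
            POST: 0                  # deny anything that changes state
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock:ro
          networks:
            - socket
        consumer:
          image: alpine              # placeholder for whatever needs Docker info
          environment:
            DOCKER_HOST: tcp://socket-proxy:2375   # talks to the proxy, never the raw socket
          networks:
            - socket
      networks:
        socket:
          internal: true             # keep the proxy unreachable from outside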



  • I am a strong believer in separate docker compose files, to keep things more organized and hopefully have more control over everything. But in the end most of it comes down to personal preference.

    I actually have some kind of network issue with one of my containers at the moment (AdGuard in this case), where your ideas already came in handy. Unfortunately, I couldn’t solve it yet, but that is something for a new topic, I believe.
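
    To make the separate-files idea a bit more concrete, this is how I picture it: each compose file stays self-contained, and containers that need to talk to each other join a shared, pre-created network. The names here are just placeholders.

      # created once, outside of compose:  docker network create shared-net

      # proxy/compose.yaml
      services:
        npm:
          image: jc21/nginx-proxy-manager:latest
          networks:
            - shared-net
      networks:
        shared-net:
          external: true     # reuse the pre-created network instead of making a new one

      # dns/compose.yaml
      services:
        adguard:
          image: adguard/adguardhome
          networks:
            - shared-net
      networks:
        shared-net:
          external: true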


  • I have heard the name Kubernetes and know that it is also some kind of container thing, but I never really went deeper than that. It was more a general question of how people handle the whole business of exposing the Docker socket to a container. Since I came across it with Watchtower and considered installing that, I used it as an example. I always thought that Kubernetes, Docker Swarm and things like that are something for the future, when I have more experience with Docker and containers in general, but thank you for the idea.


  • I have set all this up on my Asustor NAS, so things like apt install are not applicable in my use case. Nevertheless, thank you very much for your time and expertise with regards to users and volumes. What is your strategy for networks in general? Do you set up a separate network for each and every container, unless the services have to communicate with each other? I am not sure I understand your network setup in the Jellyfin container.

    In the ports: part, that 10.0.1.69 would be the IP of your server (or in this case, what I declare the Jellyfin container’s IP to be); it makes it so the container can only bind to the IP you provide, otherwise it can bind to anything the server has access to (as far as I understand). With the macvlan driver, the container’s virtual network interface behaves like its own physical network interface which you can assign a separate IP to, right? What advantage does this have exactly, or what potential problems does it solve?
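
    Just so I’m sure I’m reading it right, this is how I picture the two variants; the interface name, subnet and addresses are made up for the example.

      # compose.yaml, variant 1: ordinary bridge, but publish only on one host IP
      services:
        jellyfin:
          image: jellyfin/jellyfin
          ports:
            - "10.0.1.69:8096:8096"   # only reachable via that host address

      # compose.yaml, variant 2: macvlan, the container gets its own IP on the LAN
      services:
        jellyfin:
          image: jellyfin/jellyfin
          networks:
            lan:
              ipv4_address: 10.0.1.69 # the container itself answers on this address
      networks:
        lan:
          driver: macvlan
          driver_opts:
            parent: eth0              # physical interface the macvlan attaches to
          ipam:
            config:
              - subnet: 10.0.1.0/24
                gateway: 10.0.1.1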


  • I think I get where you’re coming from. In this specific case of Watchtower it is not a security flaw, it just uses the socket to do what it is supposed to do. You either trust them and live with the risks that come with it, or you don’t and find another solution. I used Watchtower as the example because it was the first one I came across that needs this access. There might be a lot of other containers out there that use this, so I wanted to hear people’s opinions on this topic and their approach.


  • Thank you for your comment and the resources you provided. I will definitely look into these. I like your approach of minimizing the attack surface. As I said, I am still new to all of this, and I came across the user option of docker compose just recently when I installed Jellyfin. However, I thought the actual container image has to be configured in a way that even makes this possible, otherwise you can run into permission errors and such. Do you just specify a non-root user and see if it still works?

    And while we’re at it, how would you set up something like Jellyfin with regards to read/write permissions? I currently haven’t restricted it to read-only, and in my current setup I most certainly need write permissions as well, because I store the artwork in the respective directories inside my media folder. Would you just save these files to the non-persisted storage inside the container, since you can re-download them anyway, and keep the media volume read-only?
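
    To make the question concrete, this is roughly what I have in mind; the UID/GID and paths are specific to my setup, and the :ro part is exactly the bit I’m unsure about because of the artwork.

      services:
        jellyfin:
          image: jellyfin/jellyfin
          user: "1000:1000"             # non-root user that owns config/cache
          volumes:
            - ./config:/config          # needs write access (database, metadata)
            - ./cache:/cache            # needs write access (transcodes, image cache)
            - /volume1/media:/media:ro  # read-only; this is the part that breaks
                                        # saving artwork next to the media files

    If I understand it correctly, with :ro Jellyfin would have to keep the artwork in its own metadata directory under /config instead of next to the media files, which I believe is a per-library setting.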



  • I don’t know anything about Podman, but I think Docker also has a rootless mode; however, I don’t really know any details about that either. Maybe I should read more about that.

    Yeah, I think I also saw some fancy dashboard with Grafana and Prometheus where some part also required access to the socket (can’t remember which), so I thought it might be more common to do that than I originally thought.




  • No, none of my containers are exposed to the internet, and I don’t intend to change that. I leave that to people with more experience. I have, however, set up the WireGuard VPN feature of my router to access my home network from outside, which I need occasionally. But as far as I have read, that is considered one of the safest options IF you have to make it available. No outside access is of course always preferred.


  • That is the exact reason why I wouldn’t use the auto-update feature. I just thought about setting it up to check for updates and give me some sort of notification. I just feel like a reminder every now and then helps me keep everything up to date and avoid a “never change a running system” mentality.

    Your idea of setting it up and only letting it run occasionally is definitely one to consider. It would at least avoid manually checking the releases of each container, similar to the RSS suggestion of /u/InnerScientist.
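
    If I go that route, my current idea looks roughly like this; the variable names are what I found skimming the Watchtower docs, so treat them as to-be-verified, and the notification URL is just a placeholder.

      services:
        watchtower:
          image: containrrr/watchtower
          environment:
            WATCHTOWER_MONITOR_ONLY: "true"    # only check and notify, never auto-update
            WATCHTOWER_SCHEDULE: "0 0 6 * * 1" # e.g. once a week (6-field cron)
            WATCHTOWER_NOTIFICATION_URL: "..." # shoutrrr URL, left out here
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock   # the socket access in question

    And for the “only let it run occasionally” part, there is apparently also a --run-once flag, so it could even be started manually instead of on a schedule.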