Can be anything you want, you just have to install nginx and configure it: https://medium.com/learn-or-die/build-a-webdav-server-with-nginx-8660a7a7311
The funny part is that they sell it as modern yet they use Java as if it were banking software from the '90s. Thanks for the tip.
Nextcloud Music (…) Downside: it is Nextcloud.
It’s so hard to have an SMB share with one folder per game. The solution is obviously to run 4000 Docker containers.
Yeah, because apparently it is too hard to double-click on setup.exe, but using Docker is okay.
So, looks like tons of HTTP services and SSH.
Great, but what services are you hosting? What ports do you need?
Yeah, those may work. Since you have one, how does it look? Are there blocked ports like SMTP? Are the IPs good / not already blacklisted everywhere? Thanks.
This means I don’t need to mess around with QBT’s “proxy” settings?
No, you don’t. In short, trackers will look at the source address of the incoming connection on their side, which means your VPS IP, because you’re doing NAT on the VPS.
Just make sure qBittorrent is restricted to the WG interface and nothing else.
but without nix it’s a pita to maintain through restores/rebuilds.
No it isn’t. You can even define those routing policies in your systemd network unit alongside the network interface config and it will manage it all for you.
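For example, a minimal sketch of such a unit, with hypothetical interface names and addresses (it assumes wg0 already exists, e.g. via a matching .netdev file):

```ini
# /etc/systemd/network/50-wg0.network
[Match]
Name=wg0

[Network]
Address=10.0.0.2/24

# policy rule: traffic sourced from the WG address looks up its own table
[RoutingPolicyRule]
From=10.0.0.2/32
Table=100

# default route living in that table
[Route]
Gateway=10.0.0.1
Table=100
```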
If you aren’t comfortable with systemd, you can also use simple “ip” and “route” commands to accomplish that, add everything to a startup script and done.
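A minimal sketch of that script (addresses, interface and table number are just examples):

```sh
#!/bin/sh
# the default route for VPN-bound traffic lives in its own table...
ip route add default dev wg0 table 100
# ...and a rule sends traffic sourced from the WG address to that table
ip rule add from 10.0.0.2 table 100
```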
major benefit to using a contained VPN or gluetun is that you can be selective on what apps use the VPN.
Systemd can do that for you as well: you can specify that a certain service only has access to the WG network interface while others can use eth0 or whatever.
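As a sketch, systemd's RestrictNetworkInterfaces= does exactly this (it needs a reasonably recent systemd; the unit name is just an example):

```ini
# /etc/systemd/system/qbittorrent.service.d/override.conf
[Service]
# the service only sees wg0; eth0 and friends become invisible to it
RestrictNetworkInterfaces=wg0
```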
More classic ip/route can also be used for that: you can create a routing table for programs that you want to force to be on the VPN and another for the ones you want to use your LAN directly. Set those programs to bind to the respective interface and the routing rules will kick in and send the traffic to the right place.
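A sketch with hypothetical addresses: one table per destination, keyed on the source address each program binds to.

```sh
# programs bound to the WG address (10.0.0.2) get routed through the VPN...
ip rule add from 10.0.0.2 table 100
ip route add default dev wg0 table 100

# ...while programs bound to the LAN address go straight out eth0
ip rule add from 192.168.1.10 table 200
ip route add default via 192.168.1.1 dev eth0 table 200
```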
If you’re using Docker or similar, to make things simpler you can also create a network bridge for containers that you want to restrict to the VPN and another for everything else. Then you set each container to use one bridge or the other.
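For instance (names and subnets are hypothetical; the host still needs a rule routing the VPN bridge's subnet out the WG interface, like the ones sketched above):

```sh
# one bridge for VPN-only containers, one for everything else
docker network create --subnet 172.30.0.0/24 vpn_net
docker network create lan_net

# host-side rule: the VPN bridge's subnet uses the VPN routing table
ip rule add from 172.30.0.0/24 table 100

docker run -d --name qbittorrent --network vpn_net <some-image>
docker run -d --name jellyfin --network lan_net <some-image>
```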
There are multiple ways to get this done; throwing in more containers like gluetun, and dragging in xyz dependencies and opinionated configurations from somewhere, isn’t the only one, nor the most performant for sure. Linux is designed to handle these cases.
In terms of homelab stuff, I know a lot of people appreciate the containerized approach.
What I said applies to containerized setups as well. Same logic, just managed in a slightly different way.
By “set up wireguard to route through the VPS” you mean having wireguard forward a port from the VPS to a port on the homeserver at its wireguard IP address?
Yes, he means that.
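The VPS side of that port forward is a couple of DNAT rules, sketched here with hypothetical interfaces, addresses and port (10.0.0.2 being the home server's WG address):

```sh
# redirect the torrent port arriving on the VPS public interface into the tunnel
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 51413 -j DNAT --to-destination 10.0.0.2:51413
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 51413 -j DNAT --to-destination 10.0.0.2:51413
# allow the forwarded traffic to cross from eth0 into wg0
iptables -A FORWARD -i eth0 -o wg0 -d 10.0.0.2 -j ACCEPT
```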
qBittorrent will still need to publish the right IP address to peers though, right? So I will need to configure the proxy VPS’s IP address in qBittorrent…
No. For most things qBittorrent does public IP detection. For the rest, your VPS will be doing NAT between the WG interface and the public internet. This means your qBittorrent client sends outgoing packets with the source address of your WG private IP and the VPS then changes those to its public IP address.
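On the VPS that NAT is typically just IP forwarding plus a masquerade rule, something like this (interface and subnet assumed):

```sh
# enable forwarding between interfaces
sysctl -w net.ipv4.ip_forward=1
# rewrite the WG clients' private source addresses to the VPS public IP
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
```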
The thing you must be careful about is that you need to restrict qBittorrent to only send and receive traffic on the WG interface, otherwise it will be using both. You can do it in the settings, but the safest way is to do it at the container setup or systemd service level and completely hide any interface that isn’t the WG one from it.
All of that can be achieved with simple systemd or iptables/routes tweaks. You can force all outgoing traffic to use the VPN interface via iptables/routes (meaning that if it doesn’t exist or doesn’t work, nothing will be able to access the internet) OR use systemd to globally hide the non-VPN network interface from all services except the VPN client.
Maybe this will help you: https://linuxcontainers.org/incus/docs/main/backup/
How are snapshots with ZFS on Incus?
What do you mean? They work, as described here; the WebUI can also take snapshots for you.
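For reference, the basic CLI flow looks like this (instance and snapshot names are examples); with a ZFS pool these are native ZFS snapshots underneath, so they're fast and cheap:

```sh
incus snapshot create myvm before-upgrade    # take a snapshot
incus snapshot list myvm                     # see what exists
incus snapshot restore myvm before-upgrade   # roll back
```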
You should consider replacing Proxmox with LXD/Incus; depending on your needs, it can replace your Proxmox instances and spare you a few headaches in the future.
While Proxmox is free and open-source software, it requires a paid license for the stable version and updates. Furthermore, the Proxmox guys have been found to withhold important security updates from non-stable (non-paying) users for weeks.
Incus / LXD is an alternative that offers most of Proxmox’s functionality while being fully open-source – 100% free and it can be installed on most Linux systems. You can create clusters, download, manage and create OS images, run backups and restores, bootstrap things with cloud-init, move containers and VMs between servers (even live sometimes).
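For example, moving an instance to another server comes down to a couple of commands (remote name and URL are hypothetical):

```sh
incus remote add server2 https://server2.example.com:8443
incus move mycontainer server2:
```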
Incus also provides a unified experience to deal with both LXC containers and VMs: no need to learn two different tools / APIs, as the same commands and options are used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs. The same can’t be said about Proxmox: while it tries to make things smoother, there are a few inconsistencies and incompatibilities there.
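To illustrate, the --vm flag is essentially the only difference between the two:

```sh
incus launch images:debian/12 web01         # a system container (LXC)
incus launch images:debian/12 db01 --vm     # a virtual machine (KVM)
incus exec web01 -- apt update              # same management commands for both
```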
Incus is free, can be installed on any clean Debian system with little to no overhead, and with the release of Debian 13 it will be included in the repositories.
Another interesting advantage of Incus is that you can move containers and VMs between hosts with different base kernels and Linux distros. If you’ve bought into the immutable distro movement, you can also have your hosts run an immutable distro with Incus on top.
Incus Under Debian 12
If you’re on stable Debian 12 then you’ve a couple of options:
- install the LXD 5.0.2 LTS packaged in the Debian 12 repositories;
- install a newer Incus from bookworm-backports;
- install the latest Incus from the Zabbly package repository.
In the first option you’ll get a Debian 12 stable system with a stable LXD 5.0.2 LTS; it works really well, however it doesn’t provide a WebUI. The second and third options will give you the latest Incus, but they might not be as stable. Personally, I had been running LXD from Snap since Debian 10 and moved to the LXD 5.0.2 LTS repository package under Debian 12 because I don’t care about the WebUI. I can see how some people, particularly those coming from Proxmox, would like the WebUI, so getting the latest Incus might be a good option.
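A sketch of how those options look on the command line (assuming backports are enabled for the second one; the Zabbly repository has its own setup instructions):

```sh
# Option 1: LXD 5.0.2 LTS from the Debian 12 repositories
apt install lxd

# Option 2: a newer Incus from bookworm-backports
apt install -t bookworm-backports incus

# Option 3: the latest Incus from the Zabbly package repository,
# after adding that repository as documented by the project
apt install incus
```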
I believe most people running Proxmox today will, eventually, move to Incus and never look back. I just hope they do it before Proxmox GmbH changes their licensing scheme or something fails. If you don’t require all the features of Proxmox then Incus works way better with less overhead, is truly open-source, requires no subscriptions, and doesn’t delay important security updates.
Note that modern versions of Proxmox already use LXC containers, so why not move to Incus, which is made by the same people? Why keep dragging along all of the Proxmox overhead and potential issues?
Yeah, you’ll be okay. The easiest way is to pick a generic image for ARM here: https://github.com/home-assistant/operating-system/releases/latest and run it using Incus/LXD or some other virtualization solution that you like. I personally run it on Incus.
You can also do a more barebones manual install in a container if you don’t want to run the entire thing. I would still stick with an Incus container or VM to avoid polluting your base system.
Reolink / AMCrest - no internet required, can be set up offline AND have a WebUI that allows full control over all functionality. Check the details of specific models; it may vary a bit.
… NO internet required, no apps, nothing. Just a WebUI on a browser.
Too many pieces that can potentially break. I’ve been looking at http://nginx.org/en/docs/http/ngx_http_auth_request_module.html and there’s this https://github.com/kendokan/phpAuthRequest that is way more self-contained and simple to maintain long term. The only issue I’m facing with that solution is that I’m not yet able to pass a token / username in a header to the final application.
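For what it's worth, passing a value upstream is usually done with auth_request_set; a minimal sketch, assuming the auth endpoint returns the user in an X-Auth-User response header (names and ports are hypothetical):

```nginx
location /app/ {
    auth_request /auth;
    # grab a header from the auth subrequest's response...
    auth_request_set $auth_user $upstream_http_x_auth_user;
    # ...and pass it along to the final application
    proxy_set_header X-Auth-User $auth_user;
    proxy_pass http://127.0.0.1:8080;
}

location = /auth {
    internal;
    proxy_pass http://127.0.0.1:9000;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```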
Sorry, here’s a better tutorial. I might write one; it is interesting that they all suck in different ways.
https://starbeamrainbowlabs.com/blog/article.php?article=posts/237-WebDav-Nginx-Setup.html
The folder is defined by the “root” directive. Like with any other nginx setup.
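A minimal sketch (paths are examples; PROPFIND/OPTIONS support needs the third-party dav-ext module):

```nginx
server {
    listen 80;
    root /srv/webdav;    # the folder exposed over WebDAV

    location / {
        dav_methods PUT DELETE MKCOL COPY MOVE;
        dav_ext_methods PROPFIND OPTIONS;
        create_full_put_path on;
        dav_access user:rw group:rw all:r;
        autoindex on;
    }
}
```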