I try to balance things between what I find enjoyable/worth the effort and what ends up becoming more of a recurring headache.
I have a somewhat dated (but decently spec'd) NUC running Proxmox, and it's the backbone of my home lab. No issues to date.
I was using a WD PR4100, but I upgraded to a Synology RS1221+ and it’s been fantastic :)
I have a beefed-up Intel NUC running Proxmox (with my self-hosted services inside those VMs) and a standalone NAS that I mount on the necessary VMs via fstab.
I really like this approach, as it decouples my storage and compute servers.
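For anyone curious what that looks like, here's a rough sketch of an fstab entry mounting an NFS share from the NAS onto a VM (the IP, export path, and mount point are placeholders, and the NAS could just as easily export SMB instead):

```
# /etc/fstab on a VM — mount a media share exported by the NAS over NFS
# (address, export path, and mount point are placeholders)
192.168.1.20:/volume1/media  /mnt/media  nfs  defaults,_netdev,noatime  0  0
```

The `_netdev` option makes the mount wait for networking at boot, which matters when the storage lives on a separate box.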
4 currently with 8GB RAM and no pass through for transcoding (only direct play)
Very nice of you to offer. I made a few changes (routing my problem Jellyfin client directly to the Jellyfin server and cutting out the NGINX hop, as well as limiting the bandwidth of that client in case the line is getting saturated).
I’ll try to report back if there’s any updates.
Good bot.
Good point. I just checked and streaming something to my TV causes IO delay to spike to like 70%. I’m also wondering if maybe me routing my Jellyfin (and some other things) through NGINX (also hosted on Proxmox) has something to do with it… Maybe I need to allocate more resources to NGINX(?)
The system running Proxmox has a couple Samsung Evo 980s in it, so I don’t think they would be the issue.
I typically prefer VMs just because I can change the kernel as I please (containers such as LXC use the host kernel). I know it's overkill, but I have the storage/memory to spare. Typically I'm at about 80% (memory) utilization under full load.
Yeah, I’ve been looking into it for some time. It seems to normally be an issue on the client side (Nvidia shield), the playback will stop randomly and then restart, and this may happen a couple times (no one really knows why, it seems). I recently reinstalled that server on a new VM and a new OS (Debian) with nothing else running on it, and the only client to seem to be able to cause the crash is the TV running the Shield. It’s hard to find a good client for Jellyfin on the TV it seems :(
I actively avoid and move away from HA type devices that do not work without WAN. There’s no reason that me pushing a GUI button to turn off a light needs to do anything more than travel to my AP, to HA and then to the light. Let’s not bring the cloud into this.
Join us; It’s fantastic.
I have a (beefy-spec'd) Intel NUC that's running Proxmox. A few of the VMs mount to my RS1221+ for things like media (Jellyfin), etc.
On Proxmox I run
Probably missing a few, but that's the gist.
The safest (but not as convenient) way is to run a VPN, so that the services are only exposed to the VPN interface and not the whole world.
In pfsense I specify which services my OpenVPN connections can access (just an internal facing NGINX for the most part) and then I can just go to jellyfin.homelab, etc when connected.
Not as smooth as just having NGINX outward facing, but it gives me peace of mind knowing my network is locked down.
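One way to make sure a service never answers outside the tunnel is to bind the internal proxy to the VPN interface address only. A minimal sketch (the tunnel IP, hostname, and upstream are placeholders, assuming a typical OpenVPN `10.8.0.0/24` layout):

```nginx
# Internal-facing NGINX: listen only on the VPN tunnel address,
# so the service is unreachable from the LAN/WAN side entirely.
# (addresses and upstream are placeholders)
server {
    listen 10.8.0.1:80;
    server_name jellyfin.homelab;

    location / {
        proxy_pass http://192.168.1.30:8096;
    }
}
```

Even if a firewall rule were accidentally loosened, nothing is listening on the other interfaces.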
Yeah, I think I will end up creating a new ACL on NGINX to only allow those mgmt_allowed IPs. I tested it, and it seems to work fine. Not ideal, as I'd like to manage everything from pfsense, but I guess it's expected by the nature of proxies :P
Thanks for the reply! Yeah, I just tried the ACL in NGINX, and it seems to work fine. I can still ping the proxied services, but cannot connect to them. I guess I will maintain a separate mgmt_allowed list there, like so
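Something along these lines, using NGINX's allow/deny directives (the addresses and upstream are placeholders for illustration):

```nginx
# Hypothetical mgmt_allowed ACL: only management IPs may reach
# the proxied admin UI; everything else gets 403.
# (addresses and upstream are placeholders)
location / {
    allow 192.168.1.5;   # admin workstation
    allow 10.8.0.0/24;   # VPN clients
    deny  all;
    proxy_pass http://192.168.1.40:8006;
}
```

Since allow/deny act at the HTTP layer, ICMP ping still gets through; only the proxied connection is refused, which matches what you're seeing.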
Ooo, very nice! If I use that script, can I generate certificates for a made up domain within my network (eg *.homelab), or do I need to use a domain I actually own?
I have heard of this, but I think if you self-host a CA, you have to add the cert to every device that wants access to the service right? For example, I’d have to add it to my TV if my TV connects to Jellyfin, to my laptop if my laptop needs access to Home Assistant, etc. I’m not sure my family would like that XD
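Right, that's the trade-off. As an illustration (not necessarily the script mentioned above), a tool like mkcert shows the workflow for an internal wildcard name:

```shell
# mkcert workflow sketch — "*.homelab" is a made-up internal domain
mkcert -install       # creates a local root CA and trusts it on THIS machine
mkcert "*.homelab"    # issues a cert/key pair signed by that local CA
# The root CA cert still has to be imported on every client device
# (TV, laptop, phone) before browsers/apps will trust the services.
```

So the cert generation itself is easy; distributing trust to every family device is the part that doesn't go away.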
That was my concern too. NGINX would need access to the internet in order to renew the certs.
Then I don’t understand the need for neither domain names nor third party signed certs. Can’t you use PiHole as a configurable DNS server, just make any domain name go to any of your local devices?
Yes, that is how it is currently set up, and how I may end up leaving it. Right now, I can go to jellyfin.home, and that request gets routed to my pihole, which has custom DNS entries; that then points to NGINX, and NGINX forwards it to the correct IP/port. All works as expected, except it is not https (which is not that big of a deal since all my stuff is restricted from the outside world). Just an OCD itch I'm trying to scratch.
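For reference, Pi-hole keeps those custom entries in a simple hosts-style file (the IPs here are placeholders; each name points at the NGINX box, which then routes by hostname):

```
# /etc/pihole/custom.list — Pi-hole "Local DNS Records"
# (addresses are placeholders)
192.168.1.30 jellyfin.home
192.168.1.30 homeassistant.home
```

The same records can also be managed from the Pi-hole web UI under Local DNS.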
StandardNotes for me