• 0 Posts
  • 28 Comments
Joined 1 year ago
Cake day: August 6th, 2023

  • My box sits in my closet, so I can’t really help much with Docker or VM setups. But I use Sunshine as the server with Moonlight as the client. Keep in mind you can’t fight the latency that comes from the distance between server and client. I can use 4G/5G for turn-based or active-pause games, but wouldn’t try anything real-time. On cable my ping is low enough to play shooters as badly as I do these days (a quick way to measure your own latency floor is sketched at the end of this comment).

    I use AMD for both CPU and GPU, and wouldn’t try Nvidia if using Linux as the server.

    I used to run a VM on XenServer/XCP-ng with GPU passthrough and a dummy HDMI plug. It was a Windows 10 VM and ran very well, bar the pretty weak CPU, but I got around 30 fps in 1080p Tarkov, sometimes more with AMD upscaling. Back then I was using Parsec, but I found Sunshine and Moonlight work better for me.

    I should also mention I never tried to support multiple users. You can probably play “local” multiplayer with both Parsec and Moonlight, but any setup that shares one GPU between VMs will require some proprietary vGPU fuckery, so the easiest option is to build a PC with multiple GPUs and assign one to each VM directly.
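
    To sanity-check the link itself before blaming the streaming stack, a rough latency floor is easy to measure. A minimal Python sketch, assuming a placeholder server IP and any TCP port it listens on (SSH here):

    import socket
    import time

    def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> float:
        """Average TCP connect time in milliseconds - a rough floor for stream latency."""
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            # Each connect performs a full handshake, so it approximates network RTT.
            with socket.create_connection((host, port), timeout=2):
                total += time.perf_counter() - start
        return total / samples * 1000

    # Placeholder server IP and port - substitute your own.
    print(f"{tcp_rtt_ms('192.168.1.50', 22):.1f} ms")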


  • I think this led me down the right path: https://community.ui.com/questions/Having-trouble-allowing-WOL-fowarding/5fa05081-125f-402b-a20c-ef1080e288d8#answer/5653fc4f-4d3a-4061-866c-f4c20f10d9b9

    This is for EdgeRouter, which is what I use, but I suppose OPNsense can do this just as well.

    Keep in mind: don’t use 1.1.1.1 as your forwarding address; use one in your LAN range, just outside the DHCP pool, because this kind of static ARP entry will break connections to anything actually on that IP.

    This is how it looks in my EdgeOS config:

    protocols {
      static {
        arp 10.0.40.114 {
          hwaddr ff:ff:ff:ff:ff:ff
        }
      }
    }
    

    10.0.40.114 is the address I forward WoL broadcasts to.

    Then I use an app called Wake On Lan on Android and set it up like this:

    - Hostname/IP/Broadcast address: 10.0.40.114
    - Device IP: [actual IP I want to wake up, on the same VLAN/physical network]
    - WOL Port: 9
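
    The app is just building a standard magic packet; if you’d rather script it, here’s a minimal Python sketch of the same thing (the MAC is a placeholder for the machine you want to wake):

    import socket

    def send_magic_packet(mac: str, broadcast_ip: str, port: int = 9) -> None:
        # A magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times.
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast_ip, port))

    # 10.0.40.114 is the static ARP address from the router config above.
    send_magic_packet("aa:bb:cc:dd:ee:ff", "10.0.40.114", 9)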

    This works fine if you’re using the router as the gateway for both VPN and LAN, but it will get messy with masquerade and NAT - then you have to use port forwarding, I guess, and it should work from the WAN.

    I just wanted it to work over VPN to limit my exposure (even if WoL packets aren’t especially scary).


  • There is a trick you can do: send the WoL packet to a dedicated IP on the sender’s network and have it repeated as a broadcast on the network of the machine you want to wake up.

    I can’t find docs on this on mobile, but I can look for it later.

    It can’t work like typical IP packet routing, though. I’ve only made it work over a VPN connection.

    Another thing you can do is SSH to your router and send a WoL packet from there, on the machine’s LAN.
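
    For example, a rough Python sketch using paramiko - the router address, the credentials, and the assumption that the router has etherwake (or a similar WoL tool) installed are all placeholders:

    import paramiko

    # Placeholder router address and credentials.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("192.0.2.1", username="admin", key_filename="/home/me/.ssh/id_ed25519")

    # Assumes etherwake exists on the router; the MAC is the machine to wake.
    _, stdout, stderr = client.exec_command("etherwake aa:bb:cc:dd:ee:ff")
    print(stdout.read().decode(), stderr.read().decode())
    client.close()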




  • magikmw@lemm.ee to Selfhosted@lemmy.world · How many PostgreSQL services? · 8 months ago

    In a hobby it’s easy to get carried away with doing things according to “best practices” when that’s not really the point.

    I’ve done a lot of redundant boilerplate stuff in my homelab, and I justify it as “learnding”. It’s mostly perfectionism I don’t have the time and energy for anymore.


  • magikmw@lemm.ee to Selfhosted@lemmy.world · How many PostgreSQL services? · 8 months ago

    If you’re the only user and just want things working without much fuss, use a single DB instance and forget about it. Less to maintain means better maintenance, as long as performance isn’t the bigger concern.

    It’s fairly straightforward to migrate a DB to a new Postgres instance (see the sketch below), so you’re not shooting your future self in the foot if you change your mind.

    Use PGTune to get as much as you can out of it, and adjust if circumstances change.
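
    For what it’s worth, the migration can be as simple as a dump and restore. A minimal Python sketch with placeholder hosts, user, and database name:

    import subprocess

    # Dump the database from the old instance in custom format...
    subprocess.run(
        ["pg_dump", "-Fc", "-h", "old-host", "-U", "app", "-d", "appdb", "-f", "appdb.dump"],
        check=True,
    )
    # ...then restore into the new instance (the empty target database must
    # already exist there, e.g. created with createdb).
    subprocess.run(
        ["pg_restore", "-h", "new-host", "-U", "app", "-d", "appdb", "appdb.dump"],
        check=True,
    )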


  • I had the budget to try a Xeon D SoC motherboard in a small ITX case. I put 64 GB of ECC RAM into it, though it could hold 128 GB. That server will be 8 years old this year. That particular Supermicro board was meant for some OEM router-like x86_64 appliance, with 10G ports and remote management. I’m not sure if Intel or AMD have any CPUs in that segment anymore, but it’s very light on wattage when mostly idling or just maintaining VMs.

    One option I’m looking at is getting a dedicated Hetzner server; even the auction and lowest-grade ‘new’ offerings are pretty good for the price once you account for energy costs and upfront hardware cost.
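
    As a back-of-the-envelope comparison - every number below is a made-up placeholder, not a quote:

    # Rough monthly cost of a home server vs a rented one, placeholder numbers only.
    watts = 40               # average draw of a mostly idle box
    price_per_kwh = 0.30     # local electricity price, EUR
    home_energy = watts / 1000 * 24 * 30 * price_per_kwh
    rent = 40.0              # hypothetical monthly rent for an auction server, EUR

    print(f"home energy: {home_energy:.2f} EUR/month vs rent: {rent:.2f} EUR/month")
    # Add amortized hardware cost on the home side to find the real break-even.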


  • magikmw@lemm.ee to Selfhosted@lemmy.world · When Pi-hole is down? · edited · 9 months ago

    I think it depends. In my limited experience (I haven’t tested this thoroughly), most systems pick the first DNS address and only send requests to the second if the first doesn’t respond.

    At least a couple of times this has led to extremely long timeouts that made me think the system was unresponsive, especially with things like Kerberos SSH logins.

    I personally set up my DHCP to provide the Pi-hole as primary and my off-site IPA master as secondary (so I still have internal split-brain DNS working in case the entire VM host goes down).

    Now I kind of want to test whether that off-site DNS gets any requests in normal use (see the sketch below). Maybe that would explain some ad leaks on twitch.tv (likely Twitch just using the same hosts for video and ads, but who knows).

    Edit: If that is indeed the case, I’m not looking forward to maintaining another Pi-hole off-site. Ehhh.
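
    Querying each resolver directly at least confirms that both answer; whether clients actually hit the secondary in normal use is easier to see in its query log. A quick sketch using dnspython (pip install dnspython), with placeholder resolver IPs:

    import dns.resolver

    # Placeholder IPs for the Pi-hole (primary) and off-site IPA master (secondary).
    for server in ("10.0.40.53", "203.0.113.7"):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        resolver.lifetime = 2  # seconds to wait before giving up
        try:
            answer = resolver.resolve("twitch.tv", "A")
            print(server, [rr.to_text() for rr in answer])
        except Exception as exc:
            print(server, "failed:", exc)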


  • Longhorn isn’t just about replication (which is not backup - and RAID is not backup either). Besides, if you only have one replica, is it even different from local storage at that point? (See the sketch at the end of this comment.)

    You’d use Longhorn to make sure applications don’t choke and die when the storage they’re using goes down. Also, I’m not sure you can supply Longhorn storage to nodes that don’t run it; I haven’t tried.

    I suspect any pods defined to use Longhorn would only come up on the node holding the Longhorn replica.

    All this is just how I understand Longhorn works. I haven’t tried it this way; my only experience is running it on every node, so if one node goes down, pods can just restart on any other node and carry on.
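
    For reference, the replica count is just a StorageClass parameter in Longhorn. A sketch using the official kubernetes Python client to create a one-replica class (untested, and the class name is made up):

    from kubernetes import client, config

    config.load_kube_config()  # uses your current kubectl context

    # One replica makes Longhorn behave much like local storage, as noted above.
    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="longhorn-single-replica"),
        provisioner="driver.longhorn.io",  # Longhorn's CSI provisioner
        parameters={"numberOfReplicas": "1", "staleReplicaTimeout": "30"},
    )
    client.StorageV1Api().create_storage_class(sc)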