It’s even more hilariously bad because they recast a veteran series actor in a new role. Until retirement and beyond!
Yeah, performance and energy efficiency were among the factors behind the decision Valve made with the screen. The display is on par with entry-level small form factor laptops from the late '00s (in resolution; otherwise it’s obviously better).
I keep forgetting that mods can also mean physical modifications to hardware, so I was really confused about how Valve could support upping the resolution on a screen.
The idea of modding hardware in general feels risky af to me. But I’m glad it’s possible, worthwhile, and apparently quietly supported by Valve.
They make a lot of good decisions with it.
So glad I didn’t pull the trigger on a laptop last month. I was leaning AMD, but some Intel offerings looked nicer and cheaper. I guess that’s one of the reasons.
Godot had a visual scripting feature, but apparently nobody used or maintained it so it got cut.
I think you can use Grafana to present widgets from different dashboards in one.
Saying mods are not an integral part of a Bethesda game is a real hot take there.
If you want to see how devs should approach mod makers so it works out for everyone, take a look at how Ludeon does it with RimWorld.
I use a 2016 Asus Zenbook with an integrated Intel GPU.
The performance is comparable. The only real difference is latency, obviously, although it’s fairly negligible on LAN, and encoding/decoding sometimes creates artifacts and smudges, but that gets better at higher bandwidth.
My box sits in my closet, so I can’t really help much with the Docker or VM side. But I use a Sunshine server with a Moonlight client. Keep in mind you can’t fight the latency that comes from the distance between server and client. I can use 4G/5G for turn-based or active-pause games, but wouldn’t try anything real-time. On cable my ping is under a millisecond, enough to play shooters as badly as I do these days.
I use AMD for both CPU and GPU, and wouldn’t try Nvidia if using Linux as the server.
I did use to run a VM on XenServer/XCP-ng and pass through the GPU with a dummy HDMI plug. It was a Windows 10 VM and ran very well bar pretty crap CPU performance, but I did get around 30 fps in 1080p Tarkov, sometimes more with AMD upscaling. Back then I was using Parsec, but found Sunshine and Moonlight work better for me.
I should also mention I never tried to support multiple users. You can probably play “local” multiplayer with both Parsec and Moonlight, but any setup that shares one GPU will require some proprietary vGPU fuckery, so the easiest option is to buy a PC with multiple GPUs and assign one to each VM directly.
I think this led me on the right path: https://community.ui.com/questions/Having-trouble-allowing-WOL-fowarding/5fa05081-125f-402b-a20c-ef1080e288d8#answer/5653fc4f-4d3a-4061-866c-f4c20f10d9b9
This is for EdgeRouter, which is what I use, but I suppose OPNsense can do this just as well.
Keep in mind: don’t use 1.1.1.1 for your forwarding address, use one in your LAN range just outside of the DHCP pool, because this type of static ARP entry will mess up connections to anything actually on that IP.
This is how it looks in my EdgeOS config:
protocols {
    static {
        arp 10.0.40.114 {
            hwaddr ff:ff:ff:ff:ff:ff
        }
    }
}
10.0.40.114 is the address I forward the WoL broadcast to.
Then I use an app called Wake On Lan on Android and set it up like this:
Hostname/IP/Broadcast address: 10.0.40.114
Device IP: [actual IP I want to wake up, on the same VLAN/physical network]
WOL Port: 9
This works fine if you’re using the router as the gateway for both VPN and LAN, but it will get messy with masquerade and NAT - then you have to use port forwarding I guess, and it should work from WAN.
I just wanted it to be over VPN to limit my exposure (even if WoL packets aren’t especially scary).
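For reference, this is roughly what the app is doing under the hood: a WoL magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent over UDP. A minimal Python sketch, using the same forwarding address and port as above; the MAC is a placeholder:

import socket

def wake(mac: str, broadcast_ip: str = "10.0.40.114", port: int = 9) -> None:
    # Build the magic packet: 6x 0xFF, then the target MAC repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast_ip, port))

# Placeholder MAC of the machine to wake up.
wake("aa:bb:cc:dd:ee:ff")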
There is a trick you can do: send a WoL packet to a separate IP on the sender’s network and modify it so it is repeated on the network of the machine you want to wake up.
I can’t find docs on this on mobile, but I can look for them later.
It can’t work like typical IP packet routing though. I’ve only made it work over a VPN connection.
Another thing you can do is SSH to your router and send a WoL packet from there on the machine’s LAN.
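Something like this sketch, assuming paramiko on the client and a WoL tool such as etherwake available on the router; the hostname, username, interface, and MAC are all placeholders:

import paramiko

# Connect to the router over SSH (placeholder address and user).
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.0.40.1", username="ubnt")

# Send the magic packet from the router itself, directly on the target machine's LAN.
_, stdout, stderr = client.exec_command("sudo etherwake -i switch0 aa:bb:cc:dd:ee:ff")
print(stdout.read().decode(), stderr.read().decode())
client.close()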
It’s generic advice, but check out kompose - it can translate a Docker Compose YAML into a bunch of k8s objects, as far as it sensibly can.
Most issues come from setting up volumes, since Docker has different expectations about the underlying filesystem.
It does save a bunch of the work of rewriting everything by hand.
If you don’t need external calls, a SIP trunk isn’t needed.
In a hobby it’s easy to get carried away doing things according to “best practices” when that’s not really the point.
I’ve done a lot of redundant boilerplate stuff in my homelab, and I justify it by “learnding”. It’s mostly perfectionism I don’t have time and energy for anymore.
If you’re the only user and just want it working without much fuss, use a single DB instance and forget about it. Less to maintain leads to better maintenance, as long as performance isn’t the bigger concern.
It’s fairly straightforward to migrate a database to a new Postgres instance, so you’re not shooting your future self in the foot if you change your mind.
Use PGTune to get as much as you can out of it and adjust if circumstances change.
I had the budget to try a Xeon D SoC motherboard for a small ITX case. I put 64 GB of ECC RAM into it, but it could hold 128 GB. That server will be 8 years old this year. That particular Supermicro board was meant for some OEM router-like x86_64 appliance with 10G ports and remote management. I’m not sure if Intel or AMD have any CPUs in that segment anymore, but it’s very light on wattage if it’s mostly idle/maintaining VMs.
One option I’m looking at is getting a dedicated Hetzner server; even the auction and lowest-grade ‘new’ offerings are pretty good for the price if you account for energy costs and upfront gear cost.
I think it depends. In my limited experience (I haven’t tested this thoroughly), most systems pick the first DNS address and only send requests to the second if the first doesn’t respond.
This has led, at least a couple of times, to extremely long timeouts that made me think the system was unresponsive, especially with things like Kerberos SSH login and such.
I personally set up my DHCP to provide Pi-hole as primary and my off-site IPA master as secondary (so I still have internal split-brain DNS working in case the entire VM host goes down).
Now I kinda want to test whether that off-site DNS gets any requests in normal use. Maybe that would explain some ad leaks on twitch.tv (likely Twitch just using the same hosts for video and ads, but who knows).
Edit: If that is indeed the case, I’m not looking forward to maintaining another Pi-hole off-site. Ehhh.
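One quick way to check is to query each resolver directly and time the responses, which also shows how painful the fallback timeout gets. A rough sketch, assuming dnspython is installed; the two server IPs are placeholders for the Pi-hole and the off-site IPA master:

import time
import dns.resolver

# Placeholder addresses: primary Pi-hole, then the off-site secondary.
for server in ["10.0.40.53", "203.0.113.10"]:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.timeout = resolver.lifetime = 2.0
    start = time.monotonic()
    try:
        answer = resolver.resolve("twitch.tv", "A")
        print(server, [a.to_text() for a in answer], f"{time.monotonic() - start:.3f}s")
    except Exception as exc:
        print(server, "failed:", exc)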
Longhorn isn’t just about replication (which is not backup, and RAID is not backup either). Also if you only have one replica, is it even different from local storage at this point?
You’d use Longhorn to make sure applications don’t choke and die when the storage they’re using goes down. Also, I’m not sure if you can supply Longhorn storage to nodes that don’t run it; I haven’t tried it.
I suspect all pods that you define to use Longhorn would only come up on the node that has the Longhorn replica.
All this is just how I understand Longhorn works. I haven’t tried it this way; my only experience is running it on every node, so if one node goes down, pods can just restart on any other node and carry on.
What’s wrong with Hetzner?
Can confirm, GitLab has a container registry built in, at least in the Omnibus package installation.