Seconding this, I do the same. It’s a terrible sign that it took me longer to figure out how to successfully create VLANs and assign them to SSIDs in OpenWRT, which is a fairly simple concept, than it took me to learn basically anything about OPNSense, a vastly more powerful and complex tool.
I appreciate OpenWRT for giving me FOSS firmware I can slap on my AP, and I certainly don’t want to come across as entitled to the free labor of the developers, but it’s just objectively not very good from a UI/UX perspective.
Whatever you get for your NAS, make sure it’s CMR and not SMR. SMR drives do not perform well in NAS arrays.
I just want to follow this up and stress how important it is. This isn’t “oh, it kinda sucks but you can tolerate it” territory. It’s actually unusable after a certain point. I inherited a Synology NAS at my current job which is used for backup storage, and my job was to figure out why it wasn’t working anymore. After investigation, I found out the guy before me populated it with cheapo SMR drives, and after a certain point they just become literally unusable due to the ripple effect of rewrites inherent to shingled drives. I tried to format the array of five 6TB drives and start fresh, and it told me it would take 30 days to run whatever “optimization” process it performs after a format. After leaving it running for several days, I realized it wasn’t joking. During this period, I was getting around 1MB/s throughput to the system.
Do not buy SMR drives for any parity RAID usage, ever. It is fundamentally incompatible with how parity RAID (RAID5/6, ZFS RAID-Z, etc) writes across multiple disks. SMR should only be used for write-once situations, and ideally only for cold storage.
Refurbished drives get their SMART data reset during the refurbishing process; they absolutely had more than that originally.
I’ve got a Protectli VP2420 running OPNSense at home, which has 4x Intel i225-V 2.5GbE ports on a weaker Celeron J6412, and in some brief testing between two directly connected machines I got the expected iperf performance of ~2.35Gbps. I didn’t do any deeper testing than that, though, and I’m not currently doing any crazy threat detection stuff.
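For anyone wanting to reproduce that kind of quick check, something along these lines is all it takes (a rough sketch, not exactly what I ran; assumes iperf3 is installed on both ends, the other machine is running `iperf3 -s`, and 192.168.1.2 is a placeholder address):

```python
# Rough sketch of a quick point-to-point throughput check. Assumes iperf3 is
# installed locally and the other directly connected machine (placeholder
# address 192.168.1.2) is running "iperf3 -s".
import subprocess

# 10-second TCP test; the summary lines at the end show the achieved bitrate
subprocess.run(["iperf3", "-c", "192.168.1.2", "-t", "10"], check=True)
```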
The games will still be designed by humans. Generative AI will only be used as a tool in the workflow for creating certain assets faster, or for creating certain kinds of interactivity on the fly. It’s not good enough to wholesale create large sets of matching assets, and despite what folks may think, it won’t be for a long time, if ever. Not to mention, people just don’t want that. People want art to have intentional meaning, not computer generated slop.
This is no different than anything else, we naturally appreciate the skill it takes to create something entirely by hand, even if mass production is available.
If you’re waiting for Jellyfin to run some kind of relay like Plex does, you’ll be waiting a long time. That takes a lot of money to keep up, and the demand from people who self-host FOSS but then want to depend on an external service is very small, certainly not enough to sustain such a service. I’d recommend just spending a weekend afternoon learning how to set up Nginx Proxy Manager and being done with it; the GUI makes it very easy.
I chose Bookstack for the same situation. It’s dead simple in usage and maintenance. No issues yet!
I still have an OG Xiaomi Mi Box, and it’s absurd how over the years it went from a purely functional media device to a complete shit show covered in ads. It genuinely disgusted me every time I turned the TV on. I couldn’t stand it anymore, so I had to tear out the launcher with ADB and replace it with FLauncher.
I wish Kodi wasn’t such a pain in the ass to deal with, especially for YouTube. We really need a new FOSS media center application. Until then, at least FLauncher works for now as a simple app switcher for a handful of Android apps.
Recently started using Tempo with Navidrome. Haven’t had more than a few days of use yet, but everything has worked exactly as expected! Can’t ask for much more than that.
You’re in for a treat, Cassette Beasts is so underrated. I played it at release and I still listen to the music regularly.
When the corporation wars start over the remaining arable land and drinkable water, I’ll be joining the Steam Corps
I very recently started using borgbackup. I’m extremely impressed with how much it compressed the data before sending, and how well it detects changes and only sends the difference. I have not yet attempted a proper restore from backup, though.
I have much less data I’m currently securing (~50GB) and much more uplink bandwidth (~115Mbps), so my situation isn’t nearly as dire. But it was able to compress that down to less than 25GB before sending, and after the initial upload, the next week’s backup only required about 100MB of data transfer.
If you can find a way to seed your data from a faster location, reduce the amount you need to back up, and/or break it up into multiple smaller transfers, this might be an effective solution for you.
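If it helps, the whole thing can be driven from a short scheduled script; this is just a rough sketch, assuming the repository was already created with `borg init` and that the repo location and passphrase are supplied via the standard BORG_REPO / BORG_PASSPHRASE environment variables (the backup paths are placeholders):

```python
# Rough sketch of a scheduled borg run. Assumes the repository already exists
# (created with "borg init") and that BORG_REPO / BORG_PASSPHRASE are set in
# the environment; the paths below are placeholders.
import subprocess

subprocess.run(
    [
        "borg", "create",
        "--stats",                # print how much was deduplicated/compressed
        "--compression", "zstd",  # compress new chunks before upload
        "::{hostname}-{now}",     # archive name template expanded by borg
        "/home", "/etc",          # whatever you actually want backed up
    ],
    check=True,
)
```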
Borgbase’s highest plan has an upper limit of 8TB, which you would be brushing right up against, but Hetzner storage boxes go up to 20TB and officially support Borg.
Outside of that, if you don’t expect the data to change often, you might be looking for some sort of cheap S3 storage from AWS or other similar large datacenter company. But you’ll still need to find a way to actually get them the data safely, and I’m not sure if they support differential uploads like Borg does.
I would bet the main reason is that KDE is way more willing to accept features and contributions outside of the typical use case than Gnome is.
You’re the one they see every flight. Keep up the good work
You shouldn’t put a protector on it. If you get a normal protector, you’re basically just re-adding glare. If you get an anti-glare protector, you’re further increasing the blurriness and darkening the screen, as that’s how anti-glare works. The adhesive will also fill in the etching and reduce its effectiveness (search for “scotch tape frosted glass”, same concept), but how permanent that is has never truly been verified; presumably, a good rub with alcohol should fix that problem.
The goal here is to make it difficult to link to things uploaded to Discord from outside of Discord. The malware reason is BS. If they wanted to curb malware, it would be as easy as making it a Nitro feature. What that doesn’t fix is all the people piggybacking on Discord as a free CDN.
Discord isn’t even wrong for doing this. I just resent their dishonesty.
Convincing argument, but unfortunately a cursory Google search will reveal he was right. There is very little CPU overhead. The only real consideration is a bit of extra storage and RAM to store and load the redundant dependencies of the container.
While that isn’t false, defaults carry immense weight. Also, very few have the means to host at scale like Docker Hub; if the goal is to not just repeat the same mistake later, each project would have to host their own, or perhaps band together into smaller groups. And unfortunately, being a good programmer does not make you good at devops or sysadmin work, so now we need to involve more people with those skillsets.
To be clear, I’m totally in favor of this kind of fragmentation. I’m just also realistic about what it means.
Something you might want to look into is using mTLS, or client certificate authentication, on any external facing services that aren’t intended for anybody but yourself or close friends/family. Basically, it means nobody can even connect to your server without having a certificate that was pre-generated by you. On the server end, you just create the certificate, and on the client end, you install it to the device and select it when asked.
The viability of this depends on what applications you use, as support for it must be implemented by their developers. For anything only accessed via web browser, it’s perfect. All web browsers (except Firefox on mobile…) can handle mTLS certs. Lots of Android apps also support it. I use it for Nextcloud on Android (so the Files, Tasks, Notes, Photos, RSS, and DAVx5 apps all work), and support works across the board there. It also works for the Home Assistant and Gotify apps. It looks like Immich does indeed support it too. In my configuration, I only require it on external connections by forwarding 443 on the router to 444 on the server, so I can apply different settings easily without having to do any filtering.
As far as security and privacy go, mTLS is virtually impenetrable so long as you protect the certificate and configure the proxy correctly, and it’s similar in concept to using WireGuard. Nearly everything I publicly expose is protected via mTLS, with very rare exceptions like Navidrome due to lack of support in Subsonic clients, and a couple other things that I actually want to be universally reachable.
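To give an idea of what this looks like from the client side, here’s a rough sketch of a script talking to an mTLS-protected endpoint; the hostname and the client.crt / client.key file names are placeholders, not anything from my actual setup:

```python
# Rough sketch of a script hitting an mTLS-protected service. The hostname and
# the client.crt / client.key file names are placeholders. Without a valid
# client certificate, the reverse proxy rejects the connection during the TLS
# handshake, before any application traffic is exchanged.
import requests

resp = requests.get(
    "https://cloud.example.com/status.php",
    cert=("client.crt", "client.key"),  # the pre-generated client certificate
)
print(resp.status_code)
```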