• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • I really don’t see much benefit to running two clusters.

    I’m also running single clusters with multiple ingress controllers both at home and at work.

    If you are concerned about blast radius, you should probably first look into setting up Network Policies to ensure that pods can’t talk to things they shouldn’t (a sketch of one is below).

    There is of course still the risk of something escaping the container, but the risk is rather low in comparison. There are options out there for hardening the container runtime further.

    You might also look into adding things that can monitor the cluster for intrusions or prevent them. Stuff like running CrowdSec on your ingresses, and using Falco to watch for various malicious behaviour.
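    As a loose illustration (not something lifted from my cluster), here is roughly what a restrictive policy can look like, built with the official kubernetes Python client; the “media” namespace and the ingress-nginx label are made-up placeholders:

```python
# Sketch: only allow ingress traffic from the ingress controller's namespace.
# Namespace "media" and the ingress-nginx label are placeholder assumptions.
from kubernetes import client, config

config.load_kube_config()  # needs a working kubeconfig

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-ingress-only", namespace="media"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = every pod in the namespace
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        namespace_selector=client.V1LabelSelector(
                            match_labels={"kubernetes.io/metadata.name": "ingress-nginx"}
                        )
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy("media", policy)
```

    Pods in that namespace then only accept connections coming through the ingress controller, so a compromised pod elsewhere can’t reach them directly.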


  • ZFS doesn’t really support mismatched disks. In OP’s case it would behave as if it were 4x 2 TB disks, making 4 TB of raw storage unusable; with one disk of parity, that would yield 6 TB of usable storage. In the future the 2x 2 TB disks could be swapped for 4 TB disks, and ZFS would then make use of all the storage, yielding 12 TB usable.

    BTRFS handles mismatched disks just fine; however, its RAID5 and RAID6 modes are still partially broken. RAID1 works fine, but it stores two copies of everything, so this would again yield 6 TB usable with the current disks (rough numbers below).
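    A quick back-of-the-envelope check of those numbers, assuming (as the post implies) 2x 4 TB plus 2x 2 TB disks, raidz1 sizing every member like the smallest disk, and BTRFS RAID1 keeping two copies of every extent:

```python
# Rough capacity estimates; disk sizes in TB.
def raidz1_usable(disks):
    # raidz1 treats every member like the smallest disk and spends one disk on parity
    return min(disks) * (len(disks) - 1)

def btrfs_raid1_usable(disks):
    # two copies of everything, and no single disk can hold more than all the others combined
    total = sum(disks)
    return min(total / 2, total - max(disks))

current = [4, 4, 2, 2]
upgraded = [4, 4, 4, 4]

print(raidz1_usable(current))       # 6   (TB usable today on ZFS)
print(btrfs_raid1_usable(current))  # 6.0 (TB usable today on BTRFS RAID1)
print(raidz1_usable(upgraded))      # 12  (TB usable after swapping the 2 TB disks)
```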




  • My home-assistant installation alone is too much for my Raspberry Pi 3. It depends entirely on how much data it’s processing and needs to keep in memory.

    Octoprint needs to respond in a timely manner, so you will want to keep the system mostly idle (below 60 percent CPU at all times); preferably, Octoprint should be the only thing running on the system unless it’s rather powerful. A quick way to check the headroom is sketched below.

    If I were you, I would install Octoprint exclusively on your Raspberry Pi 3, and then buy a Raspberry Pi 4 for the other services.

    I’m running Pi-hole and a wireguard VPN on an old Raspberry Pi 2, which is perfectly fine if you are not expecting gigabit speeds on the VPN.
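    If you want to sanity-check the headroom before piling more services onto a Pi, something like this works (psutil is an assumption; install it with pip if it isn’t already there):

```python
# Sample CPU usage for 30 seconds and report the average and the peak.
import psutil

samples = [psutil.cpu_percent(interval=1) for _ in range(30)]
print(f"avg {sum(samples) / len(samples):.0f}%  peak {max(samples):.0f}%")
# If the peak regularly goes past ~60%, give Octoprint its own board.
```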


  • According to Karl, Billy must pay all the legal fees if he withdraws from the lawsuit. He must also pay the legal fees if he loses. Billy’s only way out of paying would be to win the lawsuit.

    So the longer Karl strings him along, the more the fees will mount.

    And since Billy doesn’t have a leg to stand on, he can either withdraw now, pay a lot of money, and admit he lied, or keep fighting, racking up more fees in the slim hope of winning.


  • This is pretty cool, but I’m wondering why… Sure, there are lots of systems that make use of A/B partitions, which is a pretty good move, but with BTRFS you could have it all in one partition with A/B subvolumes. The two subvolumes would even be able to share the extents they have in common (meaning drastically reduced disk space requirements) while still maintaining the ability to boot into either (sketched below).

    Depending on how much changes, you might even keep many more than just two subvolumes. On my machine I run BTRFS with snapper, which takes periodic snapshots as well as snapshots before and after every package install or removal, with the ability to boot into any of them if a change somehow botches my system.
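    A loose sketch of the A/B-as-subvolumes idea (the paths and subvolume names are made up, and it needs root plus the top-level volume mounted):

```python
# Create an A/B layout as BTRFS subvolumes by shelling out to btrfs-progs.
import subprocess

TOP = "/mnt/btrfs"  # filesystem top level, e.g. mounted with -o subvolid=5

def run(*cmd):
    subprocess.run(cmd, check=True)

run("btrfs", "subvolume", "create", f"{TOP}/@slot-a")
# ...deploy the system image into @slot-a...

# A writable snapshot shares every unmodified extent with @slot-a,
# so the second slot costs almost no additional space.
run("btrfs", "subvolume", "snapshot", f"{TOP}/@slot-a", f"{TOP}/@slot-b")

# Either slot can then be booted by pointing the kernel at it,
# e.g. rootflags=subvol=@slot-b on the kernel command line.
```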




  • The reason a VPN is better to expose than SSH is the feedback.

    If someone tries connecting to your SSH with the wrong key or password, they get a nice and clear permission denied. They now know that you have SSH and which version, which might allow them to find a vulnerability.

    If someone connects to your WireGuard with the wrong key, they get zero response, exactly as if the port had not been open in the first place. They gain no additional information; they don’t even know the port is open (the sketch below shows the difference).

    Try running your public IP through shodan.io, and see what ports and services are discovered.
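    A tiny illustration of that difference (the hostname and the ports 22 and 51820 are assumptions, swap in your own):

```python
# Probe an SSH port and a WireGuard port and compare the feedback.
import socket

HOST = "vpn.example.com"  # placeholder

def probe_ssh(host, port=22, timeout=3):
    # An SSH server volunteers a version banner (e.g. b"SSH-2.0-OpenSSH_9.6")
    # to any TCP client, before any authentication happens.
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(64)

def probe_wireguard(host, port=51820, timeout=3):
    # WireGuard silently drops packets that aren't signed by a known peer key,
    # so an unauthenticated probe looks exactly like a closed or filtered port.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.sendto(b"\x00" * 32, (host, port))
    try:
        return s.recvfrom(64)
    except socket.timeout:
        return None

print(probe_ssh(HOST))        # leaks the server software and version
print(probe_wireguard(HOST))  # None: nothing to fingerprint
```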




  • I use Promtail+Loki+Grafana on my home server, which is decently performant, light on resources and storage, and searchable. It takes a little effort to learn the LogQL query language, but it’s very expressive (example below).

    I’m running it on Kubernetes, but it should be pretty straightforward to configure for running on plain Docker.
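    For a flavour of what querying it looks like outside Grafana, here is a rough sketch against Loki’s HTTP API (the URL and the job label are placeholders for whatever your setup exposes):

```python
# Ask Loki for recent log lines matching a LogQL expression.
import requests

LOKI_URL = "http://loki.example.internal:3100"  # placeholder address

resp = requests.get(
    f"{LOKI_URL}/loki/api/v1/query_range",
    params={
        "query": '{job="nginx"} |= "error"',  # all nginx lines containing "error"
        "limit": 100,
    },
    timeout=10,
)
resp.raise_for_status()

for stream in resp.json()["data"]["result"]:
    for timestamp_ns, line in stream["values"]:
        print(timestamp_ns, line)
```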