I’m a little teapot 🫖

  • 0 Posts
  • 68 Comments
Joined 2 years ago
Cake day: September 27th, 2023


  • You’re best off splitting the routing and WiFi tasks into separate hardware. Buy yourself a used Ruckus Unleashed R550/R650 or R510/R610 for WiFi, depending on how much you want to spend, then run routing on whatever hardware is fit for purpose. I usually slap OPNsense on something like a Dell Wyse 5070 (J5005) mini PC; any mini PC with a PCIe slot will let you build a 1/2.5/10GbE router with open software. Chinese N100 router boxes are cheap now too, or you could reuse an old mini PC of some kind.

    I don’t like rolling my own router on ARM boards anymore; router distro support for them is unreliable, and the J5005 pulls <10W anyway.


  • seaQueue@lemmy.world to Selfhosted@lemmy.world · HDD randomly unmounting
    > I’m not sure how to get the N from session history, nor how to check my session history…

    journalctl --list-boots will list all sessions stored in the journal.
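
    For example, assuming the journal persists across reboots (the boot numbers on your system will differ):

        # list the boots the journal knows about; the first column is N
        journalctl --list-boots

        # kernel messages from the previous boot (N = -1)
        journalctl -k -b -1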

    > The output is from yesterday, when the device stopped working correctly.

    > I’m not familiar with the Linux kernel, but I can see there is definitely something wrong…

    > The HDD (old) is attached to a USB hub (new). I tried switching ports on the hub but the same issue happened again. If I try to mount it with sudo mount /mnt/2tb, it says it is already mounted:

    Those messages tell you what’s happening: there’s an unrecoverable error on the USB bus connecting the hard drive, which causes filesystem errors when writes fail. Diagnose that. Lose the hub first and connect the drive directly to the Pi, then try replacing the cable that attaches the drive if the error still occurs. I’d also check with people in the rpi community in case there are any known issues with USB on your model. There may be some Pi-specific USB firmware tweaks you can do to increase reliability.

    You can also try disabling UASP for the drive in case falling back to BOT transfers stabilizes the connection. You’ll lose performance, but that helps with some USB storage bridges.
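
    If you want to try that, a rough sketch (the 1234:5678 vendor:product ID here is hypothetical, check yours with lsusb, and the cmdline file location varies between Pi OS images):

        # find the USB bridge's vendor:product ID
        lsusb

        # append this to the single kernel command line in /boot/cmdline.txt
        # (or /boot/firmware/cmdline.txt on newer images), then reboot:
        usb-storage.quirks=1234:5678:u

        # afterwards the drive should bind to usb-storage instead of uas
        lsusb -t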

    Some USB storage bridges are just unreliable under Linux and crash under load; your last option is to buy another drive enclosure that’s tested and known to work correctly. I went through something like five USB NVMe enclosures looking for one that worked properly; that whole space is a compatibility mess.



  • seaQueue@lemmy.world to Selfhosted@lemmy.world · HDD randomly unmounting

    Don’t just look at the sdb hits in the log. Open up that entire session in journalctl kernel mode (journalctl -k -bN, where N is the boot number from journalctl --list-boots) and find the context surrounding the drive dropping and reconnecting.

    You’ll probably find that something caused a USB bus reset or a similar event before the drive dropped and reconnected. If you find nothing like that, try switching power supplies for the HDD and/or switching USB ports until you can move the drive to a different USB root port: use lsusb -t and swap ports until the drive is attached beneath a different root port. You might have a neighboring USB device attached to the bus that’s causing issues for other devices on the same root port (it happens; USB devices and drivers sometimes behave badly).
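
    A minimal sketch of that workflow, assuming the drive is sdb and the drop happened during the previous boot (adjust the boot index to match your journal):

        # show the USB topology so you can see which root port the drive hangs off
        lsusb -t

        # kernel log for the boot in question; look for bus resets, over-current
        # or disconnect messages in the lines around the sdb errors
        journalctl -k -b -1 | grep -iE 'usb|reset|over-current|sdb' | less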

    Always look at the context of the event when you’re troubleshooting a failure like this; don’t just drill down on the device messages. Most of the time the real cause of the issue preceded the symptom by a bit of time.


  • > Interesting that the one has such large capacitors in it. I imagine that is a last-ditch effort to keep the board powered long enough to finish flushing all of its caches in the event of a power failure.

    That’s exactly the point of power loss protection (aka PLP). As a side effect of not needing to wait for a flush after a write, synchronous write workloads are dramatically faster on enterprise drives with PLP.

    Edit: To add a bit of detail - with PLP you don’t need to wait for a flush after a synchronous write because the drive firmware can lie and immediately return from a flush call; there’s enough backup power to complete that flush if the power were cut.
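
    If you want to see the effect yourself, a rough benchmark sketch with fio (the file path and sizes are arbitrary; run it once on a PLP drive and once on a consumer drive and compare the IOPS):

        # 4k random writes with an fsync after every write, similar to a
        # database or journaling workload
        fio --name=synctest --filename=/mnt/test/fio.dat --size=1G \
            --rw=randwrite --bs=4k --ioengine=psync --fsync=1 \
            --runtime=60 --time_based --numjobs=1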


  • I mean, the horror of having to tick a box to use rotating v6 addresses. These are all solved problems; they’re not a flaw worth ignoring the entire IPv6 protocol over. Most major operating systems have moved to stable privacy-preserving addresses by default, that’s true, but it’s not all that difficult to turn on address randomization and rotation either. And, hell, if you’re that married to NAT as security, just use NAT66 and call it a day; nothing about NAT is exclusive to IPv4.
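
    On most Linux distros, for example, turning on temporary (rotating) addresses is a couple of sysctls; the file name here is just illustrative:

        # /etc/sysctl.d/40-ipv6-privacy.conf
        # 2 = generate temporary addresses and prefer them for outgoing traffic
        net.ipv6.conf.all.use_tempaddr = 2
        net.ipv6.conf.default.use_tempaddr = 2

        # apply without rebooting
        sudo sysctl --system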


  • Your firewall should take care of that; it’s pretty rare to be connected directly without one, and by default any decent routing package will filter incoming traffic that’s not in the state tracking table. NAT isn’t designed for security; any security benefit it provides is a side effect rather than the intended purpose.
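
    As a minimal sketch of what that default stateful filtering looks like (nftables syntax, with a hypothetical "wan" interface name; real router distros ship something more complete):

        table inet filter {
            chain forward {
                type filter hook forward priority 0; policy drop;
                # allow replies to connections initiated from inside
                ct state established,related accept
                # allow new connections heading out via the WAN interface
                oifname "wan" ct state new accept
                # anything unsolicited arriving from the WAN falls through to the drop policy
            }
        }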

    Edit: check out IPv6 privacy extensions too; there are solutions there that can reduce info disclosure if that’s a concern. You can get many of the same benefits NAT provides from v6 features, without the downsides that NAT brings.


  • Most enterprise drives are TLC these days; MLC just doesn’t provide the storage density that enterprises require anymore. I only mentioned MLC because you’ll occasionally find mSATA drives in the <=256GB range that use it. You have to check the datasheet for each model: look for endurance rated at 5 DWPD or higher, those will typically be MLC or heavily over-provisioned TLC. If you want enterprise drives with greater endurance than the usual 0.5 or 1 DWPD, look for the over-provisioned models with capacities like 400GB, 800GB, 1.6T or 3.2T; those are 512GB, 1TB, 2TB and 4TB raw-capacity drives with a bunch of flash set aside for wear leveling. You don’t often see 300GB, 600GB, 1.2T or 2.4T drives anymore, but those are often very high endurance (write intensive, 10 DWPD or so) models.

    Check the datasheets for drives when you’re shopping and you can get a pretty good idea of what their durability is like. I usually buy 1 DWPD drives for occasional-write bulk storage and 3+ DWPD for anything with a serious write workload. You can also help the drive controller a bit by running blkdiscard against the entire device before partitioning, then only partitioning and using ~80% of the available space. The drive controller will typically grab free unused blocks and use them for wear leveling, but only if they’ve been marked free (TRIMmed) and never allocated afterwards. If you can’t find or can’t afford high endurance drives, you can usually buy a larger, lower endurance drive and over-provision it this way to extend its lifespan.
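
    Roughly like this, assuming a hypothetical /dev/sdX with nothing on it you want to keep (blkdiscard wipes the whole device):

        # discard every block so the controller knows the entire drive is free
        sudo blkdiscard /dev/sdX

        # partition only the first ~80% and leave the rest untouched as spare area
        sudo parted --script /dev/sdX -- mklabel gpt mkpart data ext4 1MiB 80%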

    (The last time MLC flash was really common was back in maybe 2014-2015; some of the older Samsung Pro drives like the 850/860 Pro were built using MLC. Those had legendary real-world endurance; I think they’d take 10+ PB written before actually failing. It’s a shame they didn’t have PLP, because they would have made good budget array storage if they did.)