• 1 Post
  • 33 Comments
Joined 1 year ago
Cake day: June 7th, 2023




  • It’s a bit more complicated than that. On Linux, the load average counts processes in the R state (either "R"unning or "R"eady), plus processes in the D state (uninterruptible sleep, which usually means waiting on disk I/O). A process that’s waiting on the network, on the other hand, sits in an S (interruptible sleep) state as soon as it makes the system call, so it doesn’t count towards load.

    That’s what makes disk I/O a bit tricky. You mentioned swapping. Swapping’s partner in crime, memory-mapped files, also contributes. In both of those cases, a process tries to access memory (without making a system call) that the kernel has to hit the disk to resolve, so the process keeps counting towards load while it waits.

    I can’t think of a common situation where network activity could contribute to load, though. If your swap device is mounted over NFS maybe?

    Anyway, load mostly tracks CPU usage, but heavy disk activity and high memory pressure can push it up too. If you’re seeing a high load with low CPU utilization, that’s almost always due to high memory pressure, which can cause both swapping and filesystem cache drops.
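
    If you want to see this for yourself, here’s a rough, Linux-only sketch (the script is mine, not anything standard, and it only counts whole processes rather than individual threads) that compares the load averages with a count of tasks currently in the R and D states:

        # Rough sketch: read the load averages and count processes that are
        # currently in the R (running/runnable) or D (uninterruptible sleep,
        # usually disk wait) states, i.e. the states that feed the load average.
        import glob

        def read_loadavg():
            with open("/proc/loadavg") as f:
                one, five, fifteen = f.read().split()[:3]
            return float(one), float(five), float(fifteen)

        def count_r_and_d():
            counts = {"R": 0, "D": 0}
            for stat_path in glob.glob("/proc/[0-9]*/stat"):
                try:
                    with open(stat_path) as f:
                        contents = f.read()
                except OSError:
                    continue  # the process exited while we were scanning
                # The command name is wrapped in parentheses and may contain
                # spaces; the state letter is the first field after the ")".
                state = contents.rsplit(")", 1)[1].split()[0]
                if state in counts:
                    counts[state] += 1
            return counts

        print("load averages:", read_loadavg())
        print("tasks by state:", count_r_and_d())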



  • BitWarden+PiHole+NextCloud+Wireguard combined will add up to maybe 100MB of RAM or so.

    Where it gets tricky, especially with something like NextCloud, is that the performance you see from NextCloud depends tremendously on what kind of hard drives you have and how much of its storage the OS can cache. If you have 4GB of RAM, then like 3.5GB-ish of that can be used as cache for NextCloud (and whatever else you have that uses considerable storage). If your NextCloud storage is tiny (like 3.5GB or less), then your OS can keep the entire thing in cache, and you’ll see lightning-fast performance. If you have larger storage (and are actually accessing a lot of different files), then NextCloud will actually have to touch disk, and if you’re using a mechanical (spinning rust) hard drive, you will definitely see one-second lags here and there when that happens.

    And then if you have something like Immich on top of that…

    And then if you have transmission on top of that…

    Anything that uses considerable filesystem space will be fighting over your OS’s filesystem cache, so it’s impossible to say how much RAM would be enough. 512MB could be more than enough; 1TB might not be enough. It depends on how you’re using it and how tolerant you are of cache misses (there’s a small sketch at the end of this comment for checking how much of your RAM is actually going to cache).

    Mostly you won’t have to think about CPU. Most things (like NextCloud) would be using like <0.1% CPU. But there are some exceptions.

    Notably, Wireguard (or anything that requires encryption, like an HTTPS server) will have CPU usage that depends on your throughput. Wireguard, in particular, has historically been a heavy CPU user once you get up to like 1Gbit/s. I don’t have any recent benchmarks, but if you’re expecting to use Wireguard beyond 1Gbit/s, you may need to look at your CPU.
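
    Here’s a minimal, Linux-only sketch of what I mean by the cache fight (assuming a kernel new enough to report MemAvailable in /proc/meminfo; the script itself is just an illustration):

        # Minimal sketch: show how much RAM is currently being used as
        # filesystem cache versus how much is genuinely available.
        def read_meminfo():
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    info[key] = int(value.split()[0])  # values are in kB
            return info

        m = read_meminfo()
        gib = 1024 * 1024  # kB per GiB
        print(f"Total RAM:        {m['MemTotal'] / gib:.2f} GiB")
        print(f"Filesystem cache: {m['Cached'] / gib:.2f} GiB")
        print(f"Available:        {m['MemAvailable'] / gib:.2f} GiB")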


  • Yes, with some big "if"s. NextCloud can work very well for a large organization if that large organization has a “real” IT department. I use “real” to describe how IT departments used to work 20+ years ago: someone from IT was expected to be on call 24/7, the team built and configured its own software, did daily checks and maintenance, and so on. Those sorts of IT departments are rare these days. But if they have the right personnel, it can definitely be done. NextCloud can be set up with hot failovers and fancy stuff like that if you know what you’re doing.





  • (Whoops, accidentally hit “Delete” instead of “Edit” and Lemmy doesn’t ask for confirmation!! Boo!! I’ll try to retype my comment as best I can remember)

    I’ll buck the trend here and say “Yes, for a home LAN it’s absolutely worth it. In fact, it matters more for a home LAN than in a data centre. It’s the bees’ knees for home use and is worth doing.”

    All of that depends on how your ISP does things. When I did it, I got a /56, which is sensible and I think fairly common. If your ISP gives you anything smaller than a /64 (i.e. a longer prefix), then (a) your ISP is run by doofuses, and (b) it’s going to be a pain and might not be worth it. (I now live in literally one of the worst countries in the world for IPv6 adoption, so I can’t do it any more.)

    The big benefit is that your servers can be (if you want them to be) publicly reachable. This means you use exactly the same address to reach them from outside the network as from inside it. Just make an AAAA record for each one and you can get to it from anywhere in the world (except my country).

    When I did it, I actually just set up 2 /64s, so a /63 would have been sufficient (but a /56 is nice). Maybe you can think of more creative ways of setting up your networks. Network configuration is a lot of fun (I think).

    I had one /64 for statically-assigned, publicly-reachable servers. Then I had a separate /64 for SLAAC (dynamic) end-user devices, which were not publicly reachable (firewalled to act essentially like a NAT). (Sidenote: if you do go to IPv6 for your home network, look into RFC7217 for privacy reasons, so your addresses aren’t derived from your MAC address. I think it’s probably turned on by default for Windows, Android, iOS, etc., these days, but it’s worth double-checking.)
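
    If you want to play with the numbers, here’s a small sketch using Python’s ipaddress module; the 2001:db8::/56 prefix is documentation address space standing in for whatever your ISP actually delegates:

        # Carve per-purpose /64s out of a delegated /56.
        import ipaddress

        delegated = ipaddress.IPv6Network("2001:db8::/56")

        # A /56 holds 2**(64 - 56) = 256 distinct /64s.
        subnets = list(delegated.subnets(new_prefix=64))
        print(f"{delegated} contains {len(subnets)} /64s")

        servers = subnets[0]  # statically-assigned, publicly reachable servers
        slaac = subnets[1]    # SLAAC devices, firewalled for inbound traffic
        print("servers:", servers)
        print("SLAAC:  ", slaac)

        # An example static address inside the server /64, the kind of thing
        # you'd point an AAAA record at:
        print("example AAAA target:", servers[0x53])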


  • I’ll buck the trend and say “yes, for a home LAN, it is the bees’ knees”. I don’t do it now because my country (and hence my ISP) does not do IPv6, but for most places it’s worth doing.

    It depends on how your ISP does it. When I did it before, my ISP gave me a /56, which is pretty sensible and I think fairly common. If you get anything smaller than a /64, then (a) your ISP is run by doofuses, and (b) it’s going to be a pain and maybe not worth it.

    A /56 was much bigger than I needed. I actually only used 2 /64s, so a /63 would have been fine, but network configuration is fun (I think), so maybe you can get creative and think about different ways of allocating your network.

    I had one /64 for statically-assigned, publicly-reachable servers. And then I had a separate /64 for SLAAC-assigned (dynamic) personal devices (laptops, phones, etc.), which were not publicly reachable (firewalled essentially to act like a NAT). (Sidenote: if you are going to use IPv6, I recommend turning on RFC7217 on your devices for privacy reasons. I think these days it’s probably turned on by default for Windows, Android, iOS, etc., but it’s worth double-checking.)

    The big benefit of using IPv6 is that all of your home machines can be (if you want them to be) reachable from inside or outside your network using exactly the same IP address, which means you can just give each one a fixed AAAA record and access it from anywhere in the world you like. If you’re into that sort of thing, of course. It’s a lot of fun.
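
    As a small illustration (nas.example.org is a placeholder name, not a real record; swap in one of your own AAAA names), the exact same lookup works whether you run it at home or anywhere else in the world:

        # Resolve a hostname's IPv6 (AAAA) addresses.
        import socket

        def aaaa_addresses(hostname):
            results = socket.getaddrinfo(hostname, None, family=socket.AF_INET6)
            # Each result is (family, type, proto, canonname, sockaddr);
            # for IPv6, sockaddr is (address, port, flowinfo, scope_id).
            return sorted({sockaddr[0] for (*_rest, sockaddr) in results})

        print(aaaa_addresses("nas.example.org"))  # replace with your own name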





  • It’s a big question and I don’t think I can give an answer that will cover everything. A lot of it will depend on what they want to do, too. As long as we can have a real discussion about things beforehand, I don’t think there are many technologies or services that I would flat-out ban.

    I’ve realized lately that a lot of the problems I have with how society at large uses technology come down to that use not being deliberate, intentional, or thoughtful. I think if you’re going to buy a smartphone, or download an app, and click “Accept” on all the permissions, you should at least have a goal in mind before you use it. What specifically are you intending to accomplish with it? If it’s to stay in touch with your friend, that’s fine, just have that goal in mind when you’re using it. If it’s to follow the goings-on of your favourite celebrity, okay, as long as that’s your intention. But I think too often, people buy something or download and install something just because of FOMO, without any idea or understanding of what it’s going to do. It puts you in a passive position of allowing a large tech company to decide your use and your experience for you, and that might not be what’s best for you. That kind of passive, exploratory attitude worked well enough, I think, up until the introduction of “dark patterns”, but it’s a bit dangerous now.

    The other major thing I want is to introduce them to community-developed technology first. Before they get to the point where they have to decide whether they want to install Instagram, I think they should have experienced the Fediverse, that kind of thing. I think they should understand that there is still technology out there which is completely good (by which I mean that free/open-source software and community services are sometimes useless, sometimes buggy, sometimes lacking in features, sometimes cumbersome to use, but they’re never antagonistic or evil or deceptive). At the very least they should know what kinds of technology are out there for them.

    Ideally I would also like them to understand how things work. My oldest is 4 now and can read a little bit. Not complete sentences or even long words, but enough that I know it’s not going to be too many more months before she’s capable of reading properly, and maybe typing, and maybe even some programming. A fair amount of software depends upon ignorance (remember when SnapChat claimed your pictures/videos “disappeared”?), and I think an understanding of technology makes the bullshit a lot easier to navigate. But a lot of that will depend on her and what interests her…


  • Nobody is forcing you.

    That is not really true, I mean depending on your definition of “forcing”. Okay, it’s true, nobody is holding a gun to your head.

    But depending on where you live, it may be impossible to use a taxi. It would be impossible to work at a lot of workplaces. I work at a university where thankfully faculty are not required to own a smartphone, but students are (if you do not check in for attendance with the university’s app, you automatically fail the course). Soon here it might be impossible to have a bank account without a smartphone app. Any event that requires tickets, forget about it. We’re also getting closer to it being a requirement to see a doctor (some doctor’s offices here already do not allow any patients that haven’t installed their app, and the number is growing).

    There’s a lot of soft pressure, too. The supermarket by us doesn’t require you to install their app. You can pay cash without a smartphone…if you’re willing to pay 2x the usual amount for groceries (which are already quite expensive).


  • I agree with your assessment. I have a lot to say about this, and I’m glad to have found this article, as I’ve been having some serious inner turmoil about this lately, and it makes me feel a bit like I’m not totally alone or crazy. (But I also can’t find a link to the original survey, which makes it hard to trust, since there’s no description of the methodology or the exact wording of the questions.)

    I’m an older Millennial (sometimes considered Gen X, depending on the terminology used) with young kids. It’s true that I would rather have had them brought up 30 years ago than today. Sometimes when I see posts about parents letting their young kids (say, 10-year-olds) have their own smartphone and then complaining about it, people get snarky, like “You’re the parent. If you don’t like it, just take their smartphone away.”

    But it is a tightrope to walk. I don’t want to expose them to something like Instagram, which gives them eating disorders, depression, and anxiety, chips away at their sense of privacy, etc. But I also don’t want them to be “the weird kid” who can’t relate to any of their peers. When I was growing up, I remember the “weird kids” who weren’t allowed to watch TV, weren’t allowed to play video games, etc. I can recognize that in many ways they probably benefited from not sitting in front of the TV for hours each day, but I can also recognize they probably didn’t benefit from not being able to talk to any of the rest of us about the latest episode of Fresh Prince. I do see it as a balancing act between teaching them that there’s a lot about their generation that sucks, and letting them experience enough of it to see it for themselves and relate to the other kids around them.