• 2 Posts
  • 215 Comments
Joined 1 year ago
Cake day: June 19th, 2023

  • In the old days, it used to be a problem because everyone just connected their Windows 98 desktop, with all their services directly exposed to the internet, over dial-up internet without the concept of a gateway preventing the internet from reaching internal resources. Nowadays, you’re most likely behind an ISP router that doesn’t forward ports by default, and you’re only exposing the things you actually want to expose.

    For things you actually want to expose, having the service on its default port is fine, and it reduces the chance that other systems fail to interact with it because they expect it on the default port. Moving services to a different port is just security through obscurity, and honestly doesn’t add much value. The entire public IPv4 space can be port scanned fairly quickly and fairly cheaply; in fact, your IP has most likely already been mapped:

    https://www.shodan.io/host/<your-ip-here>

    Keeping the service regularly updated and applying best practices around it would be much more important and beneficial. For SSH, make sure you’re using key-based authentication and have password-based authentication disabled; add fail2ban to automatically ban those trying to brute force. For Minecraft, keep online mode on and run whitelist-only unless you’re intentionally running a public server for everyone.
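
    To put the SSH part in concrete terms, a minimal sketch, assuming stock OpenSSH and fail2ban (config paths may vary by distro):

    # /etc/ssh/sshd_config: key-based auth only
    PubkeyAuthentication yes
    PasswordAuthentication no
    PermitRootLogin prohibit-password

    # /etc/fail2ban/jail.local: enable the stock sshd jail
    [sshd]
    enabled = true

    Reload sshd and restart fail2ban afterwards for the changes to take effect.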


  • I’m not saying you’re wrong — I’ve even upvoted your earlier comments because I’m generally in agreement; you’re an instance admin judging by your handle, so go and check the vote history yourself lol.

    I’m saying people shouldn’t force their janky, unproven solo solution onto someone else who doesn’t share their level of distrust and would rather just trust the multibillion-dollar multinational corporation, when all they want is something that keeps working the way it always has.


  • There’s always the “add more of everything so any one thing can fail without impacting stability” aspect, and that’s great for a corporation that needs the redundancy; but it’s probably prudent not to forget there’s also the “I’m interested in learning” aspect, where people run a home server to play with the software side of things.

    You’re spot on in that we’d need to know what it is that OP would like to do with the system, but I’m getting the feeling that stability isn’t that high of a concern just yet.


  • Until the basement floods and the server goes offline for a few days; or a botched upgrade fails quietly; or an overzealous SpamAssassin configuration; etc. etc.

    It sounded like they were trying to archive things from Gmail onto their own server, so just cut the janky middleman out and let the wife continue to use her Gmail as intended.




  • I don’t care for the argument one way or another; I’m not an EU resident and the whole thing is irrelevant to me as an individual.

    I’m merely pointing out that neither the Fediverse/Lemmy/etc. nor Reddit as a platform cares about the EU’s privacy concerns, and people should be well informed when entering either platform, so they’re not doing so with a false sense of security that they’d be able to exercise those government-granted rights effectively.


  • Good luck with that. Once the post federates out, the host instance can request deletion, but any federated instance that received the content doesn’t necessarily have to honor that request. They could easily modify their instance to not delete, they may reactivate the content from the moderation log, they might have backup strategies that involve retaining data (for their own local legal reasons), etc. etc.

    It’s probably best to assume any content you post on Lemmy is out of your control and will live for much longer than you’d expect.

    This is not limited to just Lemmy but applies to any federated system. So regardless of whether there’s a centralized corporation behind the service or it’s an open federated system, one way or another, whatever you post out there is no longer yours to control.




  • I’m aware this is the selfhost community, but for a company of 20 engineers, it is probably best to use something commercial in the cloud.

    The biggest pain point was for our ops guy, who constantly had to stay behind to perform upgrades and maintenance, as they couldn’t do it during business hours when the engineers were working. With a team of at least 20, scheduling downtime gets increasingly difficult.

    It also adds an entire extra system for the auditors to review.

    The self-host versus buy-commercial decision kind of bounces back and forth. For smaller teams, fewer than 5 to 10 engineers, it might be a fun endeavour; but from that point on, until you reach mega-corp scale with a dedicated ops department maintaining your entire infrastructure, it is probably more effective to just pay for a solution from a major vendor in the cloud.






  • I think it would be a good idea to take a step back and ask what it is that you’re trying to achieve.

    Userbase, the service linked, is a backend-as-a-service platform that offers you authentication and a basic database that you can access via their API. You’d then code your own front-end web app to interact with their service and store data there. You pay only for the storage you use, per their storage tiers, which are frankly fairly priced. If that is what you need, it’s a good option, but you’d be coding the front end yourself.

    If you’re only looking for authentication with OAuth, and then coding your own API backend, then something like Authentik would be a nice self-hosted authentication provider. Others that commonly get mentioned, but that I’ve got limited/no experience with, would be Keycloak or FusionAuth. Managed services here would be your Auth0, Okta, etc.
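
    As a quick sanity check on any of these, OIDC-compliant providers publish a discovery document listing their authorize/token/JWKS endpoints; a minimal sketch, with auth.example.com as a placeholder hostname:

    # fetch the standard OIDC discovery document (well-known path)
    curl https://auth.example.com/.well-known/openid-configuration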

    If you’ve got a specific use case in mind, then it may be a good idea to say what service you’re thinking about, and the community may be able to suggest prebuilt solutions that fit better and require less lift.


  • Strictly speaking, they’re leveraging free users to increase the number of domains they have under their DNS service. This gives them a larger end-user reach, as it in turn makes ISPs hit their DNS servers more frequently. The increased usage better positions them to lead peering-agreement discussions with ISPs. More peering agreements lead to overall cheaper bandwidth for their CDN and faster responses, which they can use as a selling point for their enterprise clients. The benefits are pretty universal, so it’s actually a good thing for everyone all around… that is, unless you’re trying to become a competitor and get your own peering agreements set up, as it’d be quite a bit harder for you to acquire customers at the same scale/pace.


  • I tend to recommend sticking with more reputable providers, even if it means a couple of extra dollars on a recurring basis. Way too many kiddie hosts pop up trying to make a quick buck during spring break or summer, and then fail to provide adequate service when it actually matters.

    It may also be a good idea to check LET/WHT before committing to anything longer than a month-to-month term with a provider.


  • OP currently has two drives in their possession.

    OP has confirmed they’re 12TB each, and in total there is 19TB of data across the two drives.

    Assuming there is only one partition, each one might look something like this:

    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: 12345678-9abc-def0-1234-56789abcdef0
    
    Device         Start        End            Sectors        Size      Type
    /dev/sda1      2048         23437499966    23437497919    12.0T     Linux filesystem
    

    OP wants to buy a new drive (also 12TB) and make a RAID5 array without losing the existing data. Kind of madness, but it is achievable. OP buys the new drive and sets it up as such:

    Device         Start        End            Sectors        Size      Type
    /dev/sdc1      2048         3906252047     3906250000     2.0T      Linux RAID
    
    Unallocated space:
    3906252048      23437499966   19531247919    10.0T
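
    A sketch of carving that out on the new drive, assuming parted and the sector numbers above:

    # hypothetical: fresh GPT label, then a 2TB partition flagged for Linux RAID
    sudo parted /dev/sdc mklabel gpt
    sudo parted /dev/sdc mkpart raid 2048s 3906252047s   # on GPT the first argument is just a partition name
    sudo parted /dev/sdc set 1 raid on                   # flag it as Linux RAID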
    

    Then, OP must shrink the existing partition to something smaller, say 10TB, and make use of the freed space as part of their RAID5 (a sketch of the shrink follows the table below):

    Device         Start        End            Sectors        Size      Type
    /dev/sda1      2048         19531250000    19531247953    10.0T     Linux filesystem
    /dev/sda2      19531250001  23437499966    3906249966     2.0T      Linux RAID
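
    The shrink itself might look roughly like this, a sketch assuming an ext4 filesystem on an unmounted /dev/sda1 (back everything up first; shrinking is the risky step):

    sudo umount /dev/sda1
    sudo e2fsck -f /dev/sda1      # filesystem must be clean before resizing
    sudo resize2fs /dev/sda1 9T   # shrink below the 10TB target to be safe
    # then shrink /dev/sda1 in fdisk/parted to match the layout above,
    # create /dev/sda2 as a Linux RAID partition in the freed space,
    # and grow the filesystem back out to fill the partition:
    sudo resize2fs /dev/sda1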
    

    Now, with the three 2TB partitions, they can create the initial RAID5 array:

    sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc1

    Make an ext4 filesystem on md0, copy 4TB of data (2TB from sda1 and 2TB from sdb1) into it, and verify the RAID5 array is working properly (a rough sketch follows). Once OP is happy with the data on md0, they can delete the copied data from sda1 and sdb1, shrink the filesystems there (resize2fs), expand sda2 and sdb2, expand sdc1, and resize the array (mdadm --grow ...)
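
    A rough sketch of the copy step; the mount points and paths here are placeholders:

    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /mnt/md0
    sudo mount /dev/md0 /mnt/md0
    # copy roughly 2TB from each old filesystem, then verify before deleting anything
    sudo rsync -a /mnt/sda1/some-subset/ /mnt/md0/some-subset/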

    Rinse and repeat; at the end of the process, they’d have all their data on the newly created md0, a RAID5 volume spanning all three disks.
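
    Each grow round would end with something like this sketch; note that enlarging sda2 and sdb2 into space freed before them means moving the partition start, which is the genuinely fiddly (and slow) part of this plan:

    # once each member partition has been enlarged in the partition table:
    sudo mdadm --grow /dev/md0 --size=max   # claim the new space on each member
    sudo resize2fs /dev/md0                 # then grow the filesystem to match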

    Hope this is clear enough and that there is no more disconnect.