The account isn’t the issue in itself; it’s the data transfer you consent to when you accept the agreement that comes with that account.
“Free” is straight wrong.
The first link goes into amazing detail on that. In short: your location information, current IP, and some other metadata get sent to a basically unknown company with no transparency about how that data is handled.
I highly recommend reading the first linked post though!
CUPS
Linux printing server - if you want to share a printer over the network or just use one locally on a Linux machine.
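If you ever want to script against it: CUPS speaks IPP, and pycups wraps that nicely. A minimal sketch below - the queue name and file path are placeholders for whatever your setup uses, not anything CUPS-specific:

```python
# Minimal sketch using pycups (pip install pycups) against the local cupsd.
# "office_laser" and the PDF path are made-up placeholders.
import cups

conn = cups.Connection()          # talks to the local CUPS daemon
printers = conn.getPrinters()     # dict of queue name -> attributes
for name, attrs in printers.items():
    print(name, attrs.get("device-uri"))

# Send a file to one of the queues
conn.printFile("office_laser", "/tmp/test.pdf", "test job", {})
```

The same client calls work against a shared printer on another machine too, since it all goes over the network protocol anyway.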
(not OP but same boat) Doesn’t really matter to me, because Google only knows my server’s external IP, which is a non-issue: I don’t expect Google to attack me individually, just to crawl data about me. There is no automatic link between my server and my personal browsing habits.
In terms of attack vector vs. ease of use, self-hosting SearXNG is a no-brainer for me - but I do have an external server available for things like that anyway, so there’s no additional overhead.
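To give an idea of what “no additional overhead” looks like in practice once it’s running, here’s a rough sketch of querying your own instance - assuming it listens on localhost:8080 and you’ve enabled the json format under search.formats in settings.yml (both are assumptions about your setup):

```python
# Rough sketch: query a self-hosted SearXNG instance via its JSON output.
# URL/port are placeholders; "json" must be enabled in settings.yml.
import requests

resp = requests.get(
    "http://localhost:8080/search",
    params={"q": "self-hosted search", "format": "json"},
    timeout=10,
)
resp.raise_for_status()
for result in resp.json().get("results", [])[:5]:
    print(result["title"], "->", result["url"])
```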
A Dockerfile itself is the instruction set. There is a certain minimum level of knowledge expected from a server admin that differs from end-user requirements.
The ease of Docker obscures that quite a bit, but if you want to go full bare metal (or full AWS, GCS, etc.), then you need to handle the full admin side as well - including custom deployments.
No worries, I phrased that quite weirdly, I think.
A NAS is only more power efficient if the extra capability of a full server isn’t needed. If for some reason the server is still required, then the NAS is additional power consumption and doesn’t save anything.
(For example, I run some quite RAM- and compute-heavy things on my server that I think no stock NAS could handle.)
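To put rough numbers on that reasoning - these wattages are completely made up, not measurements:

```python
# Illustrative only: invented wattages to show the "NAS next to a server" math.
server_idle_w = 100   # full server that has to stay on anyway
nas_w = 30            # typical small NAS

hours_per_year = 24 * 365
extra_kwh = nas_w * hours_per_year / 1000
print(f"NAS next to the server costs ~{extra_kwh:.0f} kWh/year extra")

# Only if the NAS fully replaces the server do you actually save anything:
saved_kwh = (server_idle_w - nas_w) * hours_per_year / 1000
print(f"Replacing the server with the NAS saves ~{saved_kwh:.0f} kWh/year")
```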
That would replace the computer with the NAS, though, and it doesn’t hold for a server you’d want to extend, right?
Is that 370 W across all of them or per fat server? I ask because three m5s sound like a lot of power draw!
And thanks for sharing!
I didn’t know that about the immich app, thanks for pointing it out!
Then you need a third application (e.g. Syncthing) to replicate the auto-upload functionality of Nextcloud.
Personally, I don’t want the same functionality spread across a different stack because of pipeline issues. This doesn’t solve OP’s issue; I just wanted to point out that your solution might have drawbacks OP didn’t see at first glance :)
Thank you! That’s really interesting; the performance with a Pi 3 was way worse - even more so than the pure spec difference would’ve led me to believe.
The OCR devs have done a really awesome job!
You are running a specific module of a project locally - not the whole project. The web server is an integral part, and leaving it out means doing some of the legwork yourself: you’d need to figure out how the websites get built and deployed and then reverse-engineer that for your Android environment.
Personally I’m fascinated by that attempt, and it could be an awesome learning opportunity. To be honest, I don’t have the motivation to follow you down this rabbit hole, though.
If you decide to follow up I’d appreciate you giving updates from time to time about your insights! ♥
Thanks for sharing! The only thing I’m surprised to see in your list is Paperless - how long does OCR take on a Pi?
“Being a router” is exactly what they’re good for - needed, even.
Edit to be more specific: two switches, each with 10 Gbit and a redundant uplink, would be a setup I can see, depending on your line.
No overkill there :)
Lemmy.world is blocked by beehaw as well…