Monolith can be particularly handy for this. I used it in a recent project to archive the outgoing links from my own site. Incidentally, if anyone is interested in that, it’s called django-cool-urls.
You probably want to look into Health Checks. I believe you can tell Docker to “start service B when service A is healthy”, so you can define your health check with a script that depends on Tailscale functioning.
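For example, with Compose you can gate one service on another’s health. A rough sketch, not a tested config; the images and the health-check command here are my assumptions:

services:
  tailscale:
    image: tailscale/tailscale
    healthcheck:
      # Should succeed only once the daemon is up; adjust to whatever
      # actually proves Tailscale is working for your setup.
      test: ["CMD", "tailscale", "status", "--peers=false"]
      interval: 10s
      retries: 5
  app:
    image: my-app-image  # placeholder
    depends_on:
      tailscale:
        condition: service_healthy

Compose won’t start app until the tailscale container reports healthy.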
I’m reasonably sure that the size of the monitor doesn’t matter, but the resolution does. If you run your monitor at 720p, the performance should be the same as the Deck locally. If you try to run it at 4K (I’m not even sure how you’d convince the Deck to do that), it would decrease performance considerably.
So my first impression is that the requirement to copy-paste that elaborate SQL to get the schema is clever but not sufficiently intuitive. Rather than saying “Run this query and paste the output”, you say “Run this script in your database” and print out a bunch of text that isn’t a query at all, but a one-liner Bash script that relies on the existence of pbcopy: something that (a) doesn’t exist on many default installs, (b) is a red flag for something that’s meant to be self-hosted (why am I talking to a pasteboard?), and (c) is totally unnecessary anyway.
Instead, you could just say: “Run this query and paste the result in this box” and print out the raw SQL only. Leave it up to the user to figure out how they want to run it.
Alternatively, you could say something like “Run this on your machine and copy/paste the output”:
$ curl 'https://app.chartdb.io/superquery.sql' | psql --user USERNAME --host HOSTNAME DBNAME
In the case of the cloud service, it’s also not clear whether the data is being stored on the server or client-side in LocalStorage. I would think that the latter would be preferable.
Generally, I agree. I think what I meant by the above is “how would you tell someone how to use the thing”. My favourite example is email vs email-with-PGP.
How do you send an email?
How do you send a PGP-encrypted email?
Let’s first talk about this thing called a “keyserver”. Once you know what that is, you’ll have to go out and find some keys to add to it. We’re not going to talk about styling your message 'cause that’s not something you should be able to do… etc. etc.
This is a common problem with Free software, and honestly I think it’s our biggest one: we build stuff for ourselves and stop there. If we want our stuff to be adopted (which, for things that rely on network effects, we do) then we need to pay more attention to usability.
Here’s a suggestion for anyone starting a project they think they might share. Before you start writing any code, write the documentation. Then rewrite it from the perspective of the least tech-literate person you know who you’d still want to use the project. Only after you’ve worked out how easy it should be for this person to get started should you start writing the thing.
I’ve been self-hosting my blog for 21 years, if you can believe it, and for much of that time it’s been running on a server in my house. I’ve hosted it on everything from a dusty old 200MHz Pentium with 16MB of RAM (that’s MB, not GB!), to a shared web host (Webfaction), to a proper VPS (Hetzner), to a Raspberry Pi Kubernetes cluster, which is where it is now.
The site is currently running Python/Django on a few Kubernetes pods spread over a few Raspberry Pi 4s, so the total power consumption is tiny, and since they’re fanless, it’s all very quiet in my office upstairs.
In terms of safety, there’s always a risk, since you’re opening a port to the world for someone to talk directly to software running in your home. You can mitigate that by (a) keeping your software up to date, and (b) if you’re maintaining the software yourself (like I am), keeping on top of any dependencies that may have known exploits. Like, don’t just stand up an instance of WordPress and forget about it. That shit’s going to get compromised :-). You should also isolate the server’s network from the rest of your LAN if you can. Docker sort of does this for you (though I hear it can be broken out of), but a proper demarcation between your laptop and a server on the open web is a good idea.
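For the Docker side of that, a user-defined network at least keeps the exposed container from seeing your other containers. A rough sketch (the names and image are made up):

# Create a separate bridge network and attach only the public service to it.
$ docker network create blog-net
$ docker run -d --name blog --network blog-net -p 8080:80 my-blog-image

Containers on blog-net can only talk to each other, though a proper demarcation from the rest of the LAN still means a separate VLAN or subnet at the router.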
The safest option is probably to use a static site generator like Hugo, since then your attack surface is limited to whatever you’re using to serve the static files (probably Nginx), while if you’re running a full-blown application that does publishing etc., then that’s a lot of stuff that could have holes you don’t know about. You may also want to set up something like Cloudflare in front of your site to prevent a DoS attack or something from crippling your home internet, though that may be overkill.
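If you do go static, the whole publish step can shrink to building and copying files. A sketch with made-up paths:

# Build the site, then push the generated files to the box running Nginx.
$ hugo --source ~/blog --destination ~/blog/public
$ rsync -av ~/blog/public/ server:/var/www/blog/

Nothing dynamic runs on the server at all; Nginx just reads files.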
But yeah, the bandwidth requirements of running a blog are negligible, and the experience of running your own stuff on your own hardware in your own house is pretty great. I recommend it :-)
At the firewall level, port forwarding sends traffic bound for one port to an arbitrary port on another machine on your network, but the UI built on top of it in your router may not expose this.
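Under the hood it’s just a NAT rule. Roughly, and the interface, addresses, and ports here are examples only:

# Rewrite traffic arriving on WAN port 8080 to port 80 on an internal machine.
$ iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.50:80
# Allow the forwarded traffic through.
$ iptables -A FORWARD -p tcp -d 192.168.1.50 --dport 80 -j ACCEPT

Whether your router’s UI exposes the different-port part is another question entirely.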
If it’s not an option in your Fritzbox, your options are:
The first and last options on this list are probably the best.
It would be absolutely bizarre if you couldn’t connect with “WireGuard port” and “WireGuard obfuscation” set to “Automatic”. Things to try first:
If the above somehow doesn’t work, Mullvad offers support, through which you can get a temporary server IP override. You can enter that in the bottom portion of your app’s settings.
Thanks for posting this! I have the same router.
Has anyone managed to get AM2R working on the Deck? I was thinking of trying it out next.
You might want to consider just Dockerising everything. That way, the underlying OS really doesn’t matter to the applications running.
I’ve got a few Raspberry Pis running Debian, and on top of that, they’re running a Kubernetes cluster with K3s. I host a bunch of different services, all in their own containers (effectively their own OS), and I don’t have to care. If I want to change the underlying OS, the containers won’t know the difference. It’s pretty great.
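If anyone wants to try the K3s route, the install really is tiny (read the script before piping it to a shell, obviously):

$ curl -sfL https://get.k3s.io | sh -

That gives you a single-node cluster; additional Pis join as agents by running the same script with K3S_URL and K3S_TOKEN set.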
What about blog spam, though? Surely this means giving up controls like moderation for your site?
There have been some great answers on this so far, but I want to highlight my favourite part of Docker: the disposability.
When you have a running Docker container, you can hop in, fuck about with files, break stuff as you try to figure something out, and then kill the container and all of the mess you’ve created is gone. Now tweak your config and spin up a fresh one exactly the way you need it.
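If you’ve never tried it, a throwaway shell is one flag away (the image here is just an example):

# --rm deletes the container, and whatever you broke inside it, on exit.
$ docker run --rm -it debian:bookworm bash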
You’ve been running a service for 6 months and there’s a new upgrade. Delete your instance and just start up the new one. Worried that there might be some cruft left over from before? Don’t be! Every new instance is a clean slate. Regular, reproducible deployments are the norm now.
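The upgrade path is equally boring, which is the point. Something like this, with made-up names and tags:

# Throw away the old container; anything worth keeping lives in the volume.
$ docker stop myservice && docker rm myservice
$ docker run -d --name myservice -v myservice-data:/data myimage:2.0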
As a developer it’s even better: the thing you develop locally is identical to the thing that’s built, tested, and deployed in CI.
I <3 Docker!
Upon a cursory read, it sounds like you host a server and then relay all of your data through their centrally controlled system, all while also pushing your account data to them.
I’m not sure they understand what “federated” means. Or rather, they know, but they’re hoping we don’t care.
Aww! Thank you! It was fun ❤️
Thanks! The crazy thing is that it’s really not that complicated. I’d say the hardest work was in writing the docs :-). It’s awesome to hear that people still use it and love it though.
Actually, I stepped away from the project 'cause I stopped using it altogether. I started the project to satisfy the British government with their ridiculous requirements for proof of my relationship with my wife so I could live here. Once I was settled though and didn’t need to be able to bring up flight itineraries from 5 years ago, it stopped being something I needed.
Well that, and lemme tell you, maintaining a popular Free software project is HARD. Everyone has an idea of where stuff should go, but most of the contributions come in piecemeal, so you’re left mostly acting as the one trying to wrangle different styles and architectures into something cohesive… while you’re also holding down a day job. It was stressful to say the least, and with a kid on the way, something had to give.
But every once in a while I consider installing paperless-ngx just to see how it’s come along, and how much has changed. I’m absolutely delighted that it’s been running and growing in my absence, and from the screenshots alone, I can see that a lot of the ideas people had when I was at the helm made it in, in the end.
Monolith has the same problem here. I think the best resolution might be some sort of browser-plugin-based solution where you could say “archive this” and have it push the result somewhere.
I wonder if I could combine a dumb plugin with Monolith to do that… A weekend project perhaps.
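The plugin could stay dumb: grab the current URL and hand it to something that just shells out to Monolith. The archiving half already works today (URL and output path made up):

# Bundle the page, with all of its assets inlined, into a single file.
$ monolith 'https://example.com/some-article' -o archive.html

Everything else is plumbing.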