Postgres doesn’t need that much RAM IMO, though it may use as much as you give it. I’d reduce its RAM and see how performance changes.
Why no real DB? The other two features make sense, but if the only option you can use sacrifices the third, that still seems like a win. Postgres is awesome and easy to back up: a single command can dump the whole thing to a file, making it easy to restore.
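For example, a sketch of what that single command looks like (user, database name, and paths are placeholders, not from any specific setup):

```shell
# Dump every database in the cluster to one SQL file (path is an example)
pg_dumpall -U postgres > /backups/postgres-all.sql

# Or dump a single database in the custom format, which pg_restore
# can restore selectively later:
pg_dump -U postgres -Fc mydb > /backups/mydb.dump

# Restore with:
# pg_restore -U postgres -d mydb /backups/mydb.dump
```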
1 is just not true, sorry. There’s loads of stuff that only works as root, and people use it.
About the trust issue: there’s no more or less trust involved than running on bare metal. Sure, you could compile everything from source, but you probably won’t. You might trust your distro’s package manager instead, but that has a similar problem.
I use a k8s CronJob to run backups with Kopia. The manifest is here
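A minimal sketch of the idea, generated from the CLI (the image, schedule, and path are assumptions; a real manifest also needs the Kopia repository credentials and a volume mount):

```shell
# Sketch: run a nightly Kopia snapshot as a Kubernetes CronJob
kubectl create cronjob kopia-backup \
  --image=kopia/kopia:latest \
  --schedule="0 2 * * *" \
  -- kopia snapshot create /data
```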
Yeah, it was finished; it just sucks. Big difference.
Yeah, you could already pirate it today. You could even buy it, copy the files, and refund it, but you probably don’t.
Each instance is available on someone’s localhost.
If you already have it, it looks like Plex can do it with https://channels1867.rssing.com/chan-55464362/all_p107.html It’ll probably get you most of those features, though it won’t be as nice as something purpose-built. But if you already have Plex, it might be nice to have all your stuff in one place. Alternatively, you could set up something to download podcasts to your server into a folder that Plex watches.
That makes sense. I think the reason they’re not represented as files is pretty simple: data integrity. To get the comments you just query the table, and as long as the DB schema is what you expect, it works fine; you don’t have to validate that the data hasn’t been corrupted (you don’t have to check that a column exists, for example). With files, you’d have to parse and validate every single one, because another application could have screwed them up. It’s certainly possible to build this, and it might be slower, but computers are pretty fast these days. It would just take more development work to solve a problem the database already solves for you.
It’s not plug-and-play, but OpenTelemetry is the self-hosted way to go.
Oof, that’s bad… And lazy
Nothing about k8s is simple. But yes you can achieve that.
Take a look at Rancher for actually running a cluster.
Container orchestration is what you’re looking for. Kubernetes is the most popular, but it might be overkill; it’s hard to say based on your setup. Either way, knowing how to run it is definitely useful experience.
We use Rancher Fleet. It monitors a repo for k8s YAML files and applies them to the cluster automatically. But that doesn’t sound like it would work for you.
As for PRs, I’m sure you can set up a GitHub Actions workflow to automatically merge PRs (I’d make sure to filter them by author though).
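As a sketch, the GitHub CLI can do both the author filtering and the merge; the bot author name and PR URL below are placeholders:

```shell
# List open PRs from a specific bot author (author name is an example)
gh pr list --author "app/renovate" --json number --jq '.[].number'

# Enable auto-merge on a PR so it merges once required checks pass
# (URL and merge strategy are placeholders)
gh pr merge https://github.com/OWNER/REPO/pull/1 --auto --squash
```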
For the images without proper versions you can always pin to the image digest as a reference to one specific image. Though whether that same image is still on Docker Hub is a different story.
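For example, you can read the digest off a pulled image and then reference it immutably (the image name is an example; the digest placeholder is not a real value):

```shell
# Find the registry digest of a locally pulled image
docker inspect --format '{{index .RepoDigests 0}}' nginx:latest

# Then pin to it instead of a tag (digest is a placeholder):
# docker pull nginx@sha256:<digest>
```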
A lot of your wishlist could be done quite easily with Kubernetes. Automatic updates aren’t built in, but tools exist to help you with that (Rancher Fleet, among others).
Except it clearly doesn’t produce the same result every time. You’re not making a good case for whatever you’re trying to say.