Yes, but rsync isn’t a “backup”.
My spouse and I inadvertently deleted a heap of stuff last month. Rsync would happily reflect that deletion on the remote. Borg will store the change, but you can still restore from an earlier point in time.
A docker volume?
I only use bind mounts, and in that case you can put them where you like and move them while they're not mounted by a running container.
Docker volume locations are managed by Docker, and I don't use those, so they're not part of the above plan.
My docker files, configs, and volumes are all kept in a structure like:
/srv
- /docker
- - /syncthing
- - - /compose.yml
- - - /sync-volume
- - /traefik
- - - /compose.yml
[...]
I just back up /srv/docker, but I blacklist some subfolders, e.g. for databases, where regular dumps are created instead. Currently the compressed / deduplicated repos consume ~350GB.
I use borgmatic because you do one full backup and thereafter everything is incremental, so minimal bandwidth.
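For reference, that back-up-everything-except-databases setup can be sketched in borgmatic's YAML config. The repo path and the db-data exclude pattern are made-up examples, and the sectioned layout shown here is the older borgmatic format (newer versions flatten these sections to the top level):

```yaml
# /etc/borgmatic/config.yaml (sketch, not my actual config)
location:
    source_directories:
        - /srv/docker
    repositories:
        - /srv/backup/docker.borg
    exclude_patterns:
        - /srv/docker/*/db-data   # skip live DB files; their dumps get backed up instead
```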
I keep one backup repo on the server itself in /srv/backup - yes this will be prone to failure of that server but it’s super handy to be able to restore from a local repo if you just mess up a configuration or version upgrade or something.
I keep two other backup repos in two other physical locations, and one repo air gapped.
For example I rent a server from OVH in a Sydney data centre, there’s one repo in /srv/backup on that server, one on OVH’s storage service, one kept on my home server, and one on a removable drive I update periodically.
All repos are encrypted except for the air-gapped one. That one has instructions intended for someone to use if I die or am incapacitated, so it has my master password for my password database, SSH keys, everything. We have a physical safe at home, so that's where it lives.
I’ve never tried restic.
I’m happy with borg and no real reason to switch.
Just wanted to add that borgmatic is like a configuration manager for borg backup. Still CLI & config file, and just running borg commands on the back end, but adds some nice features like notifications while really simplifying the configuration required.
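As a rough illustration of what borgmatic simplifies: pruning and notifications are each a few lines of config. The ping URL below is a placeholder, and exact option names vary between borgmatic versions:

```yaml
# Keep a rolling window of archives instead of pruning by hand
retention:
    keep_daily: 7
    keep_weekly: 4
    keep_monthly: 6
hooks:
    # borgmatic can ping a monitoring service after each run succeeds
    healthchecks: https://hc-ping.com/your-uuid-here
```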


I’m not really confident in this answer but, “not that I’m aware of”.
I use mxroute as a paid / hosted IMAP & SMTP server. They run SpamAssassin, but it's obviously not trained on my own reports.
I've grown fond of Thunderbird as an email client. Its spam management is clunky, but if you spend 15 minutes or so learning how it works, and then train it with both junk and not-junk, it works reasonably well.
Sadly, it does occasionally throw a false positive, like maybe twice in the last year it identified a legit email as spam.
So, while I'm running a SpamAssassin and Thunderbird combo, it's really TB that's doing the work, because SA is just filtering the super low-hanging fruit.
TB is doing a very respectable job, but needs to be trained.


I don’t know the actual reason, but I personally get a bad vibe every time I see the logo because usually it means I’m trying to install or fix some java bullshit, which never goes well.


I didn’t know this was ever in question?
Also stop calling it “my sequel”


All the obvious things have been mentioned.
The only way to identify the problem is to share the exact steps you've followed, so that others can reproduce it.
Based on what you've told us, no one knows how the subdomain is leaked. Without meaning to be derisive, that suggests something you've told us isn't quite correct.


I don't think this really responds to the comment you replied to.
Lots of comments in this thread are talking about people who don't have the time or expertise to manage their own Nextcloud instance.
Storing your stuff on your neighbour's instance carries genuine risks to your privacy and sensitive information.
The "legal agreements" that commenter referred to are simply the manner in which the host is allowed to use your data. The things you might store could be your will, maybe a spreadsheet of passwords, maybe some notes about your plans for a side hustle, maybe some naughty photos of your wife. Not information that's actionable by Google or Microsoft, but certainly things people don't want their neighbour to access.


There's an open issue about this on GitHub. It seems the remote API is only recognising the first word of your query.
This has been bugging me too.
The timeouts are because the engines are presenting captchas. There's a workaround whereby you use your instance as a proxy, navigate to that remote engine, and complete the captcha.
These two issues are a real pain in the ass, so while I do presently have a SearXNG instance, I've been using Qwant for the last few weeks because I'm just over it.


Never used containers on synology.
Seems weird to me that there's an AIO container that seems to contain other containers, but anyway, I guess that's a Synology thing.
Maybe this is obvious to everyone else but… all those containers are "starting" because they're waiting for another container to finish "starting" before starting up themselves.
I agree that the redis warnings seem benign.
Weird that nextcloud is waiting for a db but postgres says it's ready.
Is the 404 in the master container logs from you trying to access the instance in your browser?
I assume there’s some kind of compose.yaml as part of the AIO project you’re installing which will reveal which containers “depends_on” which other containers so you could figure out which one is blocking you.
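For anyone unfamiliar, a depends_on chain with healthchecks looks roughly like this. The service names and the pg_isready test are illustrative, not taken from the actual AIO compose file:

```yaml
services:
    nextcloud:
        image: nextcloud
        depends_on:
            db:
                condition: service_healthy   # stays "starting" until db reports healthy
    db:
        image: postgres
        healthcheck:
            test: ["CMD-SHELL", "pg_isready -U postgres"]
            interval: 5s
```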
Yeah, I hate facebook too, but sometimes you just have to acknowledge that flying the FOSS flag is not your primary objective. Like someone else said, if you have half the people on whatsapp, you’ll get much less than that with anything else.
I occasionally dream of having a better “community” in my suburb but basically, I just have zero available effort to invest in that. Like I’m not working today and looking forward to spending the afternoon in my pyjamas fiddling around at home. If I feel super motivated and energetic later I might take the kids somewhere. If there were a community thing scheduled I just… wouldn’t feel like going.
I think the best form of community I can manage is simply having a few people's numbers in my phone and telling them when something happens: "Hey Barb, just letting you know the neighbour's car got broken into last night, how's things down your end?"


Containers have layers. So if you create an instance of a Syncthing container, whoever built that container would have started from some other image. Alpine Linux is a very popular base layer, just used as an example in this discussion.
When you download an image, all the layers underlying the application that you actually wanted will only be as fresh as the last time the maintainer built that image. So if there were a bug in the Alpine base, it might have been fixed in Alpine, but wouldn't be pushed through to whatever you downloaded.
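One hedged way around stale base layers, assuming the maintainer publishes their Dockerfile, is to build the image yourself with compose and force a fresh pull of the base at build time:

```yaml
services:
    syncthing:
        build:
            context: .    # directory containing the project's Dockerfile
            pull: true    # re-pull the base image (e.g. alpine) instead of using a cached copy
```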


I didn’t realise this was a problem.
I’m not too worried about it though.
Each container has such a small attack surface. As in, my reverse proxy Traefik exposes ports 80 and 443, and all the others only expose their APIs or web servers to Traefik.
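A sketch of that pattern, with hypothetical hostnames and router names (not my actual config):

```yaml
services:
    traefik:
        image: traefik
        ports:
            - "80:80"
            - "443:443"   # the only host-exposed ports
    syncthing:
        image: syncthing/syncthing
        # no ports: section - reachable only over the shared docker network
        labels:
            - "traefik.http.routers.sync.rule=Host(`sync.example.com`)"
```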


I’m also in this category, but OP is talking about something else.
Like if you use container-x, which has an alpine base. If it hasn’t released a new version in several years then you’re using a several year old alpine distro.
I didn’t really realise this was a thing.
This is me.
For example, /srv/docker/syncthing contains:
compose.yml, .env, and ./Sync
That last one is a directory bound into the container which contains all my sync folders.
Occasionally it makes more sense to put the mounted folder directly in /srv, e.g. /srv/photos is mounted by /srv/docker/photoprism/compose.yml.
However, that's a rarity. Things mostly accessed by a single compose stack are kept alongside the other files for that stack.
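The layout above implies a compose file along these lines. The /var/syncthing container path matches the official image's docs, but check whichever image you actually use:

```yaml
# /srv/docker/syncthing/compose.yml (sketch)
services:
    syncthing:
        image: syncthing/syncthing
        volumes:
            - ./Sync:/var/syncthing   # relative bind mount kept alongside the stack
```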


Yep.
“I manage my server in yaml. Sometimes yml.”
What do you enjoy doing online?
My recommendation would be to start small, without having to trust yourself with your own data, at least not in the short term.
Maybe try your own instance of Photon; it's a frontend for Lemmy.
Sorry what integrations?
This is what their docs say. Not sure what you mean about different file types, but this seems fairly agnostic?
I actually didn't realise that first point, as in, that you can move folders and the chunks will still be deduplicated.