

The application doesn’t seem 100% likely to stay active in the background. I tried to program one myself, but there’s a lot of bullshit going on with background apps in Android that I’m not familiar enough with to trust that I can do any better.
Any suggestions for services that do that? I like the idea; I’d actually get a few different phones to ring if some of the alarms got triggered.
This is my summer solar system. I like to winter in the Antares.
There is a guide in the Mailcow docs on integrating Roundcube; that’s the client I use for my stack.
I’m not sold on GL.iNet’s implementation of OpenWrt. I have 3 of them in production, and all three need regular reboots to stay working. I like the VPN interface they have and the ability to get to the underlying LuCI interface, but I’ve found that just flashing my own device gives a more stable and deterministic result.
I can’t speak to VLANs specifically, because after seeing the rest of it, I haven’t trusted these devices enough to use them anywhere critical enough to need VLANs.
I’m trying to figure out how to add it to Mailcow Dockerized and hook in the existing containers. If I sort it out, I’ll probably PR it to Mailcow. I think it would be a nice addition to start to build out a network that isn’t susceptible to the same spam attacks as regular email (yet).
I remember as a kid I set up one of the first private Echomail nodes as part of my RBBS bulletin board. UUCP was a big part of that, as I was the hop for other nodes coming onboard in my area. I added another half-dozen modems eventually just to handle the email traffic, then had to offload it to a university because I didn’t want to have to charge for the traffic and it was getting too big to handle. But it was pretty interesting at the time.
I mustn’t be communicating well, but that’s fine.
OK, yah, that’s what I was getting at.
I was getting more at stacks on a host talking to each other, i.e., you have a Postgres stack with PG and pgAdmin, but want to use it with other stacks or a k8s/swarm setup without exposing the PG port outside the machine. You’re keeping other containers from interacting except on the allowed ports, and keeping those ports from being available off the host.
I assume #2 is just to keep containers/stacks able to talk to each other without piercing the firewall for ports that aren’t to be exposed to the outside? It wouldn’t prevent anything if one of the containers on that host were compromised, afaik.
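For what it’s worth, a sketch of the pattern I mean, using a shared external Docker network so stacks can talk without publishing ports (stack and network names here are made up):

```yaml
# docker network create backend   # run once, then both stacks join it
# --- Postgres stack (hypothetical) ---
services:
  pg:
    image: postgres:16
    networks: [backend]   # no `ports:` section, so 5432 never leaves the host
  pgadmin:
    image: dpage/pgadmin4
    networks: [backend]
networks:
  backend:
    external: true
# --- any other stack that needs the DB (separate compose file) ---
# services:
#   app:
#     image: myapp:latest      # hypothetical image
#     networks: [backend]      # reaches the DB at pg:5432 by service name
# networks:
#   backend:
#     external: true
```

And yes, as noted, this only controls reachability between containers; it doesn’t help once a container on the host is already compromised.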
“All your containers are belong to us.”
You might consider using something like Cloudflared or Tailscale Funnel to proxy the connections through, to prevent DDoSing and to apply ACLs. You can still use your domains with those.
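For the Cloudflared route, a minimal ingress config looks roughly like this (tunnel ID and hostname are placeholders; the real values come from `cloudflared tunnel create`):

```yaml
# ~/.cloudflared/config.yml
tunnel: <tunnel-id>
credentials-file: /root/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: app.example.com        # your domain, proxied via Cloudflare
    service: http://localhost:8080   # the self-hosted service on this box
  - service: http_status:404         # catch-all rule is required
```

Nothing gets port-forwarded on your router; the tunnel dials out to Cloudflare, which fronts the traffic.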
Been using this for years, runs like a top.
A modern sensor would be a mass-airflow sensor, but a sail switch would also work: you can adjust how much surface area the airflow hits, so it alerts you if the airflow isn’t strong enough to activate the switch while the furnace is running.
OK, yah, that’s a good point about swarms. I’ve generally not used any swarmed filesystem stuff where I needed persistence, just shared databases, so it hasn’t come up.
Well, I know you can define volumes for other filesystem drivers, but with bind mounts you don’t need to define the volume like that; you can just specify the path directly in the container’s volumes and it will bind mount it. I was just wondering if there was any actual benefit to defining the volume manually over the simple way.
Is there any advantage to bind mounting that way? I’ve only ever done it by specifying the path directly in the container, usually ./data:/data or some such. Never had a problem with it.
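Side by side, the two forms look like this in a compose file (paths and names are just examples); the long form mainly buys you non-default volume drivers/options, which is where swarm and remote filesystems come in:

```yaml
services:
  app:
    image: nginx:alpine
    volumes:
      - ./data:/data            # short syntax: relative host path becomes a bind mount
      - appdata:/var/lib/app    # named volume, defined below

volumes:
  appdata:
    driver: local
    driver_opts:                # the long way to make a named volume a bind mount
      type: none
      o: bind
      device: /srv/app/data     # must be an absolute path, and must already exist
```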
I can get notifications to ntfy, but I’m not sure the app is certain to blow my phone up until I notice it, which is my goal. Frankly, if I could trigger the Presidential Alert, I would do that.
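On the “blow my phone up” part: ntfy does let the publisher set a priority per message via the Priority header, and max/urgent makes the Android app use its loudest, most insistent alert (it can also override Do Not Disturb if you allow it in the app settings). A sketch, with a made-up topic name and a dry-run guard so you can test without sending:

```shell
#!/bin/sh
# Minimal ntfy alert helper (topic name is hypothetical).
# "Priority: max" tells the ntfy app to treat this as an urgent alert.
alert() {
  msg="$1"
  url="https://ntfy.sh/freezer-alarm"
  if [ "${NTFY_DRY_RUN:-0}" = "1" ]; then
    # Dry run: show what would be sent instead of hitting the network.
    echo "would POST '${msg}' to ${url} with Priority: max"
  else
    curl -s -H "Priority: max" -H "Tags: rotating_light" -d "${msg}" "${url}"
  fi
}

NTFY_DRY_RUN=1
alert "Freezer over temperature"
```

Whether the phone actually screams still depends on the app’s notification/DND settings, so it’s worth testing a max-priority message end to end.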