

I don’t think that’s affected. It sounds more like political propaganda, which gets allowed. I bet this will still be censored and lead to demonetization just as it is today.
A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.
I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things as well.
I’d just set up the reverse proxy on the VPS and make it forward everything via IPv6. But you could also use a tunnel/VPN, everything from Tailscale to Wireguard or even an SSH tunnel would work. And there are dedicated services like Cloudflare, nohost, neutrinet, pagekite…
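The SSH-tunnel option is the quickest to sketch. A minimal dry-run example (hostname, user and ports here are made-up placeholders, not from this thread):

```shell
# Dry-run sketch of an SSH reverse tunnel: port 8080 on the VPS loopback
# would forward back to a service running at home on port 8080.
# All names and ports are placeholder assumptions.
vps="user@vps.example.com"
remote_port=8080
local_port=8080
# -N: don't run a remote command, just keep the tunnel open
# -R: reverse-forward remote_port on the VPS to local_port here
cmd="ssh -N -R 127.0.0.1:${remote_port}:127.0.0.1:${local_port} ${vps}"
echo "$cmd"   # printed instead of executed, since the VPS is fictional
```

The reverse proxy on the VPS would then point at 127.0.0.1:8080 as if the service ran locally.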
You could run multiple mail servers. Or download from Sharehosters in parallel. Or download more Youtube videos before the rate limit stops you. Or use virtualization or containers to launch some more virtualized servers.
Sure, I have an old PC with an energy efficient mainboard and a PicoPSU and I wouldn’t want anything else. I believe it does somewhere around 20W-25W though. And I have lots of RAM, a decent (old) CPU and enough SATA ports… Well, I would go for a newer PC, they get more energy efficient all the time… But it’s a lot of effort to pick the components unless some PC magazine writes something or someone has a blog with recommendations.
You’ll want to look up the QNAP as well. I’ve seen reports with quite some variety in power consumption. Depending on the exact model, it could be somewhere in the range from 25W to 55W… So could be less, could be the same. And have a look at the amount of RAM if you want to run services on it.
I think Radicale, Baikal, SabreDAV or NextCloud are the most common choices. I read those names a lot.
But I believe only one of those isn’t written in PHP.
I’d really recommend digging into the “hacking” though. Unless you learn from your specific mistakes and avoid them in the future, you might run into the exact same issue again. And I mean it could be a security flaw in the program code of the WebDAV server. But it could just as well be a few dozen other reasons why your server wasn’t secure… (Missing updates, insecure passwords, missing fail2ban, a webserver or reverse proxy, unrelated other software… There are a lot of moving gears in a webserver and lots of things to consider.)
I think if you use a SIP provider, they’ll have an app or a description on their website of how to connect with third-party software. Just install it on a device you take with you, and configure it as per their description. Examples of Android SIP softphones are Linphone and Baresip.
Other options: you have an AVM Fritzbox at home and install their app. Or you set up an entire PBX like Asterisk or FreePBX or one of the other ones. That’s rather complex and involved, though.
Nah, I don’t think there’s a lot on IPv6 in that book. I think OP’s concern is valid. Accessing devices at home isn’t unheard of. The amount of smart home stuff, appliances and consumer products increases every day. And we all gladly pay our ISPs to connect us and our devices to the internet. They could as well do a good job while at it. I mean, if it costs extra to manage a static prefix, so be it. But oftentimes they really make it hard to even give them money and obtain that “additional” service.
I wonder how often the assigned prefix changes with most of the regular ISPs. I’d have to look at someone else’s router since I’m still stuck on an old contract. But I believe what I saw with some of the regular consumer contracts: the prefixes stay the same for a long time. You could just slap a free DynDNS service on top and be done with it.
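To sketch that: the only moving part a DynDNS updater needs is the machine’s current global IPv6 address, which you can read locally (interface naming and the provider’s update API vary, so this is just the address-lookup half):

```shell
# Print the first global-scope IPv6 address of this machine; this is the
# value a DynDNS client would push to the provider when the prefix changes.
addr=$(ip -6 addr show scope global 2>/dev/null \
  | awk '/inet6/ {sub(/\/.*/, "", $2); print $2; exit}')
echo "current global IPv6: ${addr:-none found}"
```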
But yes, I think this used to be the promise… We’d all get IPv6 and a lot of gadgets like NAS systems, video cameras and a wifi kettle and they’d be accessible from outside. Instead of that we use big capitalist cloud services and all the data from the internet of things devices has some stopover in the China cloud.
It is misrepresenting the facts quite a bit. I think microwave links might be able to do a bit more bandwidth. And laser can do way more than ChatGPT attributes to it; it can do 1 or 2.5 Gbps as well. The main thing about optics is that it comes without electromagnetic interference. And you don’t need a Fresnel zone free of obstacles, and you don’t need a license. The other things about laser being more susceptible to weather, etc., should be about right. (And I don’t know a lot about cost and alignment, so I don’t really know if that’s accurate and substantially more effort for lasers. They sure both cost some money and you have to point both at the receiver.)
Sure. I think we’re talking a bit about different things here. I didn’t want to copy it, just know how it’s done 😆 But yeah, you’re right. And what you said has another benefit. If they want to protect it by law, we have a process for that: patents. And those require publishing how it’s done…
Nah, all it takes is one person buying it, disassembling it and looking at the mechanics to see whether there are things like motors and mirrors inside the transmitter to do new things like align it dynamically. And I mean the other things, physics, the atmosphere, lenses and near-infrared lasers along with signal processing, are well-understood. I think it won’t be a big secret once it turns into a real thing… I mean as long as it’s hype only, it might be.
I wonder what they did, though. Because the article omits most of the interesting details and frames it as if optical communication in itself were something new or disruptive… I mean if I read the Wikipedia article on long-range optical wireless communication, it seems a bunch of companies have already invested three-digit million sums into solving this exact issue…
Of course. These all are different issues. Encrypted messaging has nothing to do with handing out my phone number to everyone.
I can’t remember why I skipped SimpleX. I tried it some time ago; maybe it sucked too much battery on my old phone… Should I have another look at it? Or rather, is it any good for someone like me who already uses a Matrix messenger? I mean not theoretically, but for every-day use.
Yes, this. And with WhatsApp or a dedicated app, they’re either directly on your phone or have your (personal) phone number. Which isn’t great. With email you can just have another spam address. And that’s more complicated with phone numbers, and most people don’t have a second one dedicated to spam and advertisements…
Maybe have a look at https://nginxproxymanager.com/ as well. I don’t know how difficult it is to install since I never used it, but I heard it has a relatively straightforward graphical interface.
Configuring good old plain nginx isn’t super complicated. It depends a bit on your specific setup, though. Generally, you’d put config files into /etc/nginx/sites-available/servicexyz (or put it into the default site config):
server {
    listen 80;
    server_name jellyfin.yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name jellyfin.yourdomain.com;

    ssl_certificate /etc/ssl/certs/your_ssl_certificate.crt;
    ssl_certificate_key /etc/ssl/private/your_private_key.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    location / {
        proxy_pass http://127.0.0.1:8096/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    access_log /var/log/nginx/jellyfin.yourdomain_access.log;
    error_log /var/log/nginx/jellyfin.yourdomain_error.log;
}
It’s a bit tricky to search for tutorials these days… I got that from: https://linuxconfig.org/setting-up-nginx-reverse-proxy-server-on-debian-linux
Nginx would then take all requests addressed to jellyfin.yourdomain.com and forward them to your Jellyfin, which hopefully runs on port 8096. You’d use a similar file like this for each service; just adapt the internal port and domain.
You can also have all of this on a single domain (and not sub-domains). That’d be the difference between “jellyfin.yourdomain.com” and “yourdomain.com/jellyfin”. That’s accomplished with one file containing a single “server” block, but with several “location” blocks within, like location /jellyfin
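A sketch of that path-based variant (the second service and both ports are made-up placeholders; note that the service itself usually also needs its base URL set to the sub-path, and the ssl_certificate lines would go here like in the example above):

```nginx
server {
    listen 443 ssl;
    server_name yourdomain.com;

    # one "location" block per service instead of one server per subdomain
    location /jellyfin/ {
        proxy_pass http://127.0.0.1:8096/jellyfin/;
        proxy_set_header Host $host;
    }

    location /anotherservice/ {
        proxy_pass http://127.0.0.1:8097/;
        proxy_set_header Host $host;
    }
}
```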
Alright, now that I wrote it down, it certainly requires some knowledge. If that’s too much and all the other people here recommend Caddy, maybe have a look at that as well. It seems to be packaged in Debian, too.
Edit: Oh yes, and you probably want to set up Letsencrypt so you connect securely to your services. The reverse proxy would be responsible for encryption.
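With plain nginx that’s usually a one-liner using certbot’s nginx plugin. A sketch (the domain is a placeholder, and the command is only printed here since issuing a real certificate needs root and a publicly reachable domain):

```shell
# Sketch: certbot's nginx plugin fetches a Let's Encrypt certificate,
# edits the matching server block and sets up automatic renewal.
domain="jellyfin.yourdomain.com"   # placeholder domain
cmd="certbot --nginx -d ${domain}"
echo "$cmd"   # dry run: print instead of execute
```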
Edit2: And many projects have descriptions in their documentation. Jellyfin has documentation on some major reverse proxies: https://jellyfin.org/docs/general/post-install/networking/advanced/nginx
You’d install one reverse proxy only and make that forward to the individual services. Popular choices include nginx, Caddy and Traefik. I always try to rely on packages from the repository. They’re maintained by your distribution and tied into your system. You might want to take a different approach if you use containers, though. I mean if you run everything in Docker, you might want to do the reverse proxy in Docker as well.
That one reverse proxy would get port 443 and 80. All services like Jellyfin, Immich… get random higher ports and your reverse proxy internally connects (and forwards) to those random ports. That’s the point of a reverse proxy, to make multiple distinct services available via just one and the same port.
Right. Do your testing. Nothing here is black and white only. And everyone has different requirements, and it’s also hard to get your own requirements right.
Plus they even change over time. I’ve used Debian before with all the services configured myself, moved to YunoHost, to Docker containers, to NixOS, partially back to YunoHost over time… It all depends on what you’re trying to accomplish, how much time you’ve got to spare, what level of customizability you need… It’s all there for a reason. And there isn’t a perfect solution. At least in my opinion.
I think Alpine has a release cycle of 6 months. So it should be a better option if you want software from 6 months ago packaged and available. Debian does something like 2 years(?), so naturally it might have very old versions of software. On the flip side, you don’t need to put in a lot of effort for 2 years.
I don’t think there is such a thing as a “standard” when it comes to Linux software. I mean Podman is developed by Red Hat. And Red Hat also does Fedora. But we’re not Apple here with a tight ecosystem. It’s likely going to run on a plethora of other Linux distros as well. And it’s not going to run better or worse just because of the company who made it…
Maybe LocalAI? It doesn’t do python code execution, but pretty much all of the rest.