Any guides on how to host at home? I’m always afraid that opening ports in my home router means taking a serious risk of being hacked. Does using something like Cloudflare help? I am a complete beginner.
Edit: Thanks for all the great responses! They are very helpful.
There are ways to host things from home without opening ports in your router at all. This usually involves running something that calls/tunnels out of your network to some service; that service accepts incoming connections and sends them backward over the outbound connection. Cloudflare offers something called Tunnel, ngrok does something similar (though it is mainly aimed at development rather than production hosting), and you can even host something yourself using something like frp (which is what I use, even for the Lemmy instance I am writing this from).
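To make the frp option concrete, the basic shape is a tiny `frps` on any box with a public IP and an `frpc` at home that dials out to it. A minimal sketch (addresses and ports are placeholders; exact option names may vary by frp version):

```ini
# frps.ini - on the public server (a cheap VPS, for example)
[common]
bind_port = 7000

# frpc.ini - on the machine at home; it connects OUT to the frps,
# so nothing needs to be opened on the home router
[common]
server_addr = 203.0.113.10
server_port = 7000

# expose a web server running at home on the public server's port 8080
[web]
type = tcp
local_ip = 127.0.0.1
local_port = 80
remote_port = 8080
```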
I haven’t looked too closely at it, but there is an awesome-tunneling page someone put together that goes over these options and more.
Let me know if you want a bit more detail on these options or specifics of how I’ve set up frp.
Not OP and I don’t have any specific questions right now, but if you’ve got any general advice on frp you want to share, I’d be happy to read it :-)
Very basically, I set up a `frps` on the public server, and one `frpc` per service I want to have publicly accessible - where the `frpc.ini` defines all the specifics, so I don’t have to touch the server regularly. Is that correct so far?

Edit: After typing it out, I came up with a question - what happens when the `frpc`s go offline? Will the server return an error, fall back, or just deny the connection?

Yeah, you’re basically on the right track. I do a couple of things in a possibly interesting way that you may find useful:
- I run `frps`s on different servers. Haven’t gotten around to setting up a LB in front or automatically removing them from DNS, but doing that sort of thing is the eventual plan. This means running as many `frpc`s as I have `frps`s. I also haven’t gotten to the point of figuring out what to do if e.g. one service exposed via `frps` is healthy but another is not. It may make sense to run HAProxy in front of it or something… sounds terrible…
- My `frpc.ini`s define all of the connection details for a particular `frps`, then use `includes = /conf.d/*.ini` to load up whatever services that `frpc` exposes (rough sketch of that layout after this list).
- `frpc` runs in docker, with volumes to manage e.g. putting the right `frpc.ini` and `/conf.d/<service>.ini` files in there (see the compose sketch after this list).
- `frpc` and `frps` use certificates for client authentication.
- The `frpc`s (one container per `frps`; I’m considering ways to combine them to make it less annoying to deploy) run right alongside the service I am exposing remotely, so I run e.g. one for Traefik, one for ~~gogs~~ ~~gitea~~ forgejo ssh, etc. If you are using docker-compose I would put one (set of) `frpc` in that compose file to expose whatever services it has. Similar thought for k8s: I would do sidecar containers as part of your podspec, with `frpc`s per deployment of that service.
- `proxy_protocol_version = v2` to easily preserve the incoming IP address. Traefik supports this natively, and it is the most important service to me as most of what I run connects over HTTP(s). There is also `plugin = https2http`, but I like my setup better.
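To give a rough idea of what that split looks like on disk, here is a simplified sketch rather than my literal files (hostnames, cert paths, and the service name are made up; double-check the `tls_*` and `includes` options against the frp docs for your version):

```ini
# frpc.server1.ini - connection details for ONE frps (one of these per frps)
[common]
server_addr = frps1.example.com
server_port = 7000
# mutual TLS: frpc presents a client certificate; the frps side needs a
# matching tls_trusted_ca_file (and e.g. tls_only = true) to require it
tls_enable = true
tls_cert_file = /certs/client.crt
tls_key_file = /certs/client.key
tls_trusted_ca_file = /certs/ca.crt
# pull in one small file per exposed service
includes = /conf.d/*.ini

# /conf.d/traefik.ini - one of the per-service files loaded via includes
[traefik-https]
type = tcp
# name/address of the local service this frpc sits next to
local_ip = traefik
local_port = 443
remote_port = 443
# hand the real client IP to Traefik via the PROXY protocol
proxy_protocol_version = v2
```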
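And a rough docker-compose sketch of the sidecar idea (image, paths, and service names are placeholders; many frpc images read `/etc/frp/frpc.ini` by default, but check whichever image or build you actually use):

```yaml
services:
  traefik:                      # the service being exposed (stand-in for whatever you run)
    image: traefik:v2.10
  frpc-server1:                 # one frpc per frps; add frpc-server2, ... as needed
    image: snowdreamtech/frpc   # community frpc image, used purely as an example
    restart: unless-stopped
    volumes:
      - ./frpc.server1.ini:/etc/frp/frpc.ini:ro   # [common] details for that frps
      - ./conf.d:/conf.d:ro                       # per-service *.ini picked up via includes
    depends_on:
      - traefik
```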
As to your question of “what happens when the `frpc`s go offline?”, it depends on the service type. I only use services of `type = tcp` and `type = udp`, so I can’t speak to anything beyond that with experience.

In the case of `type = tcp`, on your `frps` you can run multiple `frpc`s and the `frps` will load-balance to them, meaning if you run multiple you should get some level of HA, because if one connection breaks it should just use the other, killing any still-open connections to the failed `frpc`. Same thought there as how e.g. `cloudflared` using their Tunnels feature makes two connections to two of their datacenters. If there is nothing to handle a particular TCP service on an `frps`, I think the connection gets refused; it may even stop listening on the port, but I’m not sure of that.
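As far as I understand it, that load-balancing corresponds to frp’s proxy group feature for `type = tcp`; roughly (names made up):

```ini
# in EACH frpc's /conf.d/web.ini - the same group/group_key on every frpc
# that should share this port; the frps then spreads new connections across
# whichever members of the group are currently connected
[web]
type = tcp
local_ip = 127.0.0.1
local_port = 80
remote_port = 8080
group = web
group_key = use-a-long-random-shared-secret
```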
Sadly, in the case of `type = udp` the `frps` will only accept one `frpc` connection. I still run multiple `frpc`s, but those particular connections just fail and keep retrying until the “active” `frpc` for that udp service dies. I believe this means that if there is nothing to handle a particular UDP service on an `frps`, it just drops the packets, since there isn’t really a “connection” to kill/refuse/reset; the same thing about stopping listening may apply here as well, but I am also unsure in this case.
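For reference, a UDP proxy definition looks just like a TCP one (WireGuard here is only a stand-in for some UDP service); the difference is just that only one connected `frpc` will actually receive traffic for it:

```ini
# /conf.d/wireguard.ini - hypothetical UDP service exposed via frp
[wireguard]
type = udp
local_ip = 127.0.0.1
local_port = 51820
remote_port = 51820
```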
My wishlist for frp is, in no particular order:

- `frpc` making multiple connections to a server
- `frpc` being able to connect to multiple servers
- `frps` ability to use a custom ALPN protocol for frp traffic (so I can run client traffic and frp traffic on the same port)
- `frps` support for load-balancing to multiple UDP `frpc`s via some sort of session tracking, triple-based routing, or something else
- `frps` support for clustering or something, so even if one `frps` didn’t have a way to route a service it could talk to another `frps` nearby that did

Overall I am pretty happy with frp, but it seems like it is trying to solve too much (e.g. “secret” services, p2p, HTTP, TCP multiplexing). I would love to see something more focused on purely TCP/UDP (and maybe TLS/QUIC) edge ingress emerge and solve a narrower problem space even better. Maybe there is some sort of complex network-level solution with a VPN and routing daemons (BGP?) and firewall/NAT stuff that could do this better, but I really just want a tiny executable and/or container I can run on both ends and have things “just work”.
<pipedream> I want the technology to build something similar to Cloudflare incredibly simply, and for more providers to be able to offer something similar. Maybe there is something that could be built to solve this if they open source Oxy. </pipedream>