  • I’ve never used Rust or Zig, but for Go: (disclaimer: this is all from memory, so there may be inaccuracies or out-of-date information here)

    Go does not allow circular references between modules. That restriction lets the compiler, when compiling a module, put into the resulting object file not only the compiled machine code but also the information that in C would have to live in a header file (type definitions, function signatures, and even complete function bodies if they’re considered candidates for inlining, etc.). When compiling a module that imports others, the compiler reads that information back out of those object files. Essentially, a compiled Go library has its auto-generated “header file” baked in.
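
    As a toy illustration (the module path example.com/demo and the package names are made up): once package a below is compiled, its object file already carries everything an importer needs, and importing in the other direction is rejected outright.

```go
// a/a.go
package a

// The exported signature (and, since the body is tiny, the body itself
// as an inlining candidate) ends up in a's compiled package file, so
// importers never need a separate header.
func Double(x int) int { return 2 * x }
```

```go
// b/b.go
package b

// Compiling this package only needs a's compiled package file, not a's
// source code or a hand-written header.
import "example.com/demo/a"

func Quadruple(x int) int { return a.Double(a.Double(x)) }

// Adding `import "example.com/demo/b"` back into package a would fail
// to compile with: import cycle not allowed
```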

    In older versions this was actually human-readable: an early part of the object file would essentially look like trimmed-down Go when opened in a text editor. IIRC they switched to a binary serialization format for this some years back, but AFAIK it still works essentially the same way.

    I guess, comparing to C or C++, you could think of this as automatically generating pre-compiled headers for every module, except the headers themselves are also auto-generated (as you alluded to in your post).

    If by “shared library” you mean a dynamically linked one: IIRC Go does allow shared libraries to be used, but by default all Go code is linked statically (though libraries written in other languages may be dynamically linked by default, if you import a module that requires it).
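
    A quick way to see the static-by-default behaviour (minimal sketch; the C snippet is just filler): a pure-Go main package normally builds to a statically linked binary on Linux, whereas the moment cgo is involved, as below, the default result is dynamically linked against the system C library.

```go
package main

/*
#include <stdio.h>

static void greet(void) { printf("hello from C\n"); }
*/
import "C"

import "fmt"

func main() {
	// Calling into C via cgo means the resulting binary is, by default,
	// dynamically linked against libc. Drop the C parts and the same
	// program builds as a single statically linked binary (on Linux).
	C.greet()
	fmt.Println("hello from Go")
}
```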




  • Any chance you’ve defined the new networks as “internal”? (using docker network create --internal on the CLI or internal: true in your docker-compose.yaml).

    Because the symptoms you’re describing (no connectivity to anything outside the new network, including the wider Internet) sound exactly like you set that option without realizing what it does…
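
    (Purely as an aside, the same option exists programmatically; here’s a sketch using the Docker Engine Go SDK. The network name is made up, and the options type has been renamed in newer SDK releases, so this assumes the older types.NetworkCreate shape from around v24.)

```go
// Sketch: creating an internal network via the Docker Engine Go SDK.
// Assumes an SDK version (~v24) where the options type is types.NetworkCreate.
package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Internal: true is the SDK equivalent of `docker network create --internal`
	// (or `internal: true` in compose): containers on the network can reach
	// each other, but the network gets no gateway to the outside world.
	resp, err := cli.NetworkCreate(context.Background(), "my-internal-net", types.NetworkCreate{
		Internal: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created network %s", resp.ID)
}
```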


  • It also means that ALL traffic coming in on a specific port of that VPS can only go to exactly ONE private WireGuard peer. You could avoid both of these issues by having the reverse proxy on the VPS (which is why Cloudflare works the way it does), but I prefer my HTTPS endpoint to be on my own trusted hardware.

    For TLS-based protocols like HTTPS you can run a reverse proxy on the VPS that only looks at the SNI (Server Name Indication), which does not require the private key to be present on the VPS. That way you can run all your HTTPS endpoints on the same port without issue, even if the backend server depends on the host name.

    This StackOverflow thread shows how to set that up for a few different reverse proxies.
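
    To make the idea concrete, here’s a rough Go sketch of that kind of SNI-based routing (in practice you’d just use one of the proxies from that thread). The hostnames and backend addresses are made up; the TLS session itself stays end-to-end between the client and your own hardware.

```go
// Rough sketch of SNI-based routing without terminating TLS: peek at the
// ClientHello, pick a backend by server name, then splice the raw bytes
// through. Hostnames and backend addresses below are made up.
package main

import (
	"bytes"
	"crypto/tls"
	"errors"
	"io"
	"log"
	"net"
	"time"
)

// Hypothetical mapping of SNI hostnames to WireGuard-internal backends.
var backends = map[string]string{
	"service-a.example.com": "10.0.0.2:443",
	"service-b.example.com": "10.0.0.3:443",
}

func main() {
	ln, err := net.Listen("tcp", ":443")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go handle(conn)
	}
}

func handle(client net.Conn) {
	defer client.Close()

	// Everything read while peeking is captured so it can be replayed.
	var peeked bytes.Buffer
	sni, err := peekSNI(io.TeeReader(client, &peeked))
	if err != nil {
		log.Printf("could not read ClientHello: %v", err)
		return
	}

	addr, ok := backends[sni]
	if !ok {
		log.Printf("no backend for %q", sni)
		return
	}

	backend, err := net.Dial("tcp", addr)
	if err != nil {
		log.Printf("dial %s: %v", addr, err)
		return
	}
	defer backend.Close()

	// Replay the ClientHello to the backend, then shuttle bytes both ways.
	// The TLS handshake and keys stay between the client and the backend.
	go io.Copy(backend, io.MultiReader(&peeked, client))
	io.Copy(client, backend)
}

// peekSNI lets crypto/tls parse the ClientHello for us: run a server
// handshake against a read-only fake connection and grab the ServerName
// in the GetConfigForClient callback. The handshake itself is expected
// to fail; only the callback matters.
func peekSNI(r io.Reader) (string, error) {
	var sni string
	_ = tls.Server(readOnlyConn{r: r}, &tls.Config{
		GetConfigForClient: func(hello *tls.ClientHelloInfo) (*tls.Config, error) {
			sni = hello.ServerName
			return nil, nil
		},
	}).Handshake()
	if sni == "" {
		return "", errors.New("no SNI in ClientHello")
	}
	return sni, nil
}

// readOnlyConn satisfies net.Conn but refuses writes, so crypto/tls never
// sends handshake responses to the real client.
type readOnlyConn struct{ r io.Reader }

func (c readOnlyConn) Read(p []byte) (int, error)         { return c.r.Read(p) }
func (c readOnlyConn) Write(p []byte) (int, error)        { return 0, io.ErrClosedPipe }
func (c readOnlyConn) Close() error                       { return nil }
func (c readOnlyConn) LocalAddr() net.Addr                { return nil }
func (c readOnlyConn) RemoteAddr() net.Addr               { return nil }
func (c readOnlyConn) SetDeadline(t time.Time) error      { return nil }
func (c readOnlyConn) SetReadDeadline(t time.Time) error  { return nil }
func (c readOnlyConn) SetWriteDeadline(t time.Time) error { return nil }
```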







  • I have a similar setup.

    Getting the DNS to return the right addresses is easy enough: you just set your records for the subdomain * instead of for a specific subdomain, and then any subdomain that’s not explicitly configured will default to using the records for *.

    Assuming you want to use Let’s Encrypt (or another ACME CA), you’ll probably want to make sure you use an ACME client that supports your DNS provider’s API (or switch DNS providers to one whose API your client supports), since wildcard certificates can only be issued via the DNS-01 challenge. That way you can get wildcard TLS certificates (so individual subdomains won’t leak via Certificate Transparency logs). Configure your ACME client to use the Let’s Encrypt staging server until you see it issue a wildcard certificate for your domains.

    Some other stuff you’ll probably want:

    • A reverse proxy to handle requests for those subdomains. I use Caddy, but basically any reverse proxy will do. The reason I like Caddy is that it has a built-in ACME client as well as a bunch of plugins for DNS providers including my preferred one. It’s a bit tricky to set this up with wildcard certificates (by default it likes to request individual subdomain certificates), but I got it working and it’s been running very smoothly since.
    • To put a login screen in front of each service, I’ve configured Caddy to only let visitors through to the real pages (or to the error page, for unconfigured domains) if Authelia agrees; a rough sketch of that idea follows below.
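
    That second point is usually called “forward auth”, and the underlying idea is simple: for every incoming request the proxy first asks the auth server whether the visitor is logged in, and only forwards the request to the real service if it gets a 200 back. Below is a minimal Go sketch of that idea (not Caddy’s or Authelia’s actual implementation; the verify URL and backend address are placeholders, and TLS termination is left out).

```go
// Minimal sketch of the forward-auth pattern: consult an auth server
// about every request before proxying it. URLs are placeholders.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

const authVerifyURL = "http://authelia:9091/api/verify" // placeholder

func main() {
	backend, err := url.Parse("http://my-service:8080") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Ask the auth server about this request, passing along enough
		// context (original URL, session cookies) for it to decide.
		req, err := http.NewRequest(http.MethodGet, authVerifyURL, nil)
		if err != nil {
			http.Error(w, "internal error", http.StatusInternalServerError)
			return
		}
		req.Header.Set("X-Original-URL", "https://"+r.Host+r.URL.RequestURI())
		for _, c := range r.Cookies() {
			req.AddCookie(c)
		}

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			http.Error(w, "auth server unreachable", http.StatusBadGateway)
			return
		}
		resp.Body.Close()

		if resp.StatusCode != http.StatusOK {
			// Not logged in: send the visitor to the login screen.
			http.Redirect(w, r, "https://auth.example.com/", http.StatusFound)
			return
		}

		// The auth server said yes: hand the request to the real service.
		proxy.ServeHTTP(w, r)
	})

	// TLS termination omitted; the real proxy listens on 443 with the
	// wildcard certificate discussed above.
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```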



  • There’s a bit more to it than is captured in the summary, which is why it’s just a summary of the spec and not the actual spec.

    From a bit further down on that page:

    1. Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

    Lemmy is still in major version zero, so it can make breaking changes without incrementing the major version and still be in compliance with the spec. This way, projects won’t have their first “real” version be something like v123.0.0.

    Lemmy still being v0.x also serves as kind of a warning to app developers that changes like this may be made at any time.