• 0 Posts
  • 43 Comments
Joined 1 year ago
Cake day: June 11th, 2023




  • The way I have my monitoring set up is to poll the containers directly, from behind the proxy layer. For example, if I’m trying to poll Portainer:

        ---
        services:
          portainer:
            ...
    

    with the service name portainer.

    From uptime-kuma within the same Docker network, you would point the monitor at that service name (and the container’s port) rather than at the public hostname behind the proxy.

    I can confirm this works for monitoring that the service itself is reachable. It doesn’t, however, tell you that you can reach it from your computer, since that also depends on your reverse proxy being up and configured correctly, but that’s what I wanted in my case.

    Edit: If you want to poll the HTTP endpoint, prepend the scheme, like http://whatever_service:whatever_port
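
    A minimal sketch of what I mean, with both containers on the same compose network so uptime-kuma can resolve the service name directly (the layout and the Portainer port are placeholders for however your stack is actually defined):

        ---
        services:
          portainer:
            image: portainer/portainer-ce
            # ...rest of your portainer config
          uptime-kuma:
            image: louislam/uptime-kuma
            # ...rest of your uptime-kuma config

    In the uptime-kuma UI you would then add an HTTP(s) monitor pointing at http://portainer:9000 (or whatever port your Portainer container actually listens on).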


  • “I believe the Pictrs is a hard dependency and Lemmy just won’t work without it, and there is no way to disable the caching”

    I’ll have to double-check this, but I’m almost certain pictrs isn’t a hard dependency. I saw either the author or one of the contributors mention a few days ago that pictrs can be dropped by editing config.hjson to remove the pictrs block (roughly the block sketched below). I was playing around with deploying a test instance a few days ago and found that to be true, at least prior to finalizing the server setup: I didn’t spin up the pictrs container at all, so I know the server will at least start and let me configure it.

    The one thing I’m not sure of, however, is whether any caching data gets written to the container layer in lieu of being sent to pictrs, as I didn’t get that far (yet). I haven’t seen any mention that the backend even does local storage, so I’m assuming no caching takes place when pictrs is not being used.

    Edit: Clarifications
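
    For reference, the pictrs block in config.hjson looks roughly like this (key names are from memory, so double-check against the defaults for your version); deleting it, and not running the pictrs container, is what I tried:

        # image hosting/caching backend; remove this whole block to skip pictrs
        pictrs: {
          url: "http://pictrs:8080/"
          api_key: "API_KEY"
        }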


  • Thanks for sharing! I’ll definitely be looking into adding this to my infra alerting stack. It should pair well with webhooks using ntfy for notifications. Currently I just have bash scripts pushing to uptime-kuma for disk-usage monitoring as a dead-man trigger (rough sketch below), but this should be better as a first-line method. Not to mention all the other functionality it has baked in.

    Edit: It would also be great if there were an already-compiled binary in each release so I could run it bare-metal, but the container on ghcr.io is most likely what I’ll be using anyway. Thanks for not uploading only to Docker Hub.
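
    For context, the dead-man push I mentioned is nothing fancy. A rough Python stand-in for the bash version looks like the following; the base URL and token are placeholders for your own uptime-kuma push monitor, and the status/msg/ping query parameters follow uptime-kuma’s push URL format:

        # Rough equivalent of the disk-usage dead-man push; PUSH_URL is a placeholder.
        import shutil
        import urllib.parse
        import urllib.request

        PUSH_URL = "https://uptime.example.com/api/push/YOUR_TOKEN"  # placeholder
        THRESHOLD = 90  # percent used before we stop reporting "up"

        def main() -> None:
            usage = shutil.disk_usage("/")
            used_pct = usage.used / usage.total * 100
            # Only push while the disk is healthy; if pushes stop (disk full,
            # host down, cron broken), the monitor's heartbeat lapses and alerts fire.
            if used_pct < THRESHOLD:
                params = urllib.parse.urlencode(
                    {"status": "up", "msg": f"disk {used_pct:.0f}% used", "ping": ""}
                )
                urllib.request.urlopen(f"{PUSH_URL}?{params}", timeout=10)

        if __name__ == "__main__":
            main()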



  • Coming in late here, but I think your best starting point is to find someone who has published a list of known federated Lemmy servers, or to build your own:

    • I think there’s an API endpoint (IDK if you have to be an authenticated user to access it) that lists which servers a particular server is federated with
    • Use that to query all the servers in that list at the same endpoint, deduplicate, and repeat to build a graph of the fediverse
    • From there you can use a different API endpoint to query which servers have open vs. closed registration
    • Then you can ping each server to measure latency, but that’s not the whole picture:
      • Some servers are starved for resources, or on an older, less-optimized version of the software, so there may be a way to use the API to navigate to random posts and capture how long that takes to complete; probably a more useful metric.
      • It might also be a good idea to get a metric for the number of users on each server, as that might sway your opinion one way or the other.
    • There might be an endpoint to query the number of banned users, but I don’t recall seeing it.

    IDK if you’re interested in doing that work, but I don’t think anyone has published tooling so far that you can run on your desktop to get that performance info. There are Python libraries already out there for interacting with the Lemmy API, so that’s a good jumping-off point; a rough sketch of the crawl part is below.
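
    Something like this is what I have in mind; the endpoint paths and response fields are assumptions based on the v3 API and can differ between Lemmy versions, so treat it as a starting point rather than a finished tool (assumes the requests library):

        import requests

        SEED = "lemmy.ml"       # any known instance to start the crawl from
        MAX_INSTANCES = 50      # keep the first pass small

        def linked_instances(domain: str) -> set[str]:
            """Ask one instance which servers it federates with."""
            resp = requests.get(f"https://{domain}/api/v3/federated_instances", timeout=10)
            resp.raise_for_status()
            linked = resp.json().get("federated_instances", {}).get("linked", [])
            # Older versions return plain domain strings, newer ones return objects.
            return {i["domain"] if isinstance(i, dict) else i for i in linked}

        def registration_mode(domain: str) -> str:
            """Check what an instance reports for registration (open/closed/application)."""
            resp = requests.get(f"https://{domain}/api/v3/site", timeout=10)
            resp.raise_for_status()
            local_site = resp.json().get("site_view", {}).get("local_site", {})
            return str(local_site.get("registration_mode", "unknown"))

        def crawl(seed: str) -> dict[str, set[str]]:
            """Breadth-first crawl of the federation graph, deduplicating as it goes."""
            graph: dict[str, set[str]] = {}
            queue = [seed]
            while queue and len(graph) < MAX_INSTANCES:
                domain = queue.pop(0)
                if domain in graph:
                    continue
                try:
                    neighbors = linked_instances(domain)
                except requests.RequestException:
                    neighbors = set()  # unreachable or incompatible instance
                graph[domain] = neighbors
                queue.extend(neighbors.difference(graph))
            return graph

        if __name__ == "__main__":
            for domain in crawl(SEED):
                try:
                    print(domain, registration_mode(domain))
                except requests.RequestException:
                    print(domain, "unreachable")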

    Edit: Now that I’m thinking about it, that could be pretty useful for the main website(s). They could use those types of queries on the backend to help with suggestions for new-user onboarding.