For years I’ve been looking, on and off, for web archiving software that can capture most sites, including “complex” ones with lots of AJAX and sites that require logins, like Reddit. Which ones have worked best for you?

Ideally I want one that can be started programmatically or via the command line, opens a Chromium instance (or any browser), and captures everything shown on the page. I could also open the instance myself to log into sites and install addons like uBlock Origin. (By the way, archiveweb.page must be started manually.)
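
Roughly the kind of automation I mean, as a minimal sketch using Playwright to drive a persistent Chromium profile (so logins and addons carry over between runs); the profile directory, URL, and output file are just placeholders:

```python
# Sketch: open a persistent Chromium profile with Playwright and save the
# rendered DOM of a page. Assumes `pip install playwright` followed by
# `playwright install chromium`.
from pathlib import Path
from playwright.sync_api import sync_playwright

PROFILE_DIR = "./chromium-profile"   # reused across runs, so logins persist
URL = "https://old.reddit.com/r/DataHoarder/"
OUT = Path("capture.html")

with sync_playwright() as p:
    # headless=False so I can log in or dismiss popups by hand first
    ctx = p.chromium.launch_persistent_context(PROFILE_DIR, headless=False)
    page = ctx.new_page()
    page.goto(URL, wait_until="networkidle")
    OUT.write_text(page.content(), encoding="utf-8")
    ctx.close()
```

That only saves the rendered DOM rather than a proper WARC, but it shows the start-it-from-a-script workflow I’m after.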

  • klangcola@reddthat.com · 1 day ago

    SingleFile is a browser add-on that saves a complete web page into a single HTML file. It’s available as a Web Extension (and a CLI tool) compatible with Chrome, Firefox (desktop and mobile), Microsoft Edge, Safari, Vivaldi, Brave, Waterfox, Yandex Browser, and Opera.

    SingleFile can also be integrated with the Hoarder and Linkding bookmark managers via their browser extensions. Your browser does the capture, which means you are already logged in, have dismissed the cookie banner, solved the captchas, or dealt with whatever other annoyance is on the page.

    ArchiveBox, and I believe Linkwarden as well, use SingleFile (as a CLI tool on the server side) to capture web pages, alongside other tools and formats. This works well for simple, straightforward pages, but not for annoying ones with cookie banners, captchas, and other popups.
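
    As a rough idea of that server-side path, a script could call the SingleFile CLI once per URL; the single-file command name and the URL/output argument order here are from memory, so check the single-file-cli README:

    ```python
    # Rough sketch: drive the SingleFile CLI from a server-side script.
    # Assumes the single-file-cli package is installed and on PATH; the
    # positional "URL then output file" form is from memory -- verify it.
    import subprocess

    pages = {
        "https://example.com/article": "article.html",
        "https://example.com/notes": "notes.html",
    }

    for url, out in pages.items():
        subprocess.run(["single-file", url, out], check=True)
    ```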

    • N0x0n@lemmy.ml · 1 day ago

      For Reddit, SingleFile HTML pages can be 20 MB per file! That’s huge for a simple discussion…

      To shrink captures of that bloated but still relevant site, redirect to a still-working alternative front end like https://github.com/redlib-org/redlib or old Reddit, and the file drops to under 1 MB.
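
      For example, a small rewrite step before handing URLs to SingleFile can do that redirect automatically; old.reddit.com works today, and a redlib hostname would be whichever instance you use:

      ```python
      # Rewrite reddit.com links to a lighter front end before capturing.
      # old.reddit.com is real; swap in the hostname of a redlib instance if preferred.
      from urllib.parse import urlparse, urlunparse

      LIGHT_HOST = "old.reddit.com"

      def lighten(url: str) -> str:
          parts = urlparse(url)
          if parts.netloc in ("reddit.com", "www.reddit.com"):
              parts = parts._replace(netloc=LIGHT_HOST)
          return urlunparse(parts)

      print(lighten("https://www.reddit.com/r/DataHoarder/comments/abc123/example/"))
      # -> https://old.reddit.com/r/DataHoarder/comments/abc123/example/
      ```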

    • Showroom7561@lemmy.ca · 1 day ago

      SingleFile is the only one that works reliably for me.

      Linkwarden would have been awesome, but I had so many issues with it when I was self-hosting. I think it’s improved since then, though.

      • Xanza@lemm.ee · 1 day ago

        wget is the most comprehensive site cloner there is. What exactly do you mean by complex? Because wget works for anything static and public… If you’re trying to clone server-side source files, like PHP, obviously that’s not going to work. If that’s what you mean by “complex”, then just give up, because you can’t.
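
        For a static, public site the usual invocation looks something like this (all flags are stock wget options; the domain and wait time are placeholders):

        ```python
        # Mirror a static, public site with wget: recursive fetch, page assets,
        # and links rewritten so the copy browses offline.
        import subprocess

        subprocess.run([
            "wget",
            "--mirror",            # recursive download with timestamping
            "--page-requisites",   # also grab the CSS, images, and JS pages need
            "--convert-links",     # rewrite links for offline browsing
            "--adjust-extension",  # add .html where the server omits it
            "--no-parent",         # stay below the starting directory
            "--wait=1",            # be polite to the server
            "https://example.com/",
        ], check=True)
        ```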

        • TheTwelveYearOld@lemmy.world (OP) · 1 day ago

          For instance, I can’t completely download YouTube pages with their videos using wget, but I can with pywb (though pywb has issues with sites like Reddit).

          Not that I would necessarily use it for YouTube pages, but it’s an example of a complex page with lots of AJAX.
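
          The pywb flow I’ve been using is roughly this; the commands are from memory, so double-check the exact flags against the pywb docs:

          ```python
          # Rough sketch of pywb recording: create a collection, start the wayback
          # server in record mode, and browse through the /record/ URL.
          # Flags and paths are from memory -- verify against the pywb documentation.
          import subprocess

          COLLECTION = "my-archive"
          TARGET = "https://www.youtube.com/"

          subprocess.run(["wb-manager", "init", COLLECTION], check=True)

          print(f"Record by opening http://localhost:8080/{COLLECTION}/record/{TARGET}")
          subprocess.run(["wayback", "--record"], check=True)  # serves on port 8080 by default
          ```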