I noticed a bit of panic around here lately, and since I have had to continuously fight pedos for the past year, I have developed tools that help me detect and prevent this content.

As luck would have it, we recently published one of our anti-CSAM checker tools as a Python library that anyone can use. So I thought I could use it to help Lemmy admins feel a bit safer.

The tool can either go through all the images in your object storage and delete all CSAM, or it can run continuously and scan and delete new images as they arrive. The suggested approach is to run it once using --all, and then run it as a daemon and leave it running.

A better option would be to retrieve the exact images uploaded via the lemmy/pict-rs API, but we're not quite there yet.

Let me know if you have any issues or improvements to suggest.

EDIT: Just to clarify, you should run this on your desktop PC with a GPU, not on your lemmy server!

  • snowe@programming.dev

    Hey @db0@lemmy.dbzer0.com, just so you know, this tool is most likely very illegal to use in the USA. Something that your users should be aware of. I don’t really have the energy to go into it now, but I’ll post what I told my users in the programming.dev discord:

    that is almost definitely against the law in the USA. From what I’ve read, you have to follow very specific procedures to report CSAM as well as retain the evidence (yes, you actually have to keep the pictures), until the NCMEC tells you you should destroy the data. I’ve begun the process to sign up programming.dev (yes you actually have to register with the government as an ICS/ESP) and receive a login for reports.

    If you operate a website, and knowingly destroy the evidence without reporting it, you can be jailed. It’s quite strange, and it’s quite a burden on websites. Funnily enough, if you completely ignore your website, so much so that you don’t know that you’re hosting CSAM then you are completely protected and have no obligation to report (in the USA at least)

    Also, that script is likely to get you even more into trouble because you are knowingly transmitting CSAM to ‘other systems’, like dbzer0’s aihorde cluster. that’s pretty dang bad…

    here are some sources:

    • db0@lemmy.dbzer0.comOP

      Note that the script I posted is not transmitting the images to the AI Horde.

      Also keep in mind this tool is fully automated and catches a lot of false positives (due to the nature of the scan, it couldn’t be otherwise). So one could argue it’s a generic filtering operation, not an explicit knowledge of CSAM hosting. But IANAL of course.

      This is unlike cloudflare or other services which compare with known CSAM.

      EDIT: That is to say, if you use this tool to forward these images to the govt, they are going to come after you for spamming them with garbage

      • snowe@programming.dev

        Cloudflare still has false positives, the NCMEC does not care if they get false positives. If you read some of those links I provided it wouldn’t be considered a generic filtering operation, from how I’m reading it at least. I wouldn’t take the chance, especially not with running the software on your own hardware in your own house, split from the server.

        I think you’re not in the US? So it’s probably different for your jurisdiction. Just want to make it clear that in the US, from what I’ve read, this would be considered against the law. You are running software to filter for CSAM, so you are obligated to report it, with up to 1 year of jail time for not doing so.

        • db0@lemmy.dbzer0.comOP

          One can easily hook this script to forward to whoever is needed, but I think they might be a bit annoyed after you send them a couple hundred thousand false positives without any csam.

          • snowe@programming.dev

            The problem is you aren’t warning people that deleting CSAM without following your applicable laws can potentially get people that use your tool thrown in jail. You went ahead and built the tool without detailing any of the applicable laws around it. Cloudflare explicitly calls that out in their documentation because it’s very important. I really like the stuff you put out, but this is not the way to do it. I know lots of people on Lemmy hate CF and any sort of large company, but running this stuff yourself without understanding the law is sure to get someone in trouble.

            I don’t even know why you think I was recommending for your system to forward the reports to the authorities. I didn’t sleep very much last night, so I must have glazed over it, but I see nowhere where I said that.

            • db0@lemmy.dbzer0.comOP

              Honestly, I think you’re grossly overstating the legal danger a random small lemmy sysadmin is going to get into for running an automated tool like this.

              In any case, you’ve made your point, people can now make their own decisions on whether it’s better to pretend nothing is wrong on their instance, or if they want at least this sort of blanket cleanup. Far be it from me to tell anyone what to do.

              I don’t even know why you think I was recommending for your system to forward the reports to the authorities

              You may not have meant it, but you strongly implied something of the sort. But since this is not what you’re suggesting, I’m curious to hear what your optimal approach to this problem would be here.

              • snowe@programming.dev

                You may not have meant it, but you strongly implied something of the sort. But since this is not what you’re suggesting, I’m curious to hear what your optimal approach to this problem would be here.

                Optimal approach is to use the existing systems that are used by massive corporations to solve this problem already. I know everyone on lemmy hates that, but this isn’t something to mess around with. The reason this is optimal is because NCMEC provides the hashes only to these companies. You’re not going to be able to get the hashes (this is a good thing… imagine some child abuser getting access to these hashes and then using them to evade detection). So if you can’t get these hashes (and you shouldn’t want them either) then you should use a service that has them. It is by far the best way to filter and has been proven time and time again to be successful.

                The easiest is CloudFlare’s, and yes, you will have to use them as your DNS, which I also understand a vast majority of admins hate. But there are other options as well:

                • PhotoDNA
                • Safer
                • Facebook PDQ

                Because access to the original hash databases is considered sensitive, NCMEC will not provide these to smaller platforms. Neither will Microsoft provide the source code of its PhotoDNA algorithm except to its most trusted partners, because if the algorithm became widely known, it is thought that this might enable abusers to bypass it.

                In that article, it actually points out that a solution called Safer that uses machine learning and image recognition has very flawed results and is incredibly biased. So if these massive platforms can’t get this kind of image recognition right then it’s probably best to not waste money and time on it. The article even points out that for smaller platforms it’s not worth it.

                We also know in general terms that machine learning algorithms for image recognition tend to be both flawed overall, and biased against minorities specifically. In October 2020, it was reported that Facebook’s nudity-detection AI reported a picture of onions for takedown. It may be that for largest platforms, AI algorithms can assist human moderators to triage likely-infringing images. But they should never be relied upon without human review, and for smaller platforms they are likely to be more trouble than they are worth

    • hoodlem@hoodlem.me

      Ugh, what a mess. Thought about this for a while today and three thoughts started circulating in my head:

      1. Hire an actual lawyer and get firm legal advice on this issue. I think this would fall to the admins, not the devs. Maybe an admin who wanted could volunteer to contact a lawyer? We could do a gofundme for one-time consultation legal fees.

      2. Stop using pictrs completely and instead use links to a third party such as Imgur or whatever. They’re in this business and I’m sure already have dealt with it and have a solution. Yes it sucks that Imgur (or whatever third party) could delete our legitimate images at any time, but IMHO it’s worth it to avoid this headache. At any rate it offloads the liability from an admin. Of course, IANAL and this is a question we would want to ask a lawyer about.

      3. Needing a GPU increases the expenses for an admin significantly. It will start to not be worth it for quite a few to keep their instance running.

      Thanks for bringing up this point. This is obviously a nuanced issue that is going to need a well-thought-out solution.

      • snowe@programming.dev

        the ridiculous part of it is, as I understand it, if you completely ignore your website and essentially never know that you’re hosting CSAM then you cannot be held liable for it. But then, someone’s probably literally gonna come hunt you down to tell you in person (FBI) lol. So probably best to not ignore it.

  • veroxii@aussie.zone

    This is extremely cool.

    Because of the federated nature of Lemmy, many instances might be scanning the same images. I wonder if there might be some way to pool resources, so that if one instance has already scanned an image, a hash of it could be used to identify it and the whole AI model wouldn’t need to be rerun.

    There’s still the issue of how you trust the cache, but maybe there’s some way for a trusted entity to maintain this list?
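One possible shape for such a pooled cache, sketched in Python with entirely hypothetical names: instances attest SHA-256 digests of images they have already scanned as safe, and a peer only rescans an image when too few trusted instances have vouched for it.

```python
import hashlib

class SharedScanCache:
    """Hypothetical pooled cache of scan verdicts, keyed by image digest.

    Maps a SHA-256 digest to the set of trusted instances that have
    attested a "safe" verdict for that image.
    """

    def __init__(self, trusted_instances, quorum=2):
        self.trusted = set(trusted_instances)
        self.quorum = quorum          # attestations needed to skip a rescan
        self.attestations = {}        # digest -> set of attesting instances

    @staticmethod
    def digest(image_bytes):
        return hashlib.sha256(image_bytes).hexdigest()

    def attest_safe(self, instance, image_bytes):
        if instance not in self.trusted:
            return  # ignore attestations from untrusted instances
        self.attestations.setdefault(self.digest(image_bytes), set()).add(instance)

    def needs_scan(self, image_bytes):
        # Rescan unless enough trusted peers already vouched for the image.
        voters = self.attestations.get(self.digest(image_bytes), set())
        return len(voters & self.trusted) < self.quorum
```

Exact digests only deduplicate byte-identical copies, but that is the common case here, since federating instances mirror the same uploaded file.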

    • irdc@derp.foo

      How about a federated system for sharing “known safe” image attestations? That way, the trust list is something managed locally by each participating instance.

      Edit: thinking about it some more, a federated image classification system would allow some instances to be more strict than others.

      • gabe [he/him]@literature.cafe

        I think building some kind of system that allows smaller instances to rely on help from larger instances would be extremely awesome.

        Like, Lemmy has the potential to lead the fediverse in safety tools if we put the work in.

      • huginn@feddit.it

        Consensus algorithms. But it means there will always be duplicate work.

        No way around that unfortunately

        • kbotc@lemmy.world

          Why? Use something like RAFT: elect the leader, have the leader run the AI tool, then exchange results, with each node running its own subset of image hashes.

          That does mean you need a trust system, though.

          • irdc@derp.foo

            As I’m saying, I don’t think you need to: manually subscribing to each trusted instance via ActivityPub should suffice. The pass/fail determination can be done when querying for known images.

          • huginn@feddit.it

            Yeah that works. Who is the leader and how does it change? Does Lemmy.World take over because it’s largest?

            • kbotc@lemmy.world

              Hash the image, then assign hash ranges to servers that are part of the ring. You’d use RAFT to get consensus about who is responsible for which ranges. I’m largely just envisioning the Scylla gossip replacement as the underlying communications protocol.
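A minimal sketch of that hash-range assignment, with hypothetical names; leader election and the gossip protocol are left out, this only shows how an image hash maps to the node responsible for scanning it:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring assigning image hashes to scan nodes."""

    def __init__(self, nodes):
        # One token per node; real systems use many virtual tokens per node.
        self._tokens = sorted((self._point(n.encode()), n) for n in nodes)

    @staticmethod
    def _point(data: bytes) -> int:
        # Position on a 2**32 ring, derived from a SHA-256 prefix.
        return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

    def owner(self, image_bytes: bytes) -> str:
        # The first token clockwise from the image's point owns the scan.
        point = self._point(image_bytes)
        keys = [t for t, _ in self._tokens]
        idx = bisect.bisect_right(keys, point) % len(self._tokens)
        return self._tokens[idx][1]
```

The useful consistent-hashing property is that removing a node only reassigns the images that node owned, so the rest of the ring does no duplicate work when membership changes.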

      • Rentlar@lemmy.ca

        Yes, it is definitely possible! Just have no pictrs installed/running with the server. Note that it will still be possible to link external images.

        • Morgikan@lemm.ee

          My understanding was that it’s bad practice to host images on Lemmy instances anyway, as it contributes to storage bloat. Instead of coming up with a one-off script solution (albeit a good effort), wouldn’t it make sense to offload the scanning to a third party like imgur or catbox, who would already be doing that, and just link images into Lemmy? If nothing else, wouldn’t that limit liability for the instance admins?

          • hoodlem@hoodlem.me

            I was thinking the same thing. Stop storing the images and offload to Imgur or whatever. They likely already have a solution for this issue. Show images inline instead of a link. Looks the same, no liability.

            That said, this is tremendously cool. I was given pause though by another poster on the thread mentioning the legality of using this in the U.S.

          • Rentlar@lemmy.ca

            Might be what we’d need to do for small servers lacking moderation, wanting to avoid the liability from potentially hosting harmful images.

            I used postimg.cc when hosting was having issues, I’ll probably use it more to ease up Lemmy admins’ jobs.

    • Starbuck@lemmy.world

      TBH, I wouldn’t be comfortable outsourcing the scanning like that if I were running an instance. It only takes a bit of resources to know that you have done your due diligence. Hopefully the scan can be optimized to run faster.

  • sunaurus@lemm.ee

    As a test, I ran this on a very early backup of lemm.ee images from when we had very little federation and very few uploads, and unfortunately it is finding a whole bunch of false positives. Just some examples it flagged as CSAM:

    • Calvin and Hobbes comic
    • The default Lemmy logo
    • Some random user’s avatar, which is just a digital drawing of a person’s face
    • A Pikachu image

    Do you think the parameters of the script should be tuned? I’m happy to test it further on my backup, as I am reasonably certain that it doesn’t contain any actual CSAM

    • db0@lemmy.dbzer0.comOP

      This is normal. You should be worried if it wasn’t catching any false positives, as that would mean a lot of false negatives were slipping through. I am planning to add args to make it more or less strict, but it will never be perfect. So long as it’s not catching most images, and most of the false positives are porn or contain children, I consider it worthwhile.

      I’ll let you know when the functionality for the severity is updated.

    • hackitfast@lemmy.world

      I’d bet there’s a CSAM test image dataset with innocuous images that get picked up by the script. Not sure how the system works, but if it’s through hashes then it would be pretty simple to add that to the script.

  • Decronym@lemmy.decronym.xyzB

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    CF: CloudFlare
    CSAM: Child Sexual Abuse Material
    DNS: Domain Name Service/System
    HTTP: Hypertext Transfer Protocol, the Web
    nginx: Popular HTTP server

    4 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.

    [Thread #88 for this sub, first seen 28th Aug 2023, 22:25] [FAQ] [Full list] [Contact] [Source code]

  • Cyborganism@lemmy.ca

    I don’t host a server myself, but can this tool identify the users who posted the images and create a report with their IP addresses?

    This could help identify who spreads that content and it can be used to notify authorities. No?

    • db0@lemmy.dbzer0.comOP

      No, but it will record the object storage path. We then need a way to connect that path to the pict-rs image ID, and once we do that, connect the pict-rs image ID to the comment or post which uploaded it. I don’t know how to do the last two steps, however, so hopefully someone else will step up for this.

    • db0@lemmy.dbzer0.comOP

      It will be atrocious. You can run it, but you’ll likely be waiting for weeks if not months.

    • Rescuer6394@feddit.nl

      The model under the hood is CLIP Interrogator, and it looks like it is just the plain torch model.

      It will run on CPU, but we can do better: an ONNX version of the model would run a lot faster on CPU.

      • db0@lemmy.dbzer0.comOP

        Sure, or a .cpp port. But it will still not be anywhere near as fast as a GPU. However, it might be sufficient for something just checking new images.

        • relic_@lemm.ee

          I’m not really convinced that a GPU backend is needed. Was there ever a comparison of the different CLIP model variants? Or a graph-optimized / quantized ONNX version?

          I think the proposed solution makes a lot of sense for the task at hand if it were integrated on the pict-rs end, but it would be worth investigating further improvements if it were on the lemmy server end.

          • db0@lemmy.dbzer0.comOP

            For scanning all existing images, trust me, a good GPU is necessary. I’m scanning my whole backend on a 4090 with 400 threads and I’m still only halfway through after 4 hours.

            For scanning newly uploaded images, a CPU might be sufficient but the users might get annoyed at the wait times.

  • bdonvr@thelemmy.club

    Worth noting you seem to be missing dependencies in requirements.txt, notably unidecode and strenum.

    Also, this only works with GPU acceleration on NVidia (maybe; I messed around with trying to get it to work with AMD ROCm instead of CUDA but didn’t get it running).

  • FriendlyBeagleDog@lemmy.blahaj.zone

    I’m not well versed in the field, but I understand that large tech companies which host user-generated content match the hashes of uploaded content against a list of known bad hashes as part of their strategy to detect and tackle such content.

    Could it be possible to adopt a strategy like that as a first-pass to improve detection, and reduce the compute load associated with running every file through an AI model?
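That two-stage idea can be sketched as follows. The digest set and the classifier here are placeholders (real known-bad hash lists such as PhotoDNA’s are only distributed to vetted partners, as discussed elsewhere in the thread):

```python
import hashlib

def scan_upload(image_bytes, known_bad_digests, classify):
    """Two-stage check: cheap digest lookup first, ML classifier second.

    known_bad_digests: set of hex SHA-256 digests of known-bad images
                       (placeholder for a vetted hash list).
    classify: fallback callable, e.g. a CLIP-based scanner, returning
              True if the image should be removed.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in known_bad_digests:
        return "blocked-by-hash"           # no model inference needed
    return "blocked-by-model" if classify(image_bytes) else "ok"
```

One caveat: cryptographic digests only match exact bytes, while the perceptual hashes the big providers use also survive re-encoding, so this first pass reduces load but cannot replace the second stage.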

    • dan@upvote.au

      match the hashes

      It’s more than just basic hash matching, because it has to catch content even if it has been resized, cropped, reduced in quality (lower JPEG quality with more artifacts), had its colour balance changed, etc.
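As a toy illustration of why perceptual hashes survive such edits, here is a difference hash (dHash) over a bare grayscale pixel grid; production systems (PhotoDNA, PDQ) are far more robust and would decode real images with a library such as Pillow:

```python
def dhash(pixels, hash_w=8, hash_h=8):
    """Difference hash of a grayscale image given as a 2D list of ints.

    Downscales to (hash_w+1) x hash_h by block averaging, then sets one
    bit per pixel depending on whether it is brighter than its right
    neighbour. Assumes dimensions divide evenly, to keep the toy short.
    """
    h, w = len(pixels), len(pixels[0])
    tw, th = hash_w + 1, hash_h
    bw, bh = w // tw, h // th
    # Block-average downscale to a tiny thumbnail.
    small = [
        [
            sum(pixels[y][x]
                for y in range(ty * bh, (ty + 1) * bh)
                for x in range(tx * bw, (tx + 1) * bw)) // (bw * bh)
            for tx in range(tw)
        ]
        for ty in range(th)
    ]
    bits = 0
    for row in small:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a, b):
    # Number of differing bits between two hashes; small means similar.
    return bin(a ^ b).count("1")
```

Because hashing happens after aggressive downscaling, a nearest-neighbour 2x upscale of the same image yields the identical 64-bit hash, while any cryptographic digest of the file would change completely; similarity is then measured as Hamming distance between hashes.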

        • dan@upvote.au

          Definitely. A lot of the good algorithms used by big services are proprietary though, unfortunately.

            • dan@upvote.au

              Microsoft’s PhotoDNA is probably the most well-known. Every major service that has user-generated content uses it. Last I checked, it wasn’t open-source. It was built for detecting CSAM, but it’s really just a general-purpose similarity hashing algorithm.

              Meta has some algorithms that are open-source: https://about.fb.com/news/2019/08/open-source-photo-video-matching/

              Google has CSAI Match for hash-matching of videos and Google Content Safety API for classification of new content, but both are proprietary.

            • db0@lemmy.dbzer0.comOP

              There are better approaches than plain hashing. For comparing images, I calculate the “distance” in tensors between them. This can match even when compression artifacts are involved or the images are slightly altered.
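Assuming each image has already been reduced to an embedding vector (for example by a CLIP encoder), the comparison itself is just a vector-distance computation; the threshold below is purely illustrative and would have to be tuned on real data:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two embedding vectors.

    Near 0 means the embeddings point the same way (similar images,
    even if their bytes differ after re-encoding or small edits);
    near 1 means unrelated content.
    """
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def is_match(a, b, threshold=0.1):
    # Hypothetical cutoff: below it, treat the images as the same content.
    return cosine_distance(a, b) < threshold
```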

    • db0@lemmy.dbzer0.comOP

      Currently I delete on PIL exceptions. I assume that if someone uploaded a .zip to your image storage, you’d want it deleted.

      • Starbuck@lemmy.world

        The fun part is that it’s still a valid JPEG file if you put more data in it. The file should be fully re-encoded to be sure.

          • Starbuck@lemmy.world

            But I could take ‘flower.jpg’, which is an actual flower, and embed a second image, ‘csam.png’ inside it. Your scanner would scan ‘flower.jpg’, find it to be acceptable, then in turn register ‘csam.png’. Not saying that this isn’t a great start, but this is the reason that a lot of websites that allow uploads re-encode images.
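A crude stdlib check for that specific trick is to truncate everything after the first JPEG end-of-image (EOI) marker. This is only a sketch: EXIF segments can embed thumbnails containing their own EOI marker, so a real sanitizer should fully decode and re-encode the upload (for example with Pillow) rather than trust byte scanning:

```python
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def strip_trailing_data(jpeg_bytes: bytes):
    """Return (clean_bytes, had_payload) for a JPEG byte string.

    Caveat: files whose EXIF segment embeds a thumbnail contain an inner
    EOI marker too, so this naive version can over-truncate those; full
    re-encoding is the robust fix.
    """
    end = jpeg_bytes.find(EOI)
    if end == -1:
        raise ValueError("no EOI marker; not a complete JPEG")
    clean = jpeg_bytes[: end + 2]
    return clean, len(clean) != len(jpeg_bytes)
```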

  • Yuumi@lemmy.ml

    based db0 releasing great tools and maintaining a great community

  • Rentlar@lemmy.ca

    Hey db0 thanks for putting in extra effort to help the community (as you have multiple times) when big issues like this crop up on Lemmy.

    Despite being a pressing issue, this is also one that people are a little reluctant to help solve, for fear of getting in trouble themselves. (How can a server admin develop a method to detect and remove/prevent CSAM distribution without accessing known examples, which is extremely illegal?)

    Another example was the botspam wave, where you developed Overseer in response very quickly. I’m hoping that here, too, devs will join you to work out how best to implement the changes into Lemmy to combat this problem.

  • chrisbit@leminal.space

    Thanks for releasing this. After doing a --dry_run, can the flagged files then be removed without re-analysing all images?