Mastodon, a decentralized alternative to Twitter, has a serious problem with child sexual abuse material (CSAM), according to researchers from Stanford University. In just two days, the researchers found 112 instances of known CSAM across 325,000 posts on Mastodon. They also found hundreds of posts containing CSAM-related hashtags and links pointing to off-site CSAM trading and grooming of minors. One Mastodon server was even taken down for a period of time due to CSAM being posted. The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

  • while1malloc0@beehaw.org · 157 points · 1 year ago

    While the study itself is a good read and I agree with its conclusions—Mastodon and decentralized social media in general need better moderation tools—it’s hard not to read the Verge headline as misleading. One of the study authors gives more context here: https://hachyderm.io/@det/110769470058276368. Basically, most of the hits came from a large Japanese instance that no one federates with; the author even calls out that the blunt instrument most Mastodon admins use is to blanket-defederate from instances hosted in Japan, due to Japan’s more lax (than the US) laws around CSAM. But the headline seems to imply that there’s a giant seedy underbelly to places like mastodon.social[1] that is rife with abuse material. I suppose that’s a marketing problem of federated software in general.

    1. There is a seedy underbelly of mainstream Mastodon instances, but it’s mostly people telling you how you’re supposed to use Mastodon if you previously used Twitter.
    • glorbo@lemmy.one · 37 points · 1 year ago

      In my opinion the biggest issue the author points out is that cached materials are sometimes retained even after moderator action, which honestly sounds like a straight-up bug more than anything. Still, if I were running an instance, the feds showing up at my door with a warrant because I’d been accidentally distributing CSAM would be my nightmare scenario. And of course jurisdiction plays a part too: an American user on a Canadian server might post drawn depictions of sexualized minors, thinking “weird but not illegal,” and now the Canadian admin has content that’s illegal in Canada sitting on their Canadian server and has no idea.

      IMO I think the best solution to this is something similar to what Renaud Chaput (Mastodon’s resident infra boffin) described in his recent blog post. Effectively, give admins a way to hand this off to pluggable third-party services. Admins that are worried about this sort of thing can then have some degree of safety via e.g. PhotoDNA, whereas others can take on additional risk and preserve additional privacy.
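
      As a rough sketch of what that hand-off could look like (the scanner interface, endpoint, and response fields below are hypothetical, not Mastodon’s or PhotoDNA’s actual API, and it assumes the requests package; a real integration would follow the vendor’s own SDK and terms):

```python
import hashlib
from dataclasses import dataclass

import requests  # assumed available; any HTTP client would do


@dataclass
class ScanVerdict:
    match: bool  # the service claims this media matches known-bad material
    label: str   # service-specific label, e.g. "hash-match"


class ExternalMediaScanner:
    """Hypothetical pluggable scanner: the instance ships only a digest, never the image."""

    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint
        self.api_key = api_key

    def scan(self, media_bytes: bytes) -> ScanVerdict:
        digest = hashlib.sha256(media_bytes).hexdigest()
        resp = requests.post(
            self.endpoint,
            json={"sha256": digest},
            headers={"Authorization": f"Bearer {self.api_key}"},
            timeout=5,
        )
        resp.raise_for_status()
        data = resp.json()
        return ScanVerdict(match=data.get("match", False), label=data.get("label", ""))


def on_media_upload(media_bytes: bytes, scanner: ExternalMediaScanner | None) -> str:
    # Admins who prefer the privacy trade-off simply configure no scanner at all.
    if scanner is None:
        return "accepted"
    verdict = scanner.scan(media_bytes)
    return "held_for_review" if verdict.match else "accepted"
```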

      All that said: yeah the headline makes it sound like .social is some 8chan-esque hellhole, whereas in reality my feed is 99% German programmers sharing milquetoast political takes.

    • jherazob@beehaw.org · 25 points · 1 year ago

      The person outright rejects defederation as a solution when it IS the solution: if an instance is in favor of this kind of thing, you don’t want to federate with them, period.

      I also find the number of calls for a “Fediverse police” in that thread worrying. Scanning every image that gets uploaded to your instance with a third-party tool is an issue too: on one hand you definitely don’t want this kind of shit to even touch your servers, and on the other you don’t want anybody dictating that, say, anti-union or similar memes get flagged and denounced, with the person who made them marked, targeted, and receiving a nice Pinkerton visit.

      This is a complicated problem.

      Edit: I see somebody suggested checking the observations against the common and well-used Mastodon blocklists, to see if the shit is contained on defederated instances, and the author said this was something they wanted to check, so I hope there’s a follow-up.

      • Pseu@beehaw.org · 2 points · 1 year ago

        The person outright rejects defederation as a solution when it IS the solution

        It’s the solution in the sense that it removes the material from view of users on mainstream instances. It is not a solution to the overall problem of CSAM and the child abuse that creates such material. There is an argument to be made that removal is the only responsibility of instance admins, and that everything past that is the responsibility of law enforcement. This is sensible, but it invites law enforcement to start overtly trawling the Fediverse for offending content and creates an uncomfortable situation for admins and users, as they will go after admins who simply do not have the tools to effectively monitor for CSAM.

        Defederation also obviously does not prevent users of one’s own instance from posting CSAM. Even unknowingly hosting CSAM can easily lead to the admins being prosecuted and the instance taken down. Section 230 does not apply to material that is illegal at the federal level, and SESTA requires removal of material that violates even state-level sex trafficking laws.

  • 🦊 OneRedFox 🦊@beehaw.org · 72 points · 1 year ago

    Yeah, I recall that the Japanese instances have a big problem with that shit. As for the rest of us, Facebook has actually open-sourced some efficient hashing algorithms for dealing with CSAM; Fediverse platforms could implement these, which would just leave the issue of getting an image-hash database to check against. All the big platforms could probably chip in to get access to one of those private databases and then release a public service for use by the ecosystem.
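
    For illustration, the lookup itself is tiny once such a database exists; the file format and helpers below are invented, and a real deployment would use perceptual hashes (for example Facebook’s open-sourced PDQ) behind an access-controlled service rather than plain file digests:

```python
import hashlib


def load_known_hashes(path: str) -> set[str]:
    """Hypothetical local copy of a shared hash list, one hex digest per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}


def is_known_match(media_bytes: bytes, known_hashes: set[str]) -> bool:
    """Exact-match lookup; real systems compare perceptual hashes with a distance threshold."""
    return hashlib.sha256(media_bytes).hexdigest() in known_hashes
```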

    • zephyrvs@lemmy.ml · 14 points · 1 year ago

      That’d be useless though, because first, it’d probably be opt-in via configuration settings, and even if it weren’t, people would just fork and modify the code base or simply switch to another ActivityPub implementation.

      We’re not gonna fix society using tech unless we’re all hooked up to some all-knowing AI under government control.

      • 🦊 OneRedFox 🦊@beehaw.org · 19 points · 1 year ago

        That’d be useless though, because first, it’d probably be opt-in via configuration settings, and even if it weren’t, people would just fork and modify the code base or simply switch to another ActivityPub implementation.

        No it wouldn’t, because it’d still be significantly easier for instances to deal with CSAM content with this functionality built into the platforms. And I highly doubt there’s going to be a mass migration from any Fediverse platform that implements such a feature (though honestly I’d be down to defederate with any instance that takes serious issue with this).

      • crystal@feddit.de · 13 points · 1 year ago

        That’s not the point. Yes, child porn sites can host child porn; other sites/instances can’t stop that. But what other instances can stop is redistributing said child porn, and for that purpose such technology would be useful.

        • zephyrvs@lemmy.ml · 5 points · 1 year ago

          researchers found 112 instances of known CSAM across 325,000 posts on the platform

          So you’re willing to vacuum up the hashes of every image file uploaded on thousands of decentralized systems into a centralized system (one that is out of “our” control and coupled with direct access for law enforcement and corporations) to prevent the distribution of the 0.034% of files that are CSAM and that could just as well be reported and deleted by admins and moderators? Remember how Snowden warned us about metadata?

          If you think that’s a wise tradeoff, I guess, go ahead. But then I’d have to question the entire goal of being decentralized in the first place. If it’s all about “a billionaire can’t wreak havoc upon my social network”, then yeah, I guess decentralization helps a bit, but even that remains to be seen.

          But if you’re actually willing to do that, you’d probably also be in favor of government backdoors into chat encryption (thus rendering the entire concept moot, because you can’t have backdoors that cannot be discovered by other nefarious actors) and into even more censorship-resistant systems like Tor, because evil people use those to exchange CSAM anonymously as well?

          • ParsnipWitch@feddit.de · 5 points · 1 year ago

            If you read the article, there’s actually more. The problem isn’t just that they post the material directly onto Mastodon; they also use the platform to network.

            • jarfil@beehaw.org · 2 points · 1 year ago

              More… or less, given that US-centric CSAM detectors flag AI-generated images, CG, and drawings at the same level as IRL images.

              Preventing the networking is called defederation, and that’s already there.

          • crystal@feddit.de · 2 points · 1 year ago

            I don’t know how you get the impression that this increases censorship.

            Instance admins already manually block content, and they are already able to do that to any extent they wish.

            This tool would simply automate that process.

            Admins would not gain or lose any ability to block content. Identifying child porn would simply be easier.

            (Imagine an admin going to their database and doing a CTRL+F with the term “child porn”, and then going through the posts to find offending ones. But instead of CTRL+F it’s an AI.)

            (For some reason I don’t get a notification when you answer my comment. Is that a known issue? Did you block me or something?)

            • zephyrvs@lemmy.ml · 2 points · 1 year ago

              I’m referring to the CSAM scanning systems that are outside of the control of almost anyone except governments, three letter agencies, other law enforcement and parts of the private sector.

              To be efficient, these systems must be fed the hash of every file submitted to as many instances as possible, with close to no oversight or public scrutiny.

              Pass.

              Edit: I’m not blocking you but I noticed intermittent connectivity issues on lemmy.ml today, possibly around the time where I replied.

            • jarfil@beehaw.org · 1 point · 1 year ago

              I don’t know how you get the impression that this increases censorship.

              This tool would simply automate that process.

              Well… precisely?

              Censorship is any removal of material considered “undesirable”, whether you agree with why it is considered “undesirable” or not.

              If you want more censorship of “material that you personally consider undesirable”, then just say so, don’t hide behind some disingenuous “but it isn’t censorship”. Then we can discuss the merits of that classification, and of the means proposed to achieve such censorship.

              • crystal@feddit.de · 1 point · 1 year ago

                You seem to be missing my point.

                This tool would not increase censorship.

                Admins are already able to implement all censorship they want.

                Admins are already able to block left-wing opinions, right-wing opinions, child porn, normal porn.

                And that already happens.

                Lots of instances (like feddit.de) block pornographic content.

                Lots of instances (like lemmy.blahaj.zone) block right-wing content.

                It is already possible, and it is already happening.

                An AI which can detect CSAM (and potentially other content) won’t change that. It will simply make the admins’ job easier.

                • jarfil@beehaw.org · 1 point · 1 year ago

                  I think you’re missing the opposite point.

                  An AI trained on a given instance’s admin decisions would increase the same censorship the admins already apply. We can agree on that.

                  An AI trained by a third party on unknown data (data that would actually be illegal to examine), which can detect “CSAM (and potentially other content)”, would increase censorship of both CSAM… and of “potentially other content”, outside the control, preferences, or knowledge of the instance admins.

                  Using an external service to submit ALL content to a third-party-trained AI for a decision not only allows the external service to collect ALL the content (not just the censored content), but also to change the decision parameters without prior notice or any kind of oversight, and to apply them to ALL content.

                  The problem is the difference between:

                  • instance modlog -> instance content filtered by instance AI -> makes similar decisions as instance admins
                  • [dataset that is illegal to examine] -> third party captures all content, feeds it to an undisclosed AI -> makes unknown decisions in the name of removing CSAM

                  One is an AI that can make mistakes, but mostly follows whatever an admin would do. The other is a 100% surveillance-state nightmare in the name of filtering 0.03% of content.

      • Paradoxvoid@aussie.zone · 10 points · 1 year ago

        As much as we can (and should) lambast Facebook/Meta’s C-Suite for terrible decisions, their engineers are generally pretty legit.

      • 🦊 OneRedFox 🦊@beehaw.org · 5 points · 1 year ago

        They actually contribute a lot of useful stuff to the web dev world, like React.js. It’s just all the other shit they do that’s awful.

  • Mandy@beehaw.org · 54 points · 1 year ago

    Pedos that get banned from one platform turn to other platforms that haven’t banned them yet.

    In other news: the sky is blue

    • jarfil@beehaw.org · 2 points · 1 year ago

      While white knights propose ways to control everyone, everywhere, every time, in the name of catching the pedos, who will just hop to the next platform (or already have).

  • stravanasu@lemmy.ca · 51 points · 1 year ago

    I’m not fully sure about the logic here, or the conclusions it perhaps hints at. The internet itself is a network with major CSAM problems (so maybe we shouldn’t use it?).

    • mudeth@lemmy.ca · 31 points · 1 year ago

      It doesn’t help to bring whataboutism into this discussion. This is a known problem with the open nature of federation. So is bigotry and hate speech. To address these problems, it’s important to first acknowledge that they exist.

      Also, since the fediverse is still in its early stages, now is the time to experiment with mechanisms to control these problems. Saying that the problem is innate to networks only sweeps it under the rug. At some point there will be a watershed event that’ll force these conversations anyway.

      The challenge is in moderating such content without being ham-fisted. I must admit I have absolutely no idea how, this is just my read of the situation.

      • Shiri Bailem@foggyminds.com · 27 points · 1 year ago

        @mudeth @pglpm You really can’t, beyond our current tools and reporting to the authorities.

        This is not a single monolithic platform; blaming it for this is like attributing the bad behavior of some websites to HTTP.

        Our existing moderation tools are already remarkably robust and defederating is absolutely how this is approached. If a server shares content that’s illegal in your country (or otherwise just objectionable) and they have no interest in self-moderating, you stop federating with them.

        Moderation is not about stamping out the existence of these things, it’s about protecting your users from them.

        If they’re not willing to take action against this material on their servers, then the only thing further that can be done is reporting it to the authorities or the court of public opinion.

      • stravanasu@lemmy.ca · 15 points · 1 year ago

        Maybe my comment wasn’t clear or you misread it. It wasn’t meant to be sarcastic. Obviously there’s a problem and we want (not just need) to do something about it. But it’s also important to be careful about how the problem is presented - and manipulated - and about how fingers are pointed. One can’t point a finger at “Mastodon” the same way one could point it at “Twitter”. Doing so has some similarities to pointing a finger at the http protocol.

        Edit: see for instance the comment by @while1malloc0@beehaw.org to this post.

        • mudeth@lemmy.ca · 8 points · 1 year ago

          Understood, thanks. Yes I did misread it as sarcasm. Thanks for clearing that up :)

          However, I disagree with @shiri@foggyminds.com in that Lemmy and the Fediverse are in fact interfaced with as monolithic entities, not just by people on the outside but even by their own users. There are people here saying how they love “the community on Lemmy”, for example. It’s just the way people group things, and no amount of technical explanation will prevent this semantic grouping.

          For example, the person who was arrested for CSAM recently was merely running a Tor exit node, but that didn’t help his case. As shiri pointed out, defederation works for black-and-white cases. But what about cases where things are a bit more gray, like disagreements over hard political viewpoints? We’ve already seen the open internet devolve into bubbles with no productive discourse. Federation has a unique opportunity to solve that problem starting from scratch and learning from previous mistakes. Defederation is not the solution; it isn’t granular enough, for one.

          Another problem with defederation is that it is after-the-fact and depends on moderators and admins. There will inevitably be a backlog (as pointed out in the article). With enough community reports, could there be a holding-cell-style mechanism in federated networks? I think there is space to explore this more deeply, and the study does the useful job of pointing out liabilities in the current state of the art.

          • faeranne@lemmy.blahaj.zone · 5 points · 1 year ago

            Another way to look at it is: How would you solve this problem with email?

            The reality is, there is no way to solve the problem of moderation across disparate servers without some unified point of contact. With any form of federation, your options are:

            1. Close-source the protocol, API, and implementation, and have the creator be the final arbiter, either by proxy of code or by having a back door.
            2. Have every instance agree to a singular set of rules/admins.
            3. Don’t, and just let the instances decide where to draw their lines.

            The reality is, any federated system is gonna have these issues, and as long as the protocol is open, anyone can implement any instance on top of it they want. It would be wonderful to solve this issue “properly”, but it’s like dealing with encryption. You can’t force bad people to play by the rules, and any attempt to do so breaks the fundamental purpose of these systems.

          • stravanasu@lemmy.ca · 2 points · 1 year ago

            I share and promote this attitude. If I must be honest it feels a little hopeless: it seems that since the 1970s or 1980s humanity has been going down the drain. I fear “fediverse wars”. It’s 2023 and we basically have a World War III going on, illiteracy and misinformation steadily increase, corporations play the role of governments, science and scientific truth have become anti-Galilean based on “authorities” and majority votes, and natural stupidity is used to train artificial intelligence. I just feel sad.

            But I don’t mean to be defeatist. No matter the chances we can fight for what’s right.

          • Shiri Bailem@foggyminds.com · 2 points · 1 year ago

            @mudeth @pglpm The gray area is all down to personal choices and how “fascist” your admin is (which comes down to which instance is best for you).

            Defederation is a double-edged sword, because if you defederate constantly for frivolous reasons all you do is isolate your node. This is also why it’s the *final* step in moderation.

            The reality is that it’s a whole bunch of entirely separate environments and we’ve walked this path well with email (the granddaddy of federated social networks). The only moderation we can perform outside of our own instance is to defederate, everything else is just typical blocking you can do yourself.

            The process here on Mastodon is to decide for yourself what is worth taking action on. If it’s not your instance, you report it to the admin of that instance and they decide if they want to take action and what action to take. And if they decide it’s acceptable, you decide whether this is a personal problem (just block the user or domain in your user account but leave it federating) or a problem for your whole server (in which case you defederate to protect your users).

            Automated action is bad because there’s no automated identity verification here, and it’s an open door to denial-of-service attacks (a harasser generates a bunch of different accounts and uses them all to report a user until that user is auto-suspended).

            The backlog problem however is an intrinsic problem to moderation that every platform struggles with. You can automate moderation, but then that gets abused and has countless cases of it taking action on harmless content, and you can farm out moderation but then you get sloppiness.

            The fediverse actually helps in moderation because each admin is responsible for a group of users and the rest of the fediverse basically decides whether they’re doing their job acceptably via federation and defederation (ie. if you show that you have no issue with open Nazis on your platform, then most other instances aren’t going to want to connect to you)

            • mudeth@lemmy.ca · 1 point · 1 year ago

              Defederation is a double-edged sword

              Agreed. It’s not the solution.

              The reality is that it’s a whole bunch of entirely separate environments and we’ve walked this path well with email

              On this I disagree. There are many fundamental differences. Email is private, while federated social media is public. Email is one-to-one primarily, or one-to-few. Soc media is broadcast style. The law would see it differently, and the abuse potential is also different. @faeranne@lemmy.blahaj.zone also used e-mail as a parallel and I don’t think that model works well.

              The process here on Mastodon is to decide for yourself what is worth taking action on.

              I agree for myself, but that wouldn’t shield a lay user. I can recommend that a parent sign up for Reddit, because I know what they’ll see on the front page. Asking them to moderate for themselves can be tricky. As an example, if people could moderate content themselves we wouldn’t have climate skeptics and Holocaust deniers. There is an element of housekeeping to be done top-down for a platform to function as a public service, which is what I assume Lemmy wants to be.

              Otherwise there’s always the danger of it becoming a wild-west platform that’ll attract extremists more than casual users looking for information.

              Automated action is bad because there’s no automated identity verification here, and it’s an open door to denial-of-service attacks

              Good point.

              The fediverse actually helps in moderation because each admin is responsible for a group of users and the rest of the fediverse basically decides whether they’re doing their job acceptably via federation and defederation

              The way I see it this will inevitably lead to concentration of users, defeating the purpose of federation. One or two servers will be seen as ‘safe’ and people will recommend that to their friends and family. What stops those two instances from becoming the reddit of 20 years from now? We’ve seen what concentration of power in a few internet companies has done to the Internet itself, why retread the same steps?

              Again I may be very naive, but I think with the big idea that is federation, what is sorely lacking is a robust federated moderation protocol.

              • Shiri Bailem@foggyminds.com · 1 point · 1 year ago

                @mudeth I 110% agree with faeranne, especially in that this is much like the topic of encryption, where people (especially politicians) keep arguing that we just need to magically come up with a solution that lets governments access all encrypted communication without impacting security, while preventing people from using existing encryption to bypass it completely. It’s much like trying to legislate math into functioning differently.

                The closest you can get to a federated moderation protocol is basically just a standard way to report posts/users to admins.

                You could absolutely build blocklists that are shared around, but that’s already a thing and will never be universal.

                Basically what you’re describing is that someone should come up with a way to *force* me to apply moderation actions to my server that I disagree with. That somehow such a system would be immune to abuse (ie. because it’s external to my server, it would magically avoid hackers and trolls manipulating it) and that I would have no choice in whether or not to allow that access despite running a server based on open source software in which I can edit the code myself if I wish (but somehow in this case wouldn’t be able to edit it to prevent the external moderation from working).

                You largely miss the point of my other arguments: email is a perfect reference point because, private vs. public aside, it faces all the same technical, social, and legal challenges. It’s just an older system with a slightly different purpose (that doesn’t change its technical foundations, only how it’s interacted with), but it is the closest relative to ActivityPub with much, much larger-scale adoption. These issues and topics have already been discussed ad nauseam there.

                And I didn’t say users would moderate themselves; we decide what is worth taking action on. If you’re not an admin, you choose whether or not something is worth reporting and whether or not you find the server you’re on acceptable for your wants/needs. If you take issue with anti-vaxxers, climate change deniers, and Nazis and your server allows all of that (either on the server itself, or by having no issue with other servers that allow it)… then you move to a server that doesn’t.

                Finally, this doesn’t end in centralization because of all the aforementioned gray areas. There are many things that I don’t consider acceptable on my server but aren’t grounds for defederation.

                For example: I won’t tolerate the ignoring of minority voices on topics of cultural appropriation and microaggressions… but I don’t consider that good grounds to defederate other servers, because the admins themselves often barely understand the issue and I would be defederating 90% of the fediverse at that point. If I see such behavior from my users I will talk to them and take action as appropriate, but on other servers I’ll report it if the server looks remotely receptive.

    • Penguinblue@kbin.social · 14 points · 1 year ago

      This is exactly what I thought. The story here is that the human race has a massive child abuse material problem.

      • jarfil@beehaw.org · 1 point · 1 year ago

        The problem is even bigger: in some places (ahem, Reddit) you will get deplatformed for explaining and documenting why there is a problem.

        (even here, I’ll censor myself, and hopefully restrict to content not too hard to moderate)

    • jarfil@beehaw.org · 2 points · 1 year ago

      The internet itself is a network with major CSAM problems

      Is it, though?

      Over the last year, I’ve seen several reports on TV of IRL group abuse of children, by other children… which left everyone scratching their heads as to what to do, since none of the perpetrators can be held criminally liable.

      During that same time, I’ve seen exactly 0 (zero) instances of CSAM on the Internet.

      Sounds to me like IRL has a major CSAM, and general sex abuse, problem.

  • Jordan Lund@lemmy.one · 47 points · 1 year ago

    “massive child abuse material problem”

    “112 instances of known CSAM across 325,000 posts”

    While any instance is unacceptable, does 112/325,000 constitute a “massive problem”?

    0.034462% of posts are unacceptable! Massive problem!

    • ParsnipWitch@feddit.de · 9 points · 1 year ago

      That’s just the material they knew was CSAM from previous investigations.

      There were also 713 uses of the top 20 CSAM-related hashtags across the Fediverse on posts that contained media, as well as 1,217 text-only posts that pointed to “off-site CSAM trading or grooming of minors.” The study notes that the open posting of CSAM is “disturbingly prevalent.”

  • Sphere@reddthat.com · 41 points · 1 year ago

    So instances that are actually supporting CSAM can and should be dealt with by law enforcement. That much is simple (and I’m surprised it hasn’t been done with certain … instances, to be honest). But I think the apparently less clearly solved issues have known and working solutions that apply to other parts of the web as well. No content moderation is perfect, but in general, if admins are acting in good faith, I don’t think there should be too much of a problem:

    • For when federation inadvertently spreads some of the material through to other instances’ databases: isn’t this the same situation as when ISPs used to cache web traffic to save on bandwidth costs? In that situation, too, browsed web pages would end up in the ISP’s cache, which could then harbour whatever material the user was looking at. As I recall, the ISPs would just ban CSAM and other illegal material in their terms of service and remove anyone reported as violating the rule, and that sufficed.
    • As for “bad” instances/users: it’s impossible to block all instances and all users that might disseminate this material, as you’d have to go to a “block everything, then allow known entities” rule, which would break the Fediverse model. Again, users or site admins found to be acting in bad faith should be blocked and reported (either automatically or manually). Some may slip through the net, but as long as admins are seen to be doing the best they can, that should be enough.

    There seem to be concerns about “surveillance” of material on Mastodon, which strikes me as a bit odd. Mastodon isn’t a private platform. People who want private messaging should use an E2EE messaging app like Signal, not a social networking platform like Mastodon (or Twitter, Threads etc.). Mastodon data is already public and is likely already being surveilled, and will be so regardless of what anyone involved with the network wants, because there’s no access control on it anyway. Having Mastodon itself contain code to keep the network clean, even if it only applies to part of the network, just allows those Mastodon admins who are running that part of the code to take some of the responsibility on themselves for doing so, reducing the temptation for third parties to do it for them.

  • lohrun@fediverse.boo · 41 points · 1 year ago

    One of the problems with the fediverse is that each server keeps its own copy of the content. It is definitely a worry that bad actors could push content to federated servers to get them taken down due to the content they are now storing.

  • teawrecks@sopuli.xyz · 34 points · 1 year ago

    I for one am all for instances being forcibly taken down by police if they can’t moderate CSAM appropriately.

    Moderation is a very real challenge. The internet at large aimed to solve it by centralizing everything into a few megacorps with AI moderation. The fediverse aims to solve it by keeping instances small and holding both mods and users accountable.

  • Cylinsier@beehaw.org · 34 points · 1 year ago

    The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

    I agree, but who’s going to pay for it? Those aren’t just freely available additions to any application that you only need to toggle on.

    • abhibeckert@beehaw.org · 13 points · 1 year ago

      I agree, but who’s going to pay for it?

      How about the police/the taxpayer?

      If university researchers can find the stuff, then police can find it too. There should be an established way to flag the user (or even the entire instance) so that content can be removed from the fediverse while simultaneously asking for all data that is available to try to catch the criminals.
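
      For what it’s worth, ActivityPub already defines a Flag activity, which Mastodon uses to forward reports between instances; something along these lines (the field values are illustrative, not an exact Mastodon payload) could carry such a flag:

```python
# Rough shape of a federated report; exact fields vary by implementation.
flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://reporting.example/actor",              # instance or user filing the report
    "object": [
        "https://remote.example/users/offender",             # reported account
        "https://remote.example/users/offender/statuses/1",  # offending post
    ],
    "content": "Illegal material; please remove and preserve evidence for law enforcement.",
}
```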

      And of course, if regular users come across anything illegal they will report it too, and it should be removed quickly (I’d hope immediately in many cases, especially if the post was by a brand new/untrusted account).

      • swnt@feddit.de · 8 points · 1 year ago

        A decentralised platform like the Fediverse won’t easily work with nation states and their taxes. Even Wikipedia today isn’t funded directly by any government, but rather by certain universities giving some money to it plus all the private donors.

        And even if we got that working, power politics would mess it up, as so often happens when things actually get troublesome.

        It might be interesting to explore cryptocurrencies for donations here, though. They do have international liquidity and they can’t be misused for power politics.

        • abhibeckert@beehaw.org · 4 points · 1 year ago

          I’m not suggesting Beehaw/etc should be government funded. Rather I’m suggesting it’s already possible for basically anyone in the fediverse to report a post as needing urgent moderator attention.

          I think there will be taxpayer-funded efforts, donation-funded efforts, volunteers, etc. that are unaffiliated with any specific instance but go through major instances and hit the report button where they consider it appropriate — not just manually with people, but also with automated tools such as searching for images by a hash of their contents, or maybe even running messages through a large language model to check whether they are, for example, a form of targeted harassment.

          And yes, the report feature will be abused. That’s unavoidable and needs to be taken into account when deciding how to respond to a report. An algorithm could easily prioritise reports based on the history of past reports made by the same person / organisation.
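
          A toy version of that prioritisation idea, with an invented scoring formula (weight each report by the reporter’s track record so serial false-reporters sink to the bottom of the queue):

```python
from dataclasses import dataclass


@dataclass
class ReporterHistory:
    upheld: int    # past reports that moderators confirmed
    rejected: int  # past reports that moderators dismissed


def report_priority(history: ReporterHistory, severity: float) -> float:
    """Higher score = look at this report sooner.

    Laplace-smoothed accuracy keeps brand-new reporters at a neutral 0.5
    instead of snapping to 0 or 1.
    """
    accuracy = (history.upheld + 1) / (history.upheld + history.rejected + 2)
    return severity * accuracy


# Example: a "high severity" report from someone whose reports are usually
# dismissed ranks below a medium-severity report from a reliable reporter.
assert report_priority(ReporterHistory(0, 20), 1.0) < report_priority(ReporterHistory(15, 1), 0.6)
```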

          Stack Exchange has a pretty good system - decisions by individuals are not trusted. Rather those trigger a review by a randomly selected (and trusted) individual to get a second opinion. And even after a decision has been made and an action has been taken (ban a user, etc) there’s often a third or even fourth review. And there are processes to appeal and question decisions.

          It’s not an easy problem to solve, but as the creator of Mastodon said, many hands make light work. The fediverse may some day have a billion people doing moderation tasks, where even simple acts like hitting the upvote button become part of the moderation system (an upvote would imply that this account holder tends to make valuable contributions to the community, and should make the moderation system less likely to come down with the ban hammer).

          And I also think there is scope for some communities to be entirely government-funded. For example, I’d love for every city in the world to run an official community, with official local government announcements as well as moderated discussions relevant to people who live in or are visiting the city.

          • swnt@feddit.de · 1 point · 1 year ago

            That’s exactly the part I doubt: tax-paid funding. I don’t think this will happen unless we actually come together and try to enforce it at the political level in various countries.

            After all, open source software has been an essential and critical foundation for decades, but I’m not sure whether any government has pledged to put a certain amount of money per year into the development and funding of such general-purpose software. (Maybe I’m wrong though.)

            Before the fediverse can get any public funding, we need to make some political effort. The UN is the largest such institution, and it took the fiasco of two world wars to get many countries to pledge annual contributions to it…

            • abhibeckert@beehaw.org · 3 points · 1 year ago

              I’m not sure whether any government has pledged to put a certain amount of money per year into the development and funding of such general-purpose software

              Tor (The Onion Router) was originally developed at the US Naval Research Laboratory. The code was released as open source, the EFF funded its continued development for a time, and several US government agencies have provided substantial funding since (especially the Bureau of Democracy, Human Rights and Labor). As far as I know they continue to fund it.

              There are definitely examples of Governments funding open source software, especially things that are as valuable as a social network.

            • abhibeckert@beehaw.org · 2 points · 1 year ago

              That’s exactly the part I doubt: tax-paid funding. I don’t think this will happen unless we actually come together and try to enforce it at the political level in various countries.

              I meant private donations, which are already happening.

              I think tax revenue would be spent on government employees looking over content in search of evidence of crimes/etc, which I’m sure is also already happening. I hope they don’t just look - they should be reporting whatever they find.

    • pineapplelover@infosec.pub · 12 points · 1 year ago

      One way to do this is to block known hashes. This is a slippery slope, though, because it could be used maliciously. The only way to do this while protecting freedom of information is to make it fully open source.

      • Scrubbles@poptalk.scrubbles.tech · 8 points · 1 year ago

        Block hash lists, then? Something like a community-driven hash list for CSAM could work: if the majority of federated instances report an item as that type, it gets added to the list. Instances could then choose which lists they want to block.

        …instances could also show which lists they subscribe to, so their users could see what sort of moderation they choose.
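
        A sketch of that threshold idea (the data model is hypothetical; in practice the entries would be perceptual hashes and each report would need to be authenticated as coming from a real instance):

```python
from collections import defaultdict

REQUIRED_REPORTS = 3  # how many independent instances must agree before listing

# media hash -> set of instance domains that reported it
_reports: dict[str, set[str]] = defaultdict(set)
shared_blocklist: set[str] = set()


def report_hash(media_hash: str, reporting_instance: str) -> None:
    """Record one instance's report; add to the shared list once enough agree."""
    _reports[media_hash].add(reporting_instance)
    if len(_reports[media_hash]) >= REQUIRED_REPORTS:
        shared_blocklist.add(media_hash)


def should_reject(media_hash: str, subscribed_lists: list[set[str]]) -> bool:
    """Each instance chooses which lists to subscribe to, as suggested above."""
    return any(media_hash in lst for lst in subscribed_lists)
```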

        • glorbo@lemmy.one · 7 points · 1 year ago

          So the standard approach to this is so-called “perceptual hashing.” Cryptographic hashes (SHA-256, etc.) don’t really work well in this case: given a piece of illegal content, that content is likely to still be just as illegal with a single pixel changed, yet it will have a completely different cryptographic hash. So instead you use a hash function that measures how “similar-looking” two images are, ignoring things like dimensions, color palette, JPEG compression artifacts, etc. This is obviously way fuzzier, and is prone to both false positives and false negatives.

          Because all this is inherently kinda fuzzy, the exact database of hashes is usually “secret sauce” if you will. If it were public, it would be super easy to circumvent. As an example, given an illegal image:

          1. Is the image’s hash in the DB?
          2. No? All done, you can post it with impunity.
          3. Yes? Change one random pixel, GOTO 1.

          As a result even “public” databases are distributed with NDAs etc. This obviously does not jive well with an open source, federated network like Mastodon, and I have my doubts as to how willing the relevant agencies would be to give their databases to every rando with $5 to spin up a Pleroma instance on a VPS. A public DB might help in some cases, but unfortunately more illegal content is produced every day, and so it would be extremely hard to keep up with the bad actors.

        • BarbecueCowboy@kbin.social · 6 points · 1 year ago

          This is kind of problematic… By creating a community-driven hash list that is freely shared, you’ve also kind of created an index of CSAM content that could easily be extrapolated by people actively looking to find/share that content.

            • sociablefish@lemm.ee · 3 points · 1 year ago

              Only if they are cryptographic hashes (the kind of hash functions that back BTC, LTC, and other cryptos), as those are irreversible*.

              *I won’t explain; use the internet in your pocket.

            • BarbecueCowboy@kbin.social · 2 points · 1 year ago

              Super useful, actually: it’s very similar to how magnet links for torrenting work. I know of a few less popular file-sharing services that can act on and search for files based on the hash alone.

              A lot of other areas online already make use of hashes as identifiers, too. If you search for the hash of a file you’ve downloaded, just the hash and nothing else, there’s a very good chance you’ll get multiple results.

      • IronKrill@lemmy.ca · 2 points · 1 year ago

        Image hashes? That could work. It could be a simple system like uBlock where you import filter lists to your instance and they’re easy to disable if their caretakers fill them with garbage data.

    • zephyrvs@lemmy.ml · 8 points · 1 year ago

      The researchers can’t be taken seriously if they don’t acknowledge that you can’t force free software to do something its operators don’t want it to do.

      Even if we started way down at the stack and we added a CSAM hash scanner to the Linux kernel, people would just fork the kernel and use their own build without it.

      Same goes for nginx or any other web server or web proxy. Same goes for Tor. Same goes for Mastodon or any other Fedi/ActivityPub implementation.

      It. Does. Not*. Work.

      * Please, prove me wrong, I’m not all knowing, but short of total surveillance, I see no technical solution to this.

  • zygo_histo_morpheus@programming.dev · 31 points · 1 year ago

    Is there any way Mastodon stands out from other self-hosted websites? Would the CSAM be harder to distribute or easier to prosecute if they ran, say, a self-hosted bulletin board instead?

    • Big P@feddit.uk · 7 points · 1 year ago

      Probably just the ease with which you can find it: since each instance is linked, it basically becomes a search engine that might not have the same controls/protections as Google etc.

    • ParsnipWitch@feddit.de · 1 point · 1 year ago

      Privately hosted websites are only useful for reaching established clients. Via social media and image-sharing platforms, the distributors try to reach new clientele. They often use more or less hidden tags and codes to attract potential customers, and when someone reacts to these they carefully check whether the person can be trusted with access to private sharing. It’s how online drug dealers, and sometimes extremist groups, work.

  • deCorp0@lemmy.dbzer0.com · 28 points · 1 year ago

    Hi, since Mastodon is no longer acceptable due to the roughly 0.03 percent of posts found to contain abusive material, would someone please suggest the alternative social network with 0 percent of these incidents? Companies like Facebook and Twitter are driven by shareholders and greed. Mastodon is a community effort, and you’ll certainly find bad actors there, but I feel less dirty contributing to a community project than helping billionaires like Zuck and Elon line their pockets by harvesting my data.

  • Bendavisunlv6@lemmynsfw.com · 21 points · 1 year ago

    This is one of the things I don’t like about the whole Twitter format: there’s no moderator layer. Every Lemmy community must be created by a moderator, and that mod can be held accountable.

    There isn’t even a concept of communities on Twitter / Mastodon. Hashtags? Nobody owns monitoring them, and they can be freely improvised at will. It really is just the instance and its zillion users, with nothing in between. Imagine a Lemmy instance admin being responsible for all the moderation… it would never work.

  • FIash Mob #5678@beehaw.org · 19 points · 1 year ago

    Mastodon.art doesn’t.

    And the beauty of Mastodon is you can block an entire instance, as can your admin, when something awful is posted. Mastodon even has a hashtag they use as an alert for this kind of thing. (#Fediblock)

  • eskimofry@lemmy.one · 16 points · 1 year ago

    I don’t trust Stanford not to be working on behalf of the CIA or other three-letter agencies. They kind of turn a blind eye to CSA in churches, but federated media? This sounds like a smear job.

    • shinjiikarus@mylem.eu · 3 points · 1 year ago

      Total tangent, but we kid ourselves if we think the fediverse is somehow censorship-immune in comparison to Reddit or Twitter.

      There are more moderators and administrators across all instances, who can federate/defederate at will and can delete posts and propagate those deletions through the network. At the same time, governments don’t need to negotiate with a large company; they only need to hint that they could destroy one person’s livelihood to remove undesirable content from the network. And to avoid the Streisand effect, instead of requesting the deletion of one specific piece of subversive content (which could backfire), just insinuate that some illegal material (CSAM being the most obvious, but anything goes, really) has been found, to force a shutdown or takeover of the whole instance.

      The same goes for big companies instead of governments: if a large corporation launched its own Mastodon clone, the first thing it would reasonably fund are smear pieces by “journalists” and/or “scientists” hinting at harm to befall server owners who continue to host Mastodon instances.

      I personally hate what crypto has become (if I wanted to destroy crypto, I’d have invented crypto bros as a psy-op), but the fediverse isn’t really federated enough to be resistant to influence by corporations and governments, and something blockchain-adjacent could have been the solution. For example: if the server admin and their hoster were totally unable to decrypt whatever is stored on their own server, and the network as a whole distributed all the content probabilistically across every federated server, the network would only get stronger and more censorship-resistant with each new instance. If the government forces you to take down your server for any reason, your content is not gone but stored with all the other nodes. If you are able to retrieve your key, you could even move to a new instance and authenticate as your old instance (don’t forget: you are not “sending” BTC from one wallet to another, you are only telling as many nodes as sensible that the BTC on the chain belongs to a new key now; taking down one node with a “wallet” doesn’t change which wallet the BTC on the chain belongs to. I propose the same, just with content).

      If federation between instances worked in a way comparable to now, this would additionally increase the odds of rooting out bad-faith actors trying to flood the whole network with illegal content, since their content would be stored on far fewer nodes in a pseudo-predictable way: as soon as each major instance defederated them, their content would no longer be stored on those nodes or on unfederated third-party nodes.

  • alyaza [they/she]@beehaw.org · 16 points · 1 year ago

    not surprised at all. this is a growing pain here too because this was previously a thing handled invisibly by platforms and federation makes it fall to individual sysadmins and whoever they have on staff. the tools for this stuff are, in general, not here yet–and as people have noted there are potential conflicts with some of the principles of federation introduced by those tools that can’t be totally handwaved.