They wanted everyone to scan their faces to look at legal porn featuring consensual performers, but now that it’s synthetic CSAM featuring other people’s kids, eh, they’re not quite sure if intervention is really warranted?
Edit: Why, I’m starting to think it was never about “thinking of the children”, or maybe I took it entirely the wrong way and they meant something completely different than I’d assumed.
It’s never been about the children, but if you enact something “in the name of child safety”, you can shout down any opposition by screaming “child predator” at anyone who objects.
After all, anyone who opposes a child safety bill must be a predator, right?
Yuuuup. Ofcom wants to control you, not protect children.
Eh? It’s the exact same law that’s being used, with the exact same problems. The reason it hasn’t been done yet is that Twitter is not - or was not - a porn site, so it escaped scrutiny. But sites failing to comply with the Online Safety Act aren’t just deleted from the internet immediately; there’s an investigation first.
It is a website that hosts a shit ton of porn and even features a tool for creating porn. How is that not a porn site?
As far as I understand (going from what was reported on this in the last couple of days), Grok’s ability to create porn is recent, so that explains that.
I don’t use it, but my impression was that, Grok aside, the content is primarily not porn, which would make it not a porn site, surely.
Twitter has always hosted a considerable amount of porn; Grok being deliberately developed into a CSAM machine is just the most recent thing.
To be clear, Twitter (and now X) explicitly permits pornographic content by policy:
https://help.x.com/en/rules-and-policies/adult-content
It is a porn site. Not a site where people sometimes break the rules and post porn. It is a site that deliberately and intentionally hosts pornography.
Fair enough. But it also (I just checked) requires age verification like regular porn sites, so I don’t really get why the treatment of Twitter is being held up as some kind of double standard.
They ARE NOT being subjected to the same age verification process as regular porn sites. That is the entire thing that we are talking about here.
Do you mean that Twitter itself is not forcing all users to undergo ID verification, like for example Pornhub does?
Because that can be explained by the law not requiring a site that hosts adult content to go to those lengths, as long as such content isn’t shown to users whose age isn’t reliably known. If you think the law is being applied unfairly, maybe it would be worth being specific about which exact provision of the law is being applied to porn sites but not to Twitter.
Congratulations, you just caught up to where the rest of us started this discussion.
tbf that face scanning BS was 100% on the companies contracted out to do the verification, because the moronic law imposed absolutely no requirements on HOW a user’s age must be verified or how the data used to verify it should be stored.