Most parents either don’t know how to do that, or don’t care enough to stop their kids from using social media, despite how harmful it is for society as a whole, and especially children. And since all their friends are on social media, a child can credibly argue that they need it to maintain their social life. If social media is banned for under-16s, then children would have to communicate with normal chat apps. Also, I know from experience that parental controls can easily be bypassed by a dedicated child.
A properly implemented age gate (as in zero-knowledge-proof verification that gives the service no information other than the age category) is a good thing in my opinion, because at some point some systemic problems are better served by systemic solutions. We don’t let parents decide whether their kids should smoke or drink alcohol either.
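To make the "no information other than the age category" property concrete, here is a toy sketch. Everything in it is hypothetical (the issuer, the key, the token format), and it is not an actual zero-knowledge proof; a real system would use an anonymous-credential/ZKP scheme. It only illustrates what the service does and does not get to see:

```python
# Toy sketch of the information-minimisation property a proper age gate
# should have: the service learns ONLY the age category, never the
# birthdate or identity. NOT a real ZKP -- a hypothetical trusted issuer
# simply attests to the category here.
import hmac
import hashlib
from datetime import date

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical issuer signing key

def issue_age_token(birthdate: date, today: date) -> dict:
    """The issuer (e.g. a government ID service) sees the birthdate,
    but the token it returns carries only the age category."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    category = "16_or_over" if age >= 16 else "under_16"
    sig = hmac.new(ISSUER_KEY, category.encode(), hashlib.sha256).hexdigest()
    return {"category": category, "sig": sig}  # no birthdate, no identity

def service_verifies(token: dict) -> str:
    """The social-media service checks the attestation and learns
    nothing beyond the age category."""
    expected = hmac.new(ISSUER_KEY, token["category"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        raise ValueError("invalid token")
    return token["category"]

token = issue_age_token(date(2012, 5, 1), date(2025, 1, 1))
print(service_verifies(token))  # -> under_16
```

A real deployment would use public-key signatures (so the service can verify without being able to forge tokens) plus a ZKP so that even the issuer can’t link a token to a particular service visit; the shared HMAC key here only keeps the sketch short.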
despite how harmful it is for society as a whole, and especially children
If you don’t understand that the motivation is to target kids with ads and influencer content designed to push products, you’re not going to solve anything. Kids have to have spaces to communicate with each other in order to develop healthy socialization skills. Locking them in a proverbial box is not healthy, and guess what, we killed off 99% of third spaces that welcome kids.
If social media is banned for under-16s, then children would have to communicate with normal chat apps.
I feel like you are envisioning “chat apps” to mean “text-only”, but chat apps have been multimedia/multi-modal and multi-user (i.e. not 1:1 messaging) for a long time now, and can be just as easily infiltrated by the same actors targeting kids on social media.
at some point some systemic problems are better served by systemic solutions
This is not a solution; it’s a band-aid that doesn’t attack the root cause whatsoever.
If you don’t understand that the motivation is to target kids with ads and influencer content designed to push products, you’re not going to solve anything.
I 100% agree that social media companies are incentivized to make their platforms toxic to children, and that people on those platforms are motivated to exploit them, but immoral people existing is not the problem here.
Kids have to have spaces to communicate with each other in order to develop healthy socialization skills.
They don’t have to use social media for that, though: not only are third spaces alive and well in many places, including much of France (and where I live), but chat apps are 100% enough for kids to socialize.
I know that every teen where I live uses Instagram to communicate with friends, but only the group/1:1 chats and stories, which are available in chat apps. The main feed(s) never show you things from your friends anyway.
I feel like you are envisioning “chat apps” to mean “text-only”
No, when I think of chat apps I think of Signal and Discord. Both have a ton of social features; the difference is that there isn’t an algorithm that acts as a vector for harmful bullshit, or public profiles. Public rooms are still an issue, but from experience being a tween/teen on those platforms, it’s not even close to being as bad. Said bad actors do not exist in anywhere near the same capacity. IMO the harm of public chat rooms falls under the “parents can handle this” umbrella.
If it was the case that it was just individual actors on the platform causing the harm and not the structure of the platforms incentivizing said harm, then we would see more of this type of thing in real life as well.
I struggle to think of a more complete solution to the harm caused by social media to children than just banning them.
I may have misunderstood your thesis though, and in that case please correct me.
True. The profit motive is. People pushing harmful content are doing it because it makes them money, not because they’re twirling their moustaches as they relish their evil deeds. You remove the profit motive, you remove the motivation to harm people for profit.
the difference is that there isn’t an algorithm that acts as a vector for harmful bullshit
The algorithms boost engagement according to 1) what people engage with, and 2) what companies assess to be appealing. Facebook took the lead in having the social media platform own the engagement algorithms, but the companies and people pushing the content can and do also have their own algorithmic targeting. Just as Joe Camel existed before social media and still got to kids (and not just on TV), harmful actors will find and join Discord servers. All that Facebook and Twitter did was handle the targeting for them, but it’s not like the targeting doesn’t exist without the platforms’ assistance.
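The two signals named above can be made concrete with a toy ranking sketch. All the post names, scores, and weights are invented for illustration; no real platform's ranking is being claimed here:

```python
# Toy illustration of the two ranking signals described above: score each
# post by (1) predicted user engagement and (2) how much a paying company
# values the placement. All field names and numbers are made up.
posts = [
    {"id": "friend_update", "predicted_engagement": 0.30, "sponsor_bid": 0.0},
    {"id": "influencer_ad", "predicted_engagement": 0.25, "sponsor_bid": 0.8},
    {"id": "outrage_bait",  "predicted_engagement": 0.90, "sponsor_bid": 0.0},
]

def rank(posts, engagement_weight=1.0, bid_weight=1.0):
    # Higher score = shown earlier in the feed.
    return sorted(
        posts,
        key=lambda p: (engagement_weight * p["predicted_engagement"]
                       + bid_weight * p["sponsor_bid"]),
        reverse=True,
    )

print([p["id"] for p in rank(posts)])
# -> ['influencer_ad', 'outrage_bait', 'friend_update']
```

Note that under these made-up weights the friend update lands last, which lines up with the observation earlier in the thread that the main feed rarely shows you your friends.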
Said bad actors do not exist in anywhere near the same capacity. IMO the harm of public chat rooms falls under the “parents can handle this” umbrella.
Public rooms are still an issue, but from experience being a tween/teen on those platforms, it’s not even close to being as bad.
It wasn’t as bad on those… back when we were teens. It absolutely is now. If anything, you’ll usually find that a lot of the most harmful groups (red-pill/manosphere, and body-image influencers, especially those built around inducing eating disorders) actually operate their own Discord servers that they steer/capture kids into. They make contact elsewhere, then get them into a more insular space where they can be more extreme and forceful in pushing their products, out of public view.
If it was the case that it was just individual actors on the platform causing the harm and not the structure of the platforms incentivizing said harm, then we would see more of this type of thing in real life as well.
I’m not saying it’s all individuals; I’m saying the opposite: it’s companies. Just not social media companies. Social media companies are the convenient access vector for the companies actually selling and pushing the harmful products and the corollary ideas that drive kids to them.
I struggle to think of a more complete solution to the harm caused by social media to children than just banning them.
Given that your immediate solution was to regulate kids instead of regulating companies, I don’t think you’re going to be interested in my solutions.
Cite sources, I think you’re lying to me
Which part do you think I’m lying about?