Nucleo’s investigation identified accounts with thousands of followers with illegal behavior that Meta’s security systems were unable to identify; after contact, the company acknowledged the problem and removed the accounts
Yes, at a cursory glance that’s true. AI generated images don’t involve the abuse of children, and that’s great. The problem is the follow-on effects. What’s to stop actual child abusers from just photoshopping a 6th finger onto their images and then claiming they’re AI generated?
AI image generation is getting absurdly good now, producing pictures nearly indistinguishable from real ones. By the end of the year I suspect they will be truly indistinguishable. When that happens, how do you tell which images are AI generated and which are real? How do you know who is peddling real CP and who isn’t if AI-generated CP is legal?
What’s the follow on effect from making generated images illegal?
Do you want your freedom to be at stake when the question before the jury is “How old is this image of a person (who doesn’t exist)?” or “Is this fake person TOO child-like?”
You won’t be able to tell; we can take that as a given.
So the real question is:
Who are you trying to arrest and put in jail, and how are you going to write that distinction into law so that innocent people are not harmed by the justice system?
To me, the evil people are the ones harming actual children. Trying to blur the line between them and people who generate images is a morally confused position.
There’s a clear distinction between the two groups and that distinction is that one group is harming people.
If pedophiles can’t tell what’s real and what’s AI generated, why risk jail to create the real ones?