• sucrerey@lemmy.world
    2 days ago

    Weird question: if this worked, couldn't the same dataset be used to create a very skillful AI cybergroomer chatbot if it fell into the wrong hands?

  • General_Effort@lemmy.world
    4 days ago

    I guess most people don’t get how terrifyingly dystopian this is.

    In the EU, there is a serious push to make this mandatory.

    • simple@lemm.ee
      5 days ago

      If this is implemented right, it should flag accounts so human reviewers can follow up, not take action on its own.

      • Inucune@lemmy.world
        5 days ago

        Even so, the ‘flag’ could be damning enough for some people to take action. We’re in cultural ‘guilty until proven innocent’ territory, where a mere accusation ruins lives.