• Shanmugha@lemmy.world · 9 days ago

I’ll take the bait. Let’s think:

• there are three humans who are 98% right about what they say, and where they know they might be wrong, they say so

• now there is an llm (fuck capitalization, I hate how much they are shoved everywhere) trained on their output

• now the llm is asked about the topic and computes an answer string

By construction, that answer string can contain the probably-wrong claims without the hedges the humans would have attached (“might”, “under such and such circumstances”, etc.)

If you want to say that a 40%-wrong llm implies 40%-wrong sources, prove me wrong
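
Rough numbers, purely as a back-of-the-envelope sketch (my own illustrative assumption, not a claim about how any real llm actually works): suppose every individual statement in an answer is right at the sources’ 98% rate, and the hedges get dropped. The share of whole answers with at least one unflagged error compounds fast:

```python
# Toy sketch, illustrative only: assume each factual statement in an answer
# is independently correct with the sources' 98% rate, and the generated
# text drops the hedges the humans would have attached to the shaky 2%.
p_statement_right = 0.98

for k in (5, 10, 25, 50):  # number of factual statements packed into one answer
    p_answer_clean = p_statement_right ** k
    print(f"{k:2d} statements -> {1 - p_answer_clean:.0%} of answers contain at least one error")
```

At roughly 25 statements per answer that already lands around 40% of answers containing an error, even though no individual source was anywhere near 40% wrong.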