I’m being serious. I think that if instead of Trump there was just a prompt “engineer,” the country would actually run better. Even if you trained it to be far right.

And this is not a praise of LLMs…

  • BertramDitore@lemm.ee · 15 hours ago

    I highly doubt that, for so many reasons. Here are just a few:

    • What data would you train it on, the Constitution? The entirety of federal law? How would that work? Knowing how ridiculous textualism is even when done by humans, do you really think a non-thinking algorithm could understand the intention behind the words? Or even what laws, rules, or norms should be respected in each unique situation?
    • We don’t know why LLMs return the responses they return. This would be hugely problematic for understanding its directions.
    • If an LLM doesn’t know an answer, instead of saying so it will usually just make something up. Plenty of people do this too, but I’m not sure why we should trust an algorithm’s hallucinations over a human’s bullshit.
    • How would you ensure the integrity of the prompt engineer’s prompts? Would there be oversight? Could the LLM’s “decisions” be reversed?
    • How could you hold an LLM accountable for the inevitable harm it causes? People will undoubtedly die for one reason or another based on the LLM’s “decisions.” Would you delete the model? Retrain it? How would you prevent it from making the same mistake again?

    I don’t mean this as an attack on you, but I think you trust the implementation of LLMs way more than they deserve. These are unfinished products. They have some limited potential, but should by no means have any power or control over our lives. Have they really shown you they should be trusted with this kind of power?

    • Slippery_Snake874@sopuli.xyz (edited) · 10 hours ago

      We don’t know why LLMs return the responses they return.

      While I agree with most of your points, this is a strange thing to say. Sure, you and I don’t know why LLMs return the responses they do, but the people who actually make them definitely know how they work.

      • BertramDitore@lemm.ee · 10 hours ago

        It isn’t just you and me. Not even the people who designed them fully understand why they give the responses they give. It’s a well-known problem. Our understanding is definitely improving over time, but we still don’t fully know how they do it.

        Here’s the latest exploration of this topic I could find.

        LLMs continue to be one of the least understood mass-market technologies ever

        Tracing even a single response takes hours and there’s still a lot of figuring out left to do.

        • Slippery_Snake874@sopuli.xyz · 9 hours ago

          Hmm. That is interesting, and I admit it does seem like the company that made it is also still researching their own model, but some parts of the article seem a bit dramatic (not sure if there is a better word).

          Like when it says the model doesn’t “admit” to how it solved the math problem when asked. Of course it doesn’t: it is made for humans to interact with, so it is not going to tell a human how a computer does math; it makes more sense for it to explain the “human” method.

          Interesting stuff though, thanks for the article!

          • Rikudou_Sage@lemmings.world · 8 hours ago

            The wording is very sensationalist. It’s currently hard to trace any single output specifically, but that doesn’t mean we don’t know how it works. The algorithm and theory behind it are very well understood. It’s just that the algorithm is very complex (and uses very large amounts of data), which is what makes tracing a single response token by token so hard.

            When you read something claiming we don’t understand how AI works, you can safely skip that article; they intentionally use dishonest language to make it sound like something mystical is going on.
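            The scale involved can be sketched with some back-of-the-envelope arithmetic. This is a hedged illustration, not a precise measurement: it assumes a hypothetical GPT-3-scale model (~175 billion parameters) and the common rule of thumb of roughly 2 floating-point operations per parameter per generated token.

```python
# Rough arithmetic: why tracing one LLM response token by token is hard.
# Assumptions (illustrative, not exact):
#   - a hypothetical GPT-3-scale model with ~175B parameters
#   - ~2 floating-point operations per parameter per generated token
params = 175_000_000_000            # model weights
flops_per_token = 2 * params        # approximate forward-pass cost per token
response_tokens = 500               # a medium-length answer
total_flops = flops_per_token * response_tokens

print(f"~{total_flops:.2e} floating-point operations for one response")
```

            Every one of those operations contributes a tiny amount to the output, which is why the theory can be well understood while attributing any single response to specific weights remains laborious.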

    • MTK@lemmy.world (OP) · 14 hours ago

      I honestly don’t see how that is worse than Trump. Is it terrible? Yeah. Is Trump worse? Also yeah

    • themurphy@lemmy.ml · 14 hours ago

      You’d train it on data and laws from all other countries and take the best from each. Then make the AI put them together and let it evaluate the best possible country for its citizens + the economy.

      It would absolutely be better than Trump, but probably not as good as a real well-run country.

  • kvasir476@lemmy.world · 16 hours ago

    I don’t know if this is true, but people were saying that the tariff policy was what LLMs suggest when asked about how to reduce a trade deficit. So perhaps we are already being governed by ChatGPT.

  • palebluethought@lemmy.world · 16 hours ago

    DOGE is basically a bunch of 25-year-old know-nothings feeding budget spreadsheets and job descriptions to a chatbot and asking what to cut, so…

  • AbouBenAdhem@lemmy.world (edited) · 16 hours ago

    And by an LLM you mean the people who train and tune the LLM to generate the type of responses they like.