• madame_gaymes@programming.dev · 1 day ago

      Came here to say something similar about a local archive.

      You can also use the app Kiwix to make it a little easier to download and search (and to grab several other doc archives, like the Python PEPs and Wikipedia).
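
      For example, here's a minimal, dependency-free sketch of searching an already-downloaded archive offline. It assumes the docs have been exported to a local folder of HTML/text files (a hypothetical ~/doc-archive path); Kiwix's own .zim archives come with built-in full-text search, so treat this as a rough fallback rather than the real thing.

      ```python
      """Tiny offline search over a locally downloaded documentation folder.

      Dependency-free sketch only: it assumes the docs were already exported or
      extracted to ~/doc-archive as HTML/text files (a hypothetical path).
      """
      import pathlib
      import re
      import sys

      ARCHIVE = pathlib.Path.home() / "doc-archive"  # assumed location

      def search(term: str, root: pathlib.Path = ARCHIVE) -> None:
          pattern = re.compile(re.escape(term), re.IGNORECASE)
          for path in root.rglob("*"):
              if path.suffix.lower() not in {".html", ".htm", ".txt", ".md"}:
                  continue
              try:
                  text = path.read_text(errors="ignore")
              except OSError:
                  continue
              for line_no, line in enumerate(text.splitlines(), 1):
                  if pattern.search(line):
                      # File, line number, and a trimmed snippet of the match.
                      print(f"{path}:{line_no}: {line.strip()[:120]}")

      if __name__ == "__main__":
          search(" ".join(sys.argv[1:]) or "list comprehension")
      ```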

    • melroy@kbin.melroy.org · 2 days ago

      Bad news: AI can only answer what it already knows. If you have a legitimate question that isn't yet part of Stack Overflow, you get a bad AI response.

      In that case you could ask it on the Stack Overflow website. But because everybody now relies only on AI, Stack Overflow is dead. Well, there you go: you just killed the source of truth.

      • cecilkorik@lemmy.ca · 21 hours ago

        Which is eventually going to cause AI model collapse, since AI no longer has any source of truth to train on. This is such an interesting technology being used in such a stupid and irresponsible way.

        • melroy@kbin.melroy.org · 17 hours ago

          Exactly my point. So what you see now is AI generating AI content that then gets used for training, also known as synthetic data… I know, right?
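
          For intuition, here's a toy numerical sketch of that feedback loop. It has nothing to do with how real LLMs are trained (the "model" is just a Gaussian fit retrained on its own samples), but it shows how information gets lost when a model only ever sees its own output.

          ```python
          import random
          import statistics

          # Toy illustration of the feedback loop, NOT of real LLM training:
          # the "model" is just a Gaussian fit (mean + standard deviation)
          # retrained, each generation, only on samples from the previous fit.
          random.seed(0)

          # Generation 0: "human" data from a ground-truth distribution N(0, 1).
          data = [random.gauss(0.0, 1.0) for _ in range(200)]

          for generation in range(1, 26):
              mu = statistics.mean(data)
              sigma = statistics.stdev(data)
              print(f"gen {generation:2d}: mean={mu:+.3f} stdev={sigma:.3f}")
              # The next generation never sees the original data, only
              # synthetic samples from the previous generation's model.
              data = [random.gauss(mu, sigma) for _ in range(200)]

          # With a finite sample at every step the fitted spread drifts like a
          # random walk; run enough generations and it tends to collapse,
          # losing the tails of the original distribution.
          ```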

      • anotherandrew@mbin.mixdown.ca · 1 day ago

        I don’t know if it’s just my age/experience or some kind of innate “horse sense”, but I tend to do alright at detecting shit responses, whether they come from human trolls or an LLM that is lying through its virtual teeth. I don’t see that as bad news; I see it as understanding the limitations of the system. Perhaps with a reasonable prompt an LLM can be more honest about when it’s hallucinating?

        • mbtrhcs@feddit.org · 1 day ago

          > I don’t know if it’s just my age/experience or some kind of innate “horse sense”, but I tend to do alright at detecting shit responses, whether they come from human trolls or an LLM that is lying through its virtual teeth

          I’m not sure how you would do that if you are asking about something you don’t have expertise in yet, as it takes the exact same authoritative tone no matter whether the information is real.

          > Perhaps with a reasonable prompt an LLM can be more honest about when it’s hallucinating?

          So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as certainty or justifications for decisions, do not map closely to the LLM’s actual internal state.

          • anotherandrew@mbin.mixdown.ca · 11 hours ago

            > I’m not sure how you would do that if you are asking about something you don’t have expertise in yet, as it takes the exact same authoritative tone no matter whether the information is real.

            I agree; that’s why I’m chalking it up to some kind of healthy skepticism about trusting authoritative-sounding answers by themselves, e.g. “OK, that sounds plausible; let’s see if we can find supporting information on this answer elsewhere, or maybe ask the same question a different way to see if the new answer(s) seem to line up.”
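
            Something like this rough sketch: it assumes the official openai Python client and a placeholder model name (swap in whichever API and model you actually use), since the cross-checking pattern matters more than the particular stack.

            ```python
            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

            # The same underlying question, phrased three different ways.
            PHRASINGS = [
                "In Python, does list.sort() return the sorted list?",
                "What is the return value of list.sort() in Python?",
                "If I write `x = my_list.sort()` in Python, what ends up in x?",
            ]

            def ask(question: str) -> str:
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",  # placeholder, swap in your own model
                    messages=[{"role": "user", "content": question}],
                )
                return resp.choices[0].message.content.strip()

            # Naive string comparison is unreliable for free-form text, so print
            # the answers side by side and let a human judge if they agree.
            for question in PHRASINGS:
                print(f"Q: {question}\nA: {ask(question)}\n" + "-" * 60)
            ```

            If the differently-phrased answers contradict each other, that’s a strong hint to go check the actual documentation.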

            > So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as certainty or justifications for decisions, do not map closely to the LLM’s actual internal state.

            Interesting. I still see them largely as black boxes, so reading about how people smarter than me describe the processes is fascinating.

            • mbtrhcs@feddit.org · 21 minutes ago

              > let’s see if we can find supporting information on this answer elsewhere, or maybe ask the same question a different way to see if the new answer(s) seem to line up

              Yeah, that’s probably the best way to go about it, but it still requires some foundational knowledge on your part. For example, in a recent study I worked on, we found that programming students struggle hard when the LLM output is wrong and they don’t know enough to understand why. They then tend to trust the LLM anyway and end up prompting variations of the same thing over and over again to no avail. Other studies similarly found that while good students can work faster with AI, many others are actually worse off due to being misled.

              > I still see them largely as black boxes

              The crazy part is that they are, even for the researchers who came up with them. Sure, we can understand how the data flows from input to output, but realistically not a single person in the world could look at all the weights in an LLM and tell you what it has learned. Basically everything we know about their capabilities is based on just trying things out and seeing how well they work. Hell, even “prompt engineers” make a lot of their decisions based on vibes alone.