Engineer/Mathematician/Student. I’m not insane unless I’m in a schizoposting or distressing memes mood; I promise.

  • 0 Posts
  • 3 Comments
Joined 2 years ago
Cake day: July 28th, 2023

  • Valid point, though I’m surprised that Cyc was used for non-AI purposes since, in my very limited knowledge of the project, I thought the whole thing was based around the ability to reason and infer from an encyclopedic data set.

    Regardless, I suppose the original topic of this discussion is heading towards a prescriptivist vs descriptivist debate:

    Should the term Artificial Intelligence have the more literal meaning it held when it first was discussed, like by Turing or in the sci-fi of Isaac Asimov?

    OR

    Should the definition of Artificial Intelligence instead follow society’s use of the term, which applies it to advances in problem-solving tech in general and, most prevalently, to any neural network or learning algorithm?

    Should we shift the definition of a term to match popular use, regardless of its original intended meaning, or should we try to keep the meaning of the phrase specific/direct/literal and fight the natural shift in language?

    Personally, I prefer the latter, both because keeping the meaning as close to literal as possible makes the words clearer, and because the term AI is now thrown around so often as a buzzword for clicks or money, typically by people pushing lies about the capabilities or functionality of the systems they’re calling AI.

    I also don’t view fondly the lumping together of models trained by scientists to solve novel problems with the models using the energy of a small country to plagiarize artwork. I’ve seen people assume the two are one and the same, despite the fact that one has redeeming qualities and the other is mostly bullshit.

    However, it seems that many others are fine with, or in support of, a descriptivist definition, where words mean whatever they are used to mean, even if that goes beyond their original intent or definitions.

    To each their own, I suppose. These preferences are opinions, so there really isn’t an objectively right or wrong answer to this debate.


  • The term “artificial intelligence” is supposed to refer to a computer simulating the actions/behavior of a human.

    LLMs can mimic human communication and therefore fit the AI definition.

    Generative AI for images is a much looser fit, but it still fulfills a purpose that, until recently, most of us thought only humans could do, so some people think it counts as AI.

    However, some of the earliest AIs in computer programs were just NPCs in video games, looong before deep learning became a widespread thing.

    Enemies in video games (typically referring to the algorithms used for their pathfinding) are AI whether they use neural networks or not.
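
    (Not part of the original comment, just a minimal sketch of what that kind of non-neural enemy “AI” looks like, assuming a simple grid map; the grid, names, and coordinates below are made up for illustration.)

    ```python
    from collections import deque

    # Classic game-style pathfinding: no neural network, just search.
    # 0 = walkable, 1 = wall; cells are (row, col). Everything here is illustrative.
    GRID = [
        [0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0],
    ]

    def find_path(grid, start, goal):
        """Breadth-first search; returns the list of cells from start to goal, or None."""
        queue = deque([start])
        came_from = {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:          # walk predecessors back to the start
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                        and grid[nr][nc] == 0 and nxt not in came_from):
                    came_from[nxt] = cell
                    queue.append(nxt)
        return None                              # no route to the player

    # The "enemy" at (0, 0) chases the "player" at (3, 3): no learning involved.
    print(find_path(GRID, (0, 0), (3, 3)))
    ```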

    Deep learning neural networks are predictive mathematical models that can be tuned from data, like in linear regression. This, in itself, is not AI.
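
    (A rough illustration of that “tuned from data” point; the data, learning rate, and step count below are made up. Fitting a one-parameter linear model by gradient descent is the same curve-fitting recipe a deep network scales up.)

    ```python
    # Fit y ≈ w * x by gradient descent on squared error: the same
    # "adjust parameters to fit data" idea that deep learning scales up.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 8.1]          # roughly y = 2x with a little noise

    w = 0.0                             # the single trainable parameter
    lr = 0.01                           # learning rate

    for _ in range(500):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad

    print(round(w, 2))                  # ~2.03: the model has been "tuned from data"
    ```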

    Transformers are a special structure that can be implemented in a neural network to emphasize or attenuate certain inputs. (This is how ChatGPT can act like it has object permanence or any sort of memory when it doesn’t.) Again, this kind of predictive model is no more AI than using Simpson’s Rule to calculate a missing coordinate in a dataset would be.
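
    (For anyone curious, here is a rough sketch of the attention idea being described: each position’s output is a softmax-weighted mix of the other positions, so low-scoring inputs contribute very little. The shapes and numbers are arbitrary, not taken from any real model.)

    ```python
    import numpy as np

    def attention(Q, K, V):
        """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
        Each output row is a weighted mix of the rows of V, so inputs
        with low scores are effectively attenuated."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                     # how much each query "cares about" each key
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
        return weights @ V

    # Toy example: 3 "tokens" with 4-dimensional vectors (arbitrary random numbers).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 4))
    print(attention(X, X, X).shape)                       # (3, 4): each row re-weighted by attention
    ```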

    Neural networks can be used to mimic human actions, and when they do, that fits the definition. But the techniques and math behind the models are not AI.

    The only people who refer to non-AI things as AI are people who don’t know what they’re talking about, or people using it as a buzzword for financial gain (in the case of most corporate executives and tech-bros, it’s both).