Of late, my biggest concern is certain parties feeding LLMs a different version of history.

Search has become so shit that LLMs are often the better path to answering a question. But as everyone knows, they are only as good as what they've been trained on.

Do we, as a society, move past basic search to a preference for AI to answer our questions? If we do, how do we ensure that the history fed to the models is accurate?