I can’t see it happening tbh, but the US government did discuss putting restrictions on AI development, and I think OpenAI or some other companies actually asked them to!? And there were shorts/reels of high-profile developers hyping up the fact that “we don’t know what we’re doing”, and one of them quit his job. So what’s all that hype about? Is the “Matrix” route actually a possible future?
Months ago I would have said “yes, it’s possible”. Now, it’s become pretty clear LLMs are a dead end. They’re trained to simulate the internet and can’t do other things with any reliability.
It’s still possible with whatever the future approach to making computers smarter turns out to be, though. Natural intelligence exists, and we’re made of the same stuff as everything else, so artificial intelligence must also be possible. And, without the limits of recent evolution, it could probably be made far better than us.
Don’t forget that pandemics used to be a goofy sci-fi trope, too.
One thing that bothers me about high-level devs just leaving because they realized what they created is that their leaving means one more possible roadblock is gone. They’ll just be replaced with people who are more fresh-faced and on the hype train of going harder and harder. Lots of folks I know who are finishing college are leaning more and more into just using all of these AIs to solve problems instead of learning to code (or just write things) themselves. Some are still trying, and I support them in my little ways, but I can see how, much like a drug, things start small and can turn into using it all the time. Comp Sci majors were already getting worse in their actual understanding of how things work before LLMs (just look at all the software that will never be optimized and just relies on higher-spec PCs).
What’s funny to me is that such a robot takeover would mean all humans are (wage-)enslaved, rather than just the 99% of us like right now
We already have a ruler, the Money god, who is already enslaving many, killing others, and silencing dissent. I might actually prefer it if my ruler were some superintelligent logical being rather than a few male 60-year-olds hoping to book the next trip to some harem island that might or might not have minors on it, taken directly from the territories at war around the world
Of late, my biggest concern is certain parties feeding LLMs with a different version of history.
Search has become so shit of late that LLMs are often the better path to answering a question. But as everyone knows, they are only as good as what they’ve been trained on.
Do we, as a society, move past basic search to a preference for AI to answer our questions? If we do, how do we ensure that the history they feed the models is accurate?
This is absolutely one of the reasons they’re pushing this garbage so hard. It’s VERY easy to manipulate as a propaganda tool.
You can already see that most of these tools lean right because their userbase does — leftists don’t touch this garbage because of numerous ethical concerns as-is. Add more astroturfing on top of that, and now it’s just a straight-up automated fascist mouthpiece.