I can’t see it happening tbh, but the US government discussed putting restrictions on AI development, and I think OpenAI or some other companies actually asked them to!? And there were shorts/reels of high-profile developers hyping up the fact that “we don’t know what we’re doing”, and one of them quit his job. So why all that hype? Is the “Matrix” route actually a possible future?
Current LLMs are just that: large language models. They’re incredible at predicting the next word, but that’s fundamentally all they do; anything else, like fact checking or playing chess, is a side effect of that prediction, not a capability they reliably have. The theoretical AI that “could take over the world” like Skynet is called “Artificial General Intelligence” (AGI). We’re nowhere close yet, so don’t believe OpenAI when they claim otherwise. This means the highest risk right now is a human deciding to put an LLM “in charge” of an important task where a mistake could cost lives.
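To make “predicting the next word” concrete, here’s a toy sketch. The bigram table below is completely made up for illustration; a real LLM does the same loop (pick the next token from a probability distribution, append, repeat) but with billions of learned parameters instead of a tiny lookup table.

```python
import random

# Hypothetical toy "language model": maps a word to possible next
# words with probabilities. Stands in for a real model's learned weights.
BIGRAMS = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.8), ("sat", 0.2)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def next_word(word, rng):
    """Sample the next word from the model's probability distribution."""
    choices, weights = zip(*BIGRAMS[word])
    return rng.choices(choices, weights=weights, k=1)[0]

def generate(start, max_words=5, seed=0):
    """Generate text one word at a time -- all a language model fundamentally does."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and words[-1] in BIGRAMS:
        words.append(next_word(words[-1], rng))
    return " ".join(words)

print(generate("the"))
```

Notice there’s no step anywhere in that loop for checking whether the output is *true*, which is why “fact checking” isn’t something the mechanism itself provides.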