A new paper from Apple's artificial intelligence scientists has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills.
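(For context, the paper's core test is to build templated grade-school math problems and vary only surface details such as names and numbers; if accuracy swings across variants, that points to pattern matching rather than reasoning. Below is a minimal sketch of that perturbation idea; the template, names, and number ranges are invented for illustration and are not taken from the paper.)

```python
import random

# Keep the reasoning structure of a word problem fixed while varying
# surface details (names, quantities). A model that truly reasons
# should score the same on every variant.
TEMPLATE = ("{name} picks {k} kiwis on Friday and twice as many on "
            "Saturday. How many kiwis does {name} have in total?")

NAMES = ["Sophie", "Liam", "Mei", "Omar"]  # illustrative placeholders

def make_variants(n, seed=0):
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        name = rng.choice(NAMES)
        k = rng.randint(5, 60)
        question = TEMPLATE.format(name=name, k=k)
        answer = k + 2 * k  # ground truth follows from the template
        variants.append((question, answer))
    return variants

for q, a in make_variants(3):
    print(q, "->", a)
```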
AI in general is a shitty term. It’s mostly PR. The term “intelligence” is very fuzzy and difficult to define, especially for people who are not in the field of machine learning.
I still think it’s better to refer to LLMs as “stochastic lexical indexes” than as AI.
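To make the “stochastic” part of that label concrete, here is a toy next-token step: softmax over logits, then a weighted random draw. The three-word vocabulary and the logits are made up for illustration; real models do this over tens of thousands of tokens.

```python
import math
import random

# Toy next-token step: generation is a sample from a probability
# distribution over vocabulary items, which is what makes
# "stochastic" an apt description.
vocab = ["cat", "dog", "fish"]   # invented toy vocabulary
logits = [2.0, 1.0, 0.1]         # invented toy scores

def sample_next(logits, vocab, temperature=1.0, rng=random):
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(vocab, weights=probs, k=1)[0]

print([sample_next(logits, vocab) for _ in range(5)])
```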
> AI in general is a shitty term. It’s mostly PR. The term “intelligence” is very fuzzy and difficult to define, especially for people who are not in the field of machine learning.
So for those in ML it’s easier?