Over the millennia, we have created security systems to deal with the sorts of mistakes humans commonly make.
But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do.
Much of the friction and risk associated with our use of AI arises from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes.
AI errors come at seemingly random times, without any clustering around particular topics. LLM mistakes tend to be more evenly distributed through the knowledge space: a model might be just as likely to make a mistake on a calculus question as to propose that cabbages eat goats.
And AI mistakes aren't accompanied by ignorance. An LLM will be just as confident when saying something completely wrong (and obviously so, to a human) as it will be when saying something true. The seemingly random inconsistency of LLMs makes it hard to trust their reasoning in complex, multi-step problems. If you want to use an AI model to help with a business problem, it's not enough to see that it understands what factors make a product profitable; you need to be sure it won't forget what money is.
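To make that concrete, here is a minimal sketch, assuming a local GPT-2 loaded through Hugging Face transformers as a stand-in for any LLM, and using average token log-likelihood as a crude proxy for the model's confidence. The score measures how fluent a sentence looks to the model, not whether it is true, so an absurd claim can score about as well as a correct one.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text):
    # Average per-token log-likelihood the model assigns to `text`.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(input_ids=ids, labels=ids)
    return -out.loss.item()  # loss is the mean negative log-likelihood per token

# A true claim and an absurd one, both fluent and on-topic.
print(avg_log_likelihood("Goats eat cabbages."))
print(avg_log_likelihood("Cabbages eat goats."))
# Scores like these reflect fluency, not truth; they generally do not
# separate the correct claim from the absurd one.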
Matt • January 21, 2025 11:54 AM
“Technologies like large language models (LLMs) can perform many cognitive tasks”
No, they can’t perform ANY cognitive tasks. They do not cogitate. They do not think and are not capable of reasoning. They are nothing more than word-prediction engines. (This is not the same as saying they are useless.)
You should know better than that, Bruce.
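As an aside, the "word-prediction engine" description can be seen directly in code. A minimal sketch, again assuming GPT-2 via Hugging Face transformers as a stand-in for any LLM: at each step the model does nothing more than assign probabilities to candidate next tokens.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The goat walked into the garden and ate the"
ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)            # five most probable continuations
for p, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode([token_id.item()])), round(p.item(), 3))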
RealFakeNews • January 21, 2025 12:35 PM
Part of the problem is that AI fundamentally can't differentiate a fact from something it just made up. It can check that cabbages and goats are related via some probability, but it can't check that a cabbage doesn't eat goats, because it can't use the lack of data to verify that the claim is false.
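That point can be illustrated with a toy sketch; the corpus and helper functions below are invented purely for illustration. Co-occurrence makes "cabbage" and "goat" look statistically related, but the absence of contradicting data gives a purely statistical learner no way to conclude that "cabbages eat goats" is false.

# All data here is invented for illustration only.
corpus = [
    "the goat ate the cabbage in the garden",
    "the farmer fed cabbage to the goat",
    "goats graze near the cabbage patch",
]

def related(word_a, word_b):
    # Statistical "relatedness": the two words appear together in some sentence.
    return any(word_a in s and word_b in s for s in corpus)

def contradicted(claim):
    # Crude check for explicit contradicting data: a negated form of the claim.
    return any(("not " + claim) in s or (claim + " is false") in s for s in corpus)

print(related("cabbage", "goat"))          # True: the words co-occur, so they look related
print(contradicted("cabbages eat goats"))  # False: nothing in the data refutes the claim
# A purely statistical learner sees strong relatedness and no contradicting data;
# missing data never tells it that "cabbages eat goats" is false.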