LordP666 said:
I think everyone is vastly overthinking AI.
In my opinion, all we need are smart individual devices with built-in AI.
Take a smart lamp: all it needs to know is its name, its owner's voice, and what it must do:
Lampy McLampface, Level 2
That's it! If a thief breaks in he can suck eggs because he can't turn on a lamp or anything else. He can steal the lamp, but again...eggs, because Lampy will miss his owner and never turn on for anyone else.
How about a car? A thief gets in the car and says, "Fordy McFordface, start", and Fordy says "screw you, thief, you are not the boss of me" and starts honking the horn while locking the doors.
Smart devices need to be more like very loyal dogs. ///
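A minimal sketch of that "loyal dog" logic (an editorial illustration in Python, not LordP666's design; the name check and the voice_matches_owner placeholder stand in for real wake-word and speaker verification):

    # A "loyal" smart lamp: answers only to its name, only in its owner's voice,
    # and only for a fixed command list.
    OWNER_VOICEPRINT = "owner-voiceprint-blob"   # enrolled once, never shared
    COMMANDS = {"level 1", "level 2", "level 3", "off"}

    def voice_matches_owner(voice_sample: str) -> bool:
        # Placeholder: a real device would compare acoustic features here.
        return voice_sample == OWNER_VOICEPRINT

    def handle_request(name: str, voice_sample: str, command: str) -> str:
        if name.lower() != "lampy mclampface":
            return "ignored: not my name"
        if not voice_matches_owner(voice_sample):
            return "ignored: you are not my owner"   # the thief gets nothing
        if command.lower() not in COMMANDS:
            return "ignored: unknown command"
        return "ok: " + command.lower()

    print(handle_request("Lampy McLampface", "owner-voiceprint-blob", "Level 2"))  # ok: level 2
    print(handle_request("Lampy McLampface", "thief-voice", "Level 2"))            # ignored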
Best a/i comment ever
But it turns out that Michael Cohen, the lawyer they're hoping to use against former President Donald Trump in the criminal case in Manhattan, was caught using AI for citations that he gave his attorney that were then used in a motion filed with a federal judge.
(...)
This all came out because the judge in the case couldn't find three of the cases cited and ordered on Dec. 12 that Schwartz produce the cases and, if he couldn't, explain how non-existent cases got in his filings along with Cohen's role in them. The judge also asked Schwartz why he should not be sanctioned for made-up cases in his filing.
Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI-generated disinformation” trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.
When I survey how artificial intelligence might upend different aspects of modern society, democracy included, I look at four different dimensions of change: speed, scale, scope, and sophistication. Look for places where changes in degree result in changes of kind. Those are where the societal upheavals will happen.
Some items on my list are still speculative, but none require science-fictional levels of technological advance. And we can see the first stages of many of them today. When reading about the successes and failures of AI systems, it’s important to differentiate between the fundamental limitations of AI as a technology, and the practical limitations of AI systems in the fall of 2023. Advances are happening quickly, and the impossible is becoming the routine. We don’t know how long this will continue, but my bet is on continued major technological advances in the coming years. Which means it’s going to be a wild ride. ///
A lot of these could be good or bad, depending on the neutrality of those creating the A/I software and LLMs. It would be very easy to use control of the A/I software to mold society to fit the worldview and desires of one segment of society. //
Clive Robinson • November 13, 2023 12:37 PM
@ Bruce, ALL,
“I stress the importance of separating the specific AI chatbot technologies in November of 2023 with AI’s technological possibilities in general.”
Firstly, current AI LLMs are little more than DSP “matched filters” driven by a shaped noise signal. The ML is little more than DSP “adaptive filtering”, which all too often turns out to be a multiband integration of the noise spectrum.
There is no sign of “I”, artificial or not; you can write down very basic equations by which both LLMs and the ML additives can be adequately described. Even the nonlinearities and rounding errors that arise from implementation limits can be reproduced as equations.
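A minimal sketch of that matched-filter picture (an editorial illustration in Python, assuming NumPy; the template, noise, and sample positions are made up, not taken from any actual model): correlating a noisy input against a stored pattern and taking the peak is all a matched filter does.

    import numpy as np

    # Matched filtering: slide a stored template across a noisy signal and
    # take the correlation; the peak marks where the pattern best matches.
    rng = np.random.default_rng(0)

    template = np.sin(np.linspace(0, 4 * np.pi, 64))   # the stored pattern
    signal = rng.normal(0.0, 0.5, 1024)                # shaped-noise background
    signal[400:464] += template                        # embed the pattern

    score = np.correlate(signal, template, mode="valid")
    print("best match near sample", int(np.argmax(score)))   # roughly 400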
But for all the brouhaha of marketing hype and nonsense, it does not take long to find out that these so-called artificial neural networks do not in any real way behave like biological neural networks, and even with massive scale and energy input they fail to do what biological networks do simply and efficiently.
I hope people by now realise that whilst LLMs can pattern match by correlation, they cannot do even simple reasoning reliably or effectively. Thus they are going to fail miserably as “teachers”, much as the old “smack it in by rote” teaching methods that were thoroughly debunked something like forty years before the start of this millennium.
If neither LLMs nor their ML additives can reliably reason, only pattern match against existing data sets plus noise, what are they really other than overhyped, overpriced, near-useless toys running around a historical track?
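To make the “pattern match, not reason” point concrete, a toy sketch (an editorial illustration, not part of the original comment): a bigram model can only replay the statistics of whatever text it was fed, and there is nothing in it that could be called reasoning.

    from collections import Counter, defaultdict
    import random

    # Toy bigram "language model": it reproduces patterns seen in training text.
    training = "the cat sat on the mat the dog sat on the rug".split()

    nexts = defaultdict(Counter)
    for a, b in zip(training, training[1:]):
        nexts[a][b] += 1

    def generate(start, n=6, seed=1):
        random.seed(seed)
        word, out = start, [start]
        for _ in range(n):
            options = nexts.get(word)
            if not options:
                break
            words, counts = zip(*options.items())
            word = random.choices(words, weights=counts)[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))   # e.g. "the cat sat on the mat the"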
In short they are stuck in a faux past and a very poor present, and cannot move forward…
If you must anthropomorphize LLMs and ML, they are effectively as daft as those who look back a century or a century and a half ago and think it was all wonderful, because their cognitive blindness makes them think they would have been at the top of things… Somewhat worse, in fact, than the stories of the Walter Mitty character, whose fantasies were just about being a manly hero.
Winter • November 13, 2023 12:52 PM
@Clive
Firstly, current AI LLMs are little more than DSP “matched filters” driven by a shaped noise signal.
Even though it is “literally” true, it still is utter nonsense.
It is like claiming that viruses are dead heaps of polymerized amino acids and nucleotides that can never imitate real life. That is true, taken literally, but they are nevertheless one of the most potent entities shaping life on earth, regularly remodeling human demographics on continental scales.
An LLM produces output that condenses orders of magnitude more text and speech than any human has seen in their life. The results are very useful, as anyone who writes texts can attest.
Maybe you are expecting the TRUTH. You will not get that from a machine. But humans can do well with less than the TRUTH. And LLMs are delivering useful results even now. //
JonKnowsNothing • November 14, 2023 12:41 PM
@Anonymous
re: behind every major AI decision, we should have a human who makes the call
- Loophole #1: “major AI decision”
For AI there is no such thing as “major” or “minor”. These are qualitative human values. They do not exist in the giant scrabble bag of AI datasets.
- Loophole #2: “a human makes the call”
So, which human gets to do this? Which committee? Which government?
On what basis will a human make the call? Based on what the AI barfs up on the scrabble board of the dataset content?
How will the human know that the AI is not hallucinating?
AI is vastly different from other metering systems (a toy sketch of the contrast follows the examples below). Older metering systems are deterministic: they give the same results every time.
- Drop a coin (now a credit card) into the parking meter which gives N-minutes of ticket-free parking
AI gives different answers every time. The data set mutates. There is no fixed base. There is no measurable accuracy.
- Drop a coin (now a credit card) into the parking meter and AI generates Instant Ticket. Funds auto-deducted from your credit card. No refunds. No receipt. No redress. No proof.
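A toy sketch of that contrast (an editorial illustration; both functions are made up): the meter maps the same input to the same output every time, while the “AI” answer is drawn from a distribution and can differ on every call.

    import random

    def parking_meter(coins: int) -> int:
        """Deterministic metering: the same input always yields the same minutes."""
        return coins * 15          # 15 minutes per coin, every time

    def ai_style_answer(prompt: str, temperature: float = 1.0) -> str:
        """Stochastic sampling: the same prompt can yield a different answer each call."""
        options = ["ticket issued", "parking approved", "payment declined"]
        weights = [1.0, 1.0, temperature]   # the "model" here is just a weighted draw
        return random.choices(options, weights=weights)[0]

    print(parking_meter(2), parking_meter(2))                      # always 30 30
    print(ai_style_answer("2 coins"), ai_style_answer("2 coins"))  # may differ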
It’s laudable that a human should vet the AI, but there is no longer any verifiable authority on any topic. Ours is the last generation to know Before AI. We can know about the hallucinations, but those who come after us will never know.
- Elvis has left the building.
They do not know who Elvis was or why he left the building; neither does AI.