Because apps talking like pirates and creating ASCII art never gets old
LY Corp's QA team struggled to manage projects while wading through prolix posts
Create the most realistic speech with our AI audio platform
Pioneering research in Text to Speech, AI Voice Generator, and more
Generate high quality speech in any voice, style, and language. Our AI voice generator renders human intonation and inflections with exceptional fidelity, adjusting the delivery based on context.
Large language models are increasingly integrating into everyday life—as chatbots, digital assistants, and internet search guides, for example. These artificial intelligence systems, which consume large amounts of text data to learn associations, can create all sorts of written material when prompted and can ably converse with users.
Large language models’ growing power and omnipresence mean that they exert increasing influence on society and culture.
So, it’s of great import that these artificial intelligence systems remain neutral when it comes to complicated political issues. Unfortunately, according to a new analysis recently published in PLOS ONE, this doesn’t seem to be the case.
As Phil Root, the deputy director of the Defense Sciences Office at DARPA, recounted to Scharre, “A tank looks like a tank, even when it’s moving. A human when walking looks different than a human standing. A human with a weapon looks different.”
In order to train the artificial intelligence, it needed data in the form of a squad of Marines spending six days walking around in front of it. On the seventh day, though, it was time to put the machine to the test.
“If any Marines could get all the way in and touch this robot without being detected, they would win. I wanted to see, game on, what would happen,” said Root in the book. //
The Marines, being Marines, found several ways to bollix the AI and achieved a 100 percent success rate.
Two Marines, according to the book, somersaulted for 300 meters to approach the sensor. Another pair hid under a cardboard box.
“You could hear them giggling the whole time,” said Root in the book.
One Marine stripped a fir tree and held it in front of him as he approached the sensor. In the end, while the artificial intelligence knew how to identify a person walking, that was pretty much all it knew because that was all it had been modeled to detect. //
The moral of the story? Never bet against Marines, soldiers, or military folks in general. The American military rank-and-file has proven itself more creative than any other military in history. Whether that creativity is focused on finding and deleting bad guys or finding ways to screw with an AI and the eggheads who programmed it, my money's on the troops.
Last month, the Secretary of the Air Force put on a flight suit and sat in the front seat of an F-16.
His F-16 spent an hour in the air, dogfighting with another Air Force fighter. His jet was piloted by AI. //
I was reminded of the scene from "2001: A Space Odyssey." Machines deciding what is right and wrong. //
jumper
Between a president that keeps threatening to use F-15's against us and a woke military that will absolutely fire on their own people we may as well take our chances with the computers.
But the reality is quite different. This isn't "AI" in the sense that it's sentient and self-determining. It's adaptive software that eliminates the problems of the human in the aircraft. There would be hard-wired kill switches and all sorts of other safety measures that sci-fi pretends are easily bypassed. Put it this way: the Chinese and the Russians will be designing their own UCAVs. We would be foolish to fall behind in this.
Getting an AI to distinguish red from orange was a major challenge. //
The last time a human set the world record for solving a Rubik's Cube, it was Max Park, at 3.13 seconds for a standard 3×3×3 cube, set in June 2023. It is going to be very difficult for any human to pull off a John Henry-like usurping of the new machine record, which is more than 10 times faster, at 0.305 seconds. That's within the accepted time frame for human eye blinking, which averages out to one-third of a second.
TOKUFASTbot, built by Mitsubishi Electric, can actually pull off a solve in as little as 0.204 seconds on video, but not when Guinness World Records judges were measuring. The previous mechanical record was 0.38 seconds.
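The "more than 10 times faster" claim is easy to check against the two figures given above:

```python
human_record = 3.13     # Max Park, June 2023 (seconds)
machine_record = 0.305  # Mitsubishi Electric's TOKUFASTbot, Guinness-verified (seconds)

ratio = human_record / machine_record
print(round(ratio, 2))  # 10.26 -- "more than 10 times faster" holds
```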
Billionaire Elon Musk said this month that while the development of AI had been “chip constrained” last year, the latest bottleneck to the cutting-edge technology was “electricity supply.” Those comments followed a warning by Amazon chief Andy Jassy this year that there was “not enough energy right now” to run new generative AI services. //
“One of the limitations of deploying [chips] in the new AI economy is going to be ... where do we build the data centers and how do we get the power,” said Daniel Golding, chief technology officer at Appleby Strategy Group and a former data center executive at Google. “At some point the reality of the [electricity] grid is going to get in the way of AI.” //
Such growth would require huge amounts of electricity, even if systems become more efficient. According to the International Energy Agency, the electricity consumed by data centers globally will more than double by 2026 to more than 1,000 terawatt hours, an amount roughly equivalent to what Japan consumes annually.
The promise of AI, we hear over and over again, is that it’s a tool to help humans do better, automating tasks to free up worker time for other things. But instead, AI looks far more like HAL 9000 in “2001: A Space Odyssey,” a computer that overtakes its human masters’ ability to control it and turns against humanity. //
Behind the scenes and out of sight, AI and social media algorithms can be used to determine what you are allowed to post, what you will be able to read, and ultimately what you will think.
Despite the promises of simplifying workflows and managing tasks, there’s far too much evidence of AI destruction to be ignored.
When it comes to AI, be afraid, be very afraid.
Swarovski AX Visio, billed as first "smart binoculars," names species and tracks location.
Last week, Austria-based Swarovski Optik introduced the AX Visio 10x32 binoculars, which the company says can identify over 9,000 species of birds and mammals using image recognition technology. The company is calling the product the world's first "smart binoculars," and they come with a hefty price tag—$4,799.
LordP666 said:
I think everyone is vastly overthinking AI.
In my opinion all we need are smart individual devices with built in AI.
Take a smart lamp - all it needs to know is its name, it must recognize your voice, and what it must do:
Lampy McLampface, Level 2
That's it! If a thief breaks in he can suck eggs because he can't turn on a lamp or anything else. He can steal the lamp, but again...eggs, because Lampy will miss his owner and never turn on for anyone else.
How about a car? A thief gets in the car and he says Fordy McFordface "Start", and Fordy says "screw you thief, you are not the boss of me" and starts honking his horn while locking the doors.
Smart devices need to be more like very loyal dogs. ///
Best a/i comment ever
But it turns out that Michael Cohen, the lawyer they're hoping to use against former President Donald Trump in the criminal case in Manhattan, was caught using AI to generate citations that he gave his attorney, which were then used in a motion filed with a federal judge.
(...)
This all came out because the judge in the case couldn't find three of the cases cited and ordered on Dec. 12 that Schwartz produce the cases and, if he couldn't, explain how non-existent cases got in his filings along with Cohen's role in them. The judge also asked Schwartz why he should not be sanctioned for made-up cases in his filing.
Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI-generated disinformation” trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.
When I survey how artificial intelligence might upend different aspects of modern society, democracy included, I look at four different dimensions of change: speed, scale, scope, and sophistication. Look for places where changes in degree result in changes of kind. Those are where the societal upheavals will happen.
Some items on my list are still speculative, but none require science-fictional levels of technological advance. And we can see the first stages of many of them today. When reading about the successes and failures of AI systems, it’s important to differentiate between the fundamental limitations of AI as a technology, and the practical limitations of AI systems in the fall of 2023. Advances are happening quickly, and the impossible is becoming the routine. We don’t know how long this will continue, but my bet is on continued major technological advances in the coming years. Which means it’s going to be a wild ride. ///
A lot of these could be good or bad, depending on the neutrality of those creating the A/I software and LLM. It would be very easy to use the control of the A/I software to mold society to fit the worldview and desires of one segment of society. //
Clive Robinson • November 13, 2023 12:37 PM
@ Bruce, ALL,
“I stress the importance of separating the specific AI chatbot technologies in November of 2023 with AI’s technological possibilities in general.”
Firstly, current AI LLMs are little more than DSP “matched filters” driven by a shaped noise signal. The ML is little more than DSP “adaptive filtering”, which all too often turns out to be a multiband integration of the noise spectrum.
There is no sign of “I”, artificial or not; you can write down very basic equations by which both LLMs and the ML additives can be adequately described. Even the nonlinearity and rounding errors that come up due to limitations can be reproduced as equations.
But for all the brouhaha of marketing hype and nonsense, it does not take long to find out that these so-called artificial neural networks do not in any real way behave like biological neural networks, and as such fail, at massive scale and energy input, to do what biological networks do simply and efficiently.
I hope people by now realise that whilst LLMs can pattern match by correlation, they cannot do even simple reasoning reliably or effectively. Thus they are going to fail miserably as “teachers” for learning, much as the old “smack it in by rote” teaching methods that were thoroughly debunked some forty years before the start of this millennium.
If neither LLMs nor their ML augmentation can reliably reason, just pattern match to existing data sets plus noise, what are they really other than overhyped, overpriced, near-useless toys running around a historical track?
In short, they are stuck in a faux past and a very poor present, and cannot move forward…
If you must anthropomorphize LLMs and ML, they are effectively as daft as those who look back a century or more and think it was all wonderful, because their cognitive blindness makes them think they would have been at the top of things… Somewhat worse, in fact, than the stories of the Walter Mitty character, whose fantasies were just about being a manly hero.
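Clive's matched-filter analogy can be made concrete with a minimal sketch (the signal, template, and noise level here are invented purely for illustration):

```python
import numpy as np

# A matched filter is just cross-correlation of a noisy signal with a
# known template; the correlation peak marks where the template is
# buried in the noise.
rng = np.random.default_rng(0)

template = np.sin(np.linspace(0, 4 * np.pi, 50))  # the known pattern
signal = rng.normal(0.0, 0.5, 300)                # background noise
signal[120:170] += template                       # hide the pattern at offset 120

scores = np.correlate(signal, template, mode="valid")
detected = int(np.argmax(scores))
print(detected)  # peak at (or within a few samples of) offset 120
```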
Winter • November 13, 2023 12:52 PM
@Clive
Firstly, current AI LLMs are little more than DSP “matched filters” driven by a shaped noise signal.
Even though it is “literally” true, it still is utter nonsense.
It is like claiming viruses are dead heaps of polymerized amino acids and nucleotides that can never imitate real life. That is true, taken literally, but viruses are nevertheless among the most potent entities shaping life on earth, regularly remodeling human demographics on continental scales.
An LLM produces output that condenses orders of magnitude more text and speech than any human has seen in their life. The results are very useful, as anyone who writes texts can attest.
Maybe you are expecting the TRUTH. You will not get that from a machine. But humans can do well with less than the TRUTH. And LLMs are delivering useful results even now. //
JonKnowsNothing • November 14, 2023 12:41 PM
@Anonymous
re: behind every major AI decision, we should have a human who makes the call
- Loophole #1: “major AI decision”
For AI there is no such thing as “major or minor”. These are qualitative human values; they do not exist in the giant scrabble bag of AI datasets.
- Loophole #2: “human makes the call”
So, which human gets to do this? Which committee? Which government?
On what basis will a human make the call? Based on what the AI barfs up on the scrabble board of the dataset content?
How will the human know that the AI is not hallucinating?
AI is vastly different from other metering systems. Older metering systems are deterministic: they give the same results every time.
- Drop a coin (now a credit card) into the parking meter which gives N-minutes of ticket-free parking
AI gives different answers every time. The data set mutates. There is no fixed base. There is no measurable accuracy.
- Drop a coin (now a credit card) into the parking meter and AI generates Instant Ticket. Funds auto-deducted from your credit card. No refunds. No receipt. No redress. No proof.
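The deterministic-versus-stochastic distinction being drawn here can be sketched minimally (the meter rate and the canned answers are invented for illustration):

```python
import random

def parking_meter(coins: int) -> int:
    """Deterministic metering: the same input always buys the same minutes."""
    return coins * 15  # 15 minutes per coin -- an invented rate

def llm_like(prompt: str, rng: random.Random) -> str:
    """Stochastic generation: sampling means repeated calls can differ."""
    return rng.choice(["answer A", "answer B", "answer C"])

# The meter is a pure function: identical inputs, identical outputs.
assert parking_meter(2) == parking_meter(2) == 30

# The sampled model, given the same prompt 100 times, almost surely
# produces more than one distinct answer.
rng = random.Random()
outputs = {llm_like("same prompt", rng) for _ in range(100)}
print(len(outputs) > 1)
```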
It’s laudable that a human should vet the AI but there is no longer any verifiable authority on any topic. Ours is the last generation to know Before AI. We can know about the hallucinations but those behind us will never know.
- Elvis has left the building.
They do not know who Elvis was or why he left the building; neither does AI.