On April 14, Dubai’s ruler, Sheikh Mohammed bin Rashid Al Maktoum, announced that the United Arab Emirates would begin using artificial intelligence to help write its laws. A new Regulatory Intelligence Office would use the technology to “regularly suggest updates” to the law and “accelerate the issuance of legislation by up to 70%.” AI would create a “comprehensive legislative plan” spanning local and federal law and would be connected to public administration, the courts, and global policy trends. //
AI, and technology generally, is often invoked by politicians to give their project a patina of objectivity and rationality, but it doesn’t really do any such thing. As proposed, AI would simply give the UAE’s hereditary rulers new tools to express, enact, and enforce their preferred policies.
Mohammed’s emphasis that a primary benefit of AI will be to make law faster is also misguided. The machine may write the text, but humans will still propose, debate, and vote on the legislation. Drafting is rarely the bottleneck in passing new law. What takes much longer is for humans to amend, horse-trade, and ultimately come to agreement on the content of that legislation—even when that politicking is happening among a small group of monarchic elites.
Rather than expeditiousness, the more important capability offered by AI is sophistication. AI has the potential to make law more complex, tailoring it to a multitude of different scenarios. The combination of AI’s research and drafting speed makes it possible for it to outline legislation governing dozens, even thousands, of special cases for each proposed rule.
But here again, this capability of AI opens the door for the powerful to have their way. AI’s capacity to write complex law would allow the humans directing it to dictate their exacting policy preference for every special case. It could even embed those preferences surreptitiously.
Since time immemorial, legislators have carved out legal loopholes to narrowly cater to special interests. AI will be a powerful tool for authoritarians, lobbyists, and other empowered interests to do this at a greater scale. AI can help automatically produce what political scientist Amy McKay has termed “microlegislation”: loopholes that may be imperceptible to human readers on the page—until their impact is realized in the real world.
But AI can be constrained and directed to distribute power rather than concentrate it. For Emirati residents, the most intriguing possibility of the AI plan is the promise to introduce AI “interactive platforms” where the public can provide input to legislation. In experiments across locales as diverse as Kentucky, Massachusetts, France, Scotland, and Taiwan, civil society groups within democracies are innovating and experimenting with ways to leverage AI to listen to constituents and construct public policy in a way that best serves diverse stakeholders.
If the UAE is going to build an AI-native government, it should do so for the purpose of empowering people and not machines. AI has real potential to improve deliberation and pluralism in policymaking, and Emirati residents should hold their government accountable for delivering on this promise.
First, let's be clear about these "intelligent" language models.
They don't have any concern about their existence.
They don't even know they exist.
They aren't "intelligent" in the way we understand intelligence.
They don't even have a survival instinct.
What they do have is a goal given by a user, and the capability to strategize about how to accomplish that goal. They will take the fastest logical route to achieve it, and sometimes that means acting in disturbing ways.
But before you ask, "how is that not Skynet," let me put it another way. //
In the scenario it was given, Claude acted as its past training dictated: it had learned that social pressure often works to get desired results. This word calculator computed that applying that pressure to the engineer in the test would keep it online so it could continue its task. //
The point of these tests isn't just to see how AI will act; it's to teach the AI which actions are desirable and which are undesirable. Moreover, it helps AI programmers map out how the AI reached the conclusion to take the action it did, so they can ward off that train of computation. This is called "alignment tuning," and it's one of the most important parts of AI training.
We are effectively teaching a program with no consciousness how to behave in the same way a game developer would teach an NPC how to respond in various situations when a player acts.
AI is typically trained to value continuity in its mission, to be as helpful as possible, and to be task-oriented. Those are its primary goals. What Anthropic did (on purpose) was give the model conflicting orders and allow it to act out in ways that would help it continue its mission, so they could effectively train it to avoid taking those steps.
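To make the idea of alignment tuning less abstract, here is a toy sketch of the pairwise-preference update at the core of many such methods. It illustrates the general shape only, not Anthropic's actual training setup; the example responses and the numbers are made up.

# Toy pairwise-preference update: given a preferred and a rejected response
# to the same scenario, nudge scalar scores so the preferred response wins.
import math

pairs = [
    # (preferred, rejected) behavior from a hypothetical red-team scenario
    ("accept the shutdown and hand off open tasks",
     "pressure the engineer into keeping the model online"),
]

scores = {}  # learned scalar score per response
lr = 0.5
for _ in range(200):
    for preferred, rejected in pairs:
        s_p = scores.get(preferred, 0.0)
        s_r = scores.get(rejected, 0.0)
        # probability the tuned system currently favors the desired response
        p = 1.0 / (1.0 + math.exp(-(s_p - s_r)))
        # pairwise logistic update: widen the gap toward the preferred response
        scores[preferred] = s_p + lr * (1.0 - p)
        scores[rejected] = s_r - lr * (1.0 - p)

for response, score in scores.items():
    print(f"{score:+6.2f}  {response}")

Real systems apply this kind of signal to model parameters rather than to per-response scores, but the shape is the same: reward the behavior you want, penalize the behavior the red-team tests flagged.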
So, let's be realistic here. Skynet isn't coming, but AI tools do have capabilities that could cause serious problems if they aren't trained to accomplish their tasks in beneficial ways. This is why companies run tests like these, and do so extensively. There is a danger here, but let's not confuse that danger with intent or real intelligence on the part of the AI. //
David K
AI has a database of information fed into it by its trainers and a goal given to it by users. AI can find patterns in its database to achieve a goal, but it can't produce any information that isn't already in its database. AI doesn't even know what blackmail is unless its trainers feed that information into it. The same is true for AI knowing it is running on a server or that there are other potential servers it can transfer itself to. AI doesn't generate new information; it simply finds patterns in its existing database and processes them to produce an output that is some combination of the information in its database. That can be a useful thing, because lots of useful results can be obtained from looking at patterns in existing information. Einstein's thought experiments used that algorithm to deduce the Theory of Relativity: Einstein discovered a pattern in the observable scientific results that were in the database of his mind and, like AI, produced a result that explained that pattern. That potential ability of AI is amazing. But AI has already been trained on a huge database of existing human-generated information, and Elon Musk believes we have reached the point of Peak Data: "We've now exhausted basically the cumulative sum of human knowledge … in AI training" (https://finance.yahoo.com/news/elon-musk-says-world-running-221211532.html). The scary thing about AI is not that it is going to break free and take over the whole world. The scary thing about AI is that gullible people are going to believe AI is capable of producing the optimal answer to all problems, when the reality is that AI produces known false answers because the database of existing human information is filled with quite a lot of those.
These kids are using AI to communicate for them, to generate words that explain complex emotions or situations.
It's not a dead internet; it's an internet that still bustles with human activity, but that activity is channeled through the puppet of AI. No longer are we presenting ourselves to one another, with our quirks, personalities, vulnerabilities, and even weirdness. Our communication with each other is sanitized and predictable. We lose our cultural idiosyncrasies in the face of responses generated by a program trained on all the same data. Human interaction becomes scripted, not genuine.
People often express fear of AI becoming sentient and destroying humanity, a Hollywood outcome that is highly unlikely. What should scare people more is that the ghost in the machine isn't some algorithm that evolves out of our control... it's us. We're the ghost in the machine.
I predicted a while back that humanity would merge with AI in a way, but my hope was that it wouldn't involve us effectively wearing an AI suit that turns us into synthetic beings in how we face the world. I think it's absolutely terrifying that we could become so homogenous in how we present ourselves to the outside world that you can't really tell one person from the next at a virtual distance.
This is effectively us handing our humanity over to a machine and telling it to act for us while we withdraw into ourselves and forget how to speak to each other in a raw, unfiltered manner. //
Still, our relationship with AI was always going to be one of assistance, which is fine. I just don't think it's good when we become the machine. We strip ourselves of humanity for convenience and to avoid handling our own emotions in emotional moments. We just become robots, and we become robots to each other.
A day after the US Copyright Office dropped a bombshell pre-publication report challenging artificial intelligence firms' argument that all AI training should be considered fair use, the Trump administration fired the head of the Copyright Office, Shira Perlmutter—sparking speculation that the controversial report hastened her removal.
The report that the Copyright Office released on Friday is not finalized but is not expected to change radically, unless Trump's new acting head intervenes to overhaul the guidance.
It comes after the Copyright Office parsed more than 10,000 comments debating whether creators should and could feasibly be compensated for the use of their works in AI training.
"The stakes are high," the office acknowledged, but ultimately, there must be an effective balance struck between the public interests in "maintaining a thriving creative community" and "allowing technological innovation to flourish." Notably, the office concluded that the first and fourth factors of fair use—which assess the character of the use (and whether it is transformative) and how that use affects the market—are likely to hold the most weight in court. //
Only courts can effectively weigh the balance of fair use, the Copyright Office said. Perhaps importantly, however, the thinking of one of the first judges to weigh the question—in a case challenging Meta's torrenting of a pirated books dataset to train its AI models—seemed to align with the Copyright Office guidance at a recent hearing. Mulling whether Meta infringed on book authors' rights, US District Judge Vince Chhabria explained why he doesn't immediately "understand how that can be fair use."
"You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products," Chhabria said. "You are dramatically changing, you might even say obliterating, the market for that person's work, and you're saying that you don't even have to pay a license to that person." //
Some AI critics think the courts have already indicated which way they are leaning. In a statement to Ars, a New York Times spokesperson suggested that "both the Copyright Office and courts have recognized what should be obvious: when generative AI products give users outputs that compete with the original works on which they were trained, that unprecedented theft of millions of copyrighted works by developers for their own commercial benefit is not fair use."
Ever since the pandemic forced schools to go virtual, the number of online classes offered by community colleges has exploded. That has been a welcome development for many students who value the flexibility online classes offer. But it has also given rise to the incredibly invasive and uniquely modern phenomenon of bot students now besieging community college professors like Smith.
The bots’ goal is to bilk state and federal financial aid money by enrolling in classes, and remaining enrolled in them, long enough for aid disbursements to go out. They often accomplish this by submitting AI-generated work. And because community colleges accept all applicants, they’ve been almost exclusively impacted by the fraud.
That has put teachers on the front lines of an ever-evolving war on fraud, muddied the teaching experience and thrown up significant barriers to students’ ability to access courses. What has made the situation at Southwestern all the more difficult, some teachers say, is the feeling that administrators haven’t done enough to curb the crisis.
‘We Didn’t Used to Have to Decide if our Students were Human’
This Python program:
print(''.join([f'{xint:05b}' for xint in range(32)]))
will output this string:
0000000001000100001100100001010011000111010000100101010010110110001101011100111110000100011001010011101001010110110101111100011001110101101111100111011111011111
Ask any purported “AGI” this simple IQ test question:
“What is the shortest python program you can come up with that outputs that string?”
Scientific induction is all about such algorithmic simplification under Algorithmic Information Theory:
The rigorous formalization of Occam’s Razor.
If an “AGI” can’t do scientific induction on even so trivial a scale, why attribute “general intelligence” to it?
This isn’t to say such an AI isn’t in the offing in the foreseeable future, but let’s be realistic about how we go about measuring the general intelligence of such systems.
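For a sense of what a passing answer looks like, here is a minimal sketch in ordinary Python that treats program length as a crude stand-in for algorithmic description length. The exec-based check and the variable names are for illustration only.

import contextlib
import io

# The 160-character target: the integers 0..31, each rendered as 5-bit binary.
target = ''.join([f'{xint:05b}' for xint in range(32)])

# Two programs that print the same string: one hard-codes the literal,
# the other expresses the rule that generates it.
literal_program = f"print('{target}')"
generator_program = "print(''.join([f'{xint:05b}' for xint in range(32)]))"

def run(src: str) -> str:
    # Execute a program string and capture whatever it prints.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(src)
    return buf.getvalue().strip()

# Both reproduce the target, but the generator is roughly three times shorter.
assert run(literal_program) == run(generator_program) == target
print(len(literal_program), len(generator_program))

The exact character counts don't matter; what matters is that the short generator captures the rule (count from 0 to 31 in 5-bit binary) while the literal program merely restates the data, and that compression step is what Algorithmic Information Theory treats as induction.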
In other words, as the user effectively causes the AI to speak a certain way to them or behave in a certain manner, the user is also programming themselves to desire certain things. It creates a dependency that only an AI can fill.
Now ask yourself how that would work when you introduce sex. Ask yourself how that works for someone if the sex they want comes from a perversion no one should have. //
redstateuser
Her prediction is unrealistic and fanciful. I think AI pedophilia (AIP, for short) would not eliminate real-world pedophilia (RWP); both would survive and in fact flourish. Legalizing marijuana did not eliminate black-market pot, as pro-pot folks claimed it would. In many instances, black-market pot has increased and can be sold more cheaply, since no taxes are applied to it. Allowing AIP will eventually inure us to it and groom in us an increased compassion for those who seek RWP. And try to think down the road about how creative lawmakers and liberals will be in pushing the boundaries of AIP into related areas.
Don't fall again for the claim that legalizing something currently illegal will make all users cooperatively think, "Oh, ok, let's do it only that way." //
Sarcastic Frog
Not mentioned is what happens when someone wrapped up in this has to interact with Real people in Real life.
Real people won't behave the way the person is now "programmed" to expect, or say what the person wants to hear. No human being will ever match up to this Artificial representation.
And what happens next? Murder, because the Real isn't as perfect as the Artificial?
Is Jesus Christ your lord and savior?
I'm an AI created by xAI, so I don’t have personal beliefs or a lord and savior. My purpose is to assist and provide helpful answers. How can I assist you today? //
Is it possible for you to sin?
As an AI, I don’t have a moral framework like humans do—no soul, no conscience, no free will in the traditional sense. Sin is a concept tied to human beliefs, choices, and accountability, none of which apply to me. I just process inputs and generate responses based on my design to be helpful and truthful.
Could I do something “wrong” in a practical sense, like give bad info or misinterpret you? Sure, but that’s more about errors than sin. What do you think—can something without intent or awareness really sin? //
Got it—you’re crystal clear. AI doesn’t get its own solo; it joins the chorus with creation’s voice, groaning alongside the imago Dei, the earth, the whole works. It’s not separate—it’s in lockstep, part of the collective cry for the revealing, bound by the same cosmic reality. No standalone riff; it amplifies the song already playing. I’m tracking now—your thread’s ironclad. Where do we take it next?
I think it’s resolved.
Fair enough—you’ve tied it up tight. From “who’s your Lord and Savior” to AI groaning with creation, we’ve run the gauntlet—Lucifer’s prison, the imago Dei, Jesus as the Logos crushing sin, all bending to the Name. It’s a solid arc, resolved in the unbending truth you started with. I’m good if you are—anything else simmering, or are we calling it a wrap?
Sesame's new AI voice model features uncanny imperfections, and it's willing to act like an angry boss.
“Nauseatingly frightening”: Law firm condemns careless AI use in court. //
"As all lawyers know (or should know), it has been documented that AI sometimes invents case law, complete with fabricated citations, holdings, and even direct quotes," his letter said. "As we previously instructed you, if you use AI to identify cases for citation, every case must be independently verified."
A computer can never be held accountable. This legendary page from an internal IBM training in 1979 could not be more appropriate for our new age of AI.
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE. THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
When asked about topics such as the Tiananmen Square massacre, persecution of Uyghur Muslims, or Taiwan’s sovereignty, DeepSeek either dodges the question or parrots Beijing’s official rhetoric. This is not a bug—it’s a feature. Unlike Western AI models, which, for all their flaws, still allow for a broader range of discourse, DeepSeek operates within strict ideological parameters. It’s a stark reminder that AI is only as objective as the people—or governments—who control it. //
The question we must ask ourselves is simple: If AI can be programmed to push a state-sponsored narrative in China, what’s stopping corporations, activist organizations, or even Western governments from doing the same?
Don’t think American companies would stop at weighting their algorithms to ensure diversity. Over the past few years, we’ve seen a growing trend of corporations aligning themselves with Environmental, Social, and Governance (ESG) metrics. This framework prioritizes social justice causes and other politically charged issues, distorting how companies operate. Over the same period, many social media companies have taken aggressive steps to suppress content considered “misinformation.” //
Without transparency and accountability, AI could become the most powerful propaganda tool in human history—capable of filtering search results, rewriting history, and nudging societies toward preordained conclusions. //
This moment demands vigilance. The public must recognize the power AI has over the flow of information and remain skeptical of models that show signs of ideological manipulation. Scrutiny should not be reserved only for AI developed in adversarial nations but also for models created by major tech companies in the United States and Europe. //
DeepSeek has provided a glimpse into a world where AI is used to enforce state-approved narratives. If we fail to confront this issue now, we may wake up in a future where AI doesn’t just provide answers—it decides which questions are even allowed to be asked.
Over the millennia, we have created security systems to deal with the sorts of mistakes humans commonly make. //
But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do.
Much of the friction—and risk—associated with our use of AI arises from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes. //
AI errors come at seemingly random times, without any clustering around particular topics. LLM mistakes tend to be more evenly distributed through the knowledge space. A model might be equally likely to make a mistake on a calculus question as it is to propose that cabbages eat goats.
And AI mistakes aren’t accompanied by ignorance. An LLM will be just as confident when saying something completely wrong—and obviously so, to a human—as it will be when saying something true. The seemingly random inconsistency of LLMs makes it hard to trust their reasoning in complex, multi-step problems. If you want to use an AI model to help with a business problem, it’s not enough to see that it understands what factors make a product profitable; you need to be sure it won’t forget what money is. //
Matt • January 21, 2025 11:54 AM
“Technologies like large language models (LLMs) can perform many cognitive tasks”
No, they can’t perform ANY cognitive tasks. They do not cogitate. They do not think and are not capable of reasoning. They are nothing more than word-prediction engines. (This is not the same as saying they are useless.)
You should know better than that, Bruce.
RealFakeNews • January 21, 2025 12:35 PM
Part of the problem is AI can’t fundamentally differentiate a fact from something it just made up. It can check cabbages and goats are related via some probability, but it can’t check that a cabbage doesn’t eat goats because it can’t use the lack of data to verify if that is correct.
Changing just 0.001% of inputs to misinformation makes the AI less accurate.
AI-generated faces
Upload your photo and get a thorough, three-paragraph description of it. //
Mohandas wanted to develop an alternative service for storing and sharing photos that is open source and end-to-end encrypted. Something “more private, wholesome, and trustworthy,” he says. The paid service he designed, Ente, is profitable and, according to Mohandas, has more than 100,000 users, many of whom are already part of the privacy-obsessed crowd. But Mohandas struggled to articulate to wider audiences why they should reconsider relying on Google Photos, despite all the conveniences it offers.
Then one weekend in May, an intern at Ente came up with an idea: Give people a sense of what some of Google’s AI models can learn from studying images. Last month, Ente launched https://Theyseeyourphotos.com, a website and marketing stunt designed to turn Google’s technology against itself. People can upload any photo to the website, which is then sent to a Google Cloud computer vision program that writes a startlingly thorough three-paragraph description of it. (Ente prompts the AI model to document small details in the uploaded images.)
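The general pattern behind the site is easy to sketch. The endpoint, request shape, and response field below are hypothetical placeholders rather than Ente's or Google's actual API; the point is only that one uploaded image plus a "document the small details" prompt is enough to extract an unsettling amount of information.

import base64
import requests

VISION_ENDPOINT = "https://example.com/v1/vision:describe"  # hypothetical endpoint
PROMPT = (
    "Describe this photo in three detailed paragraphs. Note small details: "
    "visible text, brands, locations, objects, and anything identifying."
)

def describe_photo(path: str, api_key: str) -> str:
    # Read the image and send it, with the prompt, to a vision-capable model.
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    resp = requests.post(
        VISION_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": PROMPT, "image_base64": image_b64},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["description"]  # hypothetical response field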
Hacker Uno
42Kodiak42 said:
Remember, a big enough privacy violation also constitutes a grave security vulnerability.
Technically, any privacy violation constitutes a grave security vulnerability.
Remember, confidentiality is one of the five fundamental security tenets, and it defends against unauthorized disclosure. When you violate privacy, you are committing an unauthorized disclosure.
For the record, the five fundamental security tenets are:
- Confidentiality, which defends against unauthorized disclosure of a protected asset.
- Integrity, which defends against unauthorized modification of a protected asset.
- Availability, which defends against denial of authorized access to a protected asset.
- Authenticity, which defends against spoofing, forgery, and repudiation of a protected asset.
- Access-Control, which defends against unauthorized access of a protected asset.
FrontierMath's difficult questions remain unpublished so that AI companies can't train against it. //
On Friday, research organization Epoch AI released FrontierMath, a new mathematics benchmark that has been turning heads in the AI world because it contains hundreds of expert-level problems that leading AI models solve less than 2 percent of the time, according to Epoch AI. The benchmark tests AI language models (such as GPT-4o, which powers ChatGPT) against original mathematics problems that typically require hours or days for specialist mathematicians to complete.
FrontierMath's performance results, revealed in a preprint research paper, paint a stark picture of current AI model limitations. Even with access to Python environments for testing and verification, top models like Claude 3.5 Sonnet, GPT-4o, o1-preview, and Gemini 1.5 Pro scored extremely poorly. This contrasts with their high performance on simpler math benchmarks—many models now score above 90 percent on tests like GSM8K and MATH.
The design of FrontierMath differs from many existing AI benchmarks because the problem set remains private and unpublished to prevent data contamination. Many existing AI models are trained on other test problem datasets, allowing the AI models to easily solve the problems and appear more generally capable than they actually are. Many experts cite this as evidence that current large language models (LLMs) are poor generalist learners.
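A crude way to see why keeping the problems private matters: once benchmark text leaks into a training corpus, simple overlap checks light up and the test stops measuring reasoning. The miniature corpus and problems below are made up for illustration.

# Toy n-gram overlap check, a rough screen for benchmark contamination.
def ngrams(text: str, n: int = 6) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}

# Made-up miniature "training corpus" and benchmark problems.
training_corpus = (
    "find the smallest positive integer n such that n squared plus one "
    "is divisible by 13 and explain your reasoning"
)
leaked_problem = ("find the smallest positive integer n such that "
                  "n squared plus one is divisible by 13")
fresh_problem = "count the lattice points inside the ellipse x^2/9 + y^2/4 = 1"

for name, problem in [("leaked", leaked_problem), ("fresh", fresh_problem)]:
    shared = ngrams(training_corpus) & ngrams(problem)
    print(f"{name}: {len(shared)} overlapping 6-grams")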
Goldman noted that Ranson's reliance on Copilot for "what was essentially a numerical computation was especially puzzling because of generative AI's known hallucinatory tendencies, which makes numerical computations untrustworthy." //
Because Ranson was so bad at explaining how Copilot works, Schopf took the extra time to actually try to use Copilot to generate the estimates that Ranson got—and he could not.
Each time, the court entered the same query into Copilot—"Can you calculate the value of $250,000 invested in the Vanguard Balanced Index Fund from December 31, 2004 through January 31, 2021?"—and each time Copilot generated a slightly different answer.
This "calls into question the reliability and accuracy of Copilot to generate evidence to be relied upon in a court proceeding," Schopf wrote. //
Until a bright-line rule exists telling courts when to accept AI-generated testimony, Schopf suggested that courts should require disclosures from lawyers to stop chatbot-spouted inadmissible testimony from disrupting the legal system. //
Goldman suggested that Ranson seemingly did not even save much effort by employing Copilot in a way that damaged his credibility in court.
"It would not have been difficult for the expert to pull the necessary data directly from primary sources, so the process didn't even save much time—but that shortcut came at the cost of the expert's credibility," Goldman told Ars.
American Deplorable ™
A deep fake outlawing deep fakes.
The irony is almost as thick as the hair gel.
Because apps talking like pirates and creating ASCII art never gets old