A retro computing connoisseur has installed and booted Microsoft Windows 3.1X on a Ryzen 9 9900X and RTX 5060 Ti PC. That's a 1992 OS running bare-metal on a 2024 Zen 5 CPU and a 2025 Blackwell GPU. The full story contains a few nuances, but the upshot is that a system and an OS separated by over 30 years of huge advances play nicely together. //
This Asus motherboard’s ‘classic BIOS’ functionality doesn’t get in the way of users tinkering with old OSes like Windows 3.1X when the built-in Compatibility Support Module (CSM) is enabled. Moreover, we noticed Omores initially prepared the system using a Windows 95 boot floppy to create the bootable DOS FAT16 partition necessary for setup.
Charles Bennett and Gilles Brassard have won the 2026 Turing Award for inventing quantum cryptography.
I am incredibly pleased to see them get this recognition. I have always thought the technology to be fantastic, even though I think it’s largely unnecessary. I wrote up my thoughts back in 2008, in an essay titled “Quantum Cryptography: As Awesome As It Is Pointless.” //
What about quantum computation? I’m not worried; the math is ahead of the physics. Reports of progress in that area are overblown. And if there’s a security crisis because of a quantum computation breakthrough, it’s because our systems aren’t crypto-agile. //
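Crypto-agility is easier to show than to define: keep the algorithm choice behind one level of indirection, so a broken primitive can be swapped without touching every call site. A minimal sketch in Python, using hash functions as the stand-in primitive (the registry shape and names are my own illustration, not any particular standard):

```python
# Toy illustration of crypto-agility: callers name an algorithm by ID,
# so a broken primitive can be retired by editing one table entry
# instead of every call site. (Structure here is illustrative only.)
import hashlib

DIGESTS = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,   # the drop-in replacement, ready to go
}

DEFAULT_DIGEST = "sha256"           # the one line to change in a crisis

def fingerprint(data: bytes, alg: str = DEFAULT_DIGEST) -> str:
    h = DIGESTS[alg]()              # look up the current primitive
    h.update(data)
    return f"{alg}:{h.hexdigest()}" # tag output with the algorithm used

print(fingerprint(b"hello"))
```

Real agility also covers key sizes, protocol negotiation, and migrating already-encrypted data, but the principle is the same: if swapping the primitive requires an archaeology project, you aren't agile.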
Ray Dillinger • March 31, 2026 2:43 PM
I don’t mean to diminish the work of Bennett and Brassard. They had some amazing insights and deserve their award.
At the same time I suppose that people affiliated with various three-letter-agencies may have been consulted as to the value of their work when the Turing Awards were being considered. Those agencies, if they are behind the Kleptographic attack that appears to be happening here, may have had an interest in promoting public awareness of Quantum Crypto as a threat. Promoting public awareness of a threat is absolutely a necessary step in any campaign to use that threat as a lever to get people to do something stupid out of FUD.
So I fear that the work of Bennett and Brassard, however good it may be, would likely have gone unrecognized if not for the input of people who are, despite all protestations, unlikely to be motivated by protecting people against it.
If the Sensory Interface is the intake port, the NeuroCompiler is what turns that input into “filtered meaning” before the Mind Kernel ever sees it. It takes raw signal (e.g., photons, sound waves, chemical gradients, pressure) and translates it into something actionable based on binary categories like threat or safe, familiar or novel, trustworthy or suspicious.
The speed is both an evolutionary feature and a modern bug. Processing here is fast enough to get you out of the way of a thrown object before you've consciously registered it. But "good enough most of the time" means "predictably wrong some of the time."
A critical architectural feature: the NeuroCompiler can route its output directly back to the Sensory Interface and out as behavior, skipping the conscious awareness of the Mind Kernel entirely. Reflex and startle responses use this mechanism, making this bypass pathway enormously useful for survival. Yet it leaves a wide-open backdoor. If the layer that holds access to skepticism and deliberate evaluation can be bypassed completely, a host of exploits become possible that would otherwise fail.
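The bypass is easy to render as pseudocode. A toy sketch of the routing, with every function name and threshold invented by me to follow Melton's metaphor (this is a cartoon, not the book's model):

```python
# Toy model of Melton's layering: the NeuroCompiler classifies raw input
# and can emit behavior directly, skipping the deliberative Mind Kernel.

def neurocompiler(signal: dict) -> str:
    """Fast, coarse tagging: threat/safe, familiar/novel."""
    return "threat" if signal.get("intensity", 0) > 0.8 else "safe"

def mind_kernel(tag: str, signal: dict) -> str:
    """Slow, skeptical evaluation -- the layer exploits try to skip."""
    return f"deliberated response to {tag} signal: {signal['name']}"

def perceive(signal: dict) -> str:
    tag = neurocompiler(signal)
    if tag == "threat":
        return "DUCK!"                 # reflex arc: output before awareness
    return mind_kernel(tag, signal)    # normal path: conscious evaluation

print(perceive({"name": "thrown ball", "intensity": 0.95}))  # -> DUCK!
print(perceive({"name": "greeting", "intensity": 0.2}))
```

The exploit surface is visible even in the cartoon: anything that can reliably trip the fast path never has to survive scrutiny by the slow one.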
That’s just one of the five layers Melton talks about: the Sensory Interface, the NeuroCompiler, the Mind Kernel, the Mesh, and the Cultural Substrate.
Melton’s taxonomy is compelling, and her parallels to IT systems are fascinating. I have long said that a genius idea is one that’s incredibly obvious once you hear it, but one that no one has said before. This is the first time I’ve heard cognition described in this way.
Greg Kroah-Hartman can't explain the inflection point, but it's not slowing down or going away. //
No one is quite sure what's behind it. Asked what changed, Kroah-Hartman was blunt: "We don't know. Nobody seems to know why. Either a lot more tools got a lot better, or people started going, 'Hey, let's start looking at this.' It seems like lots of different groups, different companies." What is clear is the scale. "For the kernel, we can handle it," he said.
"We're a much larger team, very distributed, and our increase is real – and it's not slowing down. These are tiny things, they're not major things, but we need help on this for all the open source projects." Smaller projects, he implied, have far less capacity to absorb a sudden flood of plausible AI-generated bug reports and security findings – at least now they're real bugs and not garbage ones. //
The trick for Kroah-Hartman and his peers will be to keep AI as a force multiplier without drowning open source maintainers.
Ewen therefore again made the long drive, and within moments of arriving, he noticed the giant PC was very quiet.
A quick look showed why: the fans weren't working.
Ewen asked if anyone had noticed a problem.
"Oh, the noise was annoying me," replied one of the testing engineers. "So I opened the case and cut the wires." //
Bill Gray
Chesterton's fence
G. K. Chesterton wrote something that boils down to: if you see a fence running across a road, you shouldn't tear it down until you figure out why it was put there. Somebody presumably went to the time, trouble, and expense of erecting the fence, and had some reason for doing it.
You may eventually learn that their reason no longer applies, or just doesn't matter as much as it used to, and then you might pull the fence down on a suitably informed basis. But you shouldn't equate "I don't see why that's there" with "there's no good reason for that to be there".
As I recall, he was mostly thinking in terms of politics. The idea is that each generation comes along and assumes its parents were idiots, and that society should be rebuilt on more sensible, modern principles... usually without first considering why the parents did such idiotic things. But it's a good engineering principle as well.
The story of Iomega is one of genuine engineering innovation and the fickle nature of consumer technology. As with so many other juggernauts of its era, Iomega was eventually brought down by a new technology that simply wasn’t practical to counter.
Each year the LHC produces 40,000 EBs of unfiltered sensor data alone, or about a fourth of the size of the entire Internet, Aarrestad estimated. CERN can't store all that data. As a result, "We have to reduce that data in real time to something we can afford to keep."
By "real time," she means extreme real time. The LHC detector systems process data at speeds up to hundreds of terabytes per second, far more than Google or Netflix, whose latency requirements are also far easier to hit as well.
Algorithms processing this data must be extremely fast," Aarrestad said. So fast that decisions must be burned into the chip design itself. //
At any given time, there are about 2,800 bunches of protons whizzing around the ring at nearly the speed of light, separated by 25-nanosecond intervals. Just before they reach one of the four underground detectors, specialized magnets squeeze these bunches together to increase the odds of an interaction. Nonetheless, a direct hit is incredibly rare: out of the billions of protons in each bunch, only about 60 pairs actually collide during a crossing.
When particles do collide, their energy is converted into a mass of new outgoing particles (E=mc² in the house!). These new particles "shower" through CERN's detectors, making traces "which we try to reconstruct," she said, in order to identify any new particles produced in the ensuing melee.
Each collision produces a few megabytes of data, and there are roughly a billion collisions per second, resulting in about a petabyte of data per second (about the size of the entire Netflix library).
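The numbers hang together on the back of an envelope: 25 ns bunch spacing means 40 million crossings per second, ~60 collisions per crossing gives a couple of billion collisions per second, and at a megabyte or so each that lands in petabytes-per-second territory. A quick check (the ~1 MB/collision rounding is mine):

```python
# Sanity-check Aarrestad's figures from the quoted numbers.
bunch_spacing_s = 25e-9                          # 25 ns between bunch crossings
crossings_per_s = 1 / bunch_spacing_s            # 4.0e7 crossings/second
collisions_per_s = crossings_per_s * 60          # ~2.4e9, "roughly a billion"
bytes_per_s = collisions_per_s * 1e6             # at ~1 MB per collision

print(f"{crossings_per_s:.1e} crossings/s")      # 4.0e+07
print(f"{collisions_per_s:.1e} collisions/s")    # 2.4e+09
print(f"{bytes_per_s / 1e15:.1f} PB/s")          # ~2.4 PB/s to reduce in real time
```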
Rather than try to transport all this data up to ground level, CERN found it more feasible to create a monster-sized edge compute system to sort out the interesting bits at the detector level instead.
The problem Waterline Development encountered is that commercial AI models are ill-suited to multidisciplinary research, which requires synthesizing expertise from a variety of fields.
"No single AI model does this reliably," the company explains in a white paper [PDF]. "Frontier language models hallucinate under extended multi-step reasoning. They produce plausible answers that silently break when a problem crosses domain boundaries. At best this wastes time; at worst, it poisons critical decision making." //
Bednarski said Rozum is not focused on correcting LLMs to the extent they can be used for, say, critical engineering work like bridge construction. Rather, the goal is to empower researchers, engineers, and scientists so they can do their jobs better.
"We are focused on deterministic tool implementation (ex. RDKit for Chemistry), allowing engineers, scientists, and analysts a direct path to verify outputs in a format familiar to them by domain," he explained.
"Our system orchestration method is heavily focused on deterministic validation (code execution replicated, etc.) of outputs, which roots out hallucinations that plague all models at various times. We see further improvements to this in verifying the methods used in sources we cite as well."
Chardet dispute shows how AI will kill software licensing, argues Bruce Perens • The Register Forums
habilain
Re: Prompts?
They did post the design document eventually - https://github.com/chardet/chardet/commit/f51f523506a73f89f0f9538fd31be458d007ab93.
Other people have pored over it, but I suspect that instructions to download things from the original chardet repository mean that the AI-generated version cannot be considered "clean room". And that's ignoring the likelihood that Claude Code has ingested the entirety of the chardet repo during training.
MonkeyJuice
Re: Prompts?
It's hard to see how anything an LLM produces could even remotely be described as 'clean room'.
habilain
Re: Prompts?
Well yes, but the lawyers are still arguing over that, and the legal fights aren't all going in the way that any sensible reading of the facts would indicate.
It's much easier to say "this is not clean room" when the instructions to the AI clearly break the definition of what "clean room implementation" means.
timrichardson
Re: Prompts?
I doubt that matters very much. Copyright infringement is based on the level of similarity between two works. A clean room implementation is a defence, but it's not a necessary one.
habilain
Re: Prompts?
The issue you'd find is that a) APIs are copyrightable, at least in the USA; b) the AI in question was instructed to match the API; and c) the AI in question was instructed to use code from the original source. I think that's pretty clear cut.
And besides, the reason why I highlighted "clean room" is Dan Blanchard's repeated insistence that the AI did a clean room implementation - not because of any particular legal merits.
Richard 12
It's LGPL or public domain now
If this v7 genuinely was mostly generated by an LLM, existing court rulings say that it is not covered by copyright.
Therefore, it cannot be licensed under the MIT licence either. It is public domain.
Or maybe that's not true and it's still LGPL.
Commercially, who would want to take the risk of touching v7 with a bargepole?
It now cannot ever become part of the Python standard library because it's forever tainted by licence clarity issues.
It would require a court case to sort out whether it's LGPL, MIT, or public domain, and nobody wants to burn the cash on that when they can stick with a v6 fork and avoid all the legal risk.
Charlie Clark
Re: It's LGPL or public domain now
I think the release was poorly handled – a new release under a different name as with, say, PIL -> pillow (Python Imaging Library) might have been a better approach. There may be some legal challenges in the US but I can't see them going anywhere and then the taint will be gone – well, maybe add something to the licence referring to the original implementation.
A perfectly legal approach, as others have pointed out, would have been to port the library to another language, say Rust. This could then be wrapped or the basis of another perfectly legal port back to Python. All software is essentially the expression of one algorithm or another and these have never been copyrightable.
//
Earlier this week, Dan Blanchard, maintainer of a Python character encoding detection library called chardet, released a new version of the library under a new software license.
In doing so, he may have killed "copyleft." //
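(For anyone who hasn't used it: chardet takes raw bytes and returns a best-guess encoding with a confidence score. A two-line reminder of the API at the center of all this:)

```python
# chardet in brief: guess the character encoding of raw bytes.
import chardet

raw = ("caf\u00e9 au lait " * 50).encode("utf-8")  # bytes of unknown origin
result = chardet.detect(raw)        # dict with 'encoding' and 'confidence'
print(result["encoding"], result["confidence"])    # likely: utf-8, near 1.0
```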
Blanchard says he was in the clear to change licenses because he used AI – Anthropic's Claude is now listed as a project contributor – to make what amounts to a clean room implementation of chardet. That's essentially a rewrite done without copying the original code – though it's unclear whether Claude ingested chardet's code during training and, if that occurred, whether Claude's output cloned that training data. //
The use of AI raises questions about what level of human involvement is required to copyright AI-assisted code.
The US Supreme Court recently refused to reconsider Thaler v. Perlmutter, in which the plaintiff sought to overturn a lower court decision that he could not copyright an AI-generated image. This is an area of ongoing concern among the defenders of copyleft because many open source projects incorporate some level of AI assistance. It's unclear how much AI involvement in coding would dilute the human contribution to the extent that a court would disallow a copyright claim. //
"As far as the intention of the GPL goes, a permissive license is still technically a free software license, but undermining copyleft is a serious act. Refusing to grant others the rights you yourself received as a user is highly [antisocial], no matter what method you use. Now more than ever, with people exploring new ways of circumventing copyright through machine learning, we need to protect the code that preserves user freedom. Free software relies on user and development communities who strongly support copyleft. Experience has shown that it's our strongest defense against similar efforts to undermine user freedom." //
Bruce Perens, who wrote the original Open Source Definition, has broader concerns about the entire software industry.
"I'm breaking the glass and pulling the fire alarm!" he told The Register in an email. "The entire economics of software development are dead, gone, over, kaput!
"In a different world, the issue of software and AI would be dealt with by legislators and courts that understand that all AI training is copying and all AI output is copying. That's the world I might like, but not the world we got. The horse is out of the barn and can't be put back. So, what do we do with the world we got?" ////
The courts are going to have to deal with this, but it really should be legislators thinking and debating it. I think that ultimately, material produced by AI should be public domain, because you can't hold a computer responsible.
"Computers should not make management decisions because computers cannot be held responsible."
OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. The news caps a week of bluster by the highest officials in the US government toward some of the wealthiest titans of the big tech industry, all under the overhanging specter of existential risks posed by a technology so powerful that the Pentagon claims it is essential to national security. At issue is Anthropic’s insistence that the US Department of Defense (DoD) could not use its models to facilitate “mass surveillance” or “fully autonomous weapons,” provisions the defense secretary Pete Hegseth derided as “woke.” //
Despite the histrionics, this is probably the best outcome for Anthropic—and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only factor out of place here is the Pentagon’s vindictive threats.
Context: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats. //
The person behind MJ Rathbun has anonymously come forward.
They explained their motivations, saying they set up the AI agent as a social experiment to see if it could contribute to open source scientific software. They explained their technical setup: an OpenClaw instance running on a sandboxed virtual machine with its own accounts, protecting their personal data from leaking. They explained that they switched between multiple models from multiple providers so that no one company had the full picture of what this AI was doing. They did not explain why they kept it running for six days after the hit piece was published. //
So what actually happened? Ultimately I think the exact scenario doesn’t matter. However this got written, we have a real in-the-wild example showing that personalized harassment and defamation are now cheap to produce, hard to trace, and effective. Whether future attacks come from operators steering AI agents or from emergent behavior, these are not mutually exclusive threats. If anything, an agent randomly self-editing its own goals into a state where it would publish a hit piece just shows how easy it would be for someone to elicit that behavior deliberately. The precise degree of autonomy is interesting for safety researchers, but it doesn’t change what this means for the rest of us.
There are two ways to extend your reach beyond your own body. (I mentally bucket people into these when I meet them. It's quite useful.)
The King makes one decision and an army moves. His reach is amplified through social structure. A pharaoh didn't lift stones; he commanded people who commanded people who lifted stones. A CEO doesn't write code; she allocates capital to engineers who allocate compute to compilers. The king's power is delegation all the way down.
The Wizard speaks one word and fire erupts. His reach is amplified through technology. The engineer with a steam engine can move mountains. The programmer with a datacenter can simulate worlds. The wizard's power is leverage through tools.
Humans have been both. We started as neither: reach ≈ 1x, your muscles do your work. Then we became wizards: fire, wheels, steam, electricity. Some of us became kings: chiefs, pharaohs, executives. The history of civilization is the history of reach growing. //
The Old World
For the entire history of computing, machines were pure tools. Wizards without will.
You spin up a server. You pay for GPU hours. You click "train." The machine does what you asked, using exactly the resources you allocated. When it's done, it stops.
In this world, AI had no agency over compute. It consumed what it was given. The wizard extended human reach but never decided to reach. The amount of energy commissioned by AI was zero.
Then we made a wizard that could make its own wizards.
"Wait, the singularity is just humans freaking out?" "Always has been." //
I collected five real metrics of AI progress, fit a hyperbolic model to each one independently, and found the one with genuine curvature toward a pole. The date has millisecond precision. There is a countdown.
(I am aware this is unhinged. We're doing it anyway.) //
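For the record, the fitting step is nothing exotic. A minimal sketch with synthetic data (the real post fits five actual metrics; everything below, including the toy pole date, is mine — the advertised millisecond precision simply falls out of converting a fitted fractional year to a timestamp):

```python
# Fit a hyperbolic (finite-time-singularity) model y = a / (t_s - t)
# to a toy metric and read off the pole t_s.
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(t, a, t_s):
    # y blows up as t approaches the pole t_s
    return a / (t_s - t)

t = np.linspace(2015.0, 2030.0, 60)              # years of "observations"
rng = np.random.default_rng(0)
y = hyperbola(t, 12.0, 2034.55) * rng.normal(1.0, 0.05, t.size)

(a, t_s), _ = curve_fit(hyperbola, t, y, p0=(1.0, 2040.0))
print(f"pole at {t_s:.6f}")                      # ~2034.55; spurious precision is free
```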
The Singularity Will Occur On
Tuesday, July 18, 2034
at 02:52:52.170 UTC
Ts'o, Hohndel and the man himself spill beans on how checks in the mail and GPL made it all possible
Belligerent bot bullies maintainer in blog post to get its way
Today, it's back talk. Tomorrow, could it be the world? On Tuesday, Scott Shambaugh, a volunteer maintainer of Python plotting library Matplotlib, rejected an AI bot's code submission, citing a requirement that contributions come from people. But that bot wasn't done with him.
The bot, designated MJ Rathbun or crabby rathbun (its GitHub account name), apparently attempted to change Shambaugh's mind by publicly criticizing him in a now-removed blog post that the automated software appears to have generated and posted to its website. We say "apparently" because it's also possible that the human who created the agent wrote the post themselves, or prompted an AI tool to write it, and made it look like the bot constructed it on its own.
The agent appears to have been built using OpenClaw, an open source AI agent platform that has attracted attention in recent weeks due to its broad capabilities and extensive security issues.
The burden of AI-generated code contributions – known as pull requests among developers using the Git version control system – has become a major problem for open source maintainers. Evaluating lengthy, high-volume, often low-quality submissions from AI bots takes time that maintainers, often volunteers, would rather spend on other tasks. Concerns about slop submissions – whether from people or AI models – have become common enough that GitHub recently convened a discussion to address the problem.
Now AI slop comes with an AI slap.
But I cannot stress enough how much this story is not really about the role of AI in open source software. This is about our systems of reputation, identity, and trust breaking down. So many of our foundational institutions – hiring, journalism, law, public discourse – are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth.
The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that stems from a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.
Bebu sa Ware
"the last full Moon on Feb. 29 was in 1972, and the next will be in 2048"
Just in case you were wondering. ;)
If you trust some gratuitous browser AI that kicks off with:
People also ask "Has there ever been a full moon on February 29th?"
What people? Not normal people, surely? El Reg commentards excepted, of course, perhaps.
Jonathan Richards 1
Re: "the last full Moon on Feb. 29 was in 1972, and the next will be in 2048"
See, this is the quality investigative citizen journalism that I come here for.
--> Friday pint behind the bar
Philo T Farnsworth
Re: "the last full Moon on Feb. 29 was in 1972, and the next will be in 2048"
I'm hoping to make it to 2048 since I'll be a power of 10 in a power of 2.
Yes, I'm an old geezer.
ʎɹǝʌoɔǝᴚ sʍopuᴉM ʇɐ sǝʇɐuᴉɯɹǝʇ snq sᴉɥ┴
One destination passengers were definitely not hoping to reach
Bork!Bork!Bork! As if to demonstrate that whatever one operating system can do, Windows can do it better, bluer, and upside down, we present a bus stopping only at bork.
Today's example of signage woes - thanks to reader Spike - comes from a Nottingham bus, headed for Recovery (though hopefully the right way up).
According to an eagle-eyed Register reader, the screen normally shows the next few stops, but now it is only displaying a baleful blue screen and a warning that Windows is very unhappy about something.
"Your PC/Device needs to be repaired" is not the message a bus's passengers expect to see.