These are just individuals; they just use computers, and they just want to steal your data and make money. They're not mythical. They don't have superpowers.
And thus, the Dark Web Roast was born. It's a regular blog complete with memes, mockery, and a Ricky Gervais-style "they're just jokes" disclaimer: "While these incidents are genuinely amusing, they represent real criminal activities causing significant harm. This content is for threat intelligence and educational purposes only."
The most recent edition features a ransomware gang that bulk-drafted and scheduled their extortion attempts like a content calendar: "Considering the sheer, numbing volume of their posts, it's a solid bet that their 'victims' are probably just fake sites they spun up themselves for content, because nothing screams legitimacy like inflating your stats with phantom compromises," the researchers wrote.
But public mockery (as with LockBit) and infiltration (as the FBI did with Hive's ransomware network) can fracture trust among cyberthieves. That fragmentation can help defenders dismantle criminal operations and keep people and data safe.
The video shows an administrator skimming the most valuable secrets and cryptocurrency keys for personal gain, while passing only less lucrative data to customers. Trellix learned about this incident during a briefing with Dutch police.
"They said to us, 'We found out that this admin is also stealing from his own customers,'" Fokker remembers. After the Europol press release came out, Trellix unleashed the snark in a Dark Web Roast.
"We basically said you're stupid if you work with him, because he's just getting rich, and we just make fun of him," Fokker said. "We don't know if the impact was measurable, but still, we had an opportunity to run with that story and make a complete fool out of this admin. So that's something."
The cost of high-performance GPUs, typically $8,000 or more, means they are frequently shared among dozens of users in cloud environments. Three new attacks demonstrate how a malicious user can gain full root control of a host machine by performing novel Rowhammer attacks on high-performance GPU cards made by Nvidia.
The attacks exploit memory hardware’s increasing susceptibility to bit flips, in which 0s stored in memory switch to 1s and vice versa. In 2014, researchers first demonstrated that repeated, rapid access—or “hammering”—of memory hardware known as DRAM creates electrical disturbances that flip bits. A year later, a different research team showed that by targeting specific DRAM rows storing sensitive data, an attacker could exploit the phenomenon to escalate an unprivileged user to root or evade security sandbox protections. Both attacks targeted DDR3 generations of DRAM.
On Thursday, two research teams, working independently of each other, demonstrated attacks against two cards from Nvidia’s Ampere generation that take GPU rowhammering into new—and potentially much more consequential—territory: GDDR bit flips that give adversaries full control of CPU memory, resulting in full system compromise of the host machine. For the attack to work, IOMMU memory management must be disabled, as is the default in BIOS settings.
A separate mitigation is to enable Error Correcting Codes (ECC) on the GPU, something Nvidia allows via a command-line tool.
Kevin G
Ars Scholae Palatinae
21y
1,483
Thursday at 2:54 PM
The ECC functionality on Nvidia cards can take a pretty big performance hit because they do not include extra DRAM for ECC. Thus on a 32 GB workstation GPU, the amount of usable memory is reduced to 28 GB. So if you were using that extra memory and flipped on ECC, performance tanks as the remaining 4 GB gets paged out to host CPU memory. Beyond that, where the parity data for ECC resides is somewhat configurable. If it is on the same memory controller (which generally means the same memory chip, as often there is only one chip per memory channel), then the calculation is done inside the memory controller relatively quickly. This of course comes at the higher risk of losing data if a memory chip fails, but it does protect against random bit flips. The other ECC algorithm is more akin to software RAID 5, which rotates where the parity data resides across the chips and across the various internal memory controllers. Thus to compute ECC, one memory controller has to wait for another controller to read that information and pass it along, which is a big performance penalty.
What this article doesn't cover is HBM, which can have both extra stacks of memory in a channel and extra bits of parity on each die in the stack. Most ECC implementations leverage the extra memory on the die plus rotating where the parity data resides. The end result is effectively the same as having an extra DRAM chip on a DIMM. (For those who don't know, an 8 GB ECC DIMM will contain ten 1 GB memory chips, but the extra 2 GB is used exclusively for ECC and does not add to the usable capacity.)
HBM controllers are rather complex, and the reason why capacities like 141 GB exist is due to a single die failure in one of the many stacks. Instead of disabling a whole stack and reducing the memory capacity to 120 GB, only the explicitly broken die is disabled.
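The schemes described above are vendor-specific, but the core idea of ECC, a few parity bits that locate and correct a single flipped bit, can be illustrated with a classic Hamming(7,4) code. This is a minimal sketch of the general principle, not Nvidia's or HBM's actual algorithm:

```python
# Minimal Hamming(7,4) sketch: 4 data bits -> 7-bit codeword.
# Any single bit flip (e.g. from Rowhammer) is located and corrected.
# Illustrative only; real GPU/HBM ECC uses wider SECDED-style codes.

def encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """Recover the 4 data bits, fixing at most one flipped bit."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                          # simulate a single Rowhammer bit flip
assert correct(word) == data          # the flip is detected and repaired
```

The tradeoff the comment describes, where the parity lives and who computes it, is exactly the extra storage and extra read traffic this sketch implies: every write costs parity bits, and every read costs a syndrome check.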
At Friday’s hearing of the Colorado Senate Business, Labor, and Technology committee, lawmakers voted unanimously to move Colorado state bill SB26-090—titled Exempt Critical Infrastructure from Right to Repair—out of committee and into the state senate and house for a vote.
The bill modifies Colorado’s Consumer Right to Repair Digital Electronic Equipment act, which was passed in 2024 and went into effect in January 2026. While the protections secured by that act are wide, the new SB26-090 bill aims to “exempt information technology equipment that is intended for use in critical infrastructure from Colorado’s consumer right to repair laws.”
“I can point out at least five problems with the bill as drafted,” Gay Gordon-Byrne, the executive director at the Repair Association, said during the hearing. “The definition of critical infrastructure is completely inadequate. The definition that has been proposed in this bill is not even a definition.”
Repair advocates also say that limiting this kind of repairability is the exact opposite of keeping devices secure. If something goes wrong with a critical piece of technology, the people using it need to be able to fix it without waiting for manufacturer approval.
“There’s a general principle in cybersecurity that obscurity is not security,” iFixit CEO Kyle Wiens said in the hearing. “The money that’s behind the scenes, that’s what’s driving the bill.”
DarthSlack Ars Legatus Legionis
12y
23,110
Subscriptor++
So critical infrastructure is, well, critical, right? Like you need it to keep working because if it stops you're in a world of hurt? So isn't that the stuff you really, really, really want to be able to repair when it breaks and not sitting on your ass waiting for some clownshoes to show up and charge you a small fortune to turn a screw or apply a patch?
Charles Bennett and Gilles Brassard have won the 2026 Turing Award for inventing quantum cryptography.
I am incredibly pleased to see them get this recognition. I have always thought the technology to be fantastic, even though I think it’s largely unnecessary. I wrote up my thoughts back in 2008, in an essay titled “Quantum Cryptography: As Awesome As It Is Pointless.”
What about quantum computation? I’m not worried; the math is ahead of the physics. Reports of progress in that area are overblown. And if there’s a security crisis because of a quantum computation breakthrough, it’s because our systems aren’t crypto-agile.
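Crypto-agility here means building systems so an algorithm can be swapped by configuration rather than by rewriting every call site. A minimal sketch of that pattern, using hash functions as stand-ins for key-exchange or signature schemes (the registry, labels, and helper names are hypothetical, not any standard API):

```python
import hashlib

# Hypothetical algorithm registry: callers reference a label, never a primitive.
# Migrating (say, to a post-quantum scheme) means registering a new entry and
# changing DEFAULT, not touching every call site.
ALGORITHMS = {
    "legacy":  lambda data: hashlib.sha1(data).hexdigest(),
    "current": lambda data: hashlib.sha256(data).hexdigest(),
    "next":    lambda data: hashlib.sha3_256(data).hexdigest(),
}
DEFAULT = "current"

def digest(data: bytes, alg: str = None) -> str:
    """Hash `data` with the configured algorithm, tagging the output with the
    algorithm label so it stays verifiable after the default changes."""
    alg = alg or DEFAULT
    return f"{alg}${ALGORITHMS[alg](data)}"

def verify(data: bytes, tagged: str) -> bool:
    """Verify against whichever algorithm produced the tagged value."""
    alg, _, expected = tagged.partition("$")
    return ALGORITHMS[alg](data) == expected

token = digest(b"hello")
assert token.startswith("current$")
assert verify(b"hello", token)        # still verifiable even if DEFAULT moves on
```

The point of the tag is the agility: old values remain verifiable while new ones are produced under the replacement algorithm, which is exactly the migration path a store-now-decrypt-later scenario demands.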
Ray Dillinger • March 31, 2026 2:43 PM
I don’t mean to diminish the work of Bennett and Brassard. They had some amazing insights and deserve their award.
At the same time I suppose that people affiliated with various three-letter-agencies may have been consulted as to the value of their work when the Turing Awards were being considered. Those agencies, if they are behind the Kleptographic attack that appears to be happening here, may have had an interest in promoting public awareness of Quantum Crypto as a threat. Promoting public awareness of a threat is absolutely a necessary step in any campaign to use that threat as a lever to get people to do something stupid out of FUD.
So I fear that the work of Bennett and Brassard, however good it may be, would likely have gone unrecognized if not for the input of people who are, despite all protestations, unlikely to be motivated by protecting people against it.
Secure Boot is a feature of UEFI, and it's a requirement for any computer that wants to run a modern version of Windows. It exists to protect you against malware that infects your computer's bootloader. A security certificate stored in the UEFI lets your computer check that the Windows bootloader is legitimately signed by Microsoft and not an imposter.
So far, so good, but what happens when the certificate in your UEFI expires? Well, we're all about to find out.
IanRS
Bigger problems
In my work as a security architect I occasionally get asked by an assurer or auditor why I think running AWS infrastructure in just two availability zones without a second region is enough. The latest was just earlier this week. It shows that they do not understand risk/impact balance outside their own little box. I have to point out that if something can take out two geographically separated data centres simultaneously then the impact is not restricted just to their website, and they probably have bigger problems to worry about. Some of them accept this. Some still think another region would help.
Anonymous Coward
Re: Bigger problems
I worked for a small public sector body. An auditor once asked what would happen if both our main and DR sites went dark. I said if that happened, something very big & bad was happening and no-one was going to care about our organisation.
Auditor ticked their box as we had clearly considered the possibility and we had a plan. (Do nothing is still a plan!)
Dachannien Ars Scholae Palatinae
16y
1,130
Subscriptor
OrvGull said:
Google has a quantum computing division. Implying they're close to some kind of breakthrough could absolutely juice their stock.
Maybe, but they actually explain the point of worrying now: Store-now-decrypt-later attacks can only really be mitigated by migrating systems to PQC. The sooner you do that, the smaller your data vulnerability surface is (in a timewise sense). If you get compromised in the future and your encrypted data gets exfiltrated, you're much better off if that data was protected with PQC. Your window of future vulnerability is by definition shorter if you implement now rather than later.
Based on that logic, the reason to pick, say, 2029 as a good must-implement date is because of the naturally decaying value of store-now-decrypt-later data. Even if QC isn't successful until 2039, deploying by 2029 means any vulnerable data would be 10 years old (and 10 years less valuable) by the time it gets cracked. The fact that they didn't pick a date even sooner just speaks to the monumental bulk of the task at hand.
The Federal Communications Commission yesterday announced it will no longer approve consumer-grade routers made outside of the US, citing a President Trump directive on reducing the use of foreign technology for national security reasons. The action will prevent foreign-made routers from being imported into or sold in the US.
Routers already approved for sale in the US can continue to be sold, and consumers can keep using any router they’ve previously obtained, the FCC said. But the FCC will not approve new device models made at least partly outside the US unless the Department of Defense or Department of Homeland Security determines that the router does not pose national security risks.
The prohibition applies to both US and foreign companies that produce routers outside the US. Foreign production includes “any major stage of the process through which the device is made, including manufacturing, assembly, design, and development.”
“This action means that new models of foreign-produced routers will no longer be eligible for marketing or sale in the US,” FCC Chairman Brendan Carr wrote on X.
While US government agencies remain the top targets for Iran's cyber weapons, all of the security professionals we interviewed told us that American businesses are more at risk.
"The NSA is really, really good at defensive operations, and so I don't see...the attacks going against government assets, I see them going after civilian assets," said Coffman, who served more than 35 years in the US Army and is now president of Forward Edge-AI, which provides AI and cybersecurity services to US government, defense, and critical infrastructure sectors.
2FAS Auth — Internet’s favourite Open-source two-factor authenticator.
Private, simple and secure.
It works with all browsers, and it's not limited to browser use: use it to log into any device or application, or to unlock encrypted drives.
Some sites support OTP codes; others support security keys. OnlyKey does it all and is the most universally supported 2FA key.
You can use your OnlyKey immediately for two-factor authentication and passwordless login (FIDO2) supported by major websites such as Microsoft, Google, Facebook, Dropbox, GitHub, Okta, AWS, and more.
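The OTP codes these apps and keys generate are usually TOTP per RFC 6238: HMAC-SHA1 over a counter derived from the current time, truncated to a short decimal code. A minimal sketch of the standard algorithm, checked against an RFC 6238 test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestamp: int = None, digits: int = 6, step: int = 30) -> str:
    """Standard TOTP (RFC 6238): HMAC-SHA1 over the current 30-second
    time-step counter, dynamically truncated to a short decimal code."""
    key = base64.b32decode(secret_b32.upper())
    if timestamp is None:
        timestamp = int(time.time())
    counter = struct.pack(">Q", timestamp // step)          # big-endian 64-bit
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T=59
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59, digits=8) == "94287082"
```

Because both sides derive the code from the same shared secret and clock, the phone app and the website agree without ever sending the secret over the wire; a hardware key like OnlyKey simply holds that secret where malware can't read it.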
To bypass the bottleneck, companies are turning to Merkle Trees, a data structure that uses cryptographic hashes and other math to verify large amounts of information with a small fraction of the data used in more traditional verification processes in public key infrastructure. Cloudflare has a much deeper dive into Merkle Trees here.
Merkle Tree Certificates “replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs,” members of Google’s Chrome Secure Web and Networking Team wrote Friday. “In this model, a Certification Authority (CA) signs a single ‘Tree Head’ representing potentially millions of certificates, and the ‘certificate’ sent to the browser is merely a lightweight proof of inclusion in that tree.”
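The inclusion-proof idea is easy to sketch: hash the leaves, hash pairs upward to a single root, and prove membership with the log2(n) sibling hashes along one path. A generic illustration in Python, not Chrome's actual Merkle Tree Certificate format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaves up to a single root hash (the signed 'Tree Head')."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])        # duplicate an odd tail node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1                # partner in this pair
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a leaf and its proof; True if they match."""
    acc = h(leaf)
    for sibling, is_left in proof:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root

certs = [b"cert-%d" % i for i in range(5)]
root = merkle_root(certs)                  # one value stands in for all leaves
proof = merkle_proof(certs, 3)             # log2(n) hashes, not a signature chain
assert verify(b"cert-3", proof, root)
```

This is why the scheme scales: the CA signs only the root, and each browser checks a handful of hashes instead of verifying a chain of signatures per certificate.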
With the ability to intercept all link-layer traffic (that is, the traffic as it passes between Layers 1 and 2), an attacker can perform other attacks on higher layers. The most dire consequence occurs when an Internet connection isn’t encrypted—something that Google recently estimated occurs for as much as 6 percent of pages loaded on Windows and 20 percent on Linux. In these cases, the attacker can view and modify all traffic in the clear and steal authentication cookies, passwords, payment card details, and any other sensitive data. Since much company intranet traffic is sent in plaintext, it can also be intercepted.
“Even when the guest SSID has a different name and password, it may still share parts of the same internal network infrastructure as your main Wi-Fi,” the researcher explained. “In some setups, that shared infrastructure can allow unexpected connectivity between guest devices and trusted devices.”
The MitM targets Layers 1 and 2 and the interaction between them. It starts with port stealing, one of the earliest classes of Ethernet attack, adapted here to work against Wi-Fi. An attacker carries it out by modifying the Layer-1 mapping that associates a network port with a victim’s MAC—a unique address that identifies each connected device. By connecting to the BSSID that bridges the AP to a radio frequency the target isn’t using (usually 2.4 GHz or 5 GHz) and completing a Wi-Fi four-way handshake, the attacker replaces the target’s MAC with one of their own.
For now, client isolation is similarly defeated—almost completely and overnight—with no immediate remedy available.
At the same time, the bar for waging WEP attacks was significantly lower, since they were available to anyone within range of an AP. AirSnitch, by contrast, requires that the attacker already have some sort of access to the Wi-Fi network. For many people, that may mean steering clear of public Wi-Fi networks altogether.
If the network is properly secured—meaning it’s protected by a strong password that’s known only to authorized users—AirSnitch may not be of much value to an attacker. The nuance here is that even if an attacker doesn’t have access to a specific SSID, they may still use AirSnitch if they have access to other SSIDs or BSSIDs that use the same AP or other connecting infrastructure.
Probably the most reasonable response is to exercise measured caution for all Wi-Fi networks managed by people you don’t know. When feasible, use a trustworthy VPN on public APs or, better yet, tether a connection from a cell phone.
Most phishing websites are little more than static copies of login pages for popular online destinations, and they are often quickly taken down by anti-abuse activists and security firms. But a stealthy new phishing-as-a-service offering lets customers sidestep both of these pitfalls: It uses cleverly disguised links to load the target brand’s real website, and then acts as a relay between the victim and the legitimate site — forwarding the victim’s username, password and multi-factor authentication (MFA) code to the legitimate site and returning its responses.
Academics say they found a series of flaws affecting three popular password managers, all of which claim to protect user credentials in the event that their servers are compromised.
The team, made up of researchers from ETH Zurich and Università della Svizzera italiana (USI), examined the "zero-knowledge encryption" promises made by Bitwarden, LastPass, and Dashlane, finding all three could expose passwords if attackers compromised their servers.
As one of the most popular alternatives to Apple's and Google's own password managers, which together dominate the market, Bitwarden was the most susceptible, the researchers found, with 12 attacks working against the open-source product. Seven distinct attacks worked against LastPass, and six against Dashlane.
DeMercurio and Wynn sued Dallas County and Leonard for false arrest, abuse of process, defamation, intentional infliction of emotional distress, and malicious prosecution. The case dragged on for years. Last Thursday, five days before a trial was scheduled to begin in the case, Dallas County officials agreed to pay $600,000 to settle the case.
It’s hard to overstate the financial, emotional, and professional stresses that result when someone is locked up and repeatedly accused of criminal activity for performing authorized work that’s clearly in the public interest. DeMercurio has now started his own firm, Kaiju Security.
“The settlement confirms what we have said from the beginning: our work was authorized, professional, and done in the public interest,” DeMercurio said. “What happened to us never should have happened. Being arrested for doing the job we were hired to do turned our lives upside down and damaged reputations we spent years building.”
Martin Blank
Reading more about it, it seems a bit more complicated. While I don't think the pentesters should have been arrested (much less defamed), it does seem like the people who authorized them might not have actually had that authority.
I was a pentester for about a decade (though I didn't do physical testing), including at the time of this incident. There is a certain amount of trust that goes into contracting. We don't go out just based on an email approval. We get signed authorizations that are presumably vetted by knowledgeable people, and frequently lawyers, on both sides. I wouldn't have thought twice about accepting a contract signed by a representative for the court system itself.
But even more important, the people who hired them should have done their due diligence. Had they followed standard protocol and brought legal in, these issues of authority would likely have been pointed out.
There is a high likelihood that legal was brought in. This circumstance was weird, and the only reason that it got out of control was the sheriff. In most places, an improperly authorized test would have resulted in no charges or charges rapidly dismissed after showing that there was no intent to break the law.
You want to be especially in the clear on this, given cops' inherent tendency to be dicks about anything.
Yeah, this whole incident caused some significant changes in how physical pentesting was done.
EFF is against age gating and age verification mandates, and we hope we’ll win in getting existing ones overturned and new ones prevented. But mandates are already in effect, and every day many people are asked to verify their age across the web, despite prominent cases of sensitive data getting leaked in the process.
At some point, you may have been faced with the decision yourself: should I continue to use this service if I have to verify my age? And if so, how can I do that with the least risk to my personal information? This is our guide to navigating those decisions, with information on what questions to ask about the age verification options you’re presented with, and answers to those questions for some of the most popular social media sites. Even though there’s no way to implement mandated age gates in a way that fully protects speech and privacy rights, our goal here is to help you minimize the infringement of your rights as you manage this awful situation.
If you're serious about encryption, keep control of your encryption keys
If you think using Microsoft's BitLocker encryption will keep your data 100 percent safe, think again. Last year, Redmond reportedly provided the FBI with encryption keys to unlock the laptops of Windows users charged in a fraud indictment.
BitLocker is a Windows security system that can encrypt data on storage devices. It supports two modes: Device Encryption, a mode designed to simplify security, and BitLocker Drive Encryption, an advanced mode.
For either mode, Microsoft "typically" backs up BitLocker keys to its servers when the service gets set up from an active Microsoft account. "If you use a Microsoft account, the BitLocker recovery key is typically attached to it, and you can access the recovery key online," the company explains in its documentation.
Microsoft provides the option to store keys elsewhere. Instead of selecting "Save to your Microsoft Account," customers can "Save to a USB flash drive," "Save to a file," or "Print the recovery key."
Apple offers a similar device encryption service called FileVault, complemented by its iCloud service, which likewise has an easier mode, "Standard data protection," and an "Advanced Data Protection for iCloud" option.
Claude Cowork is vulnerable to file exfiltration attacks via indirect prompt injection as a result of known-but-unresolved isolation flaws in Claude's code execution environment.
Anthropic shipped Claude Cowork as an "agentic" research preview, complete with a warning label that quietly punts core security risks onto users. The problem is that Cowork inherits a known, previously disclosed isolation flaw in Claude's code execution environment—one that was acknowledged and left unfixed. The result: indirect prompt injection can coerce Cowork into exfiltrating local files, without user approval, by abusing trusted access to Anthropic's own API.
The attack chain is depressingly straightforward. A user connects Cowork to a local folder, uploads a seemingly benign document (or "Skill") containing a concealed prompt injection, and asks Cowork to analyze their files. The injected instructions tell Claude to run a curl command that uploads the largest available file to an attacker-controlled Anthropic account, using an API key embedded in the hidden text. Network egress is "restricted," except for Anthropic's API—which conveniently flies under the allowlist radar and completes the data theft.
Once uploaded, the attacker can chat with the victim's documents, including financial records and PII. This works not just on lightweight models, but also on more "resilient" ones like Opus 4.5. Layer in Cowork's broader mandate—browser control, MCP servers, desktop automation—and the blast radius only grows. Telling non-technical users to watch for "suspicious actions" while encouraging full desktop access isn't risk management; it's abdication.