DeMercurio and Wynn sued Dallas County and Leonard for false arrest, abuse of process, defamation, intentional infliction of emotional distress, and malicious prosecution. The case dragged on for years. Last Thursday, five days before a trial was scheduled to begin in the case, Dallas County officials agreed to pay $600,000 to settle the case.
It’s hard to overstate the financial, emotional, and professional stresses that result when someone is locked up and repeatedly accused of criminal activity for performing authorized work that’s clearly in the public interest. DeMercurio has now started his own firm, Kaiju Security.
“The settlement confirms what we have said from the beginning: our work was authorized, professional, and done in the public interest,” DeMercurio said. “What happened to us never should have happened. Being arrested for doing the job we were hired to do turned our lives upside down and damaged reputations we spent years building.” //
Martin Blank
Reading more about it, it seems a bit more complicated. While I don't think the pentesters should have been arrested (much less defamed), it does seem like the people who authorized them might not have actually had that authority.
I was a pentester for about a decade (though I didn't do physical testing), including at the time of this incident. There is a certain amount of trust that goes into contracting. We don't go out just based on an email approval. We get signed authorizations that are presumably vetted by knowledgeable people, and frequently lawyers, on both sides. I wouldn't have thought twice about accepting a contract signed by a representative for the court system itself.
But even more important, the people who hired them should have done their due dilligence. Had they followed the standard protocol and brought legal in, these issues of authority would likely have been pointed out.
There is a high likelihood that legal was brought in. This circumstance was weird, and the only reason that it got out of control was the sheriff. In most places, an improperly authorized test would have resulted in no charges or charges rapidly dismissed after showing that there was no intent to break the law.
You want to be especially in the clear on this, given cops inherent tendencies to be dicks about anything.
Yeah, this whole incident caused some significant changes in how physical pentesting was done.
January 29, 2026 at 7:08 pm
EFF is against age gating and age verification mandates, and we hope we’ll win in getting existing ones overturned and new ones prevented. But mandates are already in effect, and every day many people are asked to verify their age across the web, despite prominent cases of sensitive data getting leaked in the process.
At some point, you may have been faced with the decision yourself: should I continue to use this service if I have to verify my age? And if so, how can I do that with the least risk to my personal information? This is our guide to navigating those decisions, with information on what questions to ask about the age verification options you’re presented with, and answers to those questions for some of the top most popular social media sites. Even though there’s no way to implement mandated age gates in a way that fully protects speech and privacy rights, our goal here is to help you minimize the infringement of your rights as you manage this awful situation.
If you're serious about encryption, keep control of your encryption keys //
If you think using Microsoft's BitLocker encryption will keep your data 100 percent safe, think again. Last year, Redmond reportedly provided the FBI with encryption keys to unlock the laptops of Windows users charged in a fraud indictment. //
BitLocker is a Windows security system that can encrypt data on storage devices. It supports two modes: Device Encryption, a mode designed to simplify security, and BitLocker Drive Encryption, an advanced mode.
For either mode, Microsoft "typically" backs up BitLocker keys to its servers when the service gets set up from an active Microsoft account. "If you use a Microsoft account, the BitLocker recovery key is typically attached to it, and you can access the recovery key online," the company explains in its documentation. //
Microsoft provides the option to store keys elsewhere. Instead of selecting "Save to your Microsoft Account," customers can "Save to a USB flash drive," "Save to a file," or "Print the recovery key." //
Apple offers a similar device encryption service called FileVault, complemented by its iCloud service. The iCloud service also offers an easy mode called "Standard data protection" and "Advanced Data Protection for iCloud."
Claude Cowork is vulnerable to file exfiltration attacks via indirect prompt injection as a result of known-but-unresolved isolation flaws in Claude's code execution environment. //
Anthropic shipped Claude Cowork as an "agentic" research preview, complete with a warning label that quietly punts core security risks onto users. The problem is that Cowork inherits a known, previously disclosed isolation flaw in Claude's code execution environment—one that was acknowledged and left unfixed. The result: indirect prompt injection can coerce Cowork into exfiltrating local files, without user approval, by abusing trusted access to Anthropic's own API.
The attack chain is depressingly straightforward. A user connects Cowork to a local folder, uploads a seemingly benign document (or "Skill") containing a concealed prompt injection, and asks Cowork to analyze their files. The injected instructions tell Claude to run a curl command that uploads the largest available file to an attacker-controlled Anthropic account, using an API key embedded in the hidden text. Network egress is "restricted," except for Anthropic's API—which conveniently flies under the allowlist radar and completes the data theft.
Once uploaded, the attacker can chat with the victim's documents, including financial records and PII. This works not just on lightweight models, but also on more "resilient" ones like Opus 4.5. Layer in Cowork's broader mandate—browser control, MCP servers, desktop automation—and the blast radius only grows. Telling non-technical users to watch for "suspicious actions" while encouraging full desktop access isn't risk management; it's abdication.
There are two major security problems with these photo frames and unofficial Android TV boxes. The first is that a considerable percentage of them come with malware pre-installed, or else require the user to download an unofficial Android App Store and malware in order to use the device for its stated purpose (video content piracy). The most typical of these uninvited guests are small programs that turn the device into a residential proxy node that is resold to others.
The second big security nightmare with these photo frames and unsanctioned Android TV boxes is that they rely on a handful of Internet-connected microcomputer boards that have no discernible security or authentication requirements built-in. In other words, if you are on the same network as one or more of these devices, you can likely compromise them simultaneously by issuing a single command across the network. //
Many wireless routers these days make it relatively easy to deploy a “Guest” wireless network on-the-fly. Doing so allows your guests to browse the Internet just fine but it blocks their device from being able to talk to other devices on the local network — such as shared folders, printers and drives. If someone — a friend, family member, or contractor — requests access to your network, give them the guest Wi-Fi network credentials if you have that option. //
It is somewhat remarkable that we haven’t yet seen the entertainment industry applying more visible pressure on the major e-commerce vendors to stop peddling this insecure and actively malicious hardware that is largely made and marketed for video piracy. These TV boxes are a public nuisance for bundling malicious software while having no apparent security or authentication built-in, and these two qualities make them an attractive nuisance for cybercriminals.
Funds in ‘Money Safe’ accounts are only available when customers appear for face-to-face verification
Urban VPN Proxy targets conversations across ten AI platforms: ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI), Meta AI.
For each platform, the extension includes a dedicated “executor” script designed to intercept and capture conversations. The harvesting is enabled by default through hardcoded flags in the extension’s configuration.
There is no user-facing toggle to disable this. The only way to stop the data collection is to uninstall the extension entirely.
[…]
The data collection operates independently of the VPN functionality. Whether the VPN is connected or not, the harvesting runs continuously in the background.
"So one of the things that we're seeing is the whole movement away from passwords to passkeys – a certificate-based authentication wrapped in a usability shrink wrap," Forrester VP and analyst Andras Cser told The Register.
Passkeys are typically what security folks mean when they say "phishing-resistant MFA." They replace passwords, and instead use cryptographic key pairs with the public key stored on the server and the private key – such as the user's face, fingerprints, or PIN – stored on the user's device. higher bandwidth demands.
From this, he looked at its software and operating system, and that’s where he discovered the dark truth: his smart vacuum was a security nightmare and a black hole for his personal data. First of all, it's Android Debug Bridge, which gives him full root access to the vacuum, wasn't protected by any kind of password or encryption. The manufacturer added a makeshift security protocol by omitting a crucial file, which caused it to disconnect soon after booting, but Harishankar easily bypassed it. He then discovered that it used Google Cartographer to build a live 3D map of his home.
This isn’t unusual, by far. After all, it’s a smart vacuum, and it needs that data to navigate around his home. However, the concerning thing is that it was sending off all this data to the manufacturer’s server. It makes sense for the device to send this data to the manufacturer, as its onboard SoC is nowhere near powerful enough to process all that data. However, it seems that iLife did not clear this with its customers. Furthermore, the engineer made one disturbing discovery — deep in the logs of his non-functioning smart vacuum, he found a command with a timestamp that matched exactly the time the gadget stopped working. This was clearly a kill command, and after he reversed it and rebooted the appliance, it roared back to life.
It’s still legal to pick locks, even when you swing your legs.
“Opening locks” might not sound like scintillating social media content, but Trevor McNally has turned lock-busting into online gold. A former US Marine Staff Sergeant, McNally today has more than 7 million followers and has amassed more than 2 billion views just by showing how easy it is to open many common locks by slapping, picking, or shimming them.
This does not always endear him to the companies that make the locks. //
Wheels Of Confusion Ars Legatus Legionis
16y
73,932
Subscriptor
The company claimed to have this case locked up from the start, but it was picked apart embarrassingly quickly.
One simple AI prompt saved me from disaster.
Not fancy security tools. Not expensive antivirus software. Just asking my coding assistant to look for suspicious patterns before executing unknown code.
The scary part? This attack vector is perfect for developers. We download and run code all day long. GitHub repos, npm packages, coding challenges. Most of us don't sandbox every single thing.
And this was server-side malware. Full Node.js privileges. Access to environment variables, database connections, file systems, crypto wallets. Everything.
If this sophisticated operation is targeting developers at scale, how many have already been compromised? How many production systems are they inside right now?
Perfect Targeting: Developers are ideal victims. Our machines contain the keys to the kingdom: production credentials, crypto wallets, client data.
Professional Camouflage: LinkedIn legitimacy, realistic codebases, standard interview processes.
Technical Sophistication: Multi-layer obfuscation, remote payload delivery, dead-man switches, server-side execution.
One successful infection could compromise production systems at major companies, crypto holdings worth millions, personal data of thousands of users. //
If you're a developer getting LinkedIn job opportunities:
Always sandbox unknown code. Docker containers, VMs, whatever. Never run it on your main machine.
Use AI to scan for suspicious patterns. Takes 30 seconds. Could save your entire digital life.
Verify everything. Real LinkedIn profile doesn't mean real person. Real company doesn't mean real opportunity.
Trust your gut. If someone's rushing you to execute code, that's a red flag.
This scam was so sophisticated it fooled my initial BS detector. But one paranoid moment and a simple AI prompt exposed the whole thing.
Scientists at the University of California, San Diego, and the University of Maryland, College Park, say they were able to pick up large amounts of sensitive traffic largely by just pointing a commercial off-the-shelf satellite dish at the sky from the roof of a university building in San Diego.
In its paper, Don't Look Up: There Are Sensitive Internal Links in the Clear on GEO Satellites [PDF], the team describes how it performed a broad scan of IP traffic on 39 GEO satellites across 25 distinct longitudes and found that half of the signals they picked up contained cleartext IP traffic.
This included unencrypted cellular backhaul data sent from the core networks of several US operators, destined for cell towers in remote areas. Also found was unprotected internet traffic heading for in-flight Wi-Fi users aboard airliners, and unencrypted call audio from multiple VoIP providers.
According to the researchers, they were able to identify some observed satellite data as corresponding to T-Mobile cellular backhaul traffic. This included text and voice call contents, user internet traffic, and cellular network signaling protocols, all "in the clear," but T-Mobile quickly enabled encryption after learning about the problem.
More seriously, the team was able to observe unencrypted traffic for military systems including detailed tracking data for coastal vessel surveillance and operational data of a police force.
In addition, they found retail, financial, and banking companies all using unencrypted satellite communications to link their internal networks at various sites. The researchers were able to see unencrypted login credentials, corporate emails, inventory records, and information from ATM cash dispensers.
New design sets a high standard for post-quantum readiness.
Aranya is an access governance and secure data exchange platform for organizations to control their critical data and services. Access governance is a mechanism to define, enforce, and maintain the set of rules and procedures to secure your system’s behaviors. Aranya gives you the ability to apply access controls over stored and shared resources all in one place.
Aranya enables you to safeguard sensitive information, maintain compliance, mitigate the risk of unauthorized data exposure, and grant appropriate access. Aranya’s decentralized platform allows you to define and enforce these sets of policies to secure and access your resources.
The platform provides a software toolkit for policy-driven access controls and secure data exchange. The software is deployed on endpoints, integrating into applications which require granular access controls over their data and services. Endpoints can entrust Aranya with their data protection and access controls so that other applications running on the endpoint need only to focus on using the data for their intended functionality. Aranya has configurable end-to-end encryption built into its core as a fundamental design principle.
A key discriminating attribute of Aranya is the decentralized, zero trust architecture. Through the integration of the software, access governance is implemented without the need for a connection back to centralized IT infrastructure. With Aranya’s decentralized architecture, if two endpoints are connected to each other, but not back to the cloud or centralized infrastructure, governance over data and applications will be synchronized between peers and further operations will continue uninterrupted.
Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media requires us to assess risks and make judgments accordingly. Arriving at any conclusion is an exercise in threat modeling.
Are your links not malicious looking enough?
This tool is guaranteed to help with that!
What is this and what does it do?
This is a tool that takes any link and makes it look malicious. It works on the idea of a redirect. Much like https://tinyurl.com/ for example. Where tinyurl makes an url shorter, this site makes it look malicious.
Place any link in the below input, press the button and get back a fishy(phishy, heh...get, it?) looking link. The fishy link doesn't actually do anything, it will just redirect you to the original link you provided.
Notion just released version 3.0, complete with AI agents. Because the system contains Simon Willson’s lethal trifecta, it’s vulnerable to data theft though prompt injection.
First, the trifecta:
The lethal trifecta of capabilities is:
- Access to your private data—one of the most common purposes of tools in the first place!
- Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM
- The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)
This is, of course, basically the point of AI agents. //
The fundamental problem is that the LLM can’t differentiate between authorized commands and untrusted data. So when it encounters that malicious pdf, it just executes the embedded commands. And since it has (1) access to private data, and (2) the ability to communicate externally, it can fulfill the attacker’s requests. I’ll repeat myself:
This kind of thing should make everybody stop and really think before deploying any AI agents. We simply don’t know to defend against these attacks. We have zero agentic AI systems that are secure against these attacks. Any AI that is working in an adversarial environment—and by this I mean that it may encounter untrusted training data or input—is vulnerable to prompt injection. It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there.
lukem
If you're going to use test values in your test systems, why not use test values allocated for documentation purposes that aren't expected to be used in "live" networks?
IETF RFC 5737 section 3 allocates three IPv4 CIDR ranges for documentation:
192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24.
September 5, 2025 at 8:02 am
Feeling old yet? Let the Reg ruin your day for you. We are now substantially closer to the 2038 problem (5,849 days) than it has been since the Year 2000 problem (yep, 8,049 days since Y2K).
Why do we mention it? Well, thanks to keen-eyed Reg reader Calum Morrison, we've spotted a bit of the former, and a hint of what lies beneath the Beeb's digital presence, when he sent in a snapshot that implies Old Auntie might be using a 32-bit Linux in iPlayer, and something with a kernel older than Linux 5.10, too.
That 2020 kernel release was the first able to serve as a base for a 32-bit system designed to run beyond 03:14:07 UTC on 19 January 2038. //
Like Y2K, it's easy to explain and fairly easy to fix: traditionally, UNIX stored the time as the number of seconds since January 1, 1970 – but they held it in a signed 32-bit value. That means that at seven seconds after pi o'clock in the morning of January 18th, 2038 (UTC), when it will be 2,147,483,647 (2³¹) seconds after the "epoch", the next time will be -2³¹: 20:45:52 in the evening of December 13th, 1901.
It has already been fixed in Linux and any other recent UNIX-ish OSes. The easy way is to move to a 64-bit value, giving a range of 292 billion years either way. That'll do, pig.
The fun bit is finding all the places it might occur, like this fun (and old and long-fixed) MongoDB bug. MongoDB stores the date correctly, Python stores the date correctly, but dates were passed from one to the other using a 32-bit value.
Museum boffins find code that crashes in 2037
A stark warning about the upcoming Epochalypse, also known as the "Year 2038 problem," has come from the past, as National Museum Of Computing system restorers have discovered an unsetting issue while working on ancient systems.
Robin Downs, a volunteer who was recently involved in an exhibition of Digital Equipment Corporation (DEC) gear at the museum, was on hand to demonstrate the problem to The Register in the museum's Large Systems Gallery, which now houses a running PDP-11/73. //
"So we found bugs that exist, pre-2038, in writing this that we didn't know about."
The Year 2038 problem occurs in systems that store Unix time – the number of seconds since the Unix epoch (00:00:00 UTC on January 1, 1970) in a signed 32-bit integer (64-bit is one modern approach, but legacy systems have a habit of lingering).
At 03:14:07 UTC on January 19, 2038, the second counter will overflow. In theory, this will result in a time and date being returned before the epoch – 20:45:52 UTC on December 13, 1901, but that didn't happen for Downs. //
zb42
As the article is specifically about date problems that occur before the 32bit unix time rollover, I think it should be mentioned that:
32bit NTP is going to roll over on the 7th of February 2036.