An Arizona woman who ran a laptop farm from her home - helping North Korean IT operatives pose as US-based remote workers - has been sentenced to eight and a half years behind bars for her role in a $17 million fraud that hit more than 300 American companies.
The Solid protocol, invented by Sir Tim Berners-Lee, represents a radical reimagining of how data operates online. Solid stands for “SOcial LInked Data.” At its core, it decouples data from applications by storing personal information in user-controlled “data wallets”: secure, personal data stores that users can host anywhere they choose. Applications can access specific data within these wallets, but users maintain ownership and control.
Solid is more than distributed data storage. This architecture inverts the current data ownership model. Instead of companies owning user data, users maintain a single source of truth for their personal information. It integrates and extends all those established identity standards and technologies mentioned earlier, and forms a comprehensive stack that places personal identity at the architectural center.
This identity-first paradigm means that every digital interaction begins with the authenticated individual who maintains control over their data. Applications become interchangeable views into user-owned data, rather than data silos themselves. This enables unprecedented interoperability, as services can securely access precisely the information they need while respecting user-defined boundaries.
Solid ensures that user intentions are transparently expressed and reliably enforced across the entire ecosystem. Instead of each application implementing its own custom authorization logic and access controls, Solid establishes a standardized declarative approach where permissions are explicitly defined through access control lists or policies attached to resources. Users can specify who has access to what data with granular precision, using simple statements like “Alice can read this document” or “Bob can write to this folder.” These permission rules remain consistent regardless of which application is accessing the data, eliminating the fragmentation and unpredictability of traditional authorization systems. //
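The declarative idea can be sketched in a few lines. This is a Python stand-in purely for illustration; Solid's real mechanism is Web Access Control / ACP documents expressed in RDF, and the agent URLs below are made up:

```python
# Sketch of declarative, resource-attached permissions (illustration only;
# Solid actually expresses these as RDF-based WAC/ACP documents).

ACL = {
    "/docs/report.ttl": [
        {"agent": "https://alice.example/#me", "modes": {"read"}},
        {"agent": "https://bob.example/#me",   "modes": {"read", "write"}},
    ],
}

def allowed(agent: str, resource: str, mode: str) -> bool:
    """Every application consults the same rules attached to the resource;
    none of them implements its own custom authorization logic."""
    return any(mode in rule["modes"]
               for rule in ACL.get(resource, ())
               if rule["agent"] == agent)

assert allowed("https://alice.example/#me", "/docs/report.ttl", "read")
assert not allowed("https://alice.example/#me", "/docs/report.ttl", "write")
```

Because the rules are data attached to the resource rather than code inside an app, any compliant application evaluating them reaches the same answer.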
Peter Galbavy • July 24, 2025 9:30 AM
Maybe I have failed to bone up on Solid, but the charmingly naive idea that people will maintain their own personal data stores in an honest and trustworthy way is only slightly less laughable than how it’s done right now. Or maybe not.
Again, perhaps, because I have not spent any time looking at the actual protocol details I am confused where the veracity comes from? Or am I suddenly able to call myself an Admiral with a law degree and a healthy trust fund as a credit line?
Financial criminality would be democratised overnight, if nothing else.
atanas entchev • July 24, 2025 11:01 AM
The Solid protocol is charmingly naive. It assumes — like the early internet — good-will participation from everyone. We know that this is not how the real world functions.
What is to stop bad actors from building and presenting a fake profile / history / whatever?
Peter A. • July 24, 2025 11:11 AM
There’s also another problem: partial identities, pseudonymous/fake identities, companies that collect too much data, etc. Having a data store that has it all is a bit risky, as you can accidentally share too much, especially for people who are a little less competent with all that computer stuff.
Shashank Yadav • July 24, 2025 8:57 AM
People like to own things which accord them status or meaningful utility – which is where all expectations of users considering data ownership falter.
Moreover, while this may work for enterprise users, the vast majority of individual users cannot be expected to maintain such personal data pods. Hypothetically, say you make a law requiring this way of data management: there will immediately be third parties whom people would prefer to handle this for them. Kind of like the notion of consent managers in India’s data protection laws, because competent and continuous technical administration cannot be expected of ordinary users.
If you're writing an open source system utility, for example, your chance of widespread adoption depends on its reputation as trustworthy, and that will reflect on you.
Who watches the watchers?
Talon is a case in point. A Windows de-bloater made by an outfit called Raven and distributed through GitHub as open source, it nonetheless got a rep as potential malware. Open source by itself guarantees nothing, and the conversation around whether or not Talon's bona fides checked out simply grew and grew. Enter YouTube cyber security educator and ethical hacker John Hammond. His day job includes answering the question "Is it Malware?" He has the chops, he has the tools, he has the caffeine. Speedrun is go. //
How might Raven have avoided being considered suspicious? There's a concept called defensive coding, where you consider each decision not just as how it contributes to functionality, but how it would cope if given an unexpected input. With Talon, the defensive process is whether a choice of technique will trigger malware scanners, and if it might, but is indispensable, how to make it clear in the code what's going on. You know, that pesky documentation stuff. The design overview. The comments in the code. If your product will need all those open source eyeballs to become trusted, then feed those eyeballs with what they need. There aren't many Hammonds, but there are lots of curious wannabes, and even the occasional journalist eager to tell a story.
Creating security is a huge task, and everyone who launches software for the masses has the opportunity to help or hinder, regardless of the actual intent of the product. Open source is a magnificent path to greater security across the board, because it keeps humans in the loop. Engineering for those humans is a force amplifier for good. Just ask the future historians speedrunning the history of cyber security centuries from now. ®
One password is believed to have been all it took for a ransomware gang to destroy a 158-year-old company and put 700 people out of work.
KNP - a Northamptonshire transport company - is just one of tens of thousands of UK businesses that have been hit by such attacks.
In KNP's case, it's thought the hackers managed to gain entry to the computer system by guessing an employee's password, after which they encrypted the company's data and locked its internal systems.
KNP director Paul Abbott says he hasn't told the employee that their compromised password most likely led to the destruction of the company.
"Would you want to know if it was you?" he asks.
neils @midwestneil
Turns out you can just hack any train in the USA and take control over the brakes. This is CVE-2025-1727 and it took me 12 years to get this published. This vulnerability is still not patched. Here's the story:
wetw0rk @wetw0rk7
Perhaps one of the most badass CVE's I've ever seen from @midwestneil 💪😤
https://cisa.gov/news-events/ics-advisories/icsa-25-191-10
12:25 PM · Jul 11, 2025
Back when it was first implemented in the late 1980s, it was illegal for anyone else to use the frequencies allocated for this system. So, the system only used the BCH checksum for packet creation. Unfortunately, anyone with an SDR could mimic these packets, allowing them to send false signals to the EoT module and its corresponding Head-of-Train (HoT) partner. This would not have been an urgent issue if the EoT had only sent telemetry data. However, the HoT can also issue a brake command to the EoT through this system. Thus, anyone with the hardware (available for less than $500) and know-how can easily issue a brake command without the train driver’s knowledge, potentially compromising the safety of the transport operation.
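The core weakness is easy to demonstrate: an error-detecting checksum depends only on the message, so anyone can compute a valid one for a forged packet, while a keyed MAC depends on a secret the attacker doesn't have. A short Python sketch, using CRC-32 as a stand-in for the BCH code and a made-up key:

```python
import hmac, hashlib, zlib

packet = b"EOT:BRAKE:EMERGENCY"

# An error-detecting checksum (CRC-32 here as a stand-in for the BCH code
# the protocol actually uses) is a pure function of the message -- anyone
# with an SDR can compute a valid one for a packet they just invented.
forged_checksum = zlib.crc32(packet)

# A keyed MAC depends on a secret shared by the legitimate endpoints
# (key name is hypothetical); a forger without the key cannot produce
# a valid tag, so spoofed brake commands would be rejected.
key = b"shared-secret-between-HoT-and-EoT"
tag = hmac.new(key, packet, hashlib.sha256).digest()

# The receiver recomputes the tag with the same key and compares.
assert hmac.compare_digest(tag, hmac.new(key, packet, hashlib.sha256).digest())
```

The checksum was a reasonable integrity check against radio noise in the 1980s; it was never an authentication mechanism, which is exactly the gap the CVE describes.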
What’s frustrating for Neils is that the AAR refused to acknowledge the vulnerability back in 2012, saying that it was just a theoretical issue and that they’d only believe it if it happened in real life. Unfortunately, the Federal Railroad Administration (FRA) lacks a test track facility, and the AAR has not permitted any testing due to security concerns on their property. It got to the point that the security researcher published their findings in the Boston Review, only to be refuted by the AAR in Fortune magazine. //
By 2024, the issue still hadn’t been resolved — the AAR’s Director of Information Security said that it wasn’t really a major issue and that the vulnerable devices were already reaching end of life. Because the AAR continued to ignore the warnings, CISA had no choice but to publish an official advisory to warn the public. That got the AAR moving, with the group announcing an update last April. However, implementation is going at a snail’s pace, with 2027 as the earliest target year for deployment.
I created this site to enable people to compare many so-called “secure messaging apps”. I also hope to educate people as to which functionality is required for truly secure messaging.
In 2016, I was frustrated with the EFF’s very out-of-date comparison, and hence I decided to create a comparison myself. Reaching out to various privacy organisations proved to be a complete waste of time, as no one was willing to collaborate on a comparison. This is a good lesson learnt: Don’t be beholden to other people/organisations, and produce your own useful work.
This site is not meant to be comprehensive; security is difficult, and a full review of each app is simply not feasible due to time constraints, a lack of access to source code in many cases, and limited knowledge of each developer’s practices and general cyber security maturity.
PsychoArs Ars Scholae Palatinae
jhodge said:
If your management network is accessible from the Internet, you're doing it wrong. It needs to be fixed, but this shouldn't be a full-on freakout for most shops.
Yes and no.
If your management network is accessible from your workload network, you're doing it wrong. All it takes is a compromised laptop/desktop/IoT device that's reaching out and lets a bad actor control it. Defense in depth. //
fuzzyfuzzyfungus Ars Legatus Legionis
PsychoArs said:
Yes and no. If your management network is accessible from your workload network, you're doing it wrong. All it takes is a compromised laptop/desktop/IoT device that's reaching out and lets a bad actor control it. Defense in depth.
You also probably want 'your management network' to be internally divided to the degree possible. Ideally you'd like all your BMCs to work for you; but if one of them turns out not to, you can't necessarily trust the remainder to protect themselves (and for newly added devices that aren't supposed to require hands-on provisioning, more or less blind trust in the first thing that talks to them is a feature; so you really, really want that to be you).
Not every wire can be cut, or you might as well just get an empty shed for much, much less money; but there is often not much call for any two random devices on the management network to talk to one another, rather than a relative handful of monitoring and provisioning systems talking to otherwise solitary BMCs that have no excuse for knowing about one another. //
Little-Zen Ars Praefectus
Deny_Deflect_Disavow said:
“The vulnerability, carrying a severity rating of 10 out of a possible 10, resides in the AMI MegaRAC, a widely used firmware package that allows large fleets of servers to be remotely accessed and managed even when power is unavailable or the operating system isn't functioning.” I’m not sure I understand how firmware can be manipulated if electricity is not available or the OS is not functioning. Secondly, these hosts may be physically wired to any network, yet how can a remote execution or procedure call be issued to the server if powered down?
If there's literally no power, like at all, then they aren't accessible, yes. But that's "no power" as in "the whole building's power is out."
If, however, it's just that the server has been powered off but is still plugged in, and the BMC is connected to a network, you can reach it. These are things like Dell iDRAC, HP iLO, Lenovo IMM, etc. They're designed to be always on, and they provide a way to access the server as though you were physically there, including a virtual console that acts like a connected monitor and keyboard, so you can even remotely power on/off a server if necessary. It doesn't use the installed host operating system - think of it like a remotely accessible BIOS with a ton of other functionality that also lets you see what's happening on the system in real time. You can even virtually mount ISOs to remotely install an operating system.
It is extremely convenient and I'm sure anyone here who has worked in IT has stories about how iDRAC saved their life at one point or another. I certainly have a few.
However, I can also say - when I was managing servers, all my BMCs were connected to an isolated VLAN, in-building only accessible from another isolated VLAN and to only a very specific set of users with separate logins used solely for interacting with devices on the management network, and remotely over a VPN that only allowed that very specific set of users to access a jump box, which itself was accessible only with the separate management network logins.
You absolutely want to isolate and protect these interfaces specifically because of vulnerabilities like this one.
Encrypted chat apps like Signal and WhatsApp are one of the best ways to keep your digital conversations as private as possible. But if you’re not careful with how those conversations are backed up, you can accidentally undermine your privacy.
When a conversation is properly encrypted end-to-end, it means that the contents of those messages are only viewable by the sender and the recipient. The organization that runs the messaging platform—such as Meta or Signal—does not have access to the contents of the messages. But it does have access to some metadata, like the who, where, and when of a message. Companies have different retention policies around whether they hold onto that information after the message is sent.
What happens after the messages are sent and received is entirely up to the sender and receiver. If you’re having a conversation with someone, you may choose to screenshot that conversation and save that screenshot to your computer’s desktop or phone’s camera roll. You might choose to back up your chat history, either to your personal computer or maybe even to cloud storage (services like Google Drive or iCloud, or to servers run by the application developer).
Those backups do not necessarily have the same type of encryption protections as the chats themselves, and may make those conversations—which were sent with strong, privacy-protecting end-to-end encryption—available to read by whoever runs the cloud storage platform you’re backing up to, which also means they could hand them over at the request of law enforcement.
One of my biggest worries about VPNs is the amount of trust users need to place in them, and how opaque most of them are about who owns them and what sorts of data they retain.
A new study found that many commercial VPNs are (often surreptitiously) owned by Chinese companies.
Starting from version 1.26.7, VeraCrypt discontinued support for the TrueCrypt format to prioritize the highest security standards. However, recognizing the transitional needs of our users, we have preserved version 1.25.9, the last to support the TrueCrypt format.
On this page, users can find download links for version 1.25.9, specifically provided for converting TrueCrypt volumes to the more secure VeraCrypt format. We strongly recommend transitioning to VeraCrypt volumes and using our latest releases for ongoing encryption needs, as they encompass the latest security enhancements.
GaidinBDJ Ars Scholae Palatinae
actor0 said:
Why do people think E2E encryption means the data can't be decrypted?
Probably a gross misunderstanding of encryption in general. ANYONE with access to the keys can unlock it.
The ones with access to the keys own the platform.
The one who own the platform are legally required to submit your info to Subpoena, Homeland Security warrants, and Patriot Act related actions.
This is totally incorrect.
With end-to-end encryption, the platform doesn't have the keys. The clients exchange keys through the platform, but it's done in a way that the platform doesn't know what they are. A subpoena doesn't let them provide information they don't have. The platform may have metadata about your message, but not the contents.
On the Wikipedia page for Diffie-Hellman key exchange there's a good diagram explaining how you can establish a shared secret over a public channel. It's the one down the page a bit where they use paint colors. In the real world, it's done with math, but the paint analogy is a sound way to understand the underlying idea.
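The math behind the paint analogy fits in a few lines. Here is a toy Diffie-Hellman exchange with deliberately tiny numbers (real deployments use large safe primes or elliptic curves, and these parameter values are chosen only for readability):

```python
# Toy Diffie-Hellman key exchange -- illustration only, numbers far too small
# for real use.
p = 23          # public prime modulus (the "common paint")
g = 5           # public generator

a = 6           # Alice's private key, never transmitted
b = 15          # Bob's private key, never transmitted

A = pow(g, a, p)   # Alice sends A = g^a mod p over the open channel
B = pow(g, b, p)   # Bob sends B = g^b mod p over the open channel

# Each side combines the other's public value with its own secret.
shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob   = pow(A, b, p)   # (g^a)^b mod p

assert shared_alice == shared_bob   # both arrive at the same shared secret
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is what makes the exchange safe over public transport.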
A team of researchers confirmed that behavior in a recently released formal analysis of WhatsApp group messaging. They reverse-engineered the app, described the formal cryptographic protocols, and provided theorems establishing the security guarantees that WhatsApp provides. Overall, they gave the messenger a clean bill of health, finding that it works securely and as described by WhatsApp.
They did, however, confirm a behavior that should give some group messaging users pause: Like other messengers billed as secure—with the notable exception of Signal—WhatsApp doesn’t provide any sort of cryptographic means for group management.
“This means that it is possible for the WhatsApp server to add new members to a group,” Martin R. Albrecht, a researcher at King's College London, wrote in an email. “A correct client—like the official clients—will display this change but will not prevent it. Thus, any group chat that does not verify who has been added to the chat can potentially have their messages read.” //
By contrast, the open source Signal messenger provides a cryptographic assurance that only an existing group member designated as the group admin can add new members. //
Most messaging apps, including Signal, don’t certify the identity of their users. That means there’s no way Signal can verify that an account named Alice does, in fact, belong to Alice. It’s fully possible that Malory could create an account and name it Alice. (As an aside, and in sharp contrast to Signal, the account members that belong to a given WhatsApp group are visible to insiders, hackers, and to anyone with a valid subpoena.)
Signal does, however, offer a feature known as safety numbers. It makes it easy for a user to verify the security of messages or calls with specific contacts. When two users verify out-of-band—meaning using a known valid email address or cell phone number of the other—that Signal is displaying the same safety number on both their devices, they can be assured that the person claiming to be Alice is, in fact, Alice.
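The idea behind safety numbers can be sketched as follows. This is not Signal's actual algorithm (which hashes the users' identity keys many times and mixes in their identifiers); it only illustrates deriving one shared fingerprint from both parties' public identity keys:

```python
import hashlib

def safety_number(key_a: bytes, key_b: bytes) -> str:
    """Derive a short human-comparable fingerprint from two public keys.
    Sorting makes it order-independent, so both devices show the same number."""
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(material).hexdigest()
    digits = str(int(digest, 16))[:30]
    # Display in groups of five digits, roughly the way Signal renders it
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

# Hypothetical identity keys for illustration
alice_view = safety_number(b"alice-identity-key", b"bob-identity-key")
bob_view   = safety_number(b"bob-identity-key", b"alice-identity-key")
assert alice_view == bob_view   # same number on both screens
```

If an attacker substituted a different key for either party, the derived number would change on one side, which is exactly what the out-of-band comparison is designed to catch.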
McAfee warns “these messages may seem harmless, but they’re often the first step in long-game scams designed to steal personal data — or even life savings. McAfee research shows 1 in 4 Americans have received one. Best advice? Don’t engage.”
- Airgapped raspberry pi computer with touch screen and camera
- Featuring LUKS full disk encryption
- For secure offline blockchain transactions and for secure encrypted messaging
- Move files across the airgap to other devices using QR codes
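The QR-based transfer amounts to a chunking problem: split the file into payloads small enough for one code each, tag them so they can be reordered, and reassemble on the other side. A Python illustration (the per-code capacity and the index-prefix format are assumptions; actual rendering and scanning are left to a QR library):

```python
import base64

QR_CAPACITY = 1000  # approx. characters per QR code (assumption; real capacity
                    # depends on QR version and error-correction level)

def to_chunks(data: bytes) -> list[str]:
    """Split file bytes into numbered base64 payloads, one per QR code."""
    encoded = base64.b64encode(data).decode()
    parts = [encoded[i:i + QR_CAPACITY]
             for i in range(0, len(encoded), QR_CAPACITY)]
    # Prefix each payload with "index/total:" so the receiver can reorder
    return [f"{i + 1}/{len(parts)}:{p}" for i, p in enumerate(parts)]

def from_chunks(chunks: list[str]) -> bytes:
    """Reassemble scanned payloads (in any order) into the original bytes."""
    ordered = sorted(chunks, key=lambda c: int(c.split("/", 1)[0]))
    return base64.b64decode("".join(c.split(":", 1)[1] for c in ordered))

# Round-trip check: what goes across the airgap comes back intact
data = b"signed transaction bytes" * 100
assert from_chunks(to_chunks(data)) == data
```

Because the only channel is an image on a screen and a camera on the other side, nothing electrical crosses the gap, which is the point of the design.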
Independent researchers have discovered, or should we say rediscovered, a major security vulnerability in Microsoft's Remote Desktop Protocol (RDP). Previously known as Terminal Services, RDP appears to be designed to always validate a previously used password for remote connections to a Windows machine, even when that password has been revoked by a system administrator or compromised in a security breach. //
The flaw violates universally acknowledged operational security (opsec) practices – and then some. When a password is changed, it should no longer provide access to a remote system. "People trust that changing their password will cut off unauthorized access," Wade said. //
According to Microsoft, the behavior is a design decision meant to "ensure that at least one user account always has the ability to log in no matter how long a system has been offline."
The company had already been warned about this backdoor by other researchers in August 2023, making the new analysis ineligible for a bounty award. Redmond engineers reportedly attempted to modify the code to eliminate the backdoor but abandoned the effort, as the changes could break compatibility with a Windows feature that many applications still rely on. //
brucek • May 2, 2025, 3:30 PM
And on the flip side, RDP doesn't recognize a valid Microsoft Account password that is not cached on the local machine. This can easily happen on a new install where you've only logged in using methods other than the password (PIN, Windows Hello, etc.). This is a great way to lose an hour wondering why you can't log in, because it's so easy to think the problem must be some other configuration problem with setting up RDP or elsewhere in the system. //
FireStormOOO • May 2, 2025, 9:05 PM
This is cached credentials working the same way they have for decades, and it's been configurable by GPO for almost as long. The administrator chooses how long the server will remember stale credentials if it can't reach a domain controller immediately to check. No, the defaults don't make sense for a server that expects 100% availability of your authentication infrastructure.
New ChoiceJacking attack allows malicious chargers to steal data from phones. //
About a decade ago, Apple and Google started updating iOS and Android, respectively, to make them less susceptible to “juice jacking,” a form of attack that could surreptitiously steal data or execute malicious code when users plug their phones into special-purpose charging hardware. Now, researchers are revealing that, for years, the mitigations have suffered from a fundamental defect that has made them trivial to bypass.
“Juice jacking” was coined in a 2011 article on KrebsOnSecurity detailing an attack demonstrated at a Defcon security conference at the time. Juice jacking works by equipping a charger with hidden hardware that can access files and other internal resources of phones, in much the same way that a computer can when a user connects it to the phone. //
Researchers at the Graz University of Technology in Austria recently made a discovery that completely undermines the premise behind the countermeasures: they rest on the assumption that USB hosts can't inject input that autonomously approves the confirmation prompt. Given the restriction against a USB device simultaneously acting as a host and peripheral, the premise seemed sound. The trust models built into both iOS and Android, however, present loopholes that can be exploited to defeat the protections. The researchers went on to devise ChoiceJacking, the first known attack to defeat juice-jacking mitigations.
“We observe that these mitigations assume that an attacker cannot inject input events while establishing a data connection,” the researchers wrote in a paper scheduled to be presented in August at the Usenix Security Symposium in Seattle. “However, we show that this assumption does not hold in practice.”
I just got a note from @Microfix that pointed me to an interesting discussion from Ionut Ilascu at BleepingComputer:
After Microsoft ends support for Windows 7 and Windows Server 2008 on January 14, 2020, 0Patch platform will continue to ship vulnerability fixes to its agents.
“Each Patch Tuesday we’ll review Microsoft’s security advisories to determine which of the vulnerabilities they have fixed for supported Windows versions might apply to Windows 7 or Windows Server 2008 and present a high-enough risk to warrant micropatching”
Micropatches will normally be available to paying customers (Pro – $25/agent/year – and Enterprise license holders). However, Kolsek says that there will be exceptions for high-risk issues that could help slow down a global-level spread, which will be available to non-paying customers, too.
Many of you know that 0Patch has been issuing quick fixes for bad bugs in recent patches. In all cases, I’ve refrained from recommending them, simply because I’m concerned about applying third party patches directly to Windows binaries. That said, to date, they’ve had a very good track record. Whether they can continue that record with patches-on-patches-on-patches remains to be seen, of course.
I fully expect Microsoft to release patches for newly discovered major security flaws, even after January 14. Whether those will step on the 0Patch patches is anybody’s guess.
Definitely something worth considering….
0patch promises to keep delivering security updates to Windows 10 even after Microsoft stops next year. Should you use it? We help you decide. //
It’s a way to (likely) get some extra security on a Windows PC by blocking potential flaws from being exploited. But you’re also trusting an additional vendor’s security software. //
If you’re going to connect a Windows 10 (or Windows 7) PC to a network after it’s no longer receiving patches, you should take some security precautions. Ensure you’re using a browser that’s still getting updates on your operating system and an antivirus that’s still supported. And yes, 0patch could also be an additional layer of security against nasty flaws.
“In the short term, it is a good option to buy time, but eventually, the operating system should be upgraded to a regularly supported version,” said Kron.
The folder, typically c:\inetpub, reappeared on Windows systems in April as part of Microsoft's mitigation for CVE-2025-21204, an exploitable elevation-of-privileges flaw within Windows Process Activation. Rather than patching code directly, Redmond simply pre-created the folder to block a symlink attack path. //
For at least one security researcher, in this case Kevin Beaumont, the fix also presented an opportunity to hunt for more vulnerabilities. After poking around, he discovered that the workaround introduced a new flaw of its own, triggered using the mklink command with the /j parameter.
It's a simple enough function. According to Microsoft's documentation, mklink "creates a directory or file symbolic or hard link." And with the /j flag, it creates a directory junction - a type of filesystem redirect.
Beaumont demonstrated this by running: "mklink /j c:\inetpub c:\windows\system32\notepad.exe." This turned the c:\inetpub folder - precreated in Microsoft's April 2025 update to block symlink abuse - into a redirect to a system executable. When Windows Update tried to interact with the folder, it hit the wrong target, errored out, and rolled everything back.
"So you just go without security updates," he noted.
At a Congressional hearing earlier this week, Matt Blaze made the point that CALEA, the 1994 law that forces telecoms to make phone calls wiretappable, is outdated in today’s threat environment and should be rethought:
In other words, while the legally-mandated CALEA capability requirements have changed little over the last three decades, the infrastructure that must implement and protect it has changed radically. This has greatly expanded the “attack surface” that must be defended to prevent unauthorized wiretaps, especially at scale. The job of the illegal eavesdropper has gotten significantly easier, with many more options and opportunities for them to exploit. Compromising our telecommunications infrastructure is now little different from performing any other kind of computer intrusion or data breach, a well-known and endemic cybersecurity problem. To put it bluntly, something like Salt Typhoon was inevitable, and will likely happen again unless significant changes are made.
This is the access that the Chinese threat actor Salt Typhoon used to spy on Americans:
The Wall Street Journal first reported Friday that a Chinese government hacking group dubbed Salt Typhoon broke into three of the largest U.S. internet providers, including AT&T, Lumen (formerly CenturyLink), and Verizon, to access systems they use for facilitating customer data to law enforcement and governments. The hacks reportedly may have resulted in the “vast collection of internet traffic” from the telecom and internet giants. CNN and The Washington Post also confirmed the intrusions and that the U.S. government’s investigation is in its early stages.