That led to a quick trip to an 'Urgent Care' - the frontline medical center for most Americans. At the check-in counter, the nurse asked to see some ID, so I handed over my Australian driver's license. The nurse looked at the license and typed some of the info on it into a computer, then looked up at me and asked: "Are you the same Mark Pesce who lived at...?" and proceeded to recite an address that I resided at more than half a century ago.
Dumbstruck, I said, "Yes...? And how did you know that? I haven't lived there in nearly 50 years. I've never been in here before - I've barely ever been in this town before. Where did that come from?"
"Oh," they replied. "We share our patient data records with Massachusetts General Hospital. It's probably from them?"
I remembered having a bit of minor surgery as an 11-year-old, conducted at that facility. 51 years ago. That's the only time I'd ever been a patient at Massachusetts General Hospital.
Somehow that had never been forgotten.
We seem perfectly willing to accept that everything we do today leaves a permanent record. It appears that long before Eric Schmidt declared, "Privacy is dead," any of our pretensions to privacy had already joined the Choir Invisible. //
I don’t much care how my records made it into 2025. I am interested in why nobody ever decided to delete them.
I realize we all want our medical records instantly available to inform treatment in moments of great need. But half a century of somewhat senseless recordkeeping strains credulity. Most likely my record remained in that database simply because it's never been cleaned out - an operation that would take time and budget that would never be approved because, why would you ever delete patient data?
This has the feel of a situation we had no idea we were making for ourselves - countless sensible decisions culminating in a ridiculous outcome. Go forward another fifty years, when it's quite likely I, too, will have joined the Choir Invisible. Will my patient record still be in that database? What purpose would that serve? If my records as a child are in there, half a century later, it's easy to imagine this database holds records of many other people who have passed on and therefore shouldn't be in there at all. Privacy lost to laziness. //
Alex 72
Medical records should be kept but access should be controlled
I agree, yes, you want medical records kept; patient history is always useful, and provided records are only shared with the patient, or with a doctor or other professional they have consented to be cared for by (and who is not engaged in malpractice), it's not harmful. The hard part is drawing the line on how much anonymised data can be used for research, ensuring that data remains anonymous, and managing consent for sharing data when patients are treated elsewhere or researchers want to use data from multiple sources.
And if you keep them for someone's entire lifespan, then, provided they did not object in their lifetime and next of kin explicitly consent or at least don't object, you should probably archive them: for future research in the near/medium term, and for historical value in the long term. Again, managing consent, allowing reasonable anonymised research in the public interest, preventing de-anonymisation, and deciding how long parts of a record stay private versus when genealogists and historians can have unrestricted access... is the challenge.
To do any of this, effective durable storage, access control, authentication and authorisation are just some of the challenges. I have seen data analytics firms whose whole job is exactly this struggle to get everything correct. So for a group of organisations just trying to provide healthcare, research, treatments and disease prevention, having to do this as an add-on with a limited budget, I am honestly impressed that it's only now, with ransomware, that we are starting to see issues, and that paper records were not being stolen and abused on a massive scale in the past...
I don't know the answer, but I don't think it's the delete key.
Gene Cash, Silver badge
Why would you ever delete patient data?
Yes, seriously.
I can understand other records, but not medical ones.
I was able to get proper medical care, including surgery, for a broken coccyx after proving I had fallen off a hay bale in 1973 and seriously injured myself, and thus it was a chronic thing and not just the minor recent incident my doctor insisted it was. I would have otherwise not been considered eligible for the surgery.
And after you're dead, it's no longer a privacy issue and becomes historical records. It's no different than census records.
Should this data be held indefinitely? Yes.
This is the same sort of data that let me piece together that my great^9 grandfather was Edward Reavis, born 1680 in Paddington, England, and left to come to Virginia, after being held in Newgate prison for his religious beliefs. He moved to Henrico county, Virginia in 1721 and died in Northampton county, North Carolina in 1751. I've also found 454 other relatives down to me, through a ton of things including bible notes, estate papers, census records, marriage records, medical records, military records, family papers, private letters, obituaries, social security records, tombstones, and even old wedding invitations.
Tells The Reg China's ability to p0wn Redmond's wares 'gives me a political aneurysm'
Roger Cressey served two US presidents as a senior cybersecurity and counter-terrorism advisor and currently worries he'll experience a "political aneurysm" due to Microsoft's many security messes.
In the last few weeks alone, Microsoft disclosed two major security vulnerabilities – along with news that attackers exploited one involving SharePoint as a zero-day. The second flaw, while not yet under exploitation, involves Exchange server – a favorite of both Russian and Chinese spies for years. //
"This is the latest episode of a decades-long process of Microsoft not taking security seriously. Full stop," Cressey said, acknowledging that the government continues spending billions on Microsoft products. "Anytime there's a major announcement of a Microsoft procurement by the government, the happiest people in the world first are in Redmond and second in Beijing."
Microsoft declined to comment for this story, but did point out that Google Cloud is a client of Cressey's in his consulting work.
Anonymous Coward
"got sick of telling them what was wrong and not having them fix it"
I don't know the situation with these guys, I'll give them the benefit of the doubt, but that phrase is everything wrong with a lot of cybersecurity professionals in a nutshell...plenty of goons willing to run scans and test 'sploits then suggest insanely expensive mitigations..."Man, that £1m worth of data is exposed it needs to be protected. I recommend this firewall from Ironballs Labs in California, it's only £5m".
Person: building a sandcastle
Cybersecurity: It's shit mate, it's not going to work.
Person: looks confused, doesn't understand
Cybersecurity: Man, I keep telling you it's shit.
Person: sad because his sandcastle fell over
Cybersecurity: See I told you, I've been telling you for ages you need to make your sandcastles better.
Person: Hey man, my goal here was to just have fun and chill out on the beach, a cheap day out. What would you have done?
Cybersecurity: Well, I would have used those boulders over there to fashion a small blast furnace, scavenged for iron ore at the bottom of those cliffs, and collected all the driftwood over there as fuel.
Person: Man, that's not worth it, I just wanted to build a sandcastle.
Cybersecurity: Why doesn't anyone ever listen?
Usually if a cybersecurity person moans about not being listened to and having their advice ignored, it's an indicator that their proposals for mitigations are just insane.
Yes, security problems can kill your business...but so can overspending on mitigating vulnerabilities whose annualized loss expectancy (ALE, the expected loss per incident times its annualized rate of occurrence, ARO) is significantly lower than what the solution costs.
Cybersecurity isn't about "perfect hardened security", it's about balancing risk and cost. You wouldn't protect a £10 note with a £1m vault. Similarly, you wouldn't protect £1m with a £10 petty cash tin. You have to find the balance where the cost is reasonable vs the asset being protected and the risk is sufficiently low that the cost of attacking the asset prevents it being a worthwhile exercise.
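To make that concrete, here is a minimal sketch of the risk arithmetic in Python; the exposure factor and incident rate are invented for illustration, but the formula (ALE = SLE × ARO) is the standard one:

```python
# Standard quantitative risk formula: single loss expectancy (SLE)
# times annualized rate of occurrence (ARO) gives the annualized
# loss expectancy (ALE), a ceiling for sensible annual mitigation spend.
asset_value = 1_000_000      # GBP: the exposed data from the example above
exposure_factor = 0.4        # assumed fraction of value lost per incident
aro = 0.1                    # assumed incidents per year (one per decade)

sle = asset_value * exposure_factor   # loss per incident: £400,000
ale = sle * aro                       # expected loss per year: £40,000
print(f"ALE: £{ale:,.0f} per year")

mitigation_cost = 5_000_000  # the £5m 'Ironballs Labs' firewall
print("Mitigation justified?", mitigation_cost <= ale)  # False
```

By this arithmetic the £5m firewall would need to run for well over a century before it paid for itself.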
Anyone can find a security issue and then suggest the latest and greatest cutting edge security software / hardware to protect the vulnerability...that's the easy part of cybersecurity. The hard part is finding solutions that are feasible and practical that don't result in costs that are higher than the assets are worth.
Black Hat: Four countries have now tested anti-satellite missiles (the US, China, Russia, and India), but it's much easier and cheaper just to hack the satellites.
In a briefing at the Black Hat conference in Las Vegas, Milenko Starcik and Andrzej Olchawa from German biz VisionSpace Technologies demonstrated how easy it is by exploiting vulnerabilities in the software used on the satellites themselves, as well as in the ground stations that control them.
"I used to work at the European Space Agency on ground station IT and got sick of telling them what was wrong and not having them fix it," Olchawa told The Register, "So I decided to go into business to do it myself." //
"We found actual vulnerabilities which allow you to crash the entire onboard software with an unauthenticated telephone," claimed Starcik.
"So basically, you send a packet to the spacecraft, and the entire software crashes and reboots, which then actually causes the spacecraft, if it's not properly configured, to reset all its keys. And then you have zero keys on the spacecraft that you can use from that stage on."
An Arizona woman who ran a laptop farm from her home - helping North Korean IT operatives pose as US-based remote workers - has been sentenced to eight and a half years behind bars for her role in a $17 million fraud that hit more than 300 American companies.
The Solid protocol, invented by Sir Tim Berners-Lee, represents a radical reimagining of how data operates online. Solid stands for “SOcial LInked Data.” At its core, it decouples data from applications by storing personal information in user-controlled “data wallets”: secure, personal data stores that users can host anywhere they choose. Applications can access specific data within these wallets, but users maintain ownership and control.
Solid is more than distributed data storage. This architecture inverts the current data ownership model. Instead of companies owning user data, users maintain a single source of truth for their personal information. It integrates and extends all those established identity standards and technologies mentioned earlier, and forms a comprehensive stack that places personal identity at the architectural center.
This identity-first paradigm means that every digital interaction begins with the authenticated individual who maintains control over their data. Applications become interchangeable views into user-owned data, rather than data silos themselves. This enables unprecedented interoperability, as services can securely access precisely the information they need while respecting user-defined boundaries.
Solid ensures that user intentions are transparently expressed and reliably enforced across the entire ecosystem. Instead of each application implementing its own custom authorization logic and access controls, Solid establishes a standardized declarative approach where permissions are explicitly defined through control lists or policies attached to resources. Users can specify who has access to what data with granular precision, using simple statements like “Alice can read this document” or “Bob can write to this folder.” These permission rules remain consistent, regardless of which application is accessing the data, eliminating the fragmentation and unpredictability of traditional authorization systems. //
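As a rough illustration of that declarative style, here is a sketch in Python using rdflib to build an "Alice can read this document" rule in the Web Access Control (WAC) vocabulary that Solid servers understand; the resource and profile URLs are hypothetical:

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

ACL = Namespace("http://www.w3.org/ns/auth/acl#")

g = Graph()
g.bind("acl", ACL)

# One Authorization: the agent identified by Alice's WebID may Read
# the document. The rule lives alongside the resource, not in any app.
rule = URIRef("https://alice.example/docs/report.acl#alice-read")
g.add((rule, RDF.type, ACL.Authorization))
g.add((rule, ACL.agent, URIRef("https://alice.example/profile/card#me")))
g.add((rule, ACL.accessTo, URIRef("https://alice.example/docs/report")))
g.add((rule, ACL.mode, ACL.Read))

print(g.serialize(format="turtle"))
```

Because the permission is data attached to the resource rather than logic inside an application, every conforming app sees and enforces the same rule.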
Peter Galbavy • July 24, 2025 9:30 AM
Maybe I have failed to bone up on Solid, but the charming naivete of expecting people to maintain their own personal data stores in an honest and trustworthy way is only slightly less laughable than how it’s done right now. Or maybe not.
Again, perhaps, because I have not spent any time looking at the actual protocol details I am confused where the veracity comes from? Or am I suddenly able to call myself an Admiral with a law degree and a healthy trust fund as a credit line?
Financial criminality would be democratised overnight, if nothing else.
atanas entchev • July 24, 2025 11:01 AM
The Solid protocol is charmingly naive. It assumes — like the early internet — good-will participation from everyone. We know that this is not how the real world functions.
What is to stop bad actors from building and presenting a fake profile / history / whatever?
Peter A. • July 24, 2025 11:11 AM
There’s also another problem: partial identities, pseudonymous/fake identities, companies that collect too much data, etc. Having a data store that has it all is a bit risky, as you can accidentally share too much, especially for people who are a little less competent with all that computer stuff.
Shashank Yadav • July 24, 2025 8:57 AM
People like to own things which accord them status or meaningful utility – which is where all expectations of users considering data ownership falter.
Moreover, while this may work for enterprise users, the vast majority of individual users cannot be expected to maintain such personal data pods. Hypothetically, let us say you make a law requiring this way of data management: there will immediately be third parties whom people would prefer to have handle it for them. Kind of like the notion of consent managers in India’s data protection laws, because competent and continuous technical administration cannot be expected from ordinary users.
If you're writing an open source system utility, for example, your chance of widespread adoption depends on its reputation as trustworthy, and that will reflect on you.
Who watches the watchers?
Talon is a case in point. A Windows de-bloater made by an outfit called Raven and distributed through GitHub as open source, it nonetheless got a rep as potential malware. Open source by itself guarantees nothing, and the conversation around whether or not Talon's bona fides checked out simply grew and grew. Enter YouTube cyber security educator and ethical hacker John Hammond. His day job includes answering the question "Is it Malware?" He has the chops, he has the tools, he has the caffeine. Speedrun is go. //
How might Raven have avoided being considered suspicious? There's a concept called defensive coding, where you consider each decision not just for how it contributes to functionality, but for how it would cope with unexpected input. With Talon, the defensive question is whether a chosen technique will trigger malware scanners, and, if it might but is indispensable, how to make clear in the code what's going on. You know, that pesky documentation stuff. The design overview. The comments in the code. If your product will need all those open source eyeballs to become trusted, then feed those eyeballs with what they need. There aren't many Hammonds, but there are lots of curious wannabes, and even the occasional journalist eager to tell a story.
Creating security is a huge task, and everyone who launches software for the masses has the opportunity to help or hinder, regardless of the actual intent of the product. Open source is a magnificent path to greater security across the board, because it keeps humans in the loop. Engineering for those humans is a force amplifier for good. Just ask the future historians speedrunning the history of cyber security centuries from now. ®
One password is believed to have been all it took for a ransomware gang to destroy a 158-year-old company and put 700 people out of work.
KNP - a Northamptonshire transport company - is just one of tens of thousands of UK businesses that have been hit by such attacks.
In KNP's case, it's thought the hackers managed to gain entry to the computer system by guessing an employee's password, after which they encrypted the company's data and locked its internal systems.
KNP director Paul Abbott says he hasn't told the employee that their compromised password most likely led to the destruction of the company.
"Would you want to know if it was you?" he asks.
neils @midwestneil
·
Turns out you can just hack any train in the USA and take control over the brakes. This is CVE-2025-1727 and it took me 12 years to get this published. This vulnerability is still not patched. Here's the story:
wetw0rk @wetw0rk7
Perhaps one of the most badass CVE's I've ever seen from @midwestneil 💪😤
https://cisa.gov/news-events/ics-advisories/icsa-25-191-10
12:25 PM · Jul 11, 2025
Back when it was first implemented in the late 1980s, it was illegal for anyone else to use the frequencies allocated for this system. So, the system only used the BCH checksum for packet creation. Unfortunately, anyone with an SDR could mimic these packets, allowing them to send false signals to the EoT module and its corresponding Head-of-Train (HoT) partner. This would not have been an urgent issue if the EoT had only sent telemetry data. However, the HoT can also issue a brake command to the EoT through this system. Thus, anyone with the hardware (available for less than $500) and know-how can easily issue a brake command without the train driver’s knowledge, potentially compromising the safety of the transport operation.
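To see why an error-detecting code alone buys no security, consider this minimal Python sketch. CRC-32 stands in for the protocol's BCH code and the payload is invented, but the point carries over: with no secret key in the calculation, a forger's frame validates exactly like a genuine one.

```python
import zlib

def frame(payload: bytes) -> bytes:
    # Append an error-detecting checksum. No secret is involved,
    # so ANY sender, legitimate or not, can build a "valid" frame.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def accept(data: bytes) -> bool:
    # The receiver can only check for transmission errors,
    # not for who actually sent the frame.
    payload, check = data[:-4], data[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == check

forged = frame(b"EOT:APPLY_BRAKES")  # hypothetical command layout
print(accept(forged))                # True: noise is detected, malice is not
```

The fix is message authentication keyed with a secret the attacker doesn't hold; that is the kind of change the announced protocol update has to retrofit.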
What’s frustrating for Neils is that the AAR refused to acknowledge the vulnerability back in 2012, saying that it was just a theoretical issue and that they’d only believe it if it happened in real life. Unfortunately, the Federal Railroad Administration (FRA) lacks a test track facility, and the AAR has not permitted any testing due to security concerns on their property. It got to the point that the security researcher published their findings in the Boston Review, only to be refuted by the AAR in Fortune magazine. //
By 2024, the issue still hadn't been resolved: the AAR's Director of Information Security said that it wasn't really a major issue and that the vulnerable devices were already reaching their end of life. Because the AAR continued to ignore the warnings, CISA had no choice but to officially publish an advisory to warn the public. That got the AAR moving, with the group announcing an update last April. However, implementation is going at a snail's pace, with 2027 the earliest target year for deployment.
I created this site to enable people to compare the many so-called “secure messaging apps”. I also hope to educate people as to which functionality is required for truly secure messaging.
In 2016, I was frustrated with the EFF’s very out-of-date comparison, and hence I decided to create a comparison myself. Reaching out to various privacy organisations proved to be a complete waste of time, as no one was willing to collaborate on a comparison. This is a good lesson learnt: Don’t be beholden to other people/organisations, and produce your own useful work.
This site is not meant to be comprehensive; security is difficult, and a full review of each app is simply not feasible given the time required, the lack of access to source code in many cases, and a lack of insight into development practices and general cyber security maturity.
PsychoArs, Ars Scholae Palatinae
jhodge said:
If your management network is accessible from the Internet, you're doing it wrong. It needs to be fixed, but this shouldn't be a full-on freakout for most shops.
Yes and no.
If your management network is accessible from your workload network, you're doing it wrong. All it takes is a compromised laptop/desktop/IoT device that's reaching out and lets a bad actor control it. Defense in depth. //
fuzzyfuzzyfungus, Ars Legatus Legionis
PsychoArs said:
Yes and no. If your management network is accessible from your workload network, you're doing it wrong. All it takes is a compromised laptop/desktop/IoT device that's reaching out and lets a bad actor control it. Defense in depth.
You also probably want 'your management network' to be internally divided to the degree possible. Ideally you'd like all your BMCs to work for you; but if one of them turns out not to, you can't necessarily trust the remainder to protect themselves (and for newly added devices that aren't supposed to require hands-on provisioning, more or less blind trust in the first thing that talks to them is a feature; so you really, really want that to be you).
Not every wire can be cut, or you might as well just get an empty shed for much, much less money; but there is often not much call for any two random devices on the management network to talk to one another, rather than a relative handful of monitoring and provisioning systems talking to otherwise solitary BMCs that have no excuse for knowing about one another. //
Little-Zen, Ars Praefectus
Deny_Deflect_Disavow said:
“The vulnerability, carrying a severity rating of 10 out of a possible 10, resides in the AMI MegaRAC, a widely used firmware package that allows large fleets of servers to be remotely accessed and managed even when power is unavailable or the operating system isn't functioning.” I'm not sure I understand how firmware can be manipulated if electricity is not available or the OS is not functioning. Secondly, these hosts may be physically wired to any network, yet how can a remote execution or procedure call be issued to the server if powered down?
If there's literally no power, like at all, then they aren't accessible, yes. But that's "no power" as in "the whole building's power is out."
If, however, it's just that the server has been powered off but is still plugged in, and the BMC is connected to a network, you can reach it. These are things like Dell iDRAC, HP iLO, Lenovo IMM, etc. They're designed to be always on, and they provide a way to access the server as though you were physically there, including a virtual console that acts like a connected monitor and keyboard, so you can even remotely power a server on or off if necessary. It doesn't use the installed host operating system - think of it like a remotely accessible BIOS with a ton of other functionality that also lets you see what's happening on the system in real time. You can even virtually mount ISOs to remotely install an operating system.
It is extremely convenient and I'm sure anyone here who has worked in IT has stories about how iDRAC saved their life at one point or another. I certainly have a few.
However, I can also say: when I was managing servers, all my BMCs were connected to an isolated VLAN, accessible in-building only from another isolated VLAN and only to a very specific set of users with separate logins used solely for interacting with devices on the management network, and remotely only over a VPN that allowed that specific set of users to reach a jump box, which itself was accessible only with the separate management-network logins.
You absolutely want to isolate and protect these interfaces specifically because of vulnerabilities like this one.
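As a crude self-audit of that kind of isolation, something along these lines, run from a host on the workload network, can confirm the BMCs are not reachable over TCP; the addresses are made up, and IPMI's UDP port 623 would need a separate probe:

```python
import socket

# Hypothetical BMC addresses and the usual TCP-exposed BMC services
# (HTTPS web UI, SSH). Every connection attempt SHOULD fail.
BMC_ADDRS = ["10.99.0.10", "10.99.0.11"]
PORTS = [443, 22]

for addr in BMC_ADDRS:
    for port in PORTS:
        try:
            with socket.create_connection((addr, port), timeout=2):
                print(f"PROBLEM: {addr}:{port} reachable from the workload network")
        except OSError:
            pass  # unreachable is the desired outcome
```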
Encrypted chat apps like Signal and WhatsApp are one of the best ways to keep your digital conversations as private as possible. But if you’re not careful with how those conversations are backed up, you can accidentally undermine your privacy.
When a conversation is properly encrypted end-to-end, it means that the contents of those messages are only viewable by the sender and the recipient. The organization that runs the messaging platform—such as Meta or Signal—does not have access to the contents of the messages. But it does have access to some metadata, like the who, where, and when of a message. Companies have different retention policies around whether they hold onto that information after the message is sent.
What happens after the messages are sent and received is entirely up to the sender and receiver. If you’re having a conversation with someone, you may choose to screenshot that conversation and save that screenshot to your computer’s desktop or phone’s camera roll. You might choose to back up your chat history, either to your personal computer or maybe even to cloud storage (services like Google Drive or iCloud, or to servers run by the application developer).
Those backups do not necessarily have the same type of encryption protections as the chats themselves, and may make those conversations—which were sent with strong, privacy-protecting end-to-end encryption—readable by whoever runs the cloud storage platform you're backing up to, which also means they could be handed over at the request of law enforcement.
One of my biggest worries about VPNs is the amount of trust users need to place in them, and how opaque most of them are about who owns them and what sorts of data they retain.
A new study found that many commercial VPNs are (often surreptitiously) owned by Chinese companies.
Starting from version 1.26.7, VeraCrypt discontinued support for the TrueCrypt format to prioritize the highest security standards. However, recognizing the transitionary needs of our users, we have preserved version 1.25.9, the last to support the TrueCrypt format.
On this page, users can find download links for version 1.25.9, specifically provided for converting TrueCrypt volumes to the more secure VeraCrypt format. We strongly recommend transitioning to VeraCrypt volumes and using our latest releases for ongoing encryption needs, as they encompass the latest security enhancements.
GaidinBDJ, Ars Scholae Palatinae
actor0 said:
Why do people think E2E encryption means the data can't be decrypted?
Probably a gross misunderstanding of encryption in general. ANYONE with access to the keys can unlock it.
The ones with access to the keys own the platform.
The ones who own the platform are legally required to submit your info in response to subpoenas, Homeland Security warrants, and Patriot Act-related actions.
This is totally incorrect.
With end-to-end encryption, the platform doesn't have the keys. The clients exchange keys through the platform, but it's done in a way that the platform doesn't know what they are. A subpoena doesn't let them provide information they don't have. The platform may have metadata about your message, but not the contents.
On the Wikipedia page for Diffie-Hellman key exchange there's a good diagram explaining how two parties can agree on a shared secret over a public channel. It's the one down the page a bit where they use paint colors. In the real world, it's done with math, but the paint analogy is sound for understanding the underlying idea.
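For the mathematically curious, the paint-mixing diagram compresses into a few lines of Python with the textbook toy parameters (p = 23, g = 5); real systems use 2048-bit-plus groups or elliptic curves, but only public values ever cross the wire either way:

```python
import secrets

p, g = 23, 5                      # toy textbook parameters, NOT secure sizes

a = secrets.randbelow(p - 2) + 1  # Alice's secret (her 'private color')
b = secrets.randbelow(p - 2) + 1  # Bob's secret

A = pow(g, a, p)                  # Alice transmits this in the clear
B = pow(g, b, p)                  # Bob transmits this in the clear

# Each side combines the other's public value with its own secret.
assert pow(B, a, p) == pow(A, b, p)  # same shared secret, never transmitted
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is intractable at real-world parameter sizes.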
A team of researchers confirmed that behavior in a recently released formal analysis of WhatsApp group messaging. They reverse-engineered the app, described the formal cryptographic protocols, and provided theorems establishing the security guarantees that WhatsApp provides. Overall, they gave the messenger a clean bill of health, finding that it works securely and as described by WhatsApp.
They did, however, confirm a behavior that should give some group messaging users pause: Like other messengers billed as secure—with the notable exception of Signal—WhatsApp doesn’t provide any sort of cryptographic means for group management.
“This means that it is possible for the WhatsApp server to add new members to a group,” Martin R. Albrecht, a researcher at King's College London, wrote in an email. “A correct client—like the official clients—will display this change but will not prevent it. Thus, any group chat that does not verify who has been added to the chat can potentially have their messages read.” //
By contrast, the open source Signal messenger provides a cryptographic assurance that only an existing group member designated as the group admin can add new members. //
Most messaging apps, including Signal, don’t certify the identity of their users. That means there’s no way Signal can verify that the person using an account named Alice is, in fact, Alice. It’s fully possible that Mallory could create an account and name it Alice. (As an aside, and in sharp contrast to Signal, the account members that belong to a given WhatsApp group are visible to insiders, hackers, and anyone with a valid subpoena.)
Signal does, however, offer a feature known as safety numbers. It makes it easy for a user to verify the security of messages or calls with specific contacts. When two users verify out-of-band—meaning using a known valid email address or cell phone number of the other—that Signal is displaying the same safety number on both their devices, they can be assured that the person claiming to be Alice is, in fact, Alice.
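Conceptually, a safety number is a deterministic digest of both parties' identity keys that both devices render identically. Signal's real derivation is more involved (iterated hashing over identity keys and user identifiers), so treat this Python sketch purely as an illustration of the idea:

```python
import hashlib

def toy_safety_number(key_a: bytes, key_b: bytes) -> str:
    # Sort so both devices compute the identical string regardless
    # of which side is "mine" and which is "theirs".
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).digest()
    digits = "".join(str(byte % 10) for byte in digest[:30])
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

# Hypothetical identity keys; reading the groups aloud on a phone
# call (or scanning the QR form) is the out-of-band comparison.
print(toy_safety_number(b"alice-identity-key", b"bob-identity-key"))
```

If an attacker swaps in their own key anywhere along the path, the two devices compute different numbers and the out-of-band comparison fails.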
McAfee warns “these messages may seem harmless, but they’re often the first step in long-game scams designed to steal personal data — or even life savings. McAfee research shows 1 in 4 Americans have received one. Best advice? Don’t engage.”
- Airgapped Raspberry Pi computer with touch screen and camera
- Featuring LUKS full-disk encryption
- For secure offline blockchain transactions and secure encrypted messaging
- Move files across the airgap to other devices using QR codes (see the sketch below)
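On the sending side, the QR hop amounts to rendering a file as an image the other device can photograph. Here is a sketch of the encode step, assuming the third-party qrcode package (pip install qrcode[pil]) and a hypothetical file name; the receiver would scan and decode with something like pyzbar:

```python
import base64
import qrcode

# Base64 keeps arbitrary binary safe inside the QR payload.
with open("signed_tx.bin", "rb") as f:
    payload = base64.b64encode(f.read()).decode("ascii")

# A single QR code tops out near 3 KB, so anything larger must be
# chunked across a numbered sequence of codes.
img = qrcode.make(payload)
img.save("signed_tx.png")  # display this on screen for the camera
```

The appeal is that a photograph of a screen is a strictly one-way, human-inspectable channel: nothing can quietly flow back across it.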
Independent researchers have discovered, or should we say rediscovered, a major security vulnerability in Microsoft's Remote Desktop Protocol (RDP). Previously known as Terminal Services, RDP appears to be designed to always validate a previously used password for remote connections to a Windows machine, even when that password has been revoked by a system administrator or compromised in a security breach. //
The flaw violates universally acknowledged operational security (opsec) practices – and then some. When a password is changed, it should no longer provide access to a remote system. "People trust that changing their password will cut off unauthorized access," Wade said. //
According to Microsoft, the behavior is a design decision meant to "ensure that at least one user account always has the ability to log in no matter how long a system has been offline."
The company had already been warned about this backdoor by other researchers in August 2023, making the new analysis ineligible for a bounty award. Redmond engineers reportedly attempted to modify the code to eliminate the backdoor but abandoned the effort, as the changes could break compatibility with a Windows feature that many applications still rely on. //
brucek, May 2, 2025, 3:30 PM
And on the flip side, RDP doesn't recognize a valid Microsoft Account password that is not cached on the local machine. This can easily happen on a new install where you've only logged in using methods other than the password (PIN, Windows Hello, etc.). This is a great way to lose an hour wondering why you can't log in, because it's so easy to think the problem must be some other configuration issue with RDP or elsewhere in the system. //
FireStormOOO, May 2, 2025, 9:05 PM
This is cached credentials working the same way they have for decades, and it's been configurable by GPO for almost as long. The administrator chooses how long the server will remember stale credentials if it can't reach a domain controller immediately to check. No, the defaults don't make sense for a server that expects 100% availability of your authentication infrastructure.
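On the Windows side, the knob FireStormOOO is referring to corresponds to the Group Policy setting "Interactive logon: Number of previous logons to cache", backed by the CachedLogonsCount registry value. A Windows-only Python sketch to read it, assuming the default registry location:

```python
import winreg

# Winlogon stores the cache depth as a string value; the default on
# most builds is "10".
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon",
)
value, _ = winreg.QueryValueEx(key, "CachedLogonsCount")
winreg.CloseKey(key)
print(f"Cached logons allowed: {value}")
```

Setting it to 0 disables caching entirely, at the cost of making the machine unusable whenever a domain controller is unreachable.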
New ChoiceJacking attack allows malicious chargers to steal data from phones. //
About a decade ago, Apple and Google started updating iOS and Android, respectively, to make them less susceptible to “juice jacking,” a form of attack that could surreptitiously steal data or execute malicious code when users plug their phones into special-purpose charging hardware. Now, researchers are revealing that, for years, the mitigations have suffered from a fundamental defect that has made them trivial to bypass.
“Juice jacking” was coined in a 2011 article on KrebsOnSecurity detailing an attack demonstrated at a Defcon security conference at the time. Juice jacking works by equipping a charger with hidden hardware that can access files and other internal resources of phones, in much the same way that a computer can when a user connects it to the phone. //
Researchers at the Graz University of Technology in Austria recently made a discovery that completely undermines the premise behind those countermeasures: they rest on the assumption that a USB host can't inject input that autonomously approves the confirmation prompt. Given the restriction against a USB device simultaneously acting as a host and peripheral, the premise seemed sound. The trust models built into both iOS and Android, however, present loopholes that can be exploited to defeat the protections. The researchers went on to devise ChoiceJacking, the first known attack to defeat juice-jacking mitigations.
“We observe that these mitigations assume that an attacker cannot inject input events while establishing a data connection,” the researchers wrote in a paper scheduled to be presented in August at the Usenix Security Symposium in Seattle. “However, we show that this assumption does not hold in practice.”