There are two major security problems with these photo frames and unofficial Android TV boxes. The first is that a considerable percentage of them come with malware pre-installed, or else require the user to download an unofficial Android App Store and malware in order to use the device for its stated purpose (video content piracy). The most typical of these uninvited guests are small programs that turn the device into a residential proxy node that is resold to others.
The second big security nightmare with these photo frames and unsanctioned Android TV boxes is that they rely on a handful of Internet-connected microcomputer boards that have no discernible security or authentication requirements built-in. In other words, if you are on the same network as one or more of these devices, you can likely compromise them simultaneously by issuing a single command across the network. //
Many wireless routers these days make it relatively easy to deploy a “Guest” wireless network on-the-fly. Doing so allows your guests to browse the Internet just fine but it blocks their device from being able to talk to other devices on the local network — such as shared folders, printers and drives. If someone — a friend, family member, or contractor — requests access to your network, give them the guest Wi-Fi network credentials if you have that option. //
It is somewhat remarkable that we haven’t yet seen the entertainment industry applying more visible pressure on the major e-commerce vendors to stop peddling this insecure and actively malicious hardware that is largely made and marketed for video piracy. These TV boxes are a public nuisance for bundling malicious software while having no apparent security or authentication built-in, and these two qualities make them an attractive nuisance for cybercriminals.
Funds in ‘Money Safe’ accounts are only available when customers appear for face-to-face verification
Urban VPN Proxy targets conversations across ten AI platforms: ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI), Meta AI.
For each platform, the extension includes a dedicated “executor” script designed to intercept and capture conversations. The harvesting is enabled by default through hardcoded flags in the extension’s configuration.
There is no user-facing toggle to disable this. The only way to stop the data collection is to uninstall the extension entirely.
[…]
The data collection operates independently of the VPN functionality. Whether the VPN is connected or not, the harvesting runs continuously in the background.
"So one of the things that we're seeing is the whole movement away from passwords to passkeys – a certificate-based authentication wrapped in a usability shrink wrap," Forrester VP and analyst Andras Cser told The Register.
Passkeys are typically what security folks mean when they say "phishing-resistant MFA." They replace passwords, and instead use cryptographic key pairs with the public key stored on the server and the private key – such as the user's face, fingerprints, or PIN – stored on the user's device. higher bandwidth demands.
From this, he looked at its software and operating system, and that’s where he discovered the dark truth: his smart vacuum was a security nightmare and a black hole for his personal data. First of all, it's Android Debug Bridge, which gives him full root access to the vacuum, wasn't protected by any kind of password or encryption. The manufacturer added a makeshift security protocol by omitting a crucial file, which caused it to disconnect soon after booting, but Harishankar easily bypassed it. He then discovered that it used Google Cartographer to build a live 3D map of his home.
This isn’t unusual, by far. After all, it’s a smart vacuum, and it needs that data to navigate around his home. However, the concerning thing is that it was sending off all this data to the manufacturer’s server. It makes sense for the device to send this data to the manufacturer, as its onboard SoC is nowhere near powerful enough to process all that data. However, it seems that iLife did not clear this with its customers. Furthermore, the engineer made one disturbing discovery — deep in the logs of his non-functioning smart vacuum, he found a command with a timestamp that matched exactly the time the gadget stopped working. This was clearly a kill command, and after he reversed it and rebooted the appliance, it roared back to life.
It’s still legal to pick locks, even when you swing your legs.
“Opening locks” might not sound like scintillating social media content, but Trevor McNally has turned lock-busting into online gold. A former US Marine Staff Sergeant, McNally today has more than 7 million followers and has amassed more than 2 billion views just by showing how easy it is to open many common locks by slapping, picking, or shimming them.
This does not always endear him to the companies that make the locks. //
Wheels Of Confusion Ars Legatus Legionis
16y
73,932
Subscriptor
The company claimed to have this case locked up from the start, but it was picked apart embarrassingly quickly.
One simple AI prompt saved me from disaster.
Not fancy security tools. Not expensive antivirus software. Just asking my coding assistant to look for suspicious patterns before executing unknown code.
The scary part? This attack vector is perfect for developers. We download and run code all day long. GitHub repos, npm packages, coding challenges. Most of us don't sandbox every single thing.
And this was server-side malware. Full Node.js privileges. Access to environment variables, database connections, file systems, crypto wallets. Everything.
If this sophisticated operation is targeting developers at scale, how many have already been compromised? How many production systems are they inside right now?
Perfect Targeting: Developers are ideal victims. Our machines contain the keys to the kingdom: production credentials, crypto wallets, client data.
Professional Camouflage: LinkedIn legitimacy, realistic codebases, standard interview processes.
Technical Sophistication: Multi-layer obfuscation, remote payload delivery, dead-man switches, server-side execution.
One successful infection could compromise production systems at major companies, crypto holdings worth millions, personal data of thousands of users. //
If you're a developer getting LinkedIn job opportunities:
Always sandbox unknown code. Docker containers, VMs, whatever. Never run it on your main machine.
Use AI to scan for suspicious patterns. Takes 30 seconds. Could save your entire digital life.
Verify everything. Real LinkedIn profile doesn't mean real person. Real company doesn't mean real opportunity.
Trust your gut. If someone's rushing you to execute code, that's a red flag.
This scam was so sophisticated it fooled my initial BS detector. But one paranoid moment and a simple AI prompt exposed the whole thing.
Scientists at the University of California, San Diego, and the University of Maryland, College Park, say they were able to pick up large amounts of sensitive traffic largely by just pointing a commercial off-the-shelf satellite dish at the sky from the roof of a university building in San Diego.
In its paper, Don't Look Up: There Are Sensitive Internal Links in the Clear on GEO Satellites [PDF], the team describes how it performed a broad scan of IP traffic on 39 GEO satellites across 25 distinct longitudes and found that half of the signals they picked up contained cleartext IP traffic.
This included unencrypted cellular backhaul data sent from the core networks of several US operators, destined for cell towers in remote areas. Also found was unprotected internet traffic heading for in-flight Wi-Fi users aboard airliners, and unencrypted call audio from multiple VoIP providers.
According to the researchers, they were able to identify some observed satellite data as corresponding to T-Mobile cellular backhaul traffic. This included text and voice call contents, user internet traffic, and cellular network signaling protocols, all "in the clear," but T-Mobile quickly enabled encryption after learning about the problem.
More seriously, the team was able to observe unencrypted traffic for military systems including detailed tracking data for coastal vessel surveillance and operational data of a police force.
In addition, they found retail, financial, and banking companies all using unencrypted satellite communications to link their internal networks at various sites. The researchers were able to see unencrypted login credentials, corporate emails, inventory records, and information from ATM cash dispensers.
New design sets a high standard for post-quantum readiness.
Aranya is an access governance and secure data exchange platform for organizations to control their critical data and services. Access governance is a mechanism to define, enforce, and maintain the set of rules and procedures to secure your system’s behaviors. Aranya gives you the ability to apply access controls over stored and shared resources all in one place.
Aranya enables you to safeguard sensitive information, maintain compliance, mitigate the risk of unauthorized data exposure, and grant appropriate access. Aranya’s decentralized platform allows you to define and enforce these sets of policies to secure and access your resources.
The platform provides a software toolkit for policy-driven access controls and secure data exchange. The software is deployed on endpoints, integrating into applications which require granular access controls over their data and services. Endpoints can entrust Aranya with their data protection and access controls so that other applications running on the endpoint need only to focus on using the data for their intended functionality. Aranya has configurable end-to-end encryption built into its core as a fundamental design principle.
A key discriminating attribute of Aranya is the decentralized, zero trust architecture. Through the integration of the software, access governance is implemented without the need for a connection back to centralized IT infrastructure. With Aranya’s decentralized architecture, if two endpoints are connected to each other, but not back to the cloud or centralized infrastructure, governance over data and applications will be synchronized between peers and further operations will continue uninterrupted.
Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media requires us to assess risks and make judgments accordingly. Arriving at any conclusion is an exercise in threat modeling.
Are your links not malicious looking enough?
This tool is guaranteed to help with that!
What is this and what does it do?
This is a tool that takes any link and makes it look malicious. It works on the idea of a redirect. Much like https://tinyurl.com/ for example. Where tinyurl makes an url shorter, this site makes it look malicious.
Place any link in the below input, press the button and get back a fishy(phishy, heh...get, it?) looking link. The fishy link doesn't actually do anything, it will just redirect you to the original link you provided.
Notion just released version 3.0, complete with AI agents. Because the system contains Simon Willson’s lethal trifecta, it’s vulnerable to data theft though prompt injection.
First, the trifecta:
The lethal trifecta of capabilities is:
- Access to your private data—one of the most common purposes of tools in the first place!
- Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM
- The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)
This is, of course, basically the point of AI agents. //
The fundamental problem is that the LLM can’t differentiate between authorized commands and untrusted data. So when it encounters that malicious pdf, it just executes the embedded commands. And since it has (1) access to private data, and (2) the ability to communicate externally, it can fulfill the attacker’s requests. I’ll repeat myself:
This kind of thing should make everybody stop and really think before deploying any AI agents. We simply don’t know to defend against these attacks. We have zero agentic AI systems that are secure against these attacks. Any AI that is working in an adversarial environment—and by this I mean that it may encounter untrusted training data or input—is vulnerable to prompt injection. It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there.
lukem
If you're going to use test values in your test systems, why not use test values allocated for documentation purposes that aren't expected to be used in "live" networks?
IETF RFC 5737 section 3 allocates three IPv4 CIDR ranges for documentation:
192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24.
September 5, 2025 at 8:02 am
Feeling old yet? Let the Reg ruin your day for you. We are now substantially closer to the 2038 problem (5,849 days) than it has been since the Year 2000 problem (yep, 8,049 days since Y2K).
Why do we mention it? Well, thanks to keen-eyed Reg reader Calum Morrison, we've spotted a bit of the former, and a hint of what lies beneath the Beeb's digital presence, when he sent in a snapshot that implies Old Auntie might be using a 32-bit Linux in iPlayer, and something with a kernel older than Linux 5.10, too.
That 2020 kernel release was the first able to serve as a base for a 32-bit system designed to run beyond 03:14:07 UTC on 19 January 2038. //
Like Y2K, it's easy to explain and fairly easy to fix: traditionally, UNIX stored the time as the number of seconds since January 1, 1970 – but they held it in a signed 32-bit value. That means that at seven seconds after pi o'clock in the morning of January 18th, 2038 (UTC), when it will be 2,147,483,647 (2³¹) seconds after the "epoch", the next time will be -2³¹: 20:45:52 in the evening of December 13th, 1901.
It has already been fixed in Linux and any other recent UNIX-ish OSes. The easy way is to move to a 64-bit value, giving a range of 292 billion years either way. That'll do, pig.
The fun bit is finding all the places it might occur, like this fun (and old and long-fixed) MongoDB bug. MongoDB stores the date correctly, Python stores the date correctly, but dates were passed from one to the other using a 32-bit value.
Museum boffins find code that crashes in 2037
A stark warning about the upcoming Epochalypse, also known as the "Year 2038 problem," has come from the past, as National Museum Of Computing system restorers have discovered an unsetting issue while working on ancient systems.
Robin Downs, a volunteer who was recently involved in an exhibition of Digital Equipment Corporation (DEC) gear at the museum, was on hand to demonstrate the problem to The Register in the museum's Large Systems Gallery, which now houses a running PDP-11/73. //
"So we found bugs that exist, pre-2038, in writing this that we didn't know about."
The Year 2038 problem occurs in systems that store Unix time – the number of seconds since the Unix epoch (00:00:00 UTC on January 1, 1970) in a signed 32-bit integer (64-bit is one modern approach, but legacy systems have a habit of lingering).
At 03:14:07 UTC on January 19, 2038, the second counter will overflow. In theory, this will result in a time and date being returned before the epoch – 20:45:52 UTC on December 13, 1901, but that didn't happen for Downs. //
zb42
As the article is specifically about date problems that occur before the 32bit unix time rollover, I think it should be mentioned that:
32bit NTP is going to roll over on the 7th of February 2036.
That led to a quick trip to an 'Urgent Care' - the frontline medical center for most Americans. At the check-in counter, the check-in nurse asked to see some ID, so I handed over my Australian driver's license. The nurse looked at the license and typed some of the info on it into a computer, then they looked up at me and asked: "Are you the same Mark Pesce who lived at...?" and then proceeded to recite an address that I resided at more than half a century ago.
Dumbstruck, I said, "Yes...? And how did you know that? I haven't lived there in nearly 50 years. I've never been in here before - I've barely ever been in this town before. Where did that come from?"
"Oh," they replied. "We share our patient data records with Massachusetts General Hospital. It's probably from them?"
I remembered having a bit of minor surgery as an 11 year old, conducted at that facility. 51 years ago. That's the only time I'd ever been a patient at Massachusetts General Hospital.
Somehow that had never been forgotten.
We seem perfectly willing to accept that everything we do today leaves a permanent record. It appears that long before Eric Schmidt declared, "Privacy is dead," any of our pretensions to privacy had already joined the Choir Invisible. //
I don’t much care how my records made it into 2025. I am interested in why nobody ever decided to delete them.
I realize we all want our medical records instantly available to inform treatment in moments of great need. But half a century of somewhat senseless recordkeeping strains credulity. Most likely my record remained in that database simply because it's never been cleaned out - an operation that would take time and budget that would never be approved because, why would you ever delete patient data?
This has the feel of a situation we had no idea we were making for ourselves - countless sensible decisions culminating in a ridiculous outcome. Go forward another fifty years, when it's quite likely I, too, will have joined the Choir Invisible. Will my patient record still be in that database? What purpose would that serve? If my records as a child are in there, half a century later, it's easy to imagine this database holds records of many other people who have passed on and therefore shouldn't be in there at all. Privacy lost to laziness. //
Alex 72
Reply Icon
Medical records should be kept but access should be controlled
I agree yes you want medical records kept, patient history is always useful and provided they are only shared with the patient or a doctor or other professional they have consented to be cared for by, and who is not engaged in malpractice, it's not harmful. The hard part is drawing the line on how much anonymised data can be used for research, ensuring that data remains anonymous and managing consent for sharing data when patients are treated elsewhere or researchers want to use data from multiple sources.
and if you keep them for someone's entire lifespan then you should provided they did not object in their lifetime and next of kin explicitly consent or at least don't object probably archive it for future research in the near/medium term and historical value in the long term. Again managing consent, allowing reasonable anonymised research in the public interest, preventing de-anonymisation and deciding the limits of how long parts of it stay private vs when genealogists and historians can have unrestricted access.. is the challenge.
To do any of this effective durable storage, access control, authentication and authorisation are just some of the challenges. I have seen data analytics firms who's job is just this struggle to get everything correct so a group of organisation just trying to provide healthcare, research, treatments, disease, prevention.... Having to do this as an add on with a limited budget I am honestly impressed its only now with ransomware we are starting to see issues and paper records were not being stolen and abused on a massive scale in the past...
I don't know the answer but I don't think its the delete key
Gene CashSilver badge
Why would you ever delete patient data?
Yes, seriously.
I can understand other records, but not medical ones.
I was able to get proper medical care, including surgery, for a broken coccyx after proving I had fallen off a hay bale in 1973 and seriously injured myself, and thus it was a chronic thing and not just the minor recent incident my doctor insisted it was. I would have otherwise not been considered eligible for the surgery.
And after you're dead, it's no longer a privacy issue and becomes historical records. It's no different than census records.
Should this data be held indefinitely? Yes.
This is the same sort of data that let me piece together that my great^9 grandfather was Edward Reavis, born 1680 in Paddington, England, and left to come to Virginia, after being held in Newgate prison for his religious beliefs. He moved to Henrico county, Virginia in 1721 and died in Northampton county, North Carolina in 1751. I've also found 454 other relatives down to me, through a ton of things including bible notes, estate papers, census records, marriage records, medical records, military records, family papers, private letters, obituaries, social security records, tombstones, and even old wedding invitations.
Tells The Reg China's ability to p0wn Redmond's wares 'gives me a political aneurysm'
Roger Cressey served two US presidents as a senior cybersecurity and counter-terrorism advisor and currently worries he'll experience a "political aneurysm" due to Microsoft's many security messes.
In the last few weeks alone, Microsoft disclosed two major security vulnerabilities – along with news that attackers exploited one involving SharePoint as a zero-day. The second flaw, while not yet under exploitation, involves Exchange server – a favorite of both Russian and Chinese spies for years. //
"This is the latest episode of a decades-long process of Microsoft not taking security seriously. Full stop," Cressey said, acknowledging that the government continues spending billions on Microsoft products. "Anytime there's a major announcement of a Microsoft procurement by the government, the happiest people in the world first are in Redmond and second in Beijing."
Microsoft declined to comment for this story, but did point out that Google Cloud is a client of Cressey's in his consulting work.
Anonymous Coward
Anonymous Coward
"got sick of telling them what was wrong and not having them fix it"
I don't know the situation with these guys, I'll give them the benefit of the doubt, but that phrase is everything wrong with a lot of cybersecurity professionals in a nutshell...plenty of goons willing to run scans and test 'sploits then suggest insanely expensive mitigations..."Man, that £1m worth of data is exposed it needs to be protected. I recommend this firewall from Ironballs Labs in California, it's only £5m".
Person: building a sandcastle
Cybersecurity: It's shit mate, it's not going to work.
Person: looks confused, doesn't understand
Cybersecurity: Man, I keep telling you it's shit.
Person: sad because his sandcastle fell over
Cybersecurity: See I told you, I've been telling you for ages you need to make your sandcastles better.
Person: Hey man, my goal here was to just have fun and chill out on the beach, a cheap day out. What would you have done?
Cybersecurity: Well, I would have used those boulders over there to fashion a small blast furnace, scavenged for iron ore at the bottom of those cliff and collected all the drift wood over there as fuel.
Person: Man, that's not worth it, I just wanted to build a sandcastle.
Cybersecurity: Why doesn't anyone ever listen?
Usually if a cybersecurity person moans about not being listened to and having their advice ignored, it's an indicator that their proposals for mitigations are just insane.
Yes, security problems can kill your business...but so can overspending on mitigating vulnerabilities that have significantly lower ALE and ARO than the solution costs.
Cybersecurity isn't about "perfect hardened security", it's about balancing risk and cost. You wouldn't protect a £10 note with a £1m vault. Similarly, you wouldn't protect £1m with a £10 petty cash tin. You have to find the balance where the cost is reasonable vs the asset being protected and the risk is sufficiently low that the cost of attacking the asset prevents it being a worthwhile exercise.
Anyone can find a security issue and then suggest the latest and greatest cutting edge security software / hardware to protect the vulnerability...that's the easy part of cybersecurity. The hard part is finding solutions that are feasible and practical that don't result in costs that are higher than the assets are worth.
Black Hat Four countries have now tested anti-satellite missiles (the US, China, Russia, and India), but it's much easier and cheaper just to hack them.
In a briefing at the Black Hat conference in Las Vegas, Milenko Starcik and Andrzej Olchawa from German biz VisionSpace Technologies demonstrated how easy it is by exploiting software vulnerabilities in the software used in the satellites themselves, as well as the ground stations that control them.
"I used to work at the European Space Agency on ground station IT and got sick of telling them what was wrong and not having them fix it," Olchawa told The Register, "So I decided to go into business to do it myself." //
"We found actual vulnerabilities which allow you to crash the entire onboard software with an unauthenticated telephone," claimed Starcik.
"So basically, you send a packet to the spacecraft, and the entire software crashes and reboots, which then actually causes the spacecraft, if it's not properly configured, to reset all its keys. And then you have zero keys on the spacecraft that you can use from that stage on."