507 private links
This blog is reserved for more serious things, and ordinarily I wouldn’t spend time on questions like the above. But much as I’d like to spend my time writing about exciting topics, sometimes the world requires a bit of what Brad DeLong calls “Intellectual Garbage Pickup,” namely: correcting wrong or mostly-wrong ideas that spread unchecked across the Internet.
This post is inspired by the recent and concerning news that Telegram’s CEO Pavel Durov has been arrested by French authorities for Telegram’s failure to sufficiently moderate content. While I don’t know the details, the use of criminal charges to coerce social media companies is a pretty worrying escalation, and I hope there’s more to the story.
But this arrest is not what I want to talk about today.
What I do want to talk about is one specific detail of the reporting. Specifically: the fact that nearly every news report about the arrest refers to Telegram as an “encrypted messaging app.”
This phrasing drives me nuts because in a very limited technical sense it’s not wrong. Yet in every sense that matters, it fundamentally misrepresents what Telegram is and how it works in practice. And this misrepresentation is bad for both journalists and particularly for Telegram’s users, many of whom could be badly hurt as a result.
Questions raised as one of the world's largest PC makers joins America's critical defense team
NextDNS protects you from all kinds of security threats, blocks ads and trackers on websites and in apps and provides a safe and supervised Internet for kids — on all devices and on all networks.
As Phil Root, the deputy director of the Defense Sciences Office at DARPA, recounted to Scharre, “A tank looks like a tank, even when it’s moving. A human when walking looks different than a human standing. A human with a weapon looks different.”
In order to train the artificial intelligence, it needed data in the form of a squad of Marines spending six days walking around in front of it. On the seventh day, though, it was time to put the machine to the test.
“If any Marines could get all the way in and touch this robot without being detected, they would win. I wanted to see, game on, what would happen,” said Root in the book. //
The Marines, being Marines, found several ways to bollix the AI and achieved a 100 percent success rate.
Two Marines, according to the book, somersaulted for 300 meters to approach the sensor. Another pair hid under a cardboard box.
“You could hear them giggling the whole time,” said Root in the book.
One Marine stripped a fir tree and held it in front of him as he approached the sensor. In the end, while the artificial intelligence knew how to identify a person walking, that was pretty much all it knew because that was all it had been modeled to detect. //
The moral of the story? Never bet against Marines, soldiers, or military folks in general. The American military rank-and-file has proven itself more creative than any other military in history. Whether that creativity is focused on finding and deleting bad guys or finding ways to screw with an AI and the eggheads who programmed it, my money's on the troops.
So if you have some service running on your macOS or Linux workstation on port 11223, and you assume no one can reach it because it's behind your firewall and your big-name browser blocks outside requests to localhost, guess again: that browser will route a request for 0.0.0.0:11223 from a malicious page you're visiting straight to your service.
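The loophole can be reproduced without a browser at all: on Linux and macOS, a TCP connection to 0.0.0.0 is delivered to the local machine, just like 127.0.0.1. A minimal Python sketch (the port matches the example above; the "service" and its payload are invented for illustration):

```python
import socket
import threading

def start_local_service(port):
    """Stand-in for a dev service bound only to localhost."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        conn.sendall(b"secret dev data")
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv

def probe_via_zero_address(port):
    """Connect to 0.0.0.0 instead of 127.0.0.1: the kernel treats the
    zero address as "this host", which is the hole the browsers
    failed to close."""
    with socket.create_connection(("0.0.0.0", port), timeout=2) as c:
        return c.recv(64)

srv = start_local_service(11223)
print(probe_via_zero_address(11223))  # the "firewalled" service answers
srv.close()
```

A malicious page does the same thing with a `fetch()` to `http://0.0.0.0:11223/`, which the since-patched browsers did not treat as a local address.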
It’s supposed to make the title transfer a breeze and help Californians avoid those tedious trips to the DMV.
Users will soon be able to claim their digital titles via the DMV’s application, and track and manage them without going to the office, according to an Avalanche blog post. With blockchain rails in the backend, the time to transfer vehicle titles drops from two weeks via the traditional process to a few minutes, a DMV spokesperson said in an email. //
However, given the recent spate of Microsoft outages and other hacking reports, I am a bit nervous about digitizing without serious hard copy backups. Given how expensive cars have become and how critical having one is to people’s lives and livelihoods, extreme caution should be used before proceeding.
The unintended consequences of this move could be devastating if there are significant issues with the system.
It is also disturbing to note that this move is part of Governor Gavin Newsom’s plans to have even more control over our lives… under the banner of protections. ///
What about people who don't have smartphones, or computers, or Internet access? What happens when there is actual fraud? How do you unwind that? Do people still get paper backup copies of titles?
potential problems can arise when a domain’s DNS records are “lame,” meaning the authoritative name server does not have enough information about the domain and can’t resolve queries to find it. A domain can become lame in a variety of ways, such as when it is not assigned an Internet address, or because the name servers in the domain’s authoritative record are misconfigured or missing.
The reason lame domains are problematic is that a number of Web hosting and DNS providers allow users to claim control over a domain without accessing the true owner’s account at their DNS provider or registrar. //
In the 2019 campaign, the spammers created accounts on GoDaddy and were able to take over vulnerable domains simply by registering a free account at GoDaddy and being assigned the same DNS servers as the hijacked domain. //
How does one know whether a DNS provider is exploitable? There is a frequently updated list published on GitHub called “Can I take over DNS,” which has been documenting exploitability by DNS provider over the past several years. The list includes examples for each of the named DNS providers.
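Stripped to its logic, the vulnerable condition combines two facts from the passage above: the parent zone delegates to name servers that don't actually answer for the zone, and the provider hands those same shared name servers to anyone who signs up. A toy Python sketch with invented data (a real check would query the parent zone's NS records with a DNS library):

```python
# Invented example data: what the parent zone says vs. what the
# provider actually serves.
DELEGATED_NS = {                      # zone -> NS names in the parent zone
    "example-shop.com": ["ns1.sharedhost.test", "ns2.sharedhost.test"],
}
ZONES_ACTUALLY_SERVED = set()         # the provider has no record of the zone

# Providers where any free account can claim an unserved zone name
# (the GoDaddy scenario from the 2019 campaign).
UNVERIFIED_CLAIM_PROVIDERS = {"sharedhost.test"}

def is_lame(zone):
    """Lame: delegated in the parent, but the named servers can't answer."""
    return zone in DELEGATED_NS and zone not in ZONES_ACTUALLY_SERVED

def is_hijackable(zone):
    """Hijackable: lame, and hosted where anyone can claim the zone
    without proving ownership."""
    provider = DELEGATED_NS[zone][0].split(".", 1)[1]
    return is_lame(zone) and provider in UNVERIFIED_CLAIM_PROVIDERS
```

The "Can I take over DNS" list mentioned above is essentially a crowd-maintained version of the `UNVERIFIED_CLAIM_PROVIDERS` set.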
When running in kernel mode rather than user mode, security software has full access to a system's hardware and software, which makes it more powerful and flexible; this also means that a bad update like CrowdStrike's can cause a lot more problems.
Recent versions of macOS have deprecated third-party kernel extensions for exactly this reason, one explanation for why Macs weren't taken down by the CrowdStrike update. But past efforts by Microsoft to lock third-party security companies out of the Windows kernel—most recently in the Windows Vista era—have been met with pushback from European Commission regulators. That level of skepticism is warranted, given Microsoft's past (and continuing) record of using Windows' market position to push its own products and services. Any present-day attempt to restrict third-party vendors' access to the Windows kernel would be likely to draw similar scrutiny. //
For context, analytics company Parametrix Insurance estimated the cost of the outage to Fortune 500 companies somewhere in the realm of $5.4 billion.
Re: Yes and no
OK, here's the situation.
you fork an upstream repo, your fork is private
you commit something there that should not see daylight (keys to the Lamborghini or whatever)
you delete that commit to hide your sins
and now that commit is apparently still easily accessible from upstream. //
Re: Yes and no
Also, as far as I can see it's not actually possible to create native private GitHub forks of public repos.
The example they cite is when you have a private repo that will eventually become public: you fork it to make a permanently private fork, and then later make the original repo public. Anything committed to the still-private fork up until the point the original repo is made public can be accessed from the now-public repo. (As long as you know the commit hashes, that is - but unfortunately they're easily discoverable.) //
The issue is that people are mentally modelling forks as "that's my copy of the repo, completely separate from the original" whereas in reality the fork is just a different interface to the same pool of blobs. Furthermore, while you wouldn't be able to access commits from another fork in the same pool of blobs unless you know the commit hash, GitHub makes those commit hashes discoverable. //
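To make the discoverability point concrete: Git accepts abbreviated commit hashes, and reporting on this issue found GitHub resolving prefixes as short as four hex characters, so the space an attacker must probe per repository is tiny. (The four-character figure and the URL pattern below are assumptions for illustration; nothing is fetched here.)

```python
from itertools import product

SHORT_LEN = 4  # assumed minimum abbreviated-hash length GitHub resolves
HEX = "0123456789abcdef"

# Every candidate short hash an attacker would have to try:
prefixes = ["".join(p) for p in product(HEX, repeat=SHORT_LEN)]

# Hypothetical probing pattern (illustrative only, never fetched):
#   https://github.com/<owner>/<repo>/commit/<prefix>
print(len(prefixes))  # 65536 candidates: minutes of work, not a secret
```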
Re: Yes and no
For the people downvoting me - you are aware of how quickly AWS credentials accidentally exposed on GitHub are found and abused by attackers? Honeypot tests suggest 1 minute.
https://www.comparitech.com/blog/information-security/github-honeypot/
Note that at no point in the "what to do if you've exposed credentials" section does it say "delete the public repo in the hope that this will alter the past". Magical thinking.
Having played around with this on GitHub, I will say that the message on trying to delete a repo isn't explicit enough about the unexpected (if documented) behaviour. It really ought to have a disclaimer that says "If you're trying to delete commits you wish you hadn't pushed everywhere, this won't achieve it", and a link to a page describing what will actually help. //
What happens in Repo Stays In Repo
"It's reasonable to expect that deleting a commit tree that nobody else has yet accessed will prevent it from being accessed in future."
No it isn't. It's supposed to be a history. In a code repo, the ability to permanently delete past changeset data should be considered a bug or design flaw. The inability to lose history is the whole point.
Source code has no right to be forgotten, when it's in an SCM, because the point of the SCM is to remember.
The catastrophe is yet another reminder of how brittle global internet infrastructure is. It’s complex, deeply interconnected, and filled with single points of failure. As we experienced last week, a single problem in a small piece of software can take large swaths of the internet and global economy offline.
The brittleness of modern society isn’t confined to tech. We can see it in many parts of our infrastructure, from food to electricity, from finance to transportation. This is often a result of globalization and consolidation, but not always. In information technology, brittleness also results from the fact that hundreds of companies, none of which you’ve heard of, each perform a small but essential role in keeping the internet running. CrowdStrike is one of those companies.
This brittleness is a result of market incentives. In enterprise computing—as opposed to personal computing—a company that provides computing infrastructure to enterprise networks is incentivized to be as integral as possible, to have as deep access into their customers’ networks as possible, and to run as leanly as possible.
Redundancies are unprofitable. Being slow and careful is unprofitable. Being less embedded in and less essential and having less access to the customers’ networks and machines is unprofitable—at least in the short term, by which these companies are measured. This is true for companies like CrowdStrike. It’s also true for CrowdStrike’s customers, who also didn’t have resilience, redundancy, or backup systems in place for failures such as this because they are also an expense that affects short-term profitability.
But brittleness is profitable only when everything is working. When a brittle system fails, it fails badly. The cost of failure to a company like CrowdStrike is a fraction of the cost to the global economy. And there will be a next CrowdStrike, and one after that. The market rewards short-term profit-maximizing systems, and doesn’t sufficiently penalize such companies for the impact their mistakes can have. (Stock prices depress only temporarily. Regulatory penalties are minor. Class-action lawsuits settle. Insurance blunts financial losses.) It’s not even clear that the information technology industry could exist in its current form if it had to take into account all the risks such brittleness causes. //
Imagine a house where the drywall, flooring, fireplace, and light fixtures are all made by companies that need continuous access and whose failures would cause the house to collapse. You’d never set foot in such a structure, yet that’s how software systems are built. It’s not that 100 percent of the system relies on each company all the time, but 100 percent of the system can fail if any one of them fails. But doing better is expensive and doesn’t immediately contribute to a company’s bottom line. //
This is not something we can dismantle overnight. We have built a society based on complex technology that we’re utterly dependent on, with no reliable way to manage that technology. Compare the internet with ecological systems. Both are complex, but ecological systems have deep complexity rather than just surface complexity. In ecological systems, there are fewer single points of failure: If any one thing fails in a healthy natural ecosystem, there are other things that will take over. That gives them a resilience that our tech systems lack.
We need deep complexity in our technological systems, and that will require changes in the market. Right now, the market incentives in tech are to focus on how things succeed: A company like CrowdStrike provides a key service that checks off required functionality on a compliance checklist, which makes it all about the features that they will deliver when everything is working. That’s exactly backward. We want our technological infrastructure to mimic nature in the way things fail. That will give us deep complexity rather than just surface complexity, and resilience rather than brittleness.
How do we accomplish this? There are examples in the technology world, but they are piecemeal. Netflix is famous for its Chaos Monkey tool, which intentionally causes failures to force the systems (and, really, the engineers) to be more resilient. The incentives don’t line up in the short term: It makes it harder for Netflix engineers to do their jobs and more expensive for them to run their systems. Over years, this kind of testing generates more stable systems. But it requires corporate leadership with foresight and a willingness to spend in the short term for possible long-term benefits.
Last week’s update wouldn’t have been a major failure if CrowdStrike had rolled out this change incrementally: first 1 percent of their users, then 10 percent, then everyone. But that’s much more expensive, because it requires a commitment of engineer time for monitoring, debugging, and iterating. And it can take months to do correctly for complex and mission-critical software. An executive today will look at the market incentives and correctly conclude that it’s better for them to take the chance than to “waste” the time and money.
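A staged rollout of the kind described can be sketched with deterministic hash bucketing, so each stage's cohort is a strict superset of the previous one. This is a generic sketch, not CrowdStrike's actual mechanism:

```python
import hashlib

ROLLOUT_STAGES = [0.01, 0.10, 1.00]   # 1 percent, 10 percent, everyone

def in_rollout(machine_id: str, fraction: float) -> bool:
    """Hash the machine ID into a stable bucket in [0, 1); a machine
    included at 1% stays included at 10% and at 100%."""
    digest = hashlib.sha256(machine_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < fraction

fleet = [f"host-{i}" for i in range(10_000)]
for fraction in ROLLOUT_STAGES:
    cohort = [m for m in fleet if in_rollout(m, fraction)]
    print(f"stage {fraction:.0%}: update {len(cohort)} machines")
    # ship to `cohort`, watch crash telemetry, halt the rollout on a spike
```

The engineering cost the essay mentions is in the loop body: each stage needs monitoring and a tested way to halt and roll back before the next stage ships.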
We run searches across these databases: msdn.rg-adguard.net, vlsc.rg-adguard.net, tb.rg-adguard.net, and uup.rg-adguard.net.
In this first lecture of Security Engineering (https://www.cl.cam.ac.uk/~rja14/book.html), Ross looks at the various kinds of attacker and their capabilities: the crooks, state actors, corporate competitors, and "the swamp". Sam then looks at the various tools they all use, and how real-world vulnerabilities are patched and/or exploited.
I've written a third edition of Security Engineering. The e-book version is available now for $44 from Wiley and Amazon; paper copies are available from Amazon here for delivery in the USA and here for the UK.
Here are the chapters, with links to the seven sample chapters as I last put them online for review: //
Here are fifteen teaching videos we made based on the book for a security engineering class at Edinburgh, taught to masters students and fourth-year undergrads: //
The Second Edition (2008)
Download for free here:
Apple and the satellite-based broadband service Starlink each recently took steps to address new research into the potential security and privacy implications of how their services geo-locate devices. Researchers from the University of Maryland say they relied on publicly available data from Apple to track the location of billions of devices globally — including non-Apple devices like Starlink systems — and found they could use this data to monitor the destruction of Gaza, as well as the movements and in many cases identities of Russian and Ukrainian troops.
At issue is the way that Apple collects and publicly shares information about the precise location of all Wi-Fi access points seen by its devices. Apple collects this location data to give Apple devices a crowdsourced, low-power alternative to constantly requesting global positioning system (GPS) coordinates.
Both Apple and Google operate their own Wi-Fi-based Positioning Systems (WPS) that obtain certain hardware identifiers from all wireless access points that come within range of their mobile devices. Both record the Media Access Control (MAC) address that a Wi-Fi access point uses, known as a Basic Service Set Identifier or BSSID.
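Conceptually, a WPS is a crowdsourced map from BSSIDs to coordinates: a device reports the access points it can see, and the service returns a position without touching GPS. A toy model with made-up data (Apple's and Google's real services are network APIs, not local tables):

```python
# Made-up BSSID -> (latitude, longitude) entries standing in for a
# crowdsourced WPS database.
WPS_DB = {
    "aa:bb:cc:dd:ee:01": (38.9897, -76.9378),
    "aa:bb:cc:dd:ee:02": (38.9901, -76.9370),
}

def estimate_position(visible_bssids):
    """Average the known locations of in-range access points: the
    low-power, GPS-free positioning trick described above."""
    hits = [WPS_DB[b] for b in visible_bssids if b in WPS_DB]
    if not hits:
        return None
    lat = sum(p[0] for p in hits) / len(hits)
    lon = sum(p[1] for p in hits) / len(hits)
    return (lat, lon)
```

The privacy issue follows directly: anyone who can query or harvest such a database can run the lookup in reverse and track where a given BSSID has been seen over time, which is what the Maryland researchers did at scale.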
Ebury backdoors SSH servers in hosting providers, giving the malware extraordinary reach. //
Infrastructure used to maintain and distribute the Linux operating system kernel was infected for two years, starting in 2009, by sophisticated malware that managed to get a hold of one of the developers’ most closely guarded resources: the /etc/shadow files that stored encrypted password data for more than 550 system users, researchers said Tuesday. //
A 47-page report summarizing Ebury's 15-year history said that the infection hitting the kernel.org network began in 2009, two years earlier than the domain was previously thought to have been compromised. The report said that since 2009, the OpenSSH-dwelling malware has infected more than 400,000 servers, all running Linux except for about 400 FreeBSD servers, a dozen OpenBSD and SunOS servers, and at least one Mac. //
There is no indication that either infection resulted in tampering with the Linux kernel source code.
The attack works by manipulating the DHCP server that allocates IP addresses to devices trying to connect to the local network. A setting known as option 121 allows the DHCP server to override default routing rules that send VPN traffic through a local IP address that initiates the encrypted tunnel. By using option 121 to route VPN traffic through the DHCP server, the attack diverts the data to the DHCP server itself. //
We use DHCP option 121 to set a route on the VPN user’s routing table. The route we set is arbitrary and we can also set multiple routes if needed. By pushing routes that are more specific than a /0 CIDR range that most VPNs use, we can make routing rules that have a higher priority than the routes for the virtual interface the VPN creates. We can set multiple /1 routes to recreate the 0.0.0.0/0 all traffic rule set by most VPNs. //
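The two-/1-routes trick in the quote above is easy to verify with Python's `ipaddress` module; the sketch below also packs the routes into option 121 wire format per RFC 3442 (the router address 192.168.1.254 is an arbitrary stand-in for the attacker-controlled DHCP server):

```python
import ipaddress

def rfc3442_route(dest: str, router: str) -> bytes:
    """Encode one classless static route as DHCP option 121 data
    (RFC 3442): prefix length, significant destination octets, router."""
    net = ipaddress.ip_network(dest)
    sig = (net.prefixlen + 7) // 8          # octets needed for the prefix
    return (bytes([net.prefixlen])
            + net.network_address.packed[:sig]
            + ipaddress.ip_address(router).packed)

# Two /1 routes jointly cover 0.0.0.0/0 but are more specific, so
# longest-prefix match prefers them over the VPN's /0 default route.
halves = list(ipaddress.ip_network("0.0.0.0/0").subnets(prefixlen_diff=1))
option_121 = b"".join(rfc3442_route(str(n), "192.168.1.254") for n in halves)
```

Because the kernel's routing decision happens before traffic reaches the VPN's virtual interface, these pushed routes siphon off all traffic even though the tunnel stays up.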
Interestingly, Android is the only operating system that fully immunizes VPN apps from the attack because it doesn't implement option 121. For all other OSes, there are no complete fixes. When apps run on Linux there’s a setting that minimizes the effects, but even then TunnelVision can be used to exploit a side channel that can be used to de-anonymize destination traffic and perform targeted denial-of-service attacks. //
The most effective fixes are to run the VPN inside of a virtual machine whose network adapter isn’t in bridged mode or to connect the VPN to the Internet through the Wi-Fi network of a cellular device.
A little-discussed detail in the Lavender AI article is that Israel is killing people based on being in the same Whatsapp group [1] as a suspected militant [2]. Where are they getting this data? Is WhatsApp sharing it?
Attacks coming from nearly 4,000 IP addresses take aim at VPNs, SSH and web apps. //
Talos said Tuesday that services targeted in the campaign include, but aren’t limited to:
Cisco Secure Firewall VPN
Checkpoint VPN
Fortinet VPN
SonicWall VPN
RD Web Services
Mikrotik
Draytek
Ubiquiti.
...
Additionally, remote access VPNs should use certificate-based authentication.
Hackers already received a $22 million payment. Now a second group demands money. //
Callow says the incident reinforces that cybercriminals can’t be trusted to delete data, even when they are paid. //
“Sometimes they use the undeleted data to extort victims for a second time, and the risk of re-extortion will only increase as law enforcement up their disruption efforts and throw the ransomware ecosystem into chaos,” Callow says. “What were always unpredictable outcomes will now be even more unpredictable.”
Similarly, DiMaggio says victims of ransomware attacks need to learn they can’t trust cybercriminals. “Victims need to understand that paying a criminal who promises to delete their data permanently is a myth,” DiMaggio says. “They are paying to have their data taken off the public side of the ransomware attackers' data leak site. They should assume it is never actually deleted.” //
quackmeister Smack-Fu Master, in training
Makes perfect business sense that ransomware vendors are embracing the subscriber model. //
deviant_cocktail Wise, Aged Ars Veteran
It is wrong to put temptation in the path of any nation,
For fear they should succumb and go astray;
So when you are requested to pay up or be molested,
You will find it better policy to say:—
"We never pay any-one Dane-geld,
No matter how trifling the cost;
For the end of that game is oppression and shame,
And the nation that plays it is lost!"
(From the poem Dane-geld by Rudyard Kipling) //
Shavano Ars Legatus Legionis
Although Change Healthcare and their parent United Health rightly deserve to be pilloried and their stock to take a giant nose dive for this, and to lose all their doctor affiliations and patients, punishing them alone won't fix the problem. Everybody's data remains at risk as long as it's legal to pay ransomware companies.
Make it a felony for any US business or government entity to pay cyber-related ransom. Then the payments will stop, which will make the ransom attempts stop. //
freaq Ars Scholae Palatinae
Stop… paying…
They will never stop if you keep paying…
It's time that it becomes illegal to pay off ransomware, so that fewer people do.
Crime only stops when it stops paying…
No patch yet for unauthenticated code-execution bug in Palo Alto Networks firewall. //
beheadedstraw Ars Centurion
cyberfunk said:
I find this article quite difficult to comprehend: we go from rooting firewalls to somehow magically obtaining Microsoft Active Directory secrets? There's no logical flow to how attackers are jumping around the network here, and it just feels like bits and pieces of the security reports are copied and pasted into the article without explanation. I think a better job needs to be done explaining the logical flow of events here.
The vast majority of firewalls have service accounts with full read access to AD for authentication, usually for VPNs. Microsoft still uses NTLM/NTLMv2 to hash their passwords, which is highly susceptible to simple brute-force attacks because they don't use salts.
Regardless this is basically the worst of the worst case scenarios for a shitload of Fortune 500 companies, which is what Palo Alto caters to. //
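The salt complaint above is easy to demonstrate: an unsalted hash maps identical passwords to identical digests, so one precomputed table cracks every account that uses a common password. (NTLM is really MD4 over the UTF-16LE password; SHA-256 stands in below because MD4 is typically unavailable in modern OpenSSL builds, and only the absence of a salt matters for the demonstration.)

```python
import hashlib

def unsalted_hash(password: str) -> str:
    """Stand-in for NTLM's unsalted construction: same password in,
    same digest out, for every user on every domain."""
    return hashlib.sha256(password.encode("utf-16-le")).hexdigest()

# One precomputed dictionary works against every user at once:
table = {unsalted_hash(p): p for p in ["hunter2", "Passw0rd!", "letmein"]}

stolen = unsalted_hash("Passw0rd!")    # e.g. dumped via a service account
print(table.get(stolen))               # instant lookup, no per-user work
```

With a per-user random salt, the attacker would have to rebuild the table for each account instead of reusing one dictionary.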
fsck! Ars Centurion
Having gone through the Ivanti ordeal as well, I can say AD integration isn't to be taken lightly. From a recovery standpoint, you are now not only looking at VPN remediation but also your entire AD... //
Focher Ars Scholae Palatinae
KingKrayola said:
We're not using a PAN firewall, nor are we a blue-chip company. Does using RADIUS for VPN auth provide a level of protection vs direct AD access, or is it just a case of choosing one's poison?
That depends. RADIUS has a fully configurable authentication mechanism, but if you’re using a flavor of Active Directory then you’re subject to much of the same. Why certificates aren’t a required layer in environments continues to surprise me. I’m not suggesting other laypersons should have it but even I use it on my own network so it’s definitely manageable. //
pnellesen Ars Scholae Palatinae
This kind of news never comes out on a Monday morning, does it? //