The Colorado Secretary of State’s Office posted passwords to statewide voting systems online for anyone to access. The Colorado Republican Party, which uncovered the security breach, is seeking accountability.
The office of Democrat Secretary of State Jena Griswold, who tried but failed to kick former President Donald Trump off the ballot, posted an Excel file online with a “hidden” page of 600 passwords to voting systems in every county but one, according to an email the Colorado GOP sent on Tuesday. Anyone could “unhide” the page and view the passwords.
On Wednesday, the Colorado GOP said it is seeking “legal relief in the courts” and calling on state lawmakers for an emergency audit, saying Griswold engaged in a “cover-up.” Colorado voting is already underway, according to the secretary’s website, with more than 1.27 million votes already cast.
“This does not pose an immediate security threat to Colorado’s elections, nor will it impact how ballots are counted,” Griswold’s office claimed in a press release. //
Just last week, Griswold held a press conference about voter fraud in Mesa County, according to KUSA. There, she claimed there was “no reason to believe that there are any security breaches or compromises in the state of Colorado.”
The Internet Archive was breached again, this time on their Zendesk email support platform after repeated warnings that threat actors stole exposed GitLab authentication tokens.
In the olden days of five years ago, it used to take months for threat actors and cybercriminals to start taking advantage of a newly discovered exploit, but that window has shrunk to several days.
Google's Mandiant threat hunters released a report on 2023 time-to-exploit trends and found that from 2022 to 2023 the average observed time to exploit (TTE) shrank from 32 days to just five, meaning threat actors are moving incredibly quickly nowadays. That drop wasn't gradual, either: from 2018 to 2019, Mandiant said, it was around 63 days, which dropped to 44 in 2021 before lowering to 32 in 2022.
Since early September, Cloudflare's DDoS protection systems have been combating a month-long campaign of hyper-volumetric L3/4 DDoS attacks. Cloudflare’s defenses mitigated over one hundred hyper-volumetric L3/4 DDoS attacks throughout the month, with many exceeding 2 billion packets per second (Bpps) and 3 terabits per second (Tbps). The largest attack peaked at 3.8 Tbps — the largest ever disclosed publicly by any organization. Detection and mitigation were fully autonomous. Two separate attack events targeted the same Cloudflare customer and were mitigated autonomously.
The Wall Street Journal is reporting that Chinese hackers (Salt Typhoon) penetrated the networks of US broadband providers, and might have accessed the backdoors that the federal government uses to execute court-authorized wiretap requests. Those backdoors have been mandated by law—CALEA—since 1994.
It’s a weird story. The first line of the article is: “A cyberattack tied to the Chinese government penetrated the networks of a swath of U.S. broadband providers.” This implies that the attack wasn’t against the broadband providers directly, but against one of the intermediary companies that sit between the government CALEA requests and the broadband providers.
For years, the security community has pushed back against these backdoors, pointing out that the technical capability cannot differentiate between good guys and bad guys. And here is one more example of a backdoor access mechanism being targeted by the “wrong” eavesdroppers. //
Clive Robinson • October 8, 2024 12:34 PM
Funny in a sad way but I used CALEA as an example of a bad idea put into legislation just a short time back.
The thing that most do not realise is that the actual “back door” does not need to be present, just the hooks for it in the system.
I doubt many remember back the twenty years to the Greek Olympics, but the main cellphone provider, Vodafone, did not have the CALEA software installed in its equipment. But because the switches had it as a paid-for option, the low-level hooks etc. were in place in them.
The CIA/NSA used “the games” as an excuse to “check security”, and in the process a backdoor was dropped onto the hooks, and more than a hundred senior Greek Government individuals had their phones put under surveillance, as well as some of their families and Arab businessmen.
For reasons that are not clear, though incompetence by a CIA officer was indicated, the backdoor was found. As an enquiry got under way and started to home in on events, a phone company employee was found dead and was blamed. Initially claimed to be a suicide, his death was later found to be murder, with fingers pointed at the US.
The point everyone should remember is that when designing communications systems, you must design them so that backdoors are not only not possible, but so that any indicative behaviour will get flagged up quickly.
Otherwise on the sensible view expressed in Claude Shannon’s pithy maxim of,
“The enemy knows the system”[1],
the enemy will try to build an illicit backdoor in if you give them any crack to exploit.
Such “defensive engineering” to stop it is not something the vast majority of software and other systems developers understand, and it’s long overdue for ICT as an industry to “get its ‘sand’ together” on the matter.
Whilst E2EE when properly done –and it’s mostly not– can protect the “message contents” it does not protect much of anything else about the communications. That is the actual traffic meta-data and meta-meta-data allows not just “Traffic Analysis” but other forms of analysis and correlation by which information can be reasoned.
[1] Actually a rewording of Dutch Prof. Auguste Kerckhoffs’s 2nd principle from the early 1880s. //
Who? • October 8, 2024 12:40 PM
NOBUS at its best.
I hope some day one of these mandated-by-law backdoors will be used to make a truly destructive attack against U.S. critical infrastructure, so they start taking cybersecurity seriously and radically change their minds with relation to government backdoors.
I am sorry for being so harsh, but weakening computer and network (well… both are the same, as the old Sun Microsystems slogan said, right?) security has nothing to do with cybersecurity. A secure computer is a secure device, secure against adversaries and secure against us too. I will say more: if the NSA finds a vulnerability in a software project developed outside the United States, they should communicate the vulnerability to the developers of that software project too, at least if that software is used in the United States.
No one should play in the cybersecurity field by weakening the security of computer systems, at least not if they play in the “good guys” team.
Well, take this event as a warning note. I am not able to read an article behind a paywall, so I am unsure about what this attack means, but hope it will not be too difficult to fix. And, no, the fix is not changing the backdoor to a different one. The only acceptable fix is closing the backdoor forever.
Many of the cybercriminals in this community have stolen tens of millions of dollars worth of cryptocurrency, and can easily afford to bribe police officers. KrebsOnSecurity would expect to see more of this in the future as young, crypto-rich cybercriminals seek to corrupt people in authority to their advantage.
NIST Recommends Some Common-Sense Password Rules
NIST’s second draft of its “SP 800-63-4”—its digital identity guidelines—finally contains some really good rules about passwords:
The following requirements apply to passwords:
- Verifiers and CSPs SHALL require passwords to be a minimum of eight characters in length and SHOULD require passwords to be a minimum of 15 characters in length.
- Verifiers and CSPs SHOULD permit a maximum password length of at least 64 characters.
- Verifiers and CSPs SHOULD accept all printing ASCII [RFC20] characters and the space character in passwords.
- Verifiers and CSPs SHOULD accept Unicode [ISO/IEC 10646] characters in passwords. Each Unicode code point SHALL be counted as a single character when evaluating password length.
- Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.
- Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
- Verifiers and CSPs SHALL NOT permit the subscriber to store a hint that is accessible to an unauthenticated claimant.
- Verifiers and CSPs SHALL NOT prompt subscribers to use knowledge-based authentication (KBA) (e.g., “What was the name of your first pet?”) or security questions when choosing passwords.
- Verifiers SHALL verify the entire submitted password (i.e., not truncate it).
Hooray.
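The rules above are simple enough to sketch as a verifier-side check. Here is a minimal, hypothetical Python sketch of those requirements; the constant names are mine, and the 64-character cap is just the lowest maximum the draft permits (a verifier may allow longer):

```python
# Minimal sketch of a NIST-SP-800-63-4-draft-style password check.
# Constant names are illustrative; the draft itself only sets the bounds.
MIN_LENGTH = 8    # SHALL: at least 8 characters (SHOULD recommend 15+)
MAX_LENGTH = 64   # SHOULD: permit a maximum of at least 64 characters

def check_password(password: str) -> tuple[bool, str]:
    """Return (accepted, reason). Python's len() counts Unicode code
    points, matching the draft's counting rule. The whole password is
    checked (no truncation), and deliberately NO composition rules
    (mixed case, digits, symbols) are imposed, per the draft."""
    length = len(password)
    if length < MIN_LENGTH:
        return False, f"too short: {length} < {MIN_LENGTH}"
    if length > MAX_LENGTH:
        return False, f"too long: {length} > {MAX_LENGTH}"
    return True, "ok"
```

Note what is absent: no "must contain a digit and a symbol" logic, and no periodic-expiry flag, since the draft forbids both.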
Once we engineered a selective shutdown switch into the Internet, and implemented a way to do what Internet engineers have spent decades making sure never happens, we would have created an enormous security vulnerability. We would make the job of any would-be terrorist intent on bringing down the Internet much easier.
Computer and network security is hard, and every Internet system we’ve ever created has security vulnerabilities. It would be folly to think this one wouldn’t as well. And given how unlikely the risk is, any actual shutdown would be far more likely to be a result of an unfortunate error or a malicious hacker than of a presidential order.
But the main problem with an Internet kill switch is that it’s too coarse a hammer.
Yes, the bad guys use the Internet to communicate, and they can use it to attack us. But the good guys use it, too, and the good guys far outnumber the bad guys.
Shutting the Internet down, either the whole thing or just a part of it, even in the face of a foreign military attack would do far more damage than it could possibly prevent. And it would hurt others whom we don’t want to hurt.
For years we’ve been bombarded with scare stories about terrorists wanting to shut the Internet down. They’re mostly fairy tales, but they’re scary precisely because the Internet is so critical to so many things.
Why would we want to terrorize our own population by doing exactly what we don’t want anyone else to do? And a national emergency is precisely the worst time to do it.
Just implementing the capability would be very expensive; I would rather see that money going toward securing our nation’s critical infrastructure from attack.
FlyCASS essentially offers FAR121 and FAR135 airlines a way to manage KCM and CASS requests without having to develop their own infrastructure. It pitches itself as a service requiring zero upfront cost to airlines that can be fully set up in 24 hours, with no technical staff required.
The researchers note that each airline has its own login page, which is exposed to the internet. According to the research, these login pages could be bypassed using a simple SQL injection.
"With only a login page exposed, we thought we had hit a dead end," Carroll said in his writeup. "Just to be sure though, we tried a single quote in the username as a SQL injection test, and immediately received a MySQL error.
"This was a very bad sign, as it seemed the username was directly interpolated into the login SQL query. Sure enough, we had discovered SQL injection and were able to use sqlmap to confirm the issue. Using the username of ' or '1'='1 and password of ') OR MD5('1')=MD5('1, we were able to login to FlyCASS as an administrator of Air Transport International!" //
When it came to disclosing the findings, it seems the US authorities didn't want this coming out, if the researchers' account is anything to go by. Carroll says the DHS completely ignored all attempts to disclose the findings in a coordinated way.
He also claimed the TSA "issued dangerously incorrect statements about the vulnerability, denying what we had discovered." //
"After we informed the TSA of this, they deleted the section of their website that mentions manually entering an employee ID, and did not respond to our correction. We have confirmed that the interface used by TSOs still allows manual input of employee IDs."
This blog is reserved for more serious things, and ordinarily I wouldn’t spend time on questions like the above. But much as I’d like to spend my time writing about exciting topics, sometimes the world requires a bit of what Brad Delong calls “Intellectual Garbage Pickup,” namely: correcting wrong, or mostly-wrong ideas that spread unchecked across the Internet.
This post is inspired by the recent and concerning news that Telegram’s CEO Pavel Durov has been arrested by French authorities for its failure to sufficiently moderate content. While I don’t know the details, the use of criminal charges to coerce social media companies is a pretty worrying escalation, and I hope there’s more to the story.
But this arrest is not what I want to talk about today.
What I do want to talk about is one specific detail of the reporting. Specifically: the fact that nearly every news report about the arrest refers to Telegram as an “encrypted messaging app.”
This phrasing drives me nuts because in a very limited technical sense it’s not wrong. Yet in every sense that matters, it fundamentally misrepresents what Telegram is and how it works in practice. And this misrepresentation is bad for both journalists and particularly for Telegram’s users, many of whom could be badly hurt as a result.
Questions raised as one of the world's largest PC makers joins America's critical defense team
NextDNS protects you from all kinds of security threats, blocks ads and trackers on websites and in apps and provides a safe and supervised Internet for kids — on all devices and on all networks.
As Phil Root, the deputy director of the Defense Sciences Office at DARPA, recounted to Scharre, “A tank looks like a tank, even when it’s moving. A human when walking looks different than a human standing. A human with a weapon looks different.”
In order to train the artificial intelligence, it needed data in the form of a squad of Marines spending six days walking around in front of it. On the seventh day, though, it was time to put the machine to the test.
“If any Marines could get all the way in and touch this robot without being detected, they would win. I wanted to see, game on, what would happen,” said Root in the book. //
The Marines, being Marines, found several ways to bollix the AI and achieved a 100 percent success rate.
Two Marines, according to the book, somersaulted for 300 meters to approach the sensor. Another pair hid under a cardboard box.
“You could hear them giggling the whole time,” said Root in the book.
One Marine stripped a fir tree and held it in front of him as he approached the sensor. In the end, while the artificial intelligence knew how to identify a person walking, that was pretty much all it knew because that was all it had been modeled to detect. //
The moral of the story? Never bet against Marines, soldiers, or military folks in general. The American military rank-and-file has proven itself more creative than any other military in history. Whether that creativity is focused on finding and deleting bad guys or finding ways to screw with an AI and the eggheads who programmed it, my money's on the troops.
So if you have some service running on your macOS or Linux workstation on port 11223, and you assume no one can reach it because it's behind your firewall, and that your big-name browser blocks outside requests to localhost, guess again because that browser will route a 0.0.0.0:11223 request by a malicious page you're visiting to your service.
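The underlying OS quirk is easy to demonstrate outside a browser: on macOS and Linux, connecting to 0.0.0.0 reaches services listening on loopback. A small sketch (the port is OS-assigned here rather than the article's 11223):

```python
import socket
import threading

def serve_once(server: socket.socket) -> None:
    """Accept one connection and reply, like a local-only service would."""
    conn, _ = server.accept()
    conn.sendall(b"hello from the 'firewalled' local service")
    conn.close()

# A service bound strictly to loopback, "unreachable from outside".
server = socket.socket()
server.bind(("127.0.0.1", 0))   # loopback only, OS-assigned port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# A browser request to http://0.0.0.0:<port>/ boils down to this same
# connect call: on macOS and Linux, 0.0.0.0 means "this host", so the
# loopback-only service answers.
client = socket.create_connection(("0.0.0.0", port), timeout=5)
reply = client.recv(1024)
client.close()
```

This is why blocking `localhost` and `127.0.0.1` in the browser, but not `0.0.0.0`, left the hole open.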
It’s supposed to make the title transfer a breeze and help Californians avoid those tedious trips to the DMV.
Users will soon be able to claim their digital titles via the DMV’s application and track and manage them without going to the office, according to an Avalanche blog post. The time to transfer vehicle titles drops from two weeks via the traditional process to a few minutes using blockchain rails on the backend, a DMV spokesperson said in an email. //
However, given the recent spate of Microsoft outages and other hacking reports, I am a bit nervous about digitizing without serious hard copy backups. Given how expensive cars have become and how critical having one is to people’s lives and livelihoods, extreme caution should be used before proceeding.
The unintended consequences of this move could be devastating if there are significant issues with the system.
It is also disturbing to note that this move is part of Governor Gavin Newsom’s plans to have even more control over our lives… under the banner of protections. //
What about people who don't have smartphones, or computers, or Internet? What happens when there is actual fraud? How do you unwind that? Do people still get paper backup copies of titles?
Potential problems can arise when a domain’s DNS records are “lame,” meaning the authoritative name server does not have enough information about the domain and can’t resolve queries to find it. A domain can become lame in a variety of ways, such as when it is not assigned an Internet address, or because the name servers in the domain’s authoritative record are misconfigured or missing.
The reason lame domains are problematic is that a number of Web hosting and DNS providers allow users to claim control over a domain without accessing the true owner’s account at their DNS provider or registrar. //
In the 2019 campaign, the spammers created accounts on GoDaddy and were able to take over vulnerable domains simply by registering a free account at GoDaddy and being assigned the same DNS servers as the hijacked domain. //
How does one know whether a DNS provider is exploitable? There is a frequently updated list published on GitHub called “Can I take over DNS,” which has been documenting exploitability by DNS provider over the past several years. The list includes examples for each of the named DNS providers.
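The risk pattern described above can be modeled in a few lines. This is a hypothetical offline sketch, not a scanner: the provider name and the "claimable" set are invented, and a real check would query live NS records and the provider's signup flow.

```python
# Sketch of the Sitting Ducks / lame-delegation hijack condition:
# a domain is at risk when its delegation points at a provider whose
# shared name servers let any account claim an unhosted zone, and no
# account at that provider actually serves the zone.
# Provider names here are illustrative, not real data.
CLAIMABLE_PROVIDERS = {"example-dns-host.net"}

def hijack_risk(delegated_ns: list[str], zone_hosted: bool) -> bool:
    """True if the delegation is lame at a claimable provider."""
    if zone_hosted:
        # The true owner's account answers for the zone: not lame.
        return False
    return any(ns.endswith(p)
               for ns in delegated_ns
               for p in CLAIMABLE_PROVIDERS)
```

This mirrors the GoDaddy campaign: the hijacked domains were lame (no account hosted the zone) but delegated to shared servers an attacker's free account could be assigned.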
When running in kernel mode rather than user mode, security software has full access to a system's hardware and software, which makes it more powerful and flexible; this also means that a bad update like CrowdStrike's can cause a lot more problems.
Recent versions of macOS have deprecated third-party kernel extensions for exactly this reason, one explanation for why Macs weren't taken down by the CrowdStrike update. But past efforts by Microsoft to lock third-party security companies out of the Windows kernel—most recently in the Windows Vista era—have been met with pushback from European Commission regulators. That level of skepticism is warranted, given Microsoft's past (and continuing) record of using Windows' market position to push its own products and services. Any present-day attempt to restrict third-party vendors' access to the Windows kernel would be likely to draw similar scrutiny. //
For context, analytics company Parametrix Insurance estimated the cost of the outage to Fortune 500 companies somewhere in the realm of $5.4 billion.
Re: Yes and no
OK, here's the situation.
you fork an upstream repo, your fork is private
you commit something there that should not see daylight (keys to the Lamborghini or whatever)
you delete that commit to hide your sins
and now that commit is apparently still easily accessible from upstream. //
Re: Yes and no
Also - as far as I can see it's not actually possible to create private native GitHub forks of public repos in GitHub
The example they cite is when you have a private repo that will eventually become public, fork it to make a permanently-private fork, and then later make the original repo public. Anything committed to the still-private fork up until the point the first repo is made public can be accessed from the now-public repo. (As long as you know the commit hashes, that is - but unfortunately they're easily discoverable.) //
The issue is that people are mentally modelling forks as "that's my copy of the repo, completely separate from the original" whereas in reality the fork is just a different interface to the same pool of blobs. Furthermore, while you wouldn't be able to access commits from another fork in the same pool of blobs unless you know the commit hash, GitHub makes those commit hashes discoverable. //
Re: Yes and no
For the people downvoting me - you are aware how quickly AWS credentials accidentally exposed on GitHub are found and abused by attackers? Honeypot tests suggest 1 minute.
https://www.comparitech.com/blog/information-security/github-honeypot/
Note that at no point in the "what to do if you've exposed credentials" section does it say "delete the public repo in the hope that this will alter the past". Magical thinking.
Having played around with this on GitHub, I will say that the message on trying to delete a repo isn't explicit enough about the unexpected (if documented) behaviour. It really ought to have a disclaimer that says "If you're trying to delete commits you wish you hadn't pushed everywhere, this won't achieve it", and a link to a page describing what will actually help. //
What happens in Repo Stays In Repo
"It's reasonable to expect that deleting a commit tree that nobody else has yet accessed will prevent it from being accessed in future."
No it isn't. It's supposed to be a history. In a code repo, the ability to permanently delete past changeset data should be considered a bug or design flaw. The inability to lose history is the whole point.
Source code has no right to be forgotten, when it's in an SCM, because the point of the SCM is to remember.
The catastrophe is yet another reminder of how brittle global internet infrastructure is. It’s complex, deeply interconnected, and filled with single points of failure. As we experienced last week, a single problem in a small piece of software can take large swaths of the internet and global economy offline.
The brittleness of modern society isn’t confined to tech. We can see it in many parts of our infrastructure, from food to electricity, from finance to transportation. This is often a result of globalization and consolidation, but not always. In information technology, brittleness also results from the fact that hundreds of companies, none of which you’ve heard of, each perform a small but essential role in keeping the internet running. CrowdStrike is one of those companies.
This brittleness is a result of market incentives. In enterprise computing—as opposed to personal computing—a company that provides computing infrastructure to enterprise networks is incentivized to be as integral as possible, to have as deep access into their customers’ networks as possible, and to run as leanly as possible.
Redundancies are unprofitable. Being slow and careful is unprofitable. Being less embedded in and less essential and having less access to the customers’ networks and machines is unprofitable—at least in the short term, by which these companies are measured. This is true for companies like CrowdStrike. It’s also true for CrowdStrike’s customers, who also didn’t have resilience, redundancy, or backup systems in place for failures such as this because they are also an expense that affects short-term profitability.
But brittleness is profitable only when everything is working. When a brittle system fails, it fails badly. The cost of failure to a company like CrowdStrike is a fraction of the cost to the global economy. And there will be a next CrowdStrike, and one after that. The market rewards short-term profit-maximizing systems, and doesn’t sufficiently penalize such companies for the impact their mistakes can have. (Stock prices depress only temporarily. Regulatory penalties are minor. Class-action lawsuits settle. Insurance blunts financial losses.) It’s not even clear that the information technology industry could exist in its current form if it had to take into account all the risks such brittleness causes. //
Imagine a house where the drywall, flooring, fireplace, and light fixtures are all made by companies that need continuous access and whose failures would cause the house to collapse. You’d never set foot in such a structure, yet that’s how software systems are built. It’s not that 100 percent of the system relies on each company all the time, but 100 percent of the system can fail if any one of them fails. But doing better is expensive and doesn’t immediately contribute to a company’s bottom line. //
This is not something we can dismantle overnight. We have built a society based on complex technology that we’re utterly dependent on, with no reliable way to manage that technology. Compare the internet with ecological systems. Both are complex, but ecological systems have deep complexity rather than just surface complexity. In ecological systems, there are fewer single points of failure: If any one thing fails in a healthy natural ecosystem, there are other things that will take over. That gives them a resilience that our tech systems lack.
We need deep complexity in our technological systems, and that will require changes in the market. Right now, the market incentives in tech are to focus on how things succeed: A company like CrowdStrike provides a key service that checks off required functionality on a compliance checklist, which makes it all about the features that they will deliver when everything is working. That’s exactly backward. We want our technological infrastructure to mimic nature in the way things fail. That will give us deep complexity rather than just surface complexity, and resilience rather than brittleness.
How do we accomplish this? There are examples in the technology world, but they are piecemeal. Netflix is famous for its Chaos Monkey tool, which intentionally causes failures to force the systems (and, really, the engineers) to be more resilient. The incentives don’t line up in the short term: It makes it harder for Netflix engineers to do their jobs and more expensive for them to run their systems. Over years, this kind of testing generates more stable systems. But it requires corporate leadership with foresight and a willingness to spend in the short term for possible long-term benefits.
Last week’s update wouldn’t have been a major failure if CrowdStrike had rolled out this change incrementally: first 1 percent of their users, then 10 percent, then everyone. But that’s much more expensive, because it requires a commitment of engineer time for monitoring, debugging, and iterating. And it can take months to do correctly for complex and mission-critical software. An executive today will look at the market incentives and correctly conclude that it’s better for them to take the chance than to “waste” the time and money.
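The staged-rollout mechanics are simple to sketch. This is a generic illustration (not CrowdStrike's actual deployment system; the function names are mine): hash each host into a stable bucket, then widen the percentage gate over time, so the 1 percent canary population is a strict subset of the 10 percent population.

```python
import hashlib

def rollout_bucket(host_id: str) -> int:
    """Stable bucket in [0, 100) derived from the host identifier.
    Hashing (rather than random assignment) keeps each host's bucket
    fixed across rollout stages."""
    digest = hashlib.sha256(host_id.encode()).hexdigest()
    return int(digest, 16) % 100

def should_update(host_id: str, rollout_percent: int) -> bool:
    """Gate: a host takes the update once the gate covers its bucket.
    Raising rollout_percent from 1 to 10 to 100 only ever adds hosts,
    so problems surface on a small canary population first."""
    return rollout_bucket(host_id) < rollout_percent
```

The monitoring step between stages is where the real engineering cost lives: someone has to watch crash telemetry from the 1 percent cohort before widening the gate.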
We search across these databases: msdn.rg-adguard.net, vlsc.rg-adguard.net, tb.rg-adguard.net, and uup.rg-adguard.net.