M-209 was a lightweight portable pin-and-lug cipher machine, developed at the beginning of World War II by Boris Hagelin of AB Cryptoteknik in Stockholm (Sweden), and manufactured by Smith & Corona in Syracuse (New York, USA). The machine was designated CSP-1500 by the US Navy and is the US military variant of the C-38, which in turn is an improved version of the C-36 and C-37. A compatible motorised version – with keyboard – is known as the BC-38 (later: BC-543). During WWII, the M-209 was known to German cryptanalysts as AM-1 (American Machine #1). According to them, the effort to break it was impractically high.
The cryptographic strength of the machine was reasonable for its time, but it was not perfect. As of early 1943, it was assumed that German codebreakers were able to break an M-209 message in less than 4 hours [1]. Nevertheless, it was considered sufficiently secure for tactical messages, which, due to their nature, would be meaningless after several hours. This is why the M-209 was later also used in the Korean War. The M-209 was succeeded in 1952 by the C-52 and CX-52.
It proved, however, that American cryptologist William Friedman had been right all along. He liked the Hagelin machines and had found them to be theoretically unbreakable, but he knew that they could be set up in such a way that they became weak and vulnerable to cryptanalytic attacks [8]. British and American codebreakers were able to read Hagelin traffic from both enemies and allies.
After the war it became clear that the Germans had been able to read 10% of the American Hagelin traffic: 6% through cryptanalysis and 4% from captured keys. But due to the amount of work involved in breaking it, the delay between intercept and decrypt was usually 7 to 10 days; too long to be useful for tactical messages like the ones sent by the US Army. The Japanese apparently also understood many of the principles of Hagelin exploitation, but rarely broke Hagelin traffic [8].
For high-level messages, the Americans used a rotor machine — SIGABA — which was similar to Enigma, but much more advanced. As far as we know, SIGABA was never compromised.
Cold War
Immediately after WWII, in 1947, the predecessor of the NSA started the development of a cryptanalytic machine named WARLOCK I — also known as AFSAF-D79 and CXNK — that was able to solve the Hagelin C-38/M-209 much faster than hand methods could. The machine became operational in 1951 and was used to read the traffic of many countries that were using M-209 or C-38 machines. The US had 'accidentally' released large batches of M-209 machines onto the surplus market for as little as US$ 15, or even US$ 2 [8]. Many of these were purchased by South American countries.
What price common sense? • June 11, 2024 7:30 AM
@Levi B.
“Those who are not familiar with the term “bit-squatting” should look that up”
Are you sure you want to go down that rabbit hole?
It’s an instance of a general class of problems that are never going to go away.
And why in
“Web servers would usually have error-correcting (ECC) memory, in which case they’re unlikely to create such links themselves.”
The key word is “unlikely” or more formally “low probability”.
Because it’s down to the fundamentals of the universe and the failings of logic and reason as we formally use them. Which in turn is why, since at least the ancient Greeks through to the 20th century, some of those thinking about it in its various guises have gone mad and some have committed suicide.
To understand why, you need to understand why things like “Error Correcting Codes”(ECC) will never be 100% effective, and why deterministic encryption systems, especially stream ciphers, will always be vulnerable.
No matter what you do, all error checking systems have both false positive and false negative results. All you can do is tailor the system to the more probable errors.
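To make the false negative point concrete, here is a toy sketch in Python (my illustration, not any real ECC scheme): a single parity bit catches any odd number of bit flips, but an even number slips straight through.

    # Toy sketch: a single parity bit detects any odd number of bit
    # flips but is blind to an even number of them (a false negative).
    def parity(bits):
        return sum(bits) % 2

    data = [1, 0, 1, 1, 0, 1, 0, 0]
    check = parity(data)                 # stored alongside the data

    one_flip = data[:]
    one_flip[3] ^= 1                     # a single bit flip
    two_flips = data[:]
    two_flips[3] ^= 1
    two_flips[5] ^= 1                    # two bit flips

    print(parity(one_flip) != check)     # True:  error detected
    print(parity(two_flips) != check)    # False: error missed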
But there are other underlying issues: bit flips happen in memory by deterministic processes that apparently happen by chance. Back in the early 1970s, when putting computers into space became a reality, it was known that computers were affected by radiation. Initially it was assumed it had to be of sufficient energy to be ‘ionizing’, but later it was found that any EM radiation, such as that from the antenna of a hand-held two-way radio, would do with low-energy CMOS chips.
This was due to metastability. In practice the logic gates we use are very high gain analog amplifiers that are designed to “crash into the rails”. Some logic, such as ECL, was actually kept linear to get speed advantages, but these days it’s all a bit murky.
The point is, as the level at a simple logic gate input changes, it goes through a transition region where the relationship between the gate input and output is indeterminate. Thus an inverter in effect might or might not invert, or might even oscillate, while the input is in the transition zone.
I won’t go into the reasons behind it, but it’s down to two basic issues. Firstly, the universe is full of noise; secondly, it’s full of quantum effects. The two can be difficult to differentiate in even very long term measurements, and engineers tend to lump it all under a first approximation of a Gaussian distribution as “Additive White Gaussian Noise”(AWGN), which has nice properties such as averaging predictably to zero with time and “the root of the mean squared”. However, the universe tends not to play that way when you get up close, so instead “phase noise in a measurement window” is often used, with Allan deviation.
There are things we can not know because they are unpredictable, or beyond our ability to measure, or beyond a deterministic system’s ability to calculate.
Computers only know “natural numbers” or “unsigned integers” within a finite range. Everything else is approximated, or as others would say, “faked”. Between every pair of natural numbers there are other numbers; some can be found as ratios of natural numbers, and others can not. What drove philosophers and mathematicians mad was the realisation, with the likes of “root two” and pi, that there is an infinity of such numbers we can never know. Another issue was the spaces caused by integer multiplication: the smaller the integers, the smaller the spaces between the multiples. Eventually it was realised that there was an advantage to this, in that it scaled. The result in computers is floating point numbers. They work well for many things, but not for the addition and subtraction of small values with large values.
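You can see the addition problem directly in Python (a worked illustration): IEEE-754 doubles near 1e16 are spaced two apart, so adding 1.0 changes nothing at all.

    # IEEE-754 doubles space representable values further apart as the
    # magnitude grows, so a small addend can be absorbed entirely.
    big, small = 1e16, 1.0
    print(big + small == big)         # True: the 1.0 is absorbed

    # The same effect makes summation order matter:
    print(sum([1e16, 1.0, -1e16]))    # 0.0: the 1.0 was lost
    print(sum([1e16, -1e16, 1.0]))    # 1.0: reordered, it survives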
As has been mentioned, LLMs are in reality no different from “Digital Signal Processing”(DSP) systems in their fundamental algorithms, one of which is “Multiply and ADd”(MAD) using integers. These have issues in that values disappear or can not be calculated. With continuous signals they can be integrated in with little distortion. In LLMs they can cause errors that are part of what has been called “hallucinations”. That is where something with meaning to a human, such as the Pokémon-derived name “SolidGoldMagikarp”, got mapped to an entirely unrelated word, “distribute”; thus mayhem resulted on GPT-3.5, and much hilarity once it became widely known.
Session is an end-to-end encrypted messenger that minimises sensitive metadata, designed and built for people who want absolute privacy and freedom from any form of surveillance.
Md5Checker is a free, fast, lightweight and easy-to-use tool to manage, calculate and verify the MD5 checksums of multiple files/folders:
- Calculate and display MD5 checksum of multiple files at one time.
- Use MD5 checksums to quickly verify whether files have been changed (see the sketch below this list).
- Load, save, add, remove and update MD5 checksum conveniently.
- It is about 300 KB and does not require any installation (portable).
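A minimal sketch of what such a tool does under the hood, using only Python's standard hashlib (an illustration, not Md5Checker's own code):

    # Compute and verify MD5 checksums of the files in a folder.
    import hashlib
    from pathlib import Path

    def md5_of(path, chunk=1 << 20):
        h = hashlib.md5()
        with open(path, "rb") as f:
            while block := f.read(chunk):   # read in 1 MiB pieces
                h.update(block)
        return h.hexdigest()

    # Calculate and store checksums for every file...
    sums = {p: md5_of(p) for p in Path(".").iterdir() if p.is_file()}

    # ...and later verify whether any file has been changed.
    for p, digest in sums.items():
        print(p, "OK" if md5_of(p) == digest else "CHANGED")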
mustached-dog Seniorius Lurkius
Interestingly enough, "Jia Tan" is very close to 加蛋 in Mandarin, meaning "to add an egg". Unlikely to be a real name or a coincidence.
choco bo Ars Praetorian
The performance hit is quite substantial, actually. I have no doubt that this thing would have been detected eventually. However, that might have happened months from now, and by then it would have been everywhere already.
But this is a good thing. A very good thing, actually.
There have been discussions about supply chain attacks for years. Decades, actually. We used to call it "poisoning the well" many years ago. But no matter how much we talked about it, it was all theoretical. I mean, people even assumed that compilers had been backdoored many years ago. But no one was going to spend this much effort just to show that it was possible and to make people accept the possibility. So not much was really done about it.
Until now.
Now we are already seeing changes being made to OpenSSH that would not have been possible a few months ago. Native systemd notification integration is already being developed (since the 30th of March), so there is no need for libsystemd linking anymore. It will take some time to get integrated, but it will happen. We are seeing people understand that there is absolutely no need to have binary blobs in source repositories (except in rare cases, of course, but those are going to be audited even more now). Checking source repositories against tarballs has been done before, many times, but obviously it wasn't done well enough or often enough. That will change as well. People being dicks to maintainers are going to get greeted with "go fuck yourself" now, without a second thought. It will be extreme, but it will be safer.

For an eternity I was terrified of compiling software myself, because every time I invoked "./configure ..." I would think "fuck knows what is going on there right now". I did occasionally check scripts, and I would grep for unexpected things, but I was aware I'd never detect a very skilled attacker like this one. Now there is going to be much more checking of autoconf/make/CMake/etc. files in source repos. It won't be easy to detect things, but it will be easier. More eyes will be put on sources.

For example, I am going to pick a random smaller project and just read the commit history, look for oddities, etc. Not because I expect to find something, but because I want to see what else should be looked at. Eventually, I might end up with a toolset that might help speed this process up. So there will be at least one more set of eyes looking at sources. I imagine that companies and organizations with more resources are going to put tons of effort into automating all this. So yeah, the xz backdoor is actually a good thing, in a very bizarre way.
Also, I can't hunt down all the references at the moment, but I believe it was a certificate (not the SSH key) that was used as the vector of attack, because certs are checked early and no configuration option will disable that check, while that wouldn't be the case with keys. A change to OpenSSH has already been suggested, so OpenSSH will only get more secure because of this, and one less vector of attack is now available.
The amount of skill and time/effort invested in this is mind-blowing. I don't think people outside security really comprehend the skill and time involved here; this was an insanely well-executed attack. My first thought was "This had to be TURLA", because it was insanely smart and whoever did this had lots of patience. This does not (and will not) happen often.
So yeah, we were incredibly lucky that a Postgres developer caught it early.
However, it is mind-blowing how many times security incidents have been detected by looking at CPU/RAM usage on systems; it is really no surprise that this is how the xz backdoor got detected.
Clive Robinson • March 28, 2024 6:04 AM
@ OldGuy, ALL,
Re : Chain of history
How we get from your,
“Then boss forgot his password, didn’t want to pay to get it unlocked, and turned me loose on it. Turned out their security consisted of XOR’ing every byte written to disk with the same hardcoded 8-bit value.”
To,
https://www.cnet.com/news/privacy/judge-orders-halt-to-defcon-speech-on-subway-card-hacking/
And how history is being rewritten by AI agents etc.
Your comment brings back a memory from nearly a quarter of a century ago, of ElcomSoft’s Dmitry Sklyarov being arrested and, as it later turned out, illegally detained and coerced by the FBI on behalf of Adobe Systems, over the P155 P00r security in their e-book reader, which used what sounds like exactly the same encryption system,
“Dmitry Sklyarov, the 27 year old Russian programmer at the center of this case, was released from U.S. custody and allowed to return to his home in Russia on December 13, 2001”
https://www.eff.org/cases/us-v-elcomsoft-sklyarov
Interestingly, searching around shows that, slowly, bit by bit, the write-ups on,
1, What Dmitry had presented at Defcon-9 about the truly bad state of e-book software.
2, The fact he was arrested at the behest of Adobe for embarrassing them publicly about the very poor security in their e-book system.
3, The fact it was even Adobe Systems or their product
4, The unlawful behaviour of US authorities
5, The names of FBI and DoJ people involved
6, The fact Dmitry was a PhD researcher.
7, The fact a jury found both Dmitry and ElcomSoft entirely innocent of all charges brought against them.
Are getting “deleted from history” or made difficult to find via the likes of DuckDuckGo and Microsoft AI-based search engines…
The case was quite famous at the time, as it showed the FBI was not just “over reaching” but actively trying to crush legitimate academic research. Even the usually non-political and non-feather-ruffling “Nature” made comment,
https://www.nature.com/articles/35086729
And how speaking “truth unto power” can have consequences,
https://www.linux.com/news/sklyarovs-defcon-presentation-online-supporters-reputation-bonfire/
Much of which is what got repeated by the Massachusetts government against the three students and the RFID “CharlieCard”.
Clive Robinson • March 28, 2024 6:41 AM
@ OldGuy, ALL,
I forgot to add the all-important,
https://en.citizendium.org/wiki/Snake_oil_(cryptography)
Which tells you,
‘One company advertised “the only software in the universe that makes your information virtually 100% burglarproof!”; their actual encryption, according to Sklyarov, was “XOR-ing each byte with every byte of the string “encrypted”, which is the same as XOR with constant byte”. Another used ROT13 encryption, another used the same fixed key for all documents, and another stored everything needed to calculate the key in the document header.’
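That equivalence is easy to demonstrate, as XOR is associative and self-cancelling; a small Python sketch (my illustration):

    # XOR-ing a byte with every byte of "encrypted" in turn collapses
    # to XOR with a single constant byte.
    from functools import reduce

    key = b"encrypted"
    constant = reduce(lambda a, b: a ^ b, key)   # fold the "key" to one byte

    plain = b"attack at dawn"
    one_pass = bytes(reduce(lambda a, k: a ^ k, key, c) for c in plain)
    shortcut = bytes(c ^ constant for c in plain)

    print(one_pass == shortcut)   # True: nine XORs equal one XOR
    print(hex(constant))          # the entire "security" is this byte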
You can see why your comment triggered my ancient memory 😉
In December 2013, a curator and archaeologist purchased an antique silk dress with an unusual feature: a hidden pocket that held two sheets of paper with mysterious coded text written on them. People have been trying to crack the code ever since, and someone finally succeeded: University of Manitoba data analyst Wayne Chan. He discovered that the text is actually coded telegraph messages describing the weather, as used by the US Army and (later) the Weather Bureau. Chan outlined all the details of his decryption in a paper published in the journal Cryptologia.
“When I first thought I cracked it, I did feel really excited,” Chan told the New York Times. “It is probably one of the most complex telegraphic codes that I’ve ever seen.”
Today we celebrate 80 years of Colossus, the code-breaking computer that played a pivotal role in WWII.
Today we have released a series of rare and never-before-seen images of Colossus, in celebration of the 80th anniversary of the code-breaking computer that played a pivotal role in the Second World War effort.
The Colossus computer was created during the Second World War to decipher critical strategic messages between the most senior German Generals in occupied Europe, but its existence was only revealed in the early 2000s after six decades of secrecy.
On Thursday, the UK's Government Communications Headquarters (GCHQ) announced the release of previously unseen images and documents related to Colossus, one of the first digital computers. The release marks the 80th anniversary of the code-breaking machines that significantly aided the Allied forces during World War II. While some members of the public knew of the computers earlier, the UK did not formally acknowledge the project's existence until the 2000s.
Colossus was not one computer but a series of computers developed by British scientists between 1943 and 1945. These 2-meter-tall electronic beasts played an instrumental role in breaking the Lorenz cipher, a code used for communications between high-ranking German officials in occupied Europe. The computers were said to have allowed the Allies to "read Hitler's mind," according to The Sydney Morning Herald.
The technology behind Colossus was highly innovative for its time. Tommy Flowers, the engineer behind its construction, used over 2,500 vacuum tubes to create logic gates, a precursor to the semiconductor-based electronic circuits found in modern computers. While 1945's ENIAC was long considered the clear front-runner in digital computing, the revelation of Colossus' earlier existence repositioned it in computing history. (However, it's important to note that ENIAC was a general-purpose computer, and Colossus was not.)
Passkeys are an asymmetric key pair
Each passkey is a pair of two related asymmetric cryptographic keys: very long, random strings of data. While they differ from each other, they have a special relationship: a signature created with one key can be verified with the other (and a message encrypted with one can be decrypted with the other). This feature can be used to verify a user and authenticate them.
The key pair is made up of a private key, which is kept securely on your device inside a password manager that supports passkeys (also called a passkey provider), and a public key, which is stored on the website you are logging into. Your private key never leaves your device, and the password manager keeps it locked behind biometrics, a PIN, or a password. The public key, on the other hand, could be shared with the world, such as in the case of a website data breach, and your security wouldn't be compromised so long as the private key stays safe.
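As a rough sketch of the underlying challenge-response idea (using Ed25519 via the Python 'cryptography' package purely for illustration; real passkeys use the WebAuthn/FIDO2 protocols, which bind the challenge to the website's origin and add much more):

    # Sketch: the site sends a random challenge, the device signs it
    # with the private key, and the site verifies the signature with
    # the stored public key. The private key is never transmitted.
    import os
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Registration: the device creates the pair; the site keeps only
    # the public half.
    private_key = ed25519.Ed25519PrivateKey.generate()  # stays on device
    public_key = private_key.public_key()               # stored by the site

    # Login: the site issues a fresh one-time challenge...
    challenge = os.urandom(32)

    # ...the device proves possession of the private key by signing...
    signature = private_key.sign(challenge)

    # ...and the site verifies (raises InvalidSignature on failure).
    public_key.verify(signature, challenge)
    print("authenticated")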
For storing rarely used secrets that should not be kept on a networked computer, it is convenient to print them on paper. However, ordinary barcodes cannot store much more than 2000 octets of data, and in practice even such small amounts cannot be reliably read by widely used software (e.g. ZXing).
In this note I show a script for splitting small amounts of data across multiple barcodes and generating a printable document. Specifically, this script is limited to less than 7650 alphanumeric characters, such as from the Base-64 alphabet. It can be used for archiving Tarsnap keys, GPG keys, SSH keys, etc.
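The splitting step can be sketched as follows (a simplified Python illustration using the third-party 'qrcode' package and a made-up "n/total:" framing header; the actual script in this note differs):

    # Split a Base64-encoded secret into chunks small enough to scan
    # reliably, and render one QR code per chunk.
    import base64
    import qrcode

    CHUNK = 1200  # alphanumeric characters per barcode, chosen conservatively

    with open("tarsnap.key", "rb") as f:
        text = base64.b64encode(f.read()).decode("ascii")

    chunks = [text[i:i + CHUNK] for i in range(0, len(text), CHUNK)]
    for n, chunk in enumerate(chunks, 1):
        # The "n/total:" prefix lets the printed pages be reordered.
        img = qrcode.make(f"{n}/{len(chunks)}:{chunk}")
        img.save(f"secret-{n:02d}.png")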
On Sun, Apr 04, 2021 at 10:37:47AM -0700, jerry wrote:
> Ideas? Right now, I'm experimenting with printed barcodes.
You might be interested in:
https://lab.whitequark.org/notes/2016-08-24/archiving-cryptographic-secrets-on-paper/
which was written specifically for tarsnap keys.
Cheers,
- Graham Percival
An error as small as a single flipped memory bit is all it takes to expose a private key.
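To see why, here is a toy sketch of the classic RSA-CRT fault attack in Python (my illustration, with tiny 20-bit primes; real keys are 2048+ bits): a signature that comes out correct mod p but corrupted mod q hands anyone who sees it a factor of the modulus.

    # Toy sketch of the RSA-CRT fault attack (needs Python 3.8+ for
    # pow(x, -1, m) modular inverses).
    from math import gcd

    p, q = 999983, 1000003              # toy primes, far too small for real use
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

    m = 123456789                       # padded message representative

    # CRT signing computes the two halves separately (that's the speedup).
    s_p = pow(m, d % (p - 1), p)
    s_q = pow(m, d % (q - 1), q)

    def recombine(sp, sq):
        return sq + q * ((pow(q, -1, p) * (sp - sq)) % p)

    good = recombine(s_p, s_q)
    assert pow(good, e, n) == m         # an unfaulted signature verifies

    # A single bit flip corrupts the mod-q half during signing...
    faulty = recombine(s_p, s_q ^ 1)

    # ...and anyone holding the faulty signature can factor n:
    print(gcd((pow(faulty, e, n) - m) % n, n) == p)   # True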
Martin Blank Ars Tribunus Militum
gromett said:
I have read it twice and am still not entirely clear.
Does this affect OpenSSH? As I read it the answer is no?
As happens often with cryptographic attacks that at least start out as implementation-specific, the likely answer is "it is not currently known to affect other implementations." Attacks only ever get cheaper, never more expensive, and it is difficult to guarantee that other implementations are not vulnerable to variations of this attack.
bobo bobo said:
A link in this article, to a Wikipedia page on Man In the Middle attacks, is labeled as a "malory in the middle" attack. But, um, the Wikipedia page does not use the term "malory". I am confused by use of the word "malory".
Typo? Or am I missing something?
It's a less common use, but it's part of the movement in the IT industry to move away from sensitive terms (e.g., master/slave becoming primary/secondary or similar); "Mallory" is the conventional name for the active attacker in cryptography's Alice-and-Bob cast, so "Mallory-in-the-middle" keeps the MitM shorthand. I've also heard monster-in-the-middle and monkey-in-the-middle, but really, there are no suggested terms that roll off the tongue the way man-in-the-middle does.
FabiusCunctator Ars Scholae Palatinae
The vulnerability occurs when there are errors during the signature generation that takes place when a client and server are establishing a connection. It affects only keys using the RSA cryptographic algorithm, which the researchers found in roughly a third of the SSH signatures they examined.
A good reason to not use RSA keys if possible. I configure all of my ssh setups to use ED25519 keys by default, with a fallback to RSA if ED25519 support is not available.
You can generate an ED25519 key using the standard OpenSSH package by doing:
ssh-keygen -t ed25519
That will generate two files in your ssh key directory ('~/.ssh/' by default): 'id_ed25519' and 'id_ed25519.pub'. The first is your private key (keep it close!), while the second is your public key. Add the latter to the public key file you deploy to remote servers, and (where supported *) your logins will then use the new ED25519 keypair in preference to RSA ones.
* The 'where supported' caveat is important. While many if not most ssh implementations today (including OpenSSH) support ED25519 keys, there are still a few around that don't. Hence, it's a good idea to maintain both ED25519 and RSA keys and include both in your public key lists. If an implementation does not support ED25519, it will simply ignore those keys and use RSA.
Digital Twin by .ART stores information about art objects in a way that provides evidence of provenance, enables real-time provenance tracking, and increases an art object's value. Leveraging the easy-to-understand technology of domain names, using an international standard for describing cultural objects developed by the J. Paul Getty Trust, and offering the option of a blockchain connection, Digital Twin by .ART provides a sophisticated but easy-to-use art object identification tool.