Gather around the fire for another retelling of computer networking history. //
Systems Approach
A few weeks ago I stumbled onto an article titled "Traceroute isn’t real," which was reasonably entertaining while also not quite right in places.
I assume the title is an allusion to birds aren’t real, a well-known satirical conspiracy theory, so perhaps the article should also be read as satire. You don’t need me to critique the piece because that task has been taken on by the tireless contributors of Hacker News, who have, on this occasion, done a pretty good job of criticism.
One line that jumped out at me in the traceroute essay was the claim "it is completely impossible for [MPLS] to satisfy the expectations of traceroute." //
Many of them hated ATM with a passion – this was the height of the nethead vs bellhead wars – and one reason for that was the “cell tax.” ATM imposed a constant overhead (tax) of five header bytes for every 48 bytes of payload (over 10 percent), and this was the best case. A 20-byte IP header, by contrast, could be amortized over 1500-byte or longer packets (less than 2 percent).
Even with average packet sizes around 300 bytes (as they were at that time) IP came out a fair bit more efficient. And the ATM cell tax was in addition to the IP header overhead. ISPs paid a lot for their high-speed links and most were keen to use them efficiently. //
The other field that we quickly decided was essential for the tag header was time-to-live (TTL). It is the nature of distributed routing algorithms that transient loops can happen, and packets stuck in loops consume forwarding resources – potentially even interfering with the updates that will resolve the loop. Since labelled packets (usually) follow the path established by IP routing, a TTL was non-negotiable. I think we might have briefly considered something less than eight bits for TTL – who really needs to count up to 255 hops? – but that idea was discarded.
Route account
Which brings us to traceroute. Unlike the presumed reader of “Traceroute isn’t real,” we knew how traceroute worked, and we considered it an important tool for debugging. There is a very easy way to make traceroute operate over any sort of tunnel, since traceroute depends on packets with short TTLs getting dropped due to TTL expiry. //
ISPs didn’t love the fact that random end users could get a picture of their internal topology by running traceroute. And MPLS (or other tunnelling technologies) gave them a perfect tool for obscuring that topology.
First of all you can make sure that interior routers don’t send ICMP time exceeded messages. But you can also fudge the TTL when a packet exits a tunnel. Rather than copying the outer (MPLS) TTL to the inner (IP) TTL on egress, you can just decrement the IP TTL by one. Hey presto, your tunnel looks (to traceroute) like a single hop, since the IP TTL only decrements by one as packets traverse the tunnel, no matter how many router hops actually exist along the tunnel path. We made this a configurable option in our implementation and allowed for it in RFC 3032. //
John Smith 19 (Gold badge)
Interesting stuff
Sorry but yes I do find this sort of stuff interesting.
Without an understanding of how we got here, how will we know where to go next?
Just a thought. //
doublelayer (Silver badge)
Responding to headlines never helps
This article's author goes to great lengths to argue against another post based on that post's admittedly bad headline. The reason for that is simple: the author has seen the "isn't real" bit of the headline and jumped to bad conclusions. It's not literal, but it's also not satire a la "birds aren't real". The article itself explains what they mean with the frequent claims that traceroute "doesn't exist":
From a network perspective, traceroute does not exist. It's simply an exploit, a trick someone discovered, so it's to be expected that it has no defined qualities. It's just random junk being thrown at a host, hoping that everything along the paths responds in a way that they are explicitly not required to. Is it any surprise that the resulting signal to noise ratio is awful?
I would have phrased this differently, without the hyperbole, because that clearly causes problems. But this response makes no point relevant to the network-administration consequences of a traceroute command that is pretty much only usable by people with a lot of knowledge about the topology of the networks they're tracing through, and plenty more about what the command is actually doing. Where it does respond – specifically on the viability of traceroute in MPLS – it simplifies the problem by pointing out that you can, if you desire, manually implement the TTL field, then goes on to describe the many different ways you can choose not to, ways that everyone chose to use. It is fair to say the author of the anti-traceroute article got it wrong when they claimed that MPLS couldn't support it, but in practice "couldn't support" looks very similar to "doesn't, because they deliberately chose not to". That is similar enough that it doesn't invalidate the author's main point: traceroute is a command that is dangerous in the hands of people who aren't good at understanding why it doesn't give them as much information as they think it does.
ColinPa (Silver badge)
It's the old problem
You get the first version out there, and see how popular it is. If it is popular you can add more widgets to it.
If you spend time up front doing all the things that, with hindsight, you should have done, you would never ship it. Another problem is that you can add all the features you think might be used to the original version, and then find they are not used, or have been superseded.
I was told: get something out there for people to try. When people come hammering on your door, add the things that multiple people want.
the spectacularly refined chap (Silver badge)
Re: It's the old problem
Cf. the OSI network stack, which took so long to standardise that widespread adoption of IP had already filled the void it was intended to occupy.
In some ways that is not ideal: 30+ years on, there is still no standard job submission protocol for IP; OSI had one from the start.
Have you ever wondered how the chips inside your computer work? How they process information and run programs? Are you maybe a bit let down by the low resolution of chip photographs on the web or by complex diagrams that reveal very little about how circuits work? Then you've come to the right place!
The first of our projects is aimed at the classic MOS 6502 processor. //
Load some assembly language and watch the paths light up on the CPU die.
Peter Galbavy
So, 2TB micro SD cards are how much volume? I am not sure what's novel here - or is it the WORM nature of the feat?
ChrisC (Silver badge)
Using the bounding box dimensions for a MicroSD card (15x11x1mm) gives a volume of 0.165 cm3, so with 2TB per card now, that gives 12.1TB/cm3...
..which, quite frankly, is insane. I mean, even being able to shovel 2TB of data onto something the size of a fingernail still blows my mind. But the thought of being able to store 12TB in the space taken up by a sugar cube or a D6 – when it really wasn't all that long ago that storing even 1GB on a 3.5" hard drive was every bit as mind-blowing – really does make me stop and think about just how far we've come in such a short period of time, and what sort of similarly mind-blowing technological advances are yet to come over the next few decades.
FrontierMath's difficult questions remain unpublished so that AI companies can't train against it. //
On Friday, research organization Epoch AI released FrontierMath, a new mathematics benchmark that has been turning heads in the AI world because it contains hundreds of expert-level problems that leading AI models solve less than 2 percent of the time, according to Epoch AI. The benchmark tests AI language models (such as GPT-4o, which powers ChatGPT) against original mathematics problems that typically require hours or days for specialist mathematicians to complete.
FrontierMath's performance results, revealed in a preprint research paper, paint a stark picture of current AI model limitations. Even with access to Python environments for testing and verification, top models like Claude 3.5 Sonnet, GPT-4o, o1-preview, and Gemini 1.5 Pro scored extremely poorly. This contrasts with their high performance on simpler math benchmarks—many models now score above 90 percent on tests like GSM8K and MATH.
The design of FrontierMath differs from many existing AI benchmarks because the problem set remains private and unpublished to prevent data contamination. Many existing AI models are trained on other test problem datasets, allowing the AI models to easily solve the problems and appear more generally capable than they actually are. Many experts cite this as evidence that current large language models (LLMs) are poor generalist learners.
The headline is pretty scary: “China’s Quantum Computer Scientists Crack Military-Grade Encryption.”
No, it’s not true.
This debunking saved me the trouble of writing one. It all seems to have come from this news article, which wasn’t bad but was taken widely out of proportion.
Cryptography is safe, and will be for a long time
Mass storage has come a long way since the introduction of the personal computer. [Tech Time Traveller] has an interesting video about the dawn of PC hard drives focusing on a company called MiniScribe. After a promising start, they lost an IBM contract and fell on hard times.
Apparently, the company was faking inventory to the tune of $15 million because executives feared for their jobs if profits weren’t forthcoming. Once they discovered the incorrect inventory, they not only set out to alter the company’s records to match it, but they also broke into an outside auditing firm’s records to change things there, too.
Senior management hatched a plan to charge off the fake inventory in small amounts to escape the notice of investors and government regulators. But to do that, they needed to be able to explain where the balance of the nonexistent inventory was. So they leased a warehouse to hold the fraud inventory and filled it with bricks. Real bricks, like you use to build a house. Around 26,000 bricks were packaged in boxes, assigned serial numbers, and placed on pallets. Auditors would see product ready to ship, and there were even plans to pretend to ship the bricks to CompuAdd and CalAbco, two customers who had agreed to accept and return the bricks on paper, allowing MiniScribe to absorb the $15 million write-off a little at a time.
Unfortunately, the fictitious excellent financial performance led to an expectation of even better performance in the future which necessitated even further fraud.
About
We are a small manufacturer of RPN calculators based in Switzerland.
Company Address
SwissMicros GmbH
Seestrasse 149
8712 Stäfa
Switzerland CH //
"When you pay too much, you lose a little money – that is all. When you pay too little, you sometimes lose everything, because the thing you bought was incapable of doing the thing it was bought to do. The common law of business balance prohibits paying a little and getting a lot – it can't be done. If you deal with the lowest bidder, it is well to add something for the risk you run, and if you do that you will have enough to pay for something better."
(attributed to John Ruskin, early 20th century)
"Some people say you have to be a little crazy to buy an RPN calculator. Well, in that craziness we see genius and that's who we make the world's greatest programmable RPN calculators ever for."
(inspired by Steve Jobs' "Think Different" campaign, 1997)
"It isn’t equipment that wins the battles; it is the quality and the determination of the people fighting for a cause in which they believe."
(Gene Kranz, Failure Is Not an Option, 2011)
Hardware hacker Dmitry Grinberg recently achieved what might sound impossible: booting Linux on the Intel 4004, the world's first commercial microprocessor. With just 2,300 transistors and an original clock speed of 740 kHz, the 1971 CPU is incredibly primitive by modern standards. And it's slow—it takes about 4.76 days for the Linux kernel to boot.
Initially designed for a Japanese calculator called the Busicom 141-PF, the 4-bit 4004 found limited use in commercial products of the 1970s before being superseded by more powerful Intel chips, such as the 8008 and 8080 that powered early personal computers—and then the 8086 and 8088 that launched the IBM PC era.
If you're skeptical that this feat is possible with a raw 4004, you're right: The 4004 itself is far too limited to run Linux directly. Instead, Grinberg created a solution that is equally impressive: an emulator that runs on the 4004 and emulates a MIPS R3000 processor—the architecture used in the DECstation 2100 workstation that Linux was originally ported to. This emulator, along with minimal hardware emulation, allows a stripped-down Debian Linux to boot to a command prompt.
Intel’s manuals for their x86/x64 processor clearly state that the fsin instruction (calculating the trigonometric sine) has a maximum error, in round-to-nearest mode, of one unit in the last place. This is not true. It’s not even close.
The worst-case error for the fsin instruction for small inputs is actually about 1.37 quintillion units in the last place, leaving fewer than four bits correct. For huge inputs it can be much worse, but I’m going to ignore that.
I was shocked when I discovered this. Both the fsin instruction and Intel’s documentation are hugely inaccurate, and the inaccurate documentation has led to poor decisions being made. //
brucedawson on October 9, 2014 at 10:38 pm
This will affect programmers, who then have to work around the issue so that everyday computer users are not affected. The developers of VC++ and glibc had to write alternate versions, so that's one thing. The inaccuracies could be enough to add up over repeated calls to sin and could lead to errors in flight control software, CAD software, games, various things. It's hard to predict where the undocumented inaccuracy could cause problems.
It likely won’t now because most C runtimes don’t use fsin anymore and because the documentation will now be fixed.
The DM32 is our enhanced classic all-rounder based on the HP 32SII. 171 functions, of which 75 are directly accessible from the keypad. Programmable. Conversions, statistics, fractions, equations, solver and more. The perfect choice for almost everybody. BETA firmware installed, updates will be required.
Anonymous Coward
Don't put it in your pocket
Are we now going to discover that Hezbollah bought a batch of calculators from Brazil some months ago?
Ian Johnston (Silver badge)
Re: Don't put it in your pocket
If they did, it's a bad move which might easily blow up in their faces.
Yet Another Anonymous coward (Silver badge)
Re: Scientific Calculator
Scientific calculators use a body of tested and published algorithms to determine the answer.
Non-scientific calculators believe what they read in the Daily Mail and what someone's sister's best friend's hairdresser's partner saw on Facebook
Andy Non (Silver badge)
Re: Scientific calculator:
1+2x3=7
Daily Mail calculator:
1+2x3=9
For most of us, a calculator might have been superseded by Excel or an app on a phone, yet there remains a die-hard contingent with a passion for the push-button marvels. So the shocking discovery of an apparently rogue HP-12C has sent tremors through the calculator aficionado world.
The HP-12C [PDF] is a remarkably long-lived financial calculator from Hewlett-Packard (HP). It first appeared in 1981 and has continued in production ever since, with just the odd tweak here and there to its hardware. //
A sibling, the HP-12C Platinum, was introduced in 2003, which added to the functionality but retained the gloriously late '70s / early '80s aesthetic of the range. According to The Museum of HP Calculators, "While similar in appearance and features it appears to be a complete reimplementation by an OEM (Kinpo) based on scans of HP manuals provided by the museum." //
"Testing our rogue HP-12c, it returned a result of 331,666.9849, from a true result of 331,667.00669… giving it an accuracy [defined here as the negative logarithm to base 10 of the absolute error] of 7.2, somewhere between the HP-70 of 1974 (1.2!) and the HP-22 of 1975 (9), but far off the 10.6 achieved by the regular HP-12c and 12.2 of the HP-12c Precision." //
Not knowing about the issue at the time, Murray posted his findings on forums dedicated to the calculators – the joy of the World Wide Web is that there is a forum for everything – and after some initial skepticism, members soon weighed in with suggestions. Was this a counterfeit? It didn't look like it. Maybe the firmware was rewritten as a cost-saving exercise? Perhaps...
Some speculated that HP – or perhaps a licensee – was rather hoping that by loading up the channel with versions featuring the original firmware the problem would go away and remain unnoticed.
However, no company should reckon without the sleuthing efforts of Murray and his fellow enthusiasts when things don't seem to be... er.... adding up. ®
Migrate your system to a faster drive using Clonezilla.
Published 1997
Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff
The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. //
In this paper, several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies),
Thursday 2nd April 2020 15:11 GMT
BJC
Millisecond roll-over?
So, what is the probability that the timing for these events is stored as milliseconds in a 32 bit structure?
Re: Millisecond roll-over?
My first thought too, but that rolls over after 49.7 days.
Still, they could have it wrong again.
Re: Millisecond roll-over?
I suspect that it is a millisecond roll over and someone at the FAA picked 51 days instead of 49.7 because they don't understand software any better than Boeing.
Thursday 2nd April 2020 17:05 GMT
the spectacularly refined chap
Re: Millisecond roll-over?
Could well be something like that. The earlier 248-day issue is exactly the duration that older Unix hands will recognise as the 'lbolt issue': a variable holding the number of clock ticks since boot overflows a signed 32-bit int after 248 days, assuming clock ticks at 100Hz, which was usual back then and is still quite common.
See e.g. here. The issue has been known about and the mitigation well documented for at least 30 years. Makes you wonder about the monkeys they have coding this stuff. //
bombastic bob (Silver badge)
Re: Millisecond roll-over?
I've run into that problem (32-bit millisecond timer rollover issues) with microcontrollers, solved by doing the math correctly.
Capturing the tick count:
if((uint32_t)(Ticker() - last_time) >= some_interval)
and
last_time = Ticker(); // for when it crosses the threshold
[ alternately last_time += some_interval when you want it to be more accurate ]
Using a rollover time:
if((int32_t)(Ticker() - schedule_time) >= 0)
and
schedule_time += schedule_interval; // for when it crosses the threshold
(this is how the Linux kernel does its scheduled events internally, as I recall, except it compares against jiffies, which are 1/100 of a second if I remember correctly)
(examples in C of course, the programming lingo of choice of the gods!)
Do the math like this and it should work, as long as you use uint32_t data types for the 'Ticker()' function and for the 'schedule_time' and 'last_time' vars.
If you are an IDIOT and don't do unsigned comparisons "similar to what I just demonstrated", you can predict uptime-related problems at about... 49.71 days [assuming milliseconds].
I think i remember a 'millis()' or similarly named function in VxWorks. It's been over a decade since I've worked with it though. VxWorks itself was pretty robust back then, used in a lot of routers and other devices that "stay on all the time". So its track record is pretty good.
So the most likely scenario is what you suggested - a millisecond timer rolling over (with a 32-bit var storing info) and causing bogus data to accumulate after 49.71 days, which doesn't (for some reason) TRULY manifest itself until about 51 days...
Anyway, good catch.
US air safety bods call it 'potentially catastrophic' if reboot directive not implemented //
The US Federal Aviation Administration has ordered Boeing 787 operators to switch their aircraft off and on every 51 days to prevent what it called "several potentially catastrophic failure scenarios" – including the crashing of onboard network switches.
The airworthiness directive, due to be enforced from later this month, orders airlines to power-cycle their B787s before the aircraft reaches the specified days of continuous power-on operation.
The power cycling is needed to prevent stale data from populating the aircraft's systems, a problem that has occurred on different 787 systems in the past. //
A previous software bug forced airlines to power down their 787s every 248 days for fear electrical generators could shut down in flight.
Airbus suffers from similar issues with its A350, with a relatively recent but since-patched bug forcing power cycles every 149 hours.
Former Microsoft engineer Dave Plummer took a trip down memory lane this week by building a functioning PDP-11 minicomputer from parts found in a tub of hardware.
It's a fun watch, especially for anyone charged with maintaining these devices during their heyday. Unfortunately, Plummer did not place his creation in a period-appropriate case, and one might argue he cheated a bit by using a board containing a Linux computer to present boot devices.
Plummer's build started with a backplane containing slots for a CPU card, a pair of 512 KB RAM cards, and the Linux card – a QBone by the look of it. Also connected to the backplane was power, along with halt and run switches.
The QBone is an interesting card and serves as an example of extending the original hardware rather than fully relying on emulation. ... In Plummer's case, he used it to provide a boot device for his bits-from-a-box PDP-11.
Once connected and with a boot device mounted, Plummer was able to fire up the computer with its mighty megabyte of memory and interact with it as if back in the previous century.
The TMS9900 could have powered the PC revolution. Here’s why it didn’t. //
The utter dominance of these Intel microprocessors goes back to 1978, when IBM chose the 8088 for its first personal computer. Yet that choice was far from obvious. Indeed, some who know the history assert that the Intel 8088 was the worst among several possible 16-bit microprocessors of the day.
It was not. There was a serious alternative that was worse. //
So why aren’t we all using 68K-based computers today?
The answer comes back to being first to market. Intel’s 8088 may have been imperfect but at least it was ready, whereas the Motorola 68K was not. And IBM’s thorough component qualification process required that a manufacturer offer up thousands of “production released” samples of any new part so that IBM could perform life tests and other characterizations. IBM had hundreds of engineers doing quality assurance, but component qualifications take time. In the first half of 1978, Intel already had production-released samples of the 8088. By the end of 1978, Motorola’s 68K was still not quite ready for production release.
And unfortunately for Motorola, the Boca Raton group wanted to bring its new IBM PC to market as quickly as possible. So they had only two fully qualified 16-bit microprocessors to choose from. In a competition between two imperfect chips, Intel’s chip was less imperfect than TI’s.
Not long after Windows PCs and servers at the Australian limb of audit and tax advisory Grant Thornton started BSODing last Friday, senior systems engineer Rob Woltz remembered a small but important fact: When PCs boot, they consider barcode scanners no differently to keyboards.
That knowledge nugget became important as the firm tried to figure out how to respond to the mess CrowdStrike created, which at Grant Thornton Australia threw hundreds of PCs and no fewer than 100 servers into the doomloop that CrowdStrike's shoddy testing software made possible.
All of Grant Thornton's machines were encrypted with Microsoft's BitLocker tool, which meant that recovery upon restart required CrowdStrike's multi-step fix and entry of a 48-character BitLocker key. //
Woltz is pleased that his idea translated into a swift recovery, but also a little regretful he didn't think of using QR codes – they could have encoded sufficient data to automate the entire remediation process.