Intel’s manuals for their x86/x64 processor clearly state that the fsin instruction (calculating the trigonometric sine) has a maximum error, in round-to-nearest mode, of one unit in the last place. This is not true. It’s not even close.
The worst-case error for the fsin instruction for small inputs is actually about 1.37 quintillion units in the last place, leaving fewer than four bits correct. For huge inputs it can be much worse, but I’m going to ignore that.
I was shocked when I discovered this. Both the fsin instruction and Intel’s documentation are hugely inaccurate, and the inaccurate documentation has led to poor decisions being made. //
brucedawson on October 9, 2014 at 10:38 pm
This will affect programmers who then have to work around the issue so that everyday computer users are not affected. The developers of VC++ and glibc had to write alternate versions, so that's one thing. The inaccuracies could be enough to add up over repeated calls to sin and could lead to errors in flight control software, CAD software, games, various things. It's hard to predict where the undocumented inaccuracy could cause problems.
It likely won’t now because most C runtimes don’t use fsin anymore and because the documentation will now be fixed.
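To make "units in the last place" concrete, here is a minimal sketch of how an error can be measured in ULPs. It deliberately uses the portable C library sin/sinf rather than the fsin instruction itself: rounding the argument to single precision near pi produces the same kind of ULP blow-up that fsin suffers from its limited-precision internal reduction by pi, because the true sine there is tiny.

#include <math.h>
#include <stdio.h>

/* Estimate how many representable doubles ("units in the last place")
   separate an approximate result from a more accurate reference. */
static double ulp_error(double approx, double reference)
{
    double ulp = nextafter(fabs(reference), INFINITY) - fabs(reference);
    return fabs(approx - reference) / ulp;
}

int main(void)
{
    double x = 3.141592653589793;       /* the double closest to pi */
    double approx    = sinf((float)x);  /* argument rounded to float first */
    double reference = sin(x);          /* double-precision library result */

    printf("approx    = %.9g\n", approx);
    printf("reference = %.9g\n", reference);
    printf("error     = %.3g ULPs\n", ulp_error(approx, reference));
    return 0;
}

Measured this way, the single-precision result lands an enormous number of double ULPs away from the reference near a zero of sine, which is exactly the kind of gulf the "1 ulp" claim papered over.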
The DM32 is our enhanced classic all-rounder based on the HP 32SII. 171 functions, of which 75 are directly accessible from the keypad. Programmable. Conversions, statistics, fractions, equations, solver and more. The perfect choice for almost everybody. BETA firmware installed, updates will be required.
Anonymous Coward
Don't put it in your pocket
Are we now going to discover that Hezbollah bought a batch of calculators from Brazil some months ago?
Ian Johnston Silver badge
Re: Don't put it in your pocket
If they did, it's a bad move which might easily blow up in their faces.
Yet Another Anonymous coward Silver badge
Re: Scientific Calculator
Scientific calculators use a body of tested and published algorithms to determine the answer.
Non-scientific calculators believe what they read in the Daily Mail and what someone's sister's best friend's hairdresser's partner saw on Facebook
Andy Non Silver badge
Re: Scientific Calculator
Scientific calculator:
1+2x3=7
Daily Mail calculator:
1+2x3=9
For most of us, a calculator might have been superseded by Excel or an app on a phone, yet there remains a die-hard contingent with a passion for the push-button marvels. So the shocking discovery of an apparently rogue HP-12C has sent tremors through the calculator aficionado world.
The HP-12C [PDF] is a remarkably long-lived financial calculator from Hewlett-Packard (HP). It first appeared in 1981 and has continued in production ever since, with just the odd tweak here and there to its hardware. //
A sibling, the HP-12C Platinum, was introduced in 2003, which added to the functionality but retained the gloriously late '70s / early '80s aesthetic of the range. According to The Museum of HP Calculators, "While similar in appearance and features it appears to be a complete reimplementation by an OEM (Kinpo) based on scans of HP manuals provided by the museum." //
"Testing our rogue HP-12c, it returned a result of 331,666.9849, from a true result of 331,667.00669… giving it an accuracy [defined here as the negative logarithm to base 10 of the relative error] of 7.2, somewhere between the HP-70 of 1974 (1.2!) and the HP-22 of 1975 (9), but far off the 10.6 achieved by the regular HP-12c and 12.2 of the HP-12c Precision." //
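For the curious, that accuracy figure can be reproduced from the two numbers in the quote; a minimal sketch, assuming accuracy is the negative base-10 logarithm of the relative error:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double computed = 331666.9849;    /* the rogue HP-12C's answer */
    double truth    = 331667.00669;   /* the true result quoted above */
    double accuracy = -log10(fabs(computed - truth) / fabs(truth));
    printf("accuracy = %.1f\n", accuracy);   /* prints about 7.2 */
    return 0;
}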
Not knowing about the issue at the time, Murray posted his findings on forums dedicated to the calculators – the joy of the World Wide Web is that there is a forum for everything – and after some initial skepticism, members soon weighed in with suggestions. Was this a counterfeit? It didn't look like it. Maybe the firmware was rewritten as a cost-saving exercise? Perhaps...
Some speculated that HP – or perhaps a licensee – was rather hoping that by loading up the channel with versions featuring the original firmware the problem would go away and remain unnoticed.
However, no company should reckon without the sleuthing efforts of Murray and his fellow enthusiasts when things don't seem to be... er.... adding up. ®
Published 1997
Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff
The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. //
In this paper, several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies),
Thursday 2nd April 2020 15:11 GMT
BJC
Millisecond roll-over?
So, what is the probability that the timing for these events is stored as milliseconds in a 32 bit structure?
Re: Millisecond roll-over?
My first thought too, but that rolls over after 49.7 days.
Still, they could have it wrong again.
Re: Millisecond roll-over?
I suspect that it is a millisecond roll over and someone at the FAA picked 51 days instead of 49.7 because they don't understand software any better than Boeing.
Thursday 2nd April 2020 17:05 GMT
the spectacularly refined chap
Re: Millisecond roll-over?
Could well be something like that. The earlier 248-day issue is exactly the duration that older Unix hands will recognise as the 'lbolt' issue: a variable holding the number of clock ticks since boot overflows a signed 32-bit int after 248 days, assuming clock ticks at 100 Hz, which was usual back then and is still quite common.
See e.g. here. The issue has been known about and the mitigation well documented for at least 30 years. Makes you wonder about the monkeys they have coding this stuff. //
bombastic bob Silver badge
Re: Millisecond roll-over?
I've run into that problem (32-bit millisecond timer rollover issues) with microcontrollers, solved by doing the math correctly
capturing the tick count
if((uint32_t)(Ticker() - last_time) >= some_interval)
and
last_time=Ticker(); // for when it crosses the threshold
[ alternately last_time += some_interval when you want it to be more accurate ]
or by using a rollover time:
if((int32_t)(Ticker() - schedule_time) >= 0)
and
schedule_time += schedule_interval (for when it crosses the threshold)
(this is how the Linux kernel does its scheduled events internally, as I recall, except it compares against jiffies, which are 1/100 of a second if I remember correctly)
(examples in C of course, the programming lingo of choice of the gods!)
Do the math like this and it should work, as long as you use uint32_t data types for the 'Ticker()' function and for the 'schedule_time' and 'last_time' vars.
If you are an IDIOT and don't do unsigned comparisons "similar to what I just demonstrated", you can predict uptime-related problems at about... 49.71 days [assuming milliseconds].
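A self-contained sketch of the unsigned-wraparound comparison described above; 'some_interval' and 'last_time' mirror the commenter's placeholder names, while millis_now() and the simulated tick variable are purely hypothetical stand-ins for a real hardware counter:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical 32-bit millisecond tick source; on a real micro this
   would read a hardware counter that wraps every ~49.7 days. */
static uint32_t fake_ticks;
static uint32_t millis_now(void) { return fake_ticks; }

int main(void)
{
    /* Start just below the wrap point to show the comparison surviving it. */
    fake_ticks = 0xFFFFF000u;
    uint32_t last_time = millis_now();
    const uint32_t some_interval = 10000;   /* 10 seconds */

    for (int step = 0; step < 4; step++) {
        fake_ticks += 5000;                 /* 5 simulated seconds pass */
        /* Unsigned subtraction is well defined modulo 2^32, so the
           elapsed time is correct even after fake_ticks has wrapped. */
        if ((uint32_t)(millis_now() - last_time) >= some_interval) {
            printf("interval elapsed at tick 0x%08X\n", (unsigned)millis_now());
            last_time += some_interval;     /* keeps long-term accuracy */
        }
    }
    return 0;
}

The same pattern with the subtraction cast to int32_t gives the "rollover time" variant shown above.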
I think I remember a 'millis()' or similarly named function in VxWorks. It's been over a decade since I've worked with it though. VxWorks itself was pretty robust back then, used in a lot of routers and other devices that "stay on all the time". So its track record is pretty good.
So the most likely scenario is what you suggested - a millisecond timer rolling over (with a 32-bit var storing info) and causing bogus data to accumulate after 49.71 days, which doesn't (for some reason) TRULY manifest itself until about 51 days...
Anyway, good catch.
US air safety bods call it 'potentially catastrophic' if reboot directive not implemented //
The US Federal Aviation Administration has ordered Boeing 787 operators to switch their aircraft off and on every 51 days to prevent what it called "several potentially catastrophic failure scenarios" – including the crashing of onboard network switches.
The airworthiness directive, due to be enforced from later this month, orders airlines to power-cycle their B787s before the aircraft reaches the specified days of continuous power-on operation.
The power cycling is needed to prevent stale data from populating the aircraft's systems, a problem that has occurred on different 787 systems in the past. //
A previous software bug forced airlines to power down their 787s every 248 days for fear electrical generators could shut down in flight.
Airbus suffers from similar issues with its A350, with a relatively recent but since-patched bug forcing power cycles every 149 hours.
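For reference, the arithmetic behind the intervals quoted in these stories is easy to check. In the sketch below the first two lines match the figures discussed in the comments above; tying the A350's 149 hours to a 2^29-millisecond horizon is speculation, not anything Airbus has confirmed.

#include <stdio.h>

int main(void)
{
    double day = 86400.0;   /* seconds per day */

    /* Unsigned 32-bit millisecond counter (the classic 49.7 days). */
    printf("2^32 ms             = %.2f days\n", 4294967296.0 / 1000.0 / day);

    /* Signed 32-bit tick counter at 100 Hz (the Unix 'lbolt' case). */
    printf("2^31 ticks @ 100 Hz = %.2f days\n", 2147483648.0 / 100.0 / day);

    /* Speculative: a 2^29 ms horizon lands close to 149 hours. */
    printf("2^29 ms             = %.1f hours\n", 536870912.0 / 1000.0 / 3600.0);

    return 0;
}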
Former Microsoft engineer Dave Plummer took a trip down memory lane this week by building a functioning PDP-11 minicomputer from parts found in a tub of hardware.
It's a fun watch, especially for anyone charged with maintaining these devices during their heyday. Unfortunately, Plummer did not place his creation in a period-appropriate case, and one might argue he cheated a bit by using a board containing a Linux computer to present boot devices.
Plummer's build started with a backplane containing slots for a CPU card, a pair of 512 KB RAM cards, and the Linux card – a QBone by the look of it. Power was also connected to the backplane, along with some halt and run switches.
The QBone is an interesting card and serves as an example of extending the original hardware rather than fully relying on emulation. ... In Plummer's case, he used it to provide a boot device for his bits-from-a-box PDP-11.
Once connected and with a boot device mounted, Plummer was able to fire up the computer with its mighty megabyte of memory and interact with it as if back in the previous century.
The TMS9900 could have powered the PC revolution. Here’s why it didn’t. //
The utter dominance of these Intel microprocessors goes back to 1978, when IBM chose the 8088 for its first personal computer. Yet that choice was far from obvious. Indeed, some who know the history assert that the Intel 8088 was the worst among several possible 16-bit microprocessors of the day.
It was not. There was a serious alternative that was worse. //
So why aren’t we all using 68K-based computers today?
The answer comes back to being first to market. Intel’s 8088 may have been imperfect but at least it was ready, whereas the Motorola 68K was not. And IBM’s thorough component qualification process required that a manufacturer offer up thousands of “production released” samples of any new part so that IBM could perform life tests and other characterizations. IBM had hundreds of engineers doing quality assurance, but component qualifications take time. In the first half of 1978, Intel already had production-released samples of the 8088. By the end of 1978, Motorola’s 68K was still not quite ready for production release.
And unfortunately for Motorola, the Boca Raton group wanted to bring its new IBM PC to market as quickly as possible. So they had only two fully qualified 16-bit microprocessors to choose from. In a competition between two imperfect chips, Intel’s chip was less imperfect than TI’s.
Not long after Windows PCs and servers at the Australian limb of audit and tax advisory Grant Thornton started BSODing last Friday, senior systems engineer Rob Woltz remembered a small but important fact: When PCs boot, they consider barcode scanners no differently to keyboards.
That knowledge nugget became important as the firm tried to figure out how to respond to the mess CrowdStrike created, which at Grant Thornton Australia threw hundreds of PCs and no fewer than 100 servers into the doomloop that CrowdStrike's shoddy testing software made possible.
All of Grant Thornton's machines were encrypted with Microsoft's BitLocker tool, which meant that recovery upon restart required CrowdStrike's multi-step fix and entry of a 48-character BitLocker key. //
Woltz is pleased that his idea translated into a swift recovery, but also a little regretful he didn't think of using QR codes – they could have encoded sufficient data to automate the entire remediation process.
What price common sense? • June 11, 2024 7:30 AM
@Levi B.
“Those who are not familiar with the term “bit-squatting” should look that up”
Are you sure you want to go down that rabbit hole?
It's an instance of a general class of problems that are never going to go away.
And why in
“Web servers would usually have error-correcting (ECC) memory, in which case they’re unlikely to create such links themselves.”
The key word is “unlikely” or more formally “low probability”.
Because it's down to the fundamentals of the universe and the failings of logic and reason as we formally use them. Which in turn is why, since at least as early as the ancient Greeks through to the 20th century, some of those thinking about it in its various guises have gone mad and some committed suicide.
To understand why, you need to understand why things like “Error Correcting Codes” (ECC) will never be 100% effective, and why deterministic encryption systems, especially stream ciphers, will always be vulnerable. //
No matter what you do, all error checking systems have both false positive and false negative results. All you can do is tailor the system to the more probable errors.
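To make the bit-squatting reference above concrete, a small illustrative sketch (example.com is just a stand-in target): it prints every name that differs from the target by a single flipped bit and still looks like a plausible hostname, which is roughly the set of domains a bit-squatter would register and then wait for memory errors to deliver traffic to.

#include <stdio.h>
#include <string.h>
#include <ctype.h>

int main(void)
{
    const char *target = "example.com";   /* hypothetical target domain */
    size_t len = strlen(target);

    for (size_t i = 0; i < len; i++) {
        for (int bit = 0; bit < 8; bit++) {
            char variant[64];
            strcpy(variant, target);
            /* flip one bit of one character */
            variant[i] = (char)((unsigned char)variant[i] ^ (1u << bit));

            unsigned char c = (unsigned char)variant[i];
            /* keep only variants that are still hostname-ish characters */
            if (isalnum(c) || c == '-')
                printf("%s\n", variant);
        }
    }
    return 0;
}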
But there are other underlying issues: bit flips happen in memory through deterministic processes that appear to happen by chance. Back in the early 1970s, when putting computers into space became a reality, it was known that computers were affected by radiation. Initially it was assumed the radiation had to be energetic enough to be 'ionizing', but later it turned out that any EM radiation, even from the antenna of a hand-held two-way radio, would do with low-energy CMOS chips.
This was due to metastability. In practice the logic gates we use are very high gain analog amplifiers that are designed to “crash into the rails”. Some logic such as ECL was actually kept linear to get speed advantages but these days it’s all a bit murky.
The point is as the level at a simple logic gate input changes it goes through a transition region where the relationship between the gate input and output is indeterminate. Thus an inverter in effect might or might not invert or even oscillate with the input in the transition zone.
I won't go into the reasons behind it but it's down to two basic issues. Firstly the universe is full of noise, secondly it's full of quantum effects. The two can be difficult to differentiate in even very long term measurements, and engineers tend to lump it all under a first approximation of a Gaussian distribution as “Additive White Gaussian Noise” (AWGN), which has nice properties such as averaging predictably to zero with time and “the root of the mean squared”. However the universe tends not to play that way when you get up close, so instead “Phase Noise in a measurement window” is often used, with Allan Deviation. //
There are things we can not know because they are unpredictable or beyond our ability to measure.
But also beyond a deterministic system to calculate.
Computers only know “natural numbers” or “unsigned integers” within a finite range. Everything else is approximated or, as others would say, “faked”. Between every pair of natural numbers there are other numbers; some can be found as ratios of natural numbers and others can not. What drove philosophers and mathematicians mad was the realisation that the likes of “root two” and pi exist, and that there is an infinity of such numbers we can never know. Another issue was the gaps caused by integer multiplication: the smaller the integers involved, the smaller the gaps between their multiples. Eventually it was realised that there was an advantage to this in that it scaled. The result in computers is floating point numbers. They work well for many things, but not for addition and subtraction of small values with large values.
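A short illustration of that last point about adding a small value to a large one, using ordinary C doubles:

#include <stdio.h>

int main(void)
{
    double big = 1e16;            /* beyond 2^53, doubles step by 2 */
    double sum = big + 1.0;       /* the 1.0 is below one ULP of big */

    printf("%.1f\n", sum - big);  /* prints 0.0: the small addend vanished */
    return 0;
}

Accumulate enough of those silently lost contributions and the total error becomes visible.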
As has been mentioned, LLMs are in reality no different from “Digital Signal Processing” (DSP) systems in their fundamental algorithms, one of which is “Multiply and ADd” (MAD) using integers. These have issues in that values disappear or can not be calculated. With continuous signals the errors can be integrated in with little distortion. In LLMs they can cause errors that are part of what has been called “hallucinations”. That is where something with meaning to a human, such as the name of the Pokemon trading card character “Solidgoldmagikarp”, gets mapped to an entirely unrelated word, “distribute”; mayhem resulted on GPT-3.5 and much hilarity once this became widely known.
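And a deliberately crude illustration of how small contributions can vanish from an integer multiply-and-add pipeline. Real DSP and inference kernels use wider accumulators and rounding precisely to avoid this, so treat it as a caricature of the failure mode, not a description of any particular implementation.

#include <stdio.h>
#include <stdint.h>

/* Toy Q8 fixed-point multiply (8 fractional bits), truncating. */
#define FRAC_BITS 8

static int32_t q8_mul(int32_t a, int32_t b)
{
    return (a * b) >> FRAC_BITS;   /* truncation, not rounding */
}

int main(void)
{
    int32_t small = 3;              /* about 0.0117 in Q8 */
    int32_t acc = 0;

    /* Accumulate 1000 tiny products; each one truncates to zero. */
    for (int i = 0; i < 1000; i++)
        acc += q8_mul(small, small); /* 3*3 = 9, 9 >> 8 = 0 */

    printf("accumulated = %d (the exact sum would be about 35 in Q8 units)\n",
           (int)acc);
    return 0;
}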
The 8-bit Z80 microprocessor was designed in 1974 by Federico Faggin as a binary-compatible, improved version of the Intel 8080 with a higher clock speed, a built-in DRAM refresh controller, and an extended instruction set. It was extensively used in desktop computers of the late 1970s and early 1980s, arcade video game machines, and embedded systems, and it became a cornerstone of several gaming consoles, like the Sega Master System. //
stormcrash Ars Praefectus
9y
5,868
Felix Aurelius said:
We can do the 21 gun salute with exploding polarized capacitors! Fun little confetti cannons, those.
21 exploding shorted tantalum capacitors
Twenty years ago, in a world dominated by dial-up connections and a fledgling World Wide Web, a group of New Zealand friends embarked on a journey. Their mission? To bring to life a Matrix fan film shot on a shoestring budget. The result was The Fanimatrix, a 16-minute amateur film just popular enough to have its own Wikipedia page.
As reported by TorrentFreak, the humble film would unknowingly become a crucial part of torrent history. It now stands as the world’s oldest active torrent, with an uptime now spanning a full 20 years.
Billionaire Elon Musk said this month that while the development of AI had been “chip constrained” last year, the latest bottleneck to the cutting-edge technology was “electricity supply.” Those comments followed a warning by Amazon chief Andy Jassy this year that there was “not enough energy right now” to run new generative AI services. //
“One of the limitations of deploying [chips] in the new AI economy is going to be ... where do we build the data centers and how do we get the power,” said Daniel Golding, chief technology officer at Appleby Strategy Group and a former data center executive at Google. “At some point the reality of the [electricity] grid is going to get in the way of AI.” //
Such growth would require huge amounts of electricity, even if systems become more efficient. According to the International Energy Agency, the electricity consumed by data centers globally will more than double by 2026 to more than 1,000 terawatt hours, an amount roughly equivalent to what Japan consumes annually.
Re: They Took Way Too Long To Port It
Personal view, so no [AH] tag or anything:
The Linux kernel is an extremely rapidly moving target. It has well over 450 syscalls, approaching 500. It comprises some 20 million lines of code.
It needs constant updating and the problem is so severe that there are multiple implementations of live in-memory patching so you can do it between reboots.
Meanwhile, VMSclusters can have uptimes in decades and you can cluster VAXen to Alphas to Itanium boxes and now to x86-64 boxes, move workloads from one to another using CPU emulation if needed, and shut down old nodes, and so you could in principle take a DECnet cluster of late-1980s VAXes and gradually migrate it to a rack of x86 boxes clustered over TCP/IP without a single moment of downtime.
Linux is just about the worst possible fit for this I can imagine.
It has no built-in clustering in the kernel and virtually no support for filesystem sharing in the kernel itself.
It is, pardon the phrase, as much use as a chocolate teapot for this stuff.
VMS is a newer and more capable OS than traditional UNIX. I know Unix folks like to imagine it's some eternal state of the art, but it's not. It's a late-1960s OS for standalone minicomputers. Linux is a modernised clone of a laughably outdated design.
VMS is a late 1970s OS for networked and clustered minicomputers. It's still old fashioned but it has strengths and extraordinary resilience and uptimes is one of them.
Re: Linux needs constant updating
Yeah, no. To refute a few points:
Remember, there are LTS versions with lifetimes measured in years.
Point missed error. "This is a single point release! We are now on 4.42.16777216." You still have to update it. Even if with some fugly livepatch hack.
And nobody ever ran VMSclusters with uptimes measured in years
Citation: 10 year cluster uptime.
https://www.osnews.com/story/13245/openvms-cluster-achieves-10-year-uptime/
Citation: 16 year cluster uptime.
Linux “clusters” scale to supercomputers with millions of interconnected nodes.
Point missed. Linux clusters are by definition extremely loosely clustered. VMSclusters are a tight/close cluster model where it can be non-obvious which node you are even attached to.
Linus Torvalds used VMS for a while, and hated it
I find it tends to be what you're used to or encounter first.
I met VMS before Unix -- and very nearly before Windows existed at all -- and I preferred it. I still hate the terse little commands and the cryptic glob expansion and the regexes and all this cultural baggage.
I am not alone.
UNIX became popular because it did so many things so much more logically
I call BS. This is the same as the bogus "it's intuitive" claim. Intuitive means "what I got to know first." Douglas Adams nailed it.
https://www.goodreads.com/quotes/39828-i-ve-come-up-with-a-set-of-rules-that-describe
Thinks of why Windows nowadays is at an evolutionary dead end
Linux is a dead end too. Unix in general is. We should have gone with Plan 9, and we still should.
Garner Products, a data elimination firm, has a machine that it claims can process 500 hard drives (the HDD kind) per day in a way that leaves a drive separated into those useful components. And the DiskMantler does this by shaking the thing to death (video).
The DiskMantler, using "shock, harmonics, and vibration," vibrates most drives into pieces in 8 to 90 seconds, depending on how much separation you want. Welded helium drives take about two minutes. The basic science for how this works came from Gerhard Junker, the perfectly named German scientist who fully explored the power of vibrations, or "shear loading perpendicular to the fastener axis," to loosen screws and other fasteners.
As Garner's chief global development officer, Michael Harstrick, told E-Scrap News, the device came about when a client needed a way to extract circuit boards from drives fastened with proprietary screw heads. Prying or other destruction would have been too disruptive and potentially damaging. After testing different power levels and durations, Garner arrived at a harmonic vibration device that can take apart pretty much any drive, even those with more welding than screws. "They still come apart," Harstrick told E-Scrap News. "It just takes a little bit."
The SBC6120 Model 2 is a conventional single board computer with the typical complement of EPROM, RAM, a RS232 serial port, an IDE disk interface, and an optional non-volatile RAM disk memory card. What makes it unique is that the CPU is the Harris HD-6120 PDP-8 on a chip. The 6120 is the second generation of single chip PDP-8 compatible microprocessors and was used in Digital's DECmate-I, II, III and III+ "personal" computers.
The SBC6120 can run all standard DEC paper tape software, such as FOCAL-69, with no changes. Simply use the ROM firmware on the SBC6120 to download FOCAL69.BIN from a PC connected to the console port (or use a real ASR-33 and read the real FOCAL-69 paper tape, if you’re so inclined!), start at 200 (octal), and you’re running.
OS/278, OS/78 and, yes - OS/8 V3D or V3S - can all be booted on the SBC6120 using either RAM disk or IDE disk as mass storage devices. Since the console interface in the SBC6120 is KL8E compatible and does not use a HD-6121, there is no particular need to use OS/278 and real OS/8 V3D runs perfectly well.
The SBC6120 measures just 4.2 inches by 6.2 inches, or roughly the same size and shape as a standard 3½" disk drive. A four layer PC board with internal power planes was needed to fit all the parts in this space. A complete SBC6120 requires just 175mA at 5V to operate, and this requirement can easily be cut in half by omitting the LED POST code display. Imagine - you can have an entire PDP-8, running OS/8 from a RAM disk, that’s the size of a paperback book and runs on less than half a watt!
Celebrating the world's first minicomputer, and
the machine that taught me assembly language.
The 12-bit PDP-8 contained a single 12-bit accumulator (AC), a 1-bit "Link" (L), and a 12-bit program counter (PC).
Later models (the /e, /f, /m & /a) added a 12-bit multiplier quotient (MQ) register.
The term “minicomputer” was not coined to mean miniature, it
was originally meant to mean minimal, which is a term that,
more than anything else, accurately describes the PDP-8.
Whereas today's machines group their binary digits (bits) into sets of four in a system called “hexadecimal”, the PDP-8, like most computers of its era, used “octal” notation, grouping its bits into sets of three. This meant that the PDP-8's 12-bit words were written as four octal digits ranging from 0 through 7.
The first 3 bits of the machine's 12-bit word (its first octal digit) are the operation code (OpCode). This equipped the machine with just eight basic instructions:
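As a short illustration of that encoding (the eight mnemonics are the standard PDP-8 ones: AND, TAD, ISZ, DCA, JMS, JMP, IOT and OPR), this sketch pulls the opcode field out of a 12-bit word:

#include <stdio.h>
#include <stdint.h>

/* The top 3 bits of a 12-bit PDP-8 word select one of eight instructions. */
static const char *opcodes[8] = {
    "AND", "TAD", "ISZ", "DCA", "JMS", "JMP", "IOT", "OPR"
};

int main(void)
{
    uint16_t word = 07402;              /* octal literal: the HLT instruction */
    unsigned op = (word >> 9) & 07u;    /* first octal digit = opcode */
    printf("word %04o -> opcode %o (%s)\n", (unsigned)word, op, opcodes[op]);
    return 0;
}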