What price common sense? • June 11, 2024 7:30 AM
@Levi B.
“Those who are not familiar with the term “bit-squatting” should look that up”
Are you sure you want to go down that rabbit hole?
It’s an instance of a general class of problems that are never going to go away.
And why, in
“Web servers would usually have error-correcting (ECC) memory, in which case they’re unlikely to create such links themselves.”
the key word is “unlikely”, or more formally “low probability”.
Because it’s down to the fundamentals of the universe and the failings of logic and reason as we formally use them. That in turn is why, from at least the ancient Greeks through to the 20th century, some of those thinking about it in its various guises have gone mad and some have committed suicide.
To understand why, you need to understand why things like “Error Correcting Codes” (ECC) will never be 100% effective and why deterministic encryption systems, especially stream ciphers, will always be vulnerable. //
No matter what you do, all error-checking systems produce both false positive and false negative results. All you can do is tailor the system to the more probable errors.
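A minimal sketch of that trade-off, using nothing more than a single even-parity bit (far cruder than the SECDED codes in real ECC DRAM, but the same principle): one flipped bit is caught, while two flipped bits cancel out and slip through as a false negative.

```python
# Toy even-parity check (not a real ECC code): it detects any single
# bit flip, but two flips cancel out and pass as a false negative.
def parity(bits):
    return sum(bits) % 2

data  = [1, 0, 1, 1, 0, 1, 0, 0]
check = parity(data)                  # stored alongside the data

one_flip = list(data);  one_flip[3] ^= 1
two_flips = list(data); two_flips[2] ^= 1; two_flips[5] ^= 1

print(parity(one_flip)  != check)     # True  -> error detected
print(parity(two_flips) != check)     # False -> error missed
```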
But there are other underlying issues: bit flips happen in memory by deterministic processes that apparently happen by chance. Back in the early 1970s, when putting computers into space became a reality, it was known that computers were affected by radiation. Initially it was assumed the radiation had to be energetic enough to be ‘ionizing’, but it later turned out that with low-energy CMOS chips any EM radiation, such as that from the antenna of a hand-held two-way radio, would do.
This was due to metastability. In practice the logic gates we use are very high-gain analog amplifiers that are designed to “crash into the rails”. Some logic, such as ECL, was actually kept linear to gain a speed advantage, but these days it’s all a bit murky.
The point is that as the level at a simple logic gate’s input changes, it passes through a transition region where the relationship between the gate’s input and output is indeterminate. Thus an inverter might or might not invert, or might even oscillate, while its input sits in the transition zone.
I won’t go into the reasons behind it, but it’s down to two basic issues: firstly the universe is full of noise, and secondly it’s full of quantum effects. The two can be difficult to differentiate even in very long-term measurements, and engineers tend to lump it all under a first approximation of a Gaussian distribution as “Additive White Gaussian Noise” (AWGN), which has nice properties such as averaging predictably to zero with time and the “root mean square”. However the universe tends not to play that way when you get up close, so instead “phase noise in a measurement window” is often used, along with the Allan deviation. //
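For the curious, the simplest (non-overlapping) form of the Allan deviation is just the RMS of the differences between successive readings divided by root two; a rough sketch, assuming equally spaced fractional-frequency samples:

```python
import math

def allan_deviation(y):
    # Simplest non-overlapping form:
    # sigma_y(tau) = sqrt( mean((y[k+1] - y[k])^2) / 2 )
    diffs = [b - a for a, b in zip(y, y[1:])]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

# Made-up fractional-frequency readings from a notional oscillator:
readings = [1.2e-9, 0.8e-9, 1.1e-9, 0.7e-9, 1.3e-9, 0.9e-9]
print(allan_deviation(readings))
```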
There are things we cannot know because they are unpredictable, or beyond our ability to measure, or beyond a deterministic system’s ability to calculate.
Computers only know “natural numbers”, or “unsigned integers”, within a finite range. Everything else is approximated, or as others would say “faked”. Between every pair of natural numbers there are other numbers; some can be found as ratios of natural numbers and others cannot. What drove philosophers and mathematicians mad was the realisation, with the likes of “root two” and pi, that there was an infinity of such numbers we could never know. Another issue was the gaps created by integer multiplication: the smaller the integers, the smaller the gaps between their multiples. Eventually it was realised that there was an advantage to this, in that it scaled. The result in computers is floating point numbers. They work well for many things, but not for the addition and subtraction of small values with large values.
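That last failure mode is easy to see with ordinary double-precision floats; a quick sketch:

```python
# A double carries roughly 15-16 significant decimal digits, so adding
# a small value to a sufficiently large one changes nothing at all.
big = 1.0e16
print(big + 1.0 == big)        # True: the 1.0 is absorbed without a trace

total = big
for _ in range(1_000_000):
    total += 1.0               # each individual addition rounds away
print(total == big)            # still True, despite a million additions
```

Summing the small values separately and adding the total to the large value last recovers them, which is why accumulation order matters in numerical code.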
As has been mentioned, LLMs are in reality no different from “Digital Signal Processing” (DSP) systems in their fundamental algorithms. One of those is “Multiply and ADd” (MAD) using integers. These have issues in that small values disappear or cannot be calculated. With continuous signals the resulting errors integrate in with little distortion. In LLMs they can cause errors that are part of what has been called “hallucinations”. That is where something with meaning to a human, such as the name of the Pokémon trading-card character “Solidgoldmagikarp”, gets mapped to an entirely unrelated word, “distribute”; thus mayhem resulted on GPT-3.5, and much hilarity once it became widely known.
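A toy sketch of how an integer multiply-add loses small contributions, assuming a naive symmetric int8 quantisation with a scale of 127 (real inference kernels use per-tensor or per-channel scales, but the failure mode is the same): the small weights round to zero and never reach the accumulator.

```python
# Naive symmetric int8 quantisation: small values round to zero, so
# their contribution never reaches the multiply-add accumulator.
def quantize(x, scale=127.0):
    return max(-128, min(127, round(x * scale)))

weights     = [0.8, -0.5, 0.003, -0.002]    # the last two are "small"
activations = [0.9,  0.7, 0.6,    0.5]

exact = sum(w * a for w, a in zip(weights, activations))
quant = sum(quantize(w) * quantize(a) for w, a in zip(weights, activations))

print(exact)                    # float multiply-add keeps the small terms
print(quant / (127.0 * 127.0))  # int8 multiply-add: the small terms are gone
```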
The 8-bit Z80 microprocessor was designed in 1974 by Federico Faggin as a binary-compatible, improved version of the Intel 8080 with a higher clock speed, a built-in DRAM refresh controller, and an extended instruction set. It was extensively used in desktop computers of the late 1970s and early 1980s, arcade video game machines, and embedded systems, and it became a cornerstone of several gaming consoles, like the Sega Master System. //
stormcrash Ars Praefectus
Felix Aurelius said:
We can do the 21 gun salute with exploding polarized capacitors! Fun little confetti cannons, those.
21 exploding shorted tantalum capacitors
Twenty years ago, in a world dominated by dial-up connections and a fledgling World Wide Web, a group of New Zealand friends embarked on a journey. Their mission? To bring to life a Matrix fan film shot on a shoestring budget. The result was The Fanimatrix, a 16-minute amateur film just popular enough to have its own Wikipedia page.
As reported by TorrentFreak, the humble film would unknowingly become a crucial part of torrent history. It now stands as the world’s oldest active torrent, with an uptime now spanning a full 20 years.
Billionaire Elon Musk said this month that while the development of AI had been “chip constrained” last year, the latest bottleneck to the cutting-edge technology was “electricity supply.” Those comments followed a warning by Amazon chief Andy Jassy this year that there was “not enough energy right now” to run new generative AI services. //
“One of the limitations of deploying [chips] in the new AI economy is going to be ... where do we build the data centers and how do we get the power,” said Daniel Golding, chief technology officer at Appleby Strategy Group and a former data center executive at Google. “At some point the reality of the [electricity] grid is going to get in the way of AI.” //
Such growth would require huge amounts of electricity, even if systems become more efficient. According to the International Energy Agency, the electricity consumed by data centers globally will more than double by 2026 to more than 1,000 terawatt hours, an amount roughly equivalent to what Japan consumes annually.
Re: They Took Way Too Long To Port It
Personal view, so no [AH] tag or anything:
The Linux kernel is an extremely rapidly moving target. It has well over 450 syscalls, closing in on 500, and comprises some 20 million lines of code.
It needs constant updating and the problem is so severe that there are multiple implementations of live in-memory patching so you can do it between reboots.
Meanwhile, VMSclusters can have uptimes in decades and you can cluster VAXen to Alphas to Itanium boxes and now to x86-64 boxes, move workloads from one to another using CPU emulation if needed, and shut down old nodes, and so you could in principle take a DECnet cluster of late-1980s VAXes and gradually migrate it to a rack of x86 boxes clustered over TCP/IP without a single moment of downtime.
Linux is just about the worst possible fit for this I can imagine.
It has no built-in clustering in the kernel and virtually no support for filesystem sharing in the kernel itself.
It is, pardon the phrase, as much use as a chocolate teapot for this stuff.
VMS is a newer and more capable OS than traditional UNIX. I know Unix folks like to imagine it's some eternal state of the art, but it's not. It's a late-1960s OS for standalone minicomputers. Linux is a modernised clone of a laughably outdated design.
VMS is a late-1970s OS for networked and clustered minicomputers. It's still old-fashioned, but it has its strengths, and extraordinary resilience and uptime is one of them.
Re: Linux needs constant updating
Yeah, no. To refute a few points:
Remember, there are LTS versions with lifetimes measured in years.
Point missed error. "This is a single point release! We are now on 4.42.16777216." You still have to update it. Even if with some fugly livepatch hack.
And nobody ever ran VMSclusters with uptimes measured in years
Citation: 10 year cluster uptime.
https://www.osnews.com/story/13245/openvms-cluster-achieves-10-year-uptime/
Citation: 16 year cluster uptime.
Linux “clusters” scale to supercomputers with millions of interconnected nodes.
Point missed. Linux clusters are by definition extremely loosely clustered. VMSclusters are a tight/close cluster model where it can be non-obvious which node you are even attached to.
Linus Torvalds used VMS for a while, and hated it
I find it tends to be whatever you're used to or encounter first.
I met VMS before Unix -- and very nearly before Windows existed at all -- and I preferred it. I still hate the terse little commands and the cryptic glob expansion and the regexes and all this cultural baggage.
I am not alone.
UNIX became popular because it did so many things so much more logically
I call BS. This is the same as the bogus "it's intuitive" claim. Intuitive means "what I got to know first." Douglas Adams nailed it.
https://www.goodreads.com/quotes/39828-i-ve-come-up-with-a-set-of-rules-that-describe
Thinks of why Windows nowadays is at an evolutionary dead end
Linux is a dead end too. Unix in general is. We should have gone with Plan 9, and we still should.
Garner Products, a data elimination firm, has a machine that it claims can process 500 hard drives (the HDD kind) per day in a way that leaves a drive separated into those useful components. And the DiskMantler does this by shaking the thing to death (video).
The DiskMantler, using "shock, harmonics, and vibration," vibrates most drives into pieces in between 8 and 90 seconds, depending on how much separation you want. Welded helium drives take about two minutes. The basic science for how this works came from Gerhard Junker, the perfectly named German scientist who fully explored the power of vibrations, or "shear loading perpendicular to the fastener axis," to loosen screws and other fasteners.
As Garner's chief global development officer, Michael Harstrick, told E-Scrap News, the device came about when a client needed a way to extract circuit boards from drives fastened with proprietary screw heads. Prying or other destruction would have been too disruptive and potentially damaging. After testing different power levels and durations, Garner arrived at a harmonic vibration device that can take apart pretty much any drive, even those with more welding than screws. "They still come apart," Harstrick told E-Scrap News. "It just takes a little bit."
The SBC6120 Model 2 is a conventional single board computer with the typical complement of EPROM, RAM, a RS232 serial port, an IDE disk interface, and an optional non-volatile RAM disk memory card. What makes it unique is that the CPU is the Harris HD-6120 PDP-8 on a chip. The 6120 is the second generation of single chip PDP-8 compatible microprocessors and was used in Digital's DECmate-I, II, III and III+ "personal" computers.
The SBC6120 can run all standard DEC paper tape software, such as FOCAL-69, with no changes. Simply use the ROM firmware on the SBC6120 to download FOCAL69.BIN from a PC connected to the console port (or use a real ASR-33 and read the real FOCAL-69 paper tape, if you’re so inclined!), start at 200 (octal), and you’re running.
OS/278, OS/78 and, yes - OS/8 V3D or V3S - can all be booted on the SBC6120 using either RAM disk or IDE disk as mass storage devices. Since the console interface in the SBC6120 is KL8E compatible and does not use a HD-6121, there is no particular need to use OS/278 and real OS/8 V3D runs perfectly well.
The SBC6120 measures just 4.2 inches by 6.2 inches, or roughly the same size and shape as a standard 3½" disk drive. A four layer PC board with internal power planes was needed to fit all the parts in this space. A complete SBC6120 requires just 175mA at 5V to operate, and this requirement can easily be cut in half by omitting the LED POST code display. Imagine - you can have an entire PDP-8, running OS/8 from a RAM disk, that’s the size of a paperback book and runs on less than half a watt!
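As a quick back-of-the-envelope check of those power figures (using only the numbers quoted above):

```python
full_power = 0.175 * 5.0     # 175 mA at 5 V -> 0.875 W
no_led     = full_power / 2  # omit the LED POST code display
print(full_power, no_led)    # 0.875 0.4375 -> under half a watt, as claimed
```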
Celebrating the world's first minicomputer, and the machine that taught me assembly language.
The 12-bit PDP-8 contained a single 12-bit accumulator (AC), a 1-bit "Link" (L), and a 12-bit program counter (PC).
Original photo credit: Gerhard Kreuzer
Later models (the /e, /f, /m & /a) added a 12-bit multiplier quotient (MQ) register.
The term “minicomputer” was not coined to mean miniature; it was originally meant to mean minimal, which is a term that, more than anything else, accurately describes the PDP-8.
Whereas today's machines group their binary digits (bits) into sets of four in a system called “hexadecimal”, the PDP-8, like most computers of its era, used “octal” notation, grouping its bits into sets of three. This meant that the PDP-8's 12-bit words were written as four octal digits ranging from 0 through 7.
The first 3 bits of the machine's 12-bit word (its first octal digit) form the operation code (OpCode). This equipped the machine with just eight basic instructions: AND, TAD, ISZ, DCA, JMS, JMP, IOT, and OPR.
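A small sketch of how the opcode falls out of a 12-bit word once you think in octal:

```python
# The first octal digit (top three bits) of a PDP-8 word is its opcode.
MNEMONICS = ["AND", "TAD", "ISZ", "DCA", "JMS", "JMP", "IOT", "OPR"]

def opcode(word):
    word &= 0o7777               # keep 12 bits
    return (word >> 9) & 0o7     # first octal digit

word = 0o1234                    # first octal digit is 1...
print(oct(word), MNEMONICS[opcode(word)])   # 0o1234 TAD
```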
Ars Technica's guide to keyboards: Mechanical, membrane, and buckling springs.
The All-In Plan is HP's latest attempt at that goal, hoping people will believe that the subscription service will simplify things for them. And by including high cancellation fees, HP is looking to lock subscribers in for two years. //
In the blog post announcing the subscription, Diana Sroka, head of product for consumer services at HP, boasted about how people could "never own a printer again," "say goodbye to your tech troubles," and enjoy "hassle-free printing." The problem is that tech troubles and hassle-filled printing aren't the products of merely owning a printer; they're connected to disruptive and anti-consumer practices from printer vendors. //
HP is hoping to convince people that the answer to torturous printer experiences is to "never own a printer again." But considering the above frustrations, some might just never own an HP printer again.
Tue 13 Feb 2024 // 11:17 UTC
OBIT Polymath, pioneering developer of software and hardware, a prolific writer, and true old-school hacker John Walker has passed away.
His death was announced in a brief personal obituary on SCANALYST, a discussion forum hosted on Walker's own remarkably broad and fascinating website, Fourmilab. Its name is a playful take on Fermilab, the US physics laboratory, and fourmi, the French for "ant."
Hobbes OS/2 Archive: "As of April 15th, 2024, this site will no longer exist."
In a move that marks the end of an era, New Mexico State University (NMSU) recently announced the impending closure of its Hobbes OS/2 Archive on April 15, 2024. For over three decades, the archive has been a key resource for users of the IBM OS/2 operating system and its successors, which once competed fiercely with Microsoft Windows. //
Archivists such as Jason Scott of the Internet Archive have stepped up to say that the files hosted on Hobbes are safe and already mirrored elsewhere. "Nobody should worry about Hobbes, I've got Hobbes handled," wrote Scott on Mastodon in early January. OS/2 World.com also published a statement about making a mirror. But it's still notable whenever such an old and important piece of Internet history bites the dust.
Like many archives, Hobbes started as an FTP site. "The primary distribution of files on the Internet were via FTP servers," Scott tells Ars Technica. "And as FTP servers went down, they would also be mirrored as subdirectories in other FTP servers. Companies like CDROM.COM / Walnut Creek became ways to just get a CD-ROM of the items, but they would often make the data available at http://ftp.cdrom.com to download." //
This story was updated on January 30 to reflect that the OS/2 archive likely started in 1990, according to people who ran the Hobbes server. The university ran Hobbes on one of two NeXT machines, the other called Calvin. //
IBM's SAA and CUA brought harmony to software design… until everyone forgot //
In the early days of microcomputers, everyone just invented their own user interfaces, until an Apple-influenced IBM standard brought about harmony. Then, sadly, the world forgot.
In 1981, the IBM PC arrived and legitimized microcomputers as business tools, not just home playthings. The PC largely created the industry that the Reg reports upon today, and a vast and chaotic market for all kinds of software running on a vast range of compatible computers. Just three years later, Apple launched the Macintosh and made graphical user interfaces mainstream. IBM responded with an obscure and sometimes derided initiative called Systems Application Architecture, and while that went largely ignored, one part of it became hugely influential over how software looked and worked for decades to come.
One bit of IBM's vast standard described how software user interfaces should look and work – and largely by accident, that particular part caught on and took off. It didn't just guide the design of OS/2; it also influenced Windows, DOS and DOS apps, and pretty much all software that followed. //
The problem is that developers who grew up with these pre-standardization tools, combined with various keyboardless fondleslabs where such things don't exist, don't know what CUA means. If someone's not even aware there is a standard, then the tools they build won't follow it. As the trajectories of KDE and GNOME show, even projects that started out compliant can drift in other directions.
This doesn't just matter for grumpy old hacks. It also disenfranchises millions of disabled computer users, especially blind and visually impaired people. You can't use a pointing device if you can't see a mouse pointer, but Windows can be navigated 100 per cent keyboard-only if you know the keystrokes – and all blind users do. Thanks to the FOSS NVDA tool, there's now a first-class screen reader for Windows that's free of charge.
Most of the same keystrokes work in Xfce, MATE and Cinnamon, for instance. Where some are missing, such as the Super key not opening the Start menu, they're easily added. This also applies to environments such as LXDE, LXQt and so on. //
Menu bars, dialog box layouts, and standard keystrokes to operate software are not just some clunky old 1990s design to be casually thrown away. They were the result of millions of dollars and years of R&D into human-computer interfaces, a large-scale effort to get different types of computers and operating systems talking to one another and working smoothly together. It worked, and it brought harmony in place of the chaos of the 1970s and 1980s and the early days of personal computers. It was also a vast step forward in accessibility and inclusivity, opening computers up to millions more people.
Just letting it fade away due to ignorance and the odd traditions of one tiny subculture among computer users is one of the biggest mistakes in the history of computing.
On Thursday, UK's Government Communications Headquarters (GCHQ) announced the release of previously unseen images and documents related to Colossus, one of the first digital computers. The release marks the 80th anniversary of the code-breaking machines that significantly aided the Allied forces during World War II. While some in the public knew of the computers earlier, the UK did not formally acknowledge the project's existence until the 2000s.
Colossus was not one computer but a series of computers developed by British scientists between 1943 and 1945. These 2-meter-tall electronic beasts played an instrumental role in breaking the Lorenz cipher, a code used for communications between high-ranking German officials in occupied Europe. The computers were said to have allowed the Allies to "read Hitler's mind," according to The Sydney Morning Herald. //
The technology behind Colossus was highly innovative for its time. Tommy Flowers, the engineer behind its construction, used over 2,500 vacuum tubes to create logic gates, a precursor to the semiconductor-based electronic circuits found in modern computers. While 1945's ENIAC was long considered the clear front-runner in digital computing, the revelation of Colossus' earlier existence repositioned it in computing history. (However, it's important to note that ENIAC was a general-purpose computer, and Colossus was not.)
Douglas Engelbart changed computer history forever on December 9, 1968.
A half century ago, computer history took a giant leap when Douglas Engelbart—then a mid-career 43-year-old engineer at Stanford Research Institute in the heart of Silicon Valley—gave what has come to be known as the "mother of all demos."
On December 9, 1968 at a computer conference in San Francisco, Engelbart showed off the first inklings of numerous technologies that we all now take for granted: video conferencing, a modern desktop-style user interface, word processing, hypertext, the mouse, collaborative editing, among many others.
Even before his famous demonstration, Engelbart outlined his vision of the future more than a half-century ago in his historic 1962 paper, "Augmenting Human Intellect: A Conceptual Framework."
To open the 90-minute-long presentation, Engelbart posited a question that almost seems trivial to us in the early 21st century: "If in your office, you as an intellectual worker were supplied with a computer display, backed up by a computer that was alive for you all day, and was instantly responsible—responsive—to every action you had, how much value would you derive from that?"
Of course at that time, computers were vast behemoths that were light-years away from the pocket-sized devices that have practically become an extension of ourselves.
Engelbart, who passed away in 2013, was inspired by a now-legendary essay published in 1945 by Vannevar Bush, a physicist who had been in charge of the United States Office of Scientific Research and Development during World War II.
That essay, "As We May Think," speculated on a "future device for individual use, which is a sort of mechanized private file and library." It was this essay that stuck with a young Engelbart—then a Navy technician stationed in the Philippines—for more than two decades.
By 1968, Engelbart had created what he called the "oN-Line System," or NLS, a proto-Intranet. The ARPANET, the predecessor to the Internet itself, would not be established until late the following year.
Five years later, in 1973, Xerox debuted the Alto, considered to be the first modern personal computer. That, in turn, served as the inspiration for both the Macintosh and Microsoft Windows, and the rest, clearly, is history.
Evangelist of lean software and deviser of 9 programming languages and an OS was 89 //
In his work, the languages and tools he created, in his eloquent plea for smaller, more efficient software – even in the projects from which he quit – his influence on the computer industry has been almost beyond measure. The modern software industry has signally failed to learn from him. Although he has left us, his work still has much more to teach.
The C programming language was devised in the early 1970s as a system implementation language for the nascent Unix operating system. Derived from the typeless language BCPL, it evolved a type structure; created on a tiny machine as a tool to improve a meager programming environment, it has become one of the dominant languages of today. This paper studies its evolution.
Who would win: the world's fastest computer circa 1976, or a $35 single-board computer from 2012? //
"In 1978, the Cray-1 supercomputer cost $7 million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world," Longbottom writes of the device, designed as the flagship product of Seymour Cray's high-performance computing company. "The Raspberry Pi costs around $70 (CPU board, case, power supply, SD Card), weighs a few ounces, uses a five watt power supply and is more than 4.5 times faster than the Cray 1." //
The same benchmark tests show even bigger gains for newer devices in the Raspberry Pi family, as you'd expect: the Raspberry Pi 400, the newest device in Longbottom's performance table, showed a performance gain of up to 95.5 times the Cray-1's results — in a device which fits on the palm of your hand, rather than becoming a very expensive piece of uncomfortable office furniture.
IBM found themselves in a similar predicament in the 1970s after working on a type of mainframe computer made to be a phone switch. Eventually the phone switch was abandoned in favor of a general-purpose processor but not before they stumbled onto the RISC processor which eventually became the IBM 801. //
They found that by eliminating all but a few instructions and running those without a microcode layer, the performance gains were much greater than they would have expected: up to three times as fast on comparable hardware. //
stormwyrm says:
January 1, 2024 at 1:56 am
Oddball special-purpose instructions like that are not what makes an architecture CISC though.
Special-purpose instructions are not what makes an architecture RISC or CISC. In all cases these weird instructions operate only on registers and likely take only one processor bus cycle to execute. Contrast this with the MOVSD instruction on x86, which moves data pointed to by the ESI register to the address in the EDI register and increments these registers to point to the next dword. Three bus cycles at least: one for instruction fetch, one to load data at the address in ESI, and another to store a copy of the data to the address in EDI. This is what is meant by “complex” in CISC. RISC processors, in contrast, have dedicated instructions that do load and store only, so that the majority of instructions run in only one bus cycle. //
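A rough way to picture that difference, as a toy machine in Python rather than real x86 or RISC semantics: the single string-move instruction touches memory twice (plus its own fetch), while the RISC version spells the same work out as simple instructions that each make at most one memory access.

```python
# Toy model, not real x86/RISC semantics: a tiny memory and register file.
mem = {0x100: 0xDEADBEEF, 0x200: 0x00000000}
reg = {"ESI": 0x100, "EDI": 0x200}

def movsd(mem, reg):
    """CISC-style: one instruction does load, store and pointer updates."""
    mem[reg["EDI"]] = mem[reg["ESI"]]   # two data bus cycles in one instruction
    reg["ESI"] += 4
    reg["EDI"] += 4

def risc_copy(mem, reg):
    """RISC-style: the same effect spelled out as four simple instructions."""
    tmp = mem[reg["ESI"]]               # LW   tmp, 0(ESI)   - one memory access
    mem[reg["EDI"]] = tmp               # SW   tmp, 0(EDI)   - one memory access
    reg["ESI"] += 4                     # ADDI ESI, ESI, 4   - registers only
    reg["EDI"] += 4                     # ADDI EDI, EDI, 4   - registers only

movsd(mem, reg)
print(hex(mem[0x200]))                  # 0xdeadbeef
```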
Nicholas Sargeant says:
January 1, 2024 at 4:00 am
Stormwyrm has it correct. When we started with RISC, the main benefit was that we knew how much data to pre-fetch into the pipeline – how wide an instruction was, how long the operands were – so the speed demons could operate at full memory bus capacity. The perceived problem with the brainiac CISC instruction sets was that you had to fetch the first part of the instruction to work out what the operands were, how long they were and where to collect them from. Many clock cycles would pass by to run a single instruction. RISC engines could execute any instruction in one clock cycle. So the so-called speed demons could out-pace brainiacs, even if you had to occasionally assemble a sequence of RISC instructions to do the same as one CISC instruction. Since it wasn’t humans working out the optimal string of RISC instructions, but a compiler, who would it trouble if reading assembler for RISC made so much less sense than reading CISC assembler?
Now, what we failed to comprehend was that CISC engines would get so fast that they could execute a complex instruction in the same, single external clock cycle – when supported by pre-fetch, heavy pipelining, out-of-order execution, branch target caching, register renaming, broadside cache loading and multiple redundant execution units. The only way RISC could have outpaced CISC was to run massively parallel execution units (since they individually would be much simpler and more compact on the silicon). However, parallel execution was too hard for most compilers to exploit in the general case.