Hobbes OS/2 Archive: "As of April 15th, 2024, this site will no longer exist."
In a move that marks the end of an era, New Mexico State University (NMSU) recently announced the impending closure of its Hobbes OS/2 Archive on April 15, 2024. For over three decades, the archive has been a key resource for users of the IBM OS/2 operating system and its successors, which once competed fiercely with Microsoft Windows. //
Archivists such as Jason Scott of the Internet Archive have stepped up to say that the files hosted on Hobbes are safe and already mirrored elsewhere. "Nobody should worry about Hobbes, I've got Hobbes handled," wrote Scott on Mastodon in early January. OS/2 World.com also published a statement about making a mirror. But it's still notable whenever such an old and important piece of Internet history bites the dust.
Like many archives, Hobbes started as an FTP site. "The primary distribution of files on the Internet were via FTP servers," Scott tells Ars Technica. "And as FTP servers went down, they would also be mirrored as subdirectories in other FTP servers. Companies like CDROM.COM / Walnut Creek became ways to just get a CD-ROM of the items, but they would often make the data available at http://ftp.cdrom.com to download." //
This story was updated on January 30 to reflect that the OS/2 archive likely started in 1990, according to people who ran the Hobbes server. The university ran Hobbes on one of two NeXT machines, the other called Calvin. //
IBM's SAA and CUA brought harmony to software design… until everyone forgot //
In the early days of microcomputers, everyone just invented their own user interfaces, until an Apple-influenced IBM standard brought about harmony. Then, sadly, the world forgot.
In 1981, the IBM PC arrived and legitimized microcomputers as business tools, not just home playthings. The PC largely created the industry that the Reg reports upon today, and a vast and chaotic market for all kinds of software running on a vast range of compatible computers. Just three years later, Apple launched the Macintosh and made graphical user interfaces mainstream. IBM responded with an obscure and sometimes derided initiative called Systems Application Architecture, and while that went largely ignored, one part of it became hugely influential over how software looked and worked for decades to come.
One bit of IBM's vast standard described how software user interfaces should look and work – and largely by accident, that particular part caught on and took off. It didn't just guide the design of OS/2; it also influenced Windows, DOS and DOS apps, and pretty much all software that followed. //
The problem is that developers who grew up with these pre-standardization tools, combined with various keyboardless fondleslabs where such things don't exist, don't know what CUA means. If someone's not even aware there is a standard, then the tools they build won't follow it. As the trajectories of KDE and GNOME show, even projects that started out compliant can drift in other directions.
This doesn't just matter for grumpy old hacks. It also disenfranchises millions of disabled computer users, especially blind and visually-impaired people. You can't use a pointing device if you can't see a mouse pointer, but Windows can be navigated 100 per cent keyboard-only if you know the keystrokes – and all blind users do. Thanks to the FOSS NVDA tool, there's now a first-class screen reader for Windows that's free of charge.
Most of the same keystrokes work in Xfce, MATE and Cinnamon, for instance. Where some are missing, such as the Super key not opening the Start menu, they're easily added. This also applies to environments such as LXDE, LXQt and so on. //
Menu bars, dialog box layouts, and standard keystrokes to operate software are not just some clunky old 1990s design to be casually thrown away. They were the result of millions of dollars and years of R&D into human-computer interfaces, a large-scale effort to get different types of computers and operating systems talking to one another and working smoothly together. It worked, and it brought harmony in place of the chaos of the 1970s and 1980s and the early days of personal computers. It was also a vast step forward in accessibility and inclusivity, opening computers up to millions more people.
Just letting it fade away due to ignorance and the odd traditions of one tiny subculture among computer users is one of the biggest mistakes in the history of computing.
On Thursday, the UK's Government Communications Headquarters (GCHQ) announced the release of previously unseen images and documents related to Colossus, one of the first digital computers. The release marks the 80th anniversary of the code-breaking machines that significantly aided the Allied forces during World War II. While some in the public knew of the computers earlier, the UK did not formally acknowledge the project's existence until the 2000s.
Colossus was not one computer but a series of computers developed by British scientists between 1943 and 1945. These 2-meter-tall electronic beasts played an instrumental role in breaking the Lorenz cipher, a code used for communications between high-ranking German officials in occupied Europe. The computers were said to have allowed allies to "read Hitler's mind," according to The Sydney Morning Herald. //
The technology behind Colossus was highly innovative for its time. Tommy Flowers, the engineer behind its construction, used over 2,500 vacuum tubes to create logic gates, a precursor to the semiconductor-based electronic circuits found in modern computers. While 1945's ENIAC was long considered the clear front-runner in digital computing, the revelation of Colossus' earlier existence repositioned it in computing history. (However, it's important to note that ENIAC was a general-purpose computer, and Colossus was not.)
Douglas Engelbart changed computer history forever on December 9, 1968.
A half century ago, computer history took a giant leap when Douglas Engelbart—then a mid-career 43-year-old engineer at Stanford Research Institute in the heart of Silicon Valley—gave what has come to be known as the "mother of all demos."
On December 9, 1968, at a computer conference in San Francisco, Engelbart showed off the first inklings of numerous technologies that we all now take for granted: video conferencing, a modern desktop-style user interface, word processing, hypertext, the mouse, and collaborative editing, among many others.
Even before his famous demonstration, Engelbart outlined his vision of the future more than a half-century ago in his historic 1962 paper, "Augmenting Human Intellect: A Conceptual Framework."
To open the 90-minute-long presentation, Engelbart posited a question that almost seems trivial to us in the early 21st century: "If in your office, you as an intellectual worker were supplied with a computer display, backed up by a computer that was alive for you all day, and was instantly responsible—responsive—to every action you had, how much value would you derive from that?"
Of course at that time, computers were vast behemoths that were light-years away from the pocket-sized devices that have practically become an extension of ourselves.
Engelbart, who passed away in 2013, was inspired by a now-legendary essay published in 1945 by Vannevar Bush, a physicist who had been in charge of the United States Office of Scientific Research and Development during World War II.
That essay, "As We May Think," speculated on a "future device for individual use, which is a sort of mechanized private file and library." It was this essay that stuck with a young Engelbart—then a Navy technician stationed in the Philippines—for more than two decades.
By 1968, Engelbart had created what he called the "oN-Line System," or NLS, a proto-Intranet. The ARPANET, the predecessor to the Internet itself, would not be established until late the following year.
Five years later, in 1973, Xerox debuted the Alto, considered to be the first modern personal computer. That, in turn, served as the inspiration for both the Macintosh and Microsoft Windows, and the rest, clearly, is history.
Evangelist of lean software and deviser of 9 programming languages and an OS was 89 //
In his work, the languages and tools he created, in his eloquent plea for smaller, more efficient software – even in the projects from which he quit – his influence on the computer industry has been almost beyond measure. The modern software industry has signally failed to learn from him. Although he has left us, his work still has much more to teach.
The C programming language was devised in the early 1970s as a system implementation language for the nascent Unix operating system. Derived from the typeless language BCPL, it evolved a type structure; created on a tiny machine as a tool to improve a meager programming environment, it has become one of the dominant languages of today. This paper studies its evolution.
Who would win: the world's fastest computer circa 1976, or a $35 single-board computer from 2012? //
"In 1978, the Cray-1 supercomputer cost $7 million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world," Longbottom writes of the device, designed as the flagship product of Seymour Cray's high-performance computing company. "The Raspberry Pi costs around $70 (CPU board, case, power supply, SD Card), weighs a few ounces, uses a five watt power supply and is more than 4.5 times faster than the Cray 1." //
The same benchmark tests show even bigger gains for newer devices in the Raspberry Pi family, as you'd expect: the Raspberry Pi 400, the newest device in Longbottom's performance table, showed a performance gain of up to 95.5 times the Cray-1's results — in a device which fits on the palm of your hand, rather than becoming a very expensive piece of uncomfortable office furniture.
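Those quoted figures invite a quick performance-per-watt comparison. Here's a back-of-the-envelope sketch using only the numbers above – the arithmetic is mine, not Longbottom's:

```python
# Figures quoted in the article above; the per-watt ratio is my own arithmetic.
cray1_power_w = 115_000   # Cray-1: 115 kW power supply
pi_power_w = 5            # original Raspberry Pi: ~5 W supply
pi_speedup = 4.5          # Pi benchmarked at ~4.5x the Cray-1

# The Pi is 4.5x faster while drawing 1/23,000th of the power.
perf_per_watt_ratio = pi_speedup * (cray1_power_w / pi_power_w)
print(f"Raspberry Pi performance-per-watt advantage: {perf_per_watt_ratio:,.0f}x")
```

That works out to roughly a hundred-thousand-fold per-watt advantage, before even counting the newer Pi models.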
IBM found itself in a similar predicament in the 1970s after working on a type of mainframe computer made to be a phone switch. Eventually the phone switch was abandoned in favor of a general-purpose processor, but not before its engineers stumbled onto the RISC concept that eventually became the IBM 801. //
They found that by eliminating all but a few instructions and running those without a microcode layer, performance gains were far larger than expected – up to three times as fast on comparable hardware. //
stormwyrm says:
January 1, 2024 at 1:56 am
Oddball special-purpose instructions like that are not what makes an architecture CISC though.
Special-purpose instructions are not what makes an architecture RISC or CISC. In all cases these weird instructions operate only on registers and likely take only one processor bus cycle to execute. Contrast this with the MOVSD instruction on x86 that moves data pointed to by the ESI register to the address in the EDI register and increments both registers to point to the next dword. Three bus cycles at least: one for the instruction fetch, one to load the data at the address in ESI, and another to store a copy of the data to the address in EDI. This is what is meant by “complex” in CISC. RISC processors in contrast have dedicated instructions that do load and store only, so that the majority of instructions run in only one bus cycle. //
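Stormwyrm's bus-cycle argument can be sketched as a toy model. This is illustrative Python, not an emulator – the register names follow the comment, but the memory/register layout is invented for the example:

```python
# Toy model of the x86 MOVSD string step described above:
# copy the dword at [ESI] to [EDI], then advance both pointers by 4.
def movsd(mem: dict, regs: dict) -> None:
    """One MOVSD: a single instruction, but it touches the bus three times
    (instruction fetch, load from [ESI], store to [EDI])."""
    mem[regs["edi"]] = mem[regs["esi"]]
    regs["esi"] += 4
    regs["edi"] += 4

# A load/store RISC ISA splits the same work into separate instructions,
# each needing at most one data-bus access (mnemonics are illustrative):
def risc_copy_dword(mem: dict, regs: dict) -> None:
    tmp = mem[regs["esi"]]    # LW   t0, 0(esi)
    mem[regs["edi"]] = tmp    # SW   t0, 0(edi)
    regs["esi"] += 4          # ADDI esi, esi, 4
    regs["edi"] += 4          # ADDI edi, edi, 4

regs = {"esi": 0x100, "edi": 0x200}
mem = {0x100: 0xDEADBEEF}
movsd(mem, regs)
print(hex(mem[0x200]), hex(regs["esi"]))  # 0xdeadbeef 0x104
```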
Nicholas Sargeant says:
January 1, 2024 at 4:00 am
Stormwyrm has it correct. When we started with RISC, the main benefit was that we knew how much data to pre-fetch into the pipeline – how wide an instruction was, how long the operands were – so the speed demons could operate at full memory bus capacity. The perceived problem with the brainiac CISC instruction sets was that you had to fetch the first part of the instruction to work out what the operands were, how long they were and where to collect them from. Many clock cycles would pass by to run a single instruction. RISC engines could execute any instruction in one clock cycle. So, the so-called speed demons could out-pace brainiacs, even if you had to occasionally assemble a sequence of RISC instructions to do the same as one CISC. Since it wasn’t humans working out the optimal string of RISC instructions, but a compiler, who would it trouble if reading assembler for RISC made so much less sense than reading CISC assembler?
Now, what we failed to comprehend was that CISC engines would get so fast that they could execute a complex instruction in the same, single external clock cycle – when supported by pre-fetch, heavy pipelining, out-of-order execution, branch target caching, register renaming, broadside cache loading and multiple redundant execution units. The only way that RISC could have outpaced CISC was to run massively parallel execution units (since they individually would be much simpler and more compact on the silicon). However, parallel execution was too hard for most compilers to exploit in the general case.
Sometimes you come across one of those ideas that at first appears to be some kind of elaborate joke, but as you dig deeper into it, it begins to make a disturbing kind of sense. This is where the idea of diagonally-oriented displays comes to the fore. Although not a feature that is generally supported by operating systems, [xssfox] used the xrandr (x resize and rotate) function in the Xorg display server to find the perfect diagonal display orientation to reach a happy balance between the pros and cons of horizontal and vertical display orientations.
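For the curious, xrandr's `--transform` flag accepts a row-major 3×3 homogeneous matrix, so an arbitrary diagonal angle boils down to a rotation matrix. A sketch of the computation – the `DP-1` output name and 22° angle are placeholders, and a real setup would also need a translation component to keep the desktop on-screen:

```python
import math

def xrandr_transform(angle_deg: float) -> list:
    """Row-major 3x3 rotation matrix in the form xrandr --transform expects."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [c, -s, 0,
            s,  c, 0,
            0,  0, 1]

# Print a command line for a hypothetical output and angle:
matrix = xrandr_transform(22)
print("xrandr --output DP-1 --transform " +
      ",".join(f"{v:.6f}" for v in matrix))
```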
The era of mainframe computers and directly programming machines with switches is long past, but plenty of us look back on that era with a certain nostalgia. Getting that close to the hardware and knowing precisely what’s going on is becoming a little bit of a lost art. That’s why [Phil] took it upon himself to build this homage to the mainframe computer of the 70s, which all but disappeared when PCs and microcontrollers took over the scene decades ago.
The machine, known as PlasMa, is not a recreation of any specific computer but instead looks to recreate the feel of computers of this era in a more manageable size.
The Internet started in the 1960s as a way for government researchers to share information. Computers in the '60s were large and immobile and in order to make use of information stored in any one computer, one had to either travel to the site of the computer or have magnetic computer tapes sent through the conventional postal system.
Another catalyst in the formation of the Internet was the heating up of the Cold War. The Soviet Union's launch of the Sputnik satellite spurred the U.S. Defense Department to consider ways information could still be disseminated even after a nuclear attack. This eventually led to the formation of the ARPANET (Advanced Research Projects Agency Network), the network that ultimately evolved into what we now know as the Internet. ARPANET was a great success but membership was limited to certain academic and research organizations who had contracts with the Defense Department. In response to this, other networks were created to provide information sharing.
January 1, 1983 is considered the official birthday of the Internet. Prior to this, the various computer networks did not have a standard way to communicate with each other. A new communications protocol was established called Transmission Control Protocol/Internet Protocol (TCP/IP). This allowed different kinds of computers on different networks to "talk" to each other. ARPANET and the Defense Data Network officially changed to the TCP/IP standard on January 1, 1983, hence the birth of the Internet. All networks could now be connected by a universal language.
PhilipStorry (Ars Scholae Palatinae, Subscriptor++)
So why aren’t banks jumping at the opportunity to cast off their mainframes and move to the cloud? Risk and conversion cost. As a rule, banks are risk-averse. They are often trailing adopters for new technology and only do so when under competitive or regulatory pressure.
And more importantly, IBM understands this.
Would it be cheaper to run your processes on someone's cloud? Maybe. Will you spend the next two decades tweaking and rewriting as those cloud services get changed or replaced? Definitely.
I'd love to see a proper study into how much a 1990's/2000s Java replacement for an old Mainframe COBOL program has cost so far in terms of redevelopment for later JVMs and other maintenance. Has anyone seen such a thing?
If there's one thing IBM offers, it's a kind of platform stability that can be measured in decades. You're not going to worry so much about what works with the latest OS version - this isn't that kind of environment.
Is that a good or a bad thing? We'll see both views in the comments here. But IBM's mainframes are the extreme expression of "If it's working today, it will work for years to come". And for some processes, that stability is very attractive - much more attractive than the costs of constant improvement and the risks it brings.
A brief comparison of z/OS and UNIX
z/OS concepts
What would we find if we compared z/OS® and UNIX®? In many cases, we'd find that quite a few concepts would be mutually understandable to users of either operating system, despite the differences in terminology.
For experienced UNIX users, Mapping UNIX to z/OS terms and concepts provides a small sampling of familiar computing terms and concepts. As a new user of z/OS, many of the z/OS terms will sound unfamiliar to you. As you work through this information center, however, the z/OS meanings will be explained and you will find that many elements of UNIX have analogs in z/OS.
Nobody minded for 20 years or so, until another student took action. //
There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton
Long a favorite saying of mine, one for which I couldn't find a satisfactory URL.
Like many good phrases, it's had a host of riffs on it. A couple of them I feel are worth adding to the page
Leon Bambrick @secretGeek
There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors.
9:20 AM · Jan 1, 2010
Mathias Verraes @mathiasverraes
There are only two hard problems in distributed systems: 2. Exactly-once delivery 1. Guaranteed order of messages 2. Exactly-once delivery
2:40 PM · Aug 14, 2015
They're rarely helpful. Actually, they usually add insult to injury. But what would computing be without 'em? Herewith, a tribute to a baker's dozen of the best (or is that worst?).
“To err is human, but to really foul things up you need a computer.” So goes an old quip attributed to Paul Ehrlich. He was right. One of the defining things about computers is that they–or, more specifically, the people who program them–get so many things so very wrong. Hence the need for error messages, which have been around nearly as long as computers themselves.
In theory, error messages should be painful at worst and boring at best. They tend to be cryptic; they rarely offer an apology even when one is due; they like to provide useless information like hexadecimal numbers and to withhold facts that would be useful, like plain-English explanations of how to right what went wrong. In multiple ways, most of them represent technology at its most irritating. //
- Abort, Retry, Fail? (MS-DOS)
In many ways, it remains an error message to judge other error messages by. It’s terse. (Three words.) It’s confusing. //
[UPDATE: Almost four hundred people have chimed into this discussion, and many nominated other error messages that are at least as worthy of celebration as the ones in the story. So celebrate ’em we did–please check out The 13 Other Greatest Error Messages of All Time.]
tl;dr: Non-spec DisplayPort cables were feeding back power from the monitor to the video card, causing my computer to reboot during POST. Buy this cable. //
When I got down on the floor to look inside the case, I saw something moving.
The case fan was spinning.
The freaking case fan was spinning. And I was holding the unplugged power cable in my hand. How is this even possible?!
I unplugged all of the PSU cords from my motherboard, and things shut down. The GPU light turned off, and the fan died.
The next day, I returned to Google, and for the hell of it, searched “computer is on even when unplugged.” I know, it sounds ridiculous, but it led me to the answer I had been looking for (in addition to a funny meme of The Pope casting holy water for someone with a similar issue).
I scrolled, and scrolled, pages and pages and URL after URL of articles on something called The DisplayPort Pin 20 problem started showing up. It felt so funny to search for months with no solution and then to almost be assaulted by the number of posts on the same issue I was having.
It turns out that cable manufacturers who don’t adhere directly to the official DisplayPort spec end up connecting the 20th pin in the cable on both sides. That pin carries – you guessed it – power!
This issue was serious. Enough power had been backflowing from my monitors into my GPU to run my case fans when the system was off.
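For a sense of scale, a quick sketch of how much power could be involved. The DP_PWR figures (3.3 V at up to 500 mA per cable) are my recollection of the DisplayPort spec, and the two-monitor setup matches the story above:

```python
# Rough arithmetic on how much power non-compliant cables could backfeed.
# DP_PWR figures (3.3 V, up to 500 mA) are from the DisplayPort spec,
# quoted from memory; treat them as approximate.
DP_PWR_VOLTS = 3.3
DP_PWR_AMPS = 0.5      # spec maximum per cable
monitors = 2

watts_per_cable = DP_PWR_VOLTS * DP_PWR_AMPS
total_watts = watts_per_cable * monitors
print(f"Up to {total_watts:.2f} W available to backfeed")
```

A few watts is comfortably enough to spin a low-RPM case fan, which fits what the author observed.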
The venerable PDP-11 minicomputer is still spry to this day, powering GE nuclear power-plant robots - and will do so for another 37 years.
That's right: PDP-11 assembler coders are hard to find, but the nuclear industry is planning on keeping the 16-bit machines ticking over until 2050 – long enough for a couple of generations of programmers to come and go.
Now that you've cleaned up the coffee spills and finished laughing, take a look here, at Vintage Computer forums, where GE's Chris Issel has resorted to seeking assembly programmers for the 1970s tech.
Wednesday 19th June 2013 08:28 GMT
John Smith 19 (Gold badge)
Coat
PDP 11 odds and ends.
The PDP 11 (like the PARC Alto) had a main processor built from standard 4 bit TTL "ALU" parts and their companion "register file." So 2nd, 3rd, 4th sourced. I'm not sure how many mfg still list them on their available list in the old standard 0.1" pin spacing.
El Reg ran a story that Corus (formerly British Steel) ran them for controlling all sorts of bits of their rolling mills but I can't recall if they are
I think the core role for this task is the refueling robots for the CANDU reactors. CANDU allows "on load" refuelling. The robots work in pairs locked onto each end of the pressurized pipes that carry the fuel and heavy water coolant/moderator. They then pressurize their internal storage areas, open the ends and one pushes new fuel bundles in while the other stores the old ones, before sealing the ends. However CANDU have been working on new designs with different fuel mixes (CANDU's special sauce (C Lewis Page) is that it's run with unenriched Uranium, which is much cheaper and does not need a bomb making enrichment facility) and new fuel bundle geometries, so time for a software upgrade.
And 128 users on a PDP 11/70. Certain customers ran bespoke OSes in the early 90s that could get 300+ when VMS could only support fewer than 20 on the same spec.
Note for embedded use this is likely to be RSX rather than VMS, which also hosted the ICI developed RTL/2, which was partly what hosted the BBC CEEFAX service for decades.
Yes, it's an anorak.. //
Wednesday 19th June 2013 18:20 GMT
Jamie Jones (Silver badge)
Thumb Up
Who's laughing?
I feel much better knowing this.
What is the alternative? Buggy software written by the "'Have you tried switching it off and on again" generation?
RSX11M - Dave Cutler
Anyone who read the RSX11M sources (driver writers especially) realised that Dave Cutler was a very very good programmer long before he worked on VMS and later Windows NT. He managed to get a multiuser protected general purpose operating system to work with a minimum memory footprint of under 32kbytes on machines with about the same CPU power as the chip on a credit card. (A 96kByte PDP 11/40 (1/3 mip) with 2 RK05 disks (2.4Mbyte each) could support 2 concurrent programmers - a PDP 11/70 (1 mip) with 1Mbyte and 2 RM03 disk packs (65Mbyte each) could support 10 or more.) During the many years that the CEGB used PDP-11 computers with RSX11M, I did not hear of a single OS failure that was not caused by a hardware fault - I wish that current systems were as good. //
Wednesday 19th June 2013 15:09 GMT
annodomini2
FAIL
Re: there are alternatives
They would never redesign the system, if the system has issues, they are known and fixes are well known.
Changing the system design introduces potential risks and unknowns into the system.
It's not about Zero failure, it's about safe and predictable failure. //
Wednesday 19th June 2013 07:53 GMT
Bob Dunlop
Hey, I was taught assembler programming using a PDP-11.
After its nice clean structure, the mess that was 8086 code came as quite a shock.