The C programming language was devised in the early 1970s as a system implementation language for the nascent Unix operating system. Derived from the typeless language BCPL, it evolved a type structure; created on a tiny machine as a tool to improve a meager programming environment, it has become one of the dominant languages of today. This paper studies its evolution.
IBM found themselves in a similar predicament in the 1970s after working on a mainframe-class computer intended to serve as a telephone switch. The telephone-switch project was eventually abandoned in favor of a general-purpose processor, but not before they stumbled onto the RISC design that eventually became the IBM 801. //
They found that by eliminating all but a few instructions and running those without a microcode layer, the performance gains were far greater than expected: up to three times as fast on comparable hardware. //
stormwyrm says:
January 1, 2024 at 1:56 am
Oddball special-purpose instructions like that are not what makes an architecture CISC though.
Special-purpose instructions are not what makes an architecture RISC or CISC. In all cases these weird instructions operate only on registers and likely take only one processor bus cycle to execute. Contrast this with the MOVSD instruction on x86, which moves the data pointed to by the ESI register to the address in the EDI register and increments both registers to point to the next dword. Three bus cycles at least: one for the instruction fetch, one to load the data at the address in ESI, and another to store a copy of the data to the address in EDI. This is what is meant by “complex” in CISC. RISC processors, in contrast, have dedicated instructions that do load and store only, so that the majority of instructions run in only one bus cycle. //
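To make the contrast concrete, here is a small C sketch (the function name copy_dwords is mine, purely for illustration): an x86 compiler may lower the loop body to the string-move form described above (MOVSD, or REP MOVSD for the whole loop), while a load/store RISC target has to spell out the load, the store and the pointer increments as separate, individually simple instructions.

#include <stddef.h>
#include <stdint.h>

/* Copy n 32-bit words from src to dst, advancing both pointers as we go.
 * Semantically, this is what MOVSD in a loop does on x86. */
void copy_dwords(uint32_t *dst, const uint32_t *src, size_t n)
{
    while (n--) {
        uint32_t tmp = *src;  /* RISC: one load instruction                */
        *dst = tmp;           /* RISC: one store instruction               */
        src++;                /* RISC: add-immediate on the source pointer */
        dst++;                /* RISC: add-immediate on the dest pointer   */
        /* x86 can fold the load, the store and both increments
         * into the single MOVSD instruction. */
    }
}

Each of those RISC instructions touches the memory bus at most once, which is exactly the property the comment above is describing.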
Nicholas Sargeant says:
January 1, 2024 at 4:00 am
Stormwyrm has it correct. When we started with RISC, the main benefit was that we knew how much data to pre-fetch into the pipeline – how wide an instruction was, how long the operands were – so the speed demons could operate at full memory bus capacity. The perceived problem with the brainiac CISC instruction sets was that you had to fetch the first part of the instruction to work out what the operands were, how long they were and where to collect them from. Many clock cycles would pass by to run a single instruction. RISC engines could execute any instruction in one clock cycle. So the so-called speed demons could out-pace the brainiacs, even if you occasionally had to assemble a sequence of RISC instructions to do the same as one CISC instruction. Since it wasn’t humans working out the optimal string of RISC instructions, but a compiler, who would it trouble if reading assembler for RISC made so much less sense than reading CISC assembler?
Now, what we failed to comprehend was that CISC engines would get so fast that they could execute a complex instruction in the same, single external clock cycle – when supported by pre-fetch, heavy pipelining, out-of-order execution, branch target caching, register renaming, broadside cache loading and multiple redundant execution units. The only way RISC could have outpaced CISC was to run massively parallel execution units (since they would individually be much simpler and more compact on the silicon). However, parallel execution was too hard for most compilers to exploit in the general case.
Cosmopolitan Libc makes C a build-anywhere run-anywhere language, like Java, except it doesn't need an interpreter or virtual machine. Instead, it reconfigures stock GCC and Clang to output a POSIX-approved polyglot format that runs natively on Linux + Mac + Windows + FreeBSD + OpenBSD + NetBSD + BIOS on AMD64 and ARM64 with the best possible performance.
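As a sketch of what that looks like in practice (assuming the cosmocc compiler wrapper that the Cosmopolitan project distributes; the exact invocation may differ between releases), the C source itself needs nothing special:

/* hello.c - ordinary portable C; nothing Cosmopolitan-specific here. */
#include <stdio.h>

int main(void)
{
    printf("hello from one binary that runs on Linux, macOS, Windows and the BSDs\n");
    return 0;
}

Compiled with something like cosmocc -o hello.com hello.c, the single output file is the polyglot executable described above; the portability lives in the toolchain and the libc, not in the program source.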
This is the project webpage for the Netwide Assembler (NASM), an assembler for the x86 CPU architecture portable to nearly every modern platform, and with code generation for many platforms old and new.
A dispute between a prominent open-source developer and the maker of software used to manage Linux kernel development has forced Linux creator Linus Torvalds to embark on a new software project of his own. The new effort, called "git," began last week after a licensing dispute forced Torvalds to abandon the proprietary BitKeeper software he had used since 2002 to manage Linux kernel development.
The conflict touches on the difference between open-source developers who view Linux's open, collaborative approach as a technically superior way to build software and advocates of free software who see the ability to access and change source code as a fundamental freedom.
As a result of the dispute, Torvalds is now working with other Linux developers to create software that can quickly make changes to the 17,000 files that make up the Linux kernel, the central component of the Linux operating system. "Git, to some degree, was designed on the principle that everything you ever do on a daily basis should take less than a second," Torvalds said in an e-mail interview.
Did you hear the news? Firefox development is moving from Mercurial to Git. While the decision is far from being mine, and I was barely involved in the small incremental changes that ultimately led to this decision, I feel I have to take at least some responsibility. And if you are one of those who would rather use Mercurial than Git, you may direct all your ire at me.
But let's take a step back and review the past 25 years leading to this decision. You'll forgive me for skipping some details and any possible inaccuracies. This is already a long post; while I could have been more thorough, even I think that would have been too much. This is also not an official Mozilla position, only my personal perception and recollection as someone who was involved at times, but mostly an observer from a distance.
Sometimes people forget, especially software people, that work is as much about programming the people as it is the machines.
There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton
Long a favorite saying of mine, one for which I couldn't find a satisfactory URL.
Like many good phrases, it's had a host of riffs on it. A couple of them I feel are worth adding to the page.
Leon Bambrick @secretGeek
There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors.
9:20 AM · Jan 1, 2010
Mathias Verraes @mathiasverraes
There are only two hard problems in distributed systems: 2. Exactly-once delivery 1. Guaranteed order of messages 2. Exactly-once delivery
2:40 PM · Aug 14, 2015
On 18/03/2022 00:07, Colin Percival wrote:
On 3/17/22 08:17, Arthur Chance wrote:
Is it possible to invalidate an existing tarsnap key so it cannot be
used in future. I have a key for a decommissioned machine so it's no
longer needed and hypothetically it could be used for DoS attack (by
creating bogus archives and draining the account funds). Obviously this
is impossible unless the key leaks somehow, but operational paranoia
would suggest invalidating it would be a good idea.

The API for disabling keys is "send Colin an email". ;-)
So API = Application Programmer's Initiative. :-)
I am a software engineer, and have been for most of my life.
One afternoon I was thinking about my tendency to obsess over minor technical details. I'm not alone in this tendency, but I have no doubt that many others — even some in my profession — view it as a peculiar form of madness. What metaphor, I wondered, could possibly convey why it was so difficult to let go of seemingly trivial issues?
As it happens, I'd recently been discussing Douglas Hofstadter's Gödel, Escher, Bach with a friend. It was that book which introduced me to Zen kōans.
Thoughts collided, and the first of these pseudo-kōans was born. Consider it an experiment: an attempt at merging vocation and avocation. //
Although the title of this collection is a rather obvious play on The Gateless Gate (a historically important collection of Zen kōans), please note that the offerings here are not Zen kōans, nor do I intend any disrespect to practitioners of Zen Buddhism.
Here’s a very 1960s data visualization of just how much code they wrote—this is Margaret Hamilton, director of software engineering for the project, standing next to a stack of paper containing the software: //
As enormous and successful as Burkey’s project has been, however, the code itself remained somewhat obscure to many of today’s software developers. That was until last Thursday (July 7), when former NASA intern Chris Garry uploaded the software in its entirety to GitHub, //
But as the always-sharp joke detectives in Reddit’s r/ProgrammerHumor section found, many of the comments in the AGC code go beyond boring explanations of the software itself. They’re full of light-hearted jokes and messages, and very 1960s references.
One of the source code files, for example, is called
BURN_BABY_BURN--MASTER_IGNITION_ROUTINE
and one line of the code carries the comment:
TC BANKCALL # TEMPORARY, I HOPE HOPE HOPE
that one in the corner says:
Re: Makes you proud
The one that gets me is when you find and fix a long-standing bug in some code and wonder how in hell it managed to keep going in its original state for so many years!
You have to tread carefully around those sorts of bugs, just in case it turns out to be a Schroedinbug - and you have just observed that, unfixed, it can't possibly work. Which collapses its wave function and, through spooky action at a distance, every running copy of that program will suddenly stop working!