Thursday 2nd April 2020 15:11 GMT
BJC
Millisecond roll-over?
So, what is the probability that the timing for these events is stored as milliseconds in a 32 bit structure?
Re: Millisecond roll-over?
My first thought too, but that rolls over after 49.7 days.
Still, they could have it wrong again.
Re: Millisecond roll-over?
I suspect that it is a millisecond roll over and someone at the FAA picked 51 days instead of 49.7 because they don't understand software any better than Boeing.
Thursday 2nd April 2020 17:05 GMT
the spectacularly refined chap
Re: Millisecond roll-over?
Could well be something like that. The earlier 248-day issue is exactly the same duration that older Unix hands will recognise as the 'lbolt' issue: a variable holding the number of clock ticks since boot overflows a signed 32-bit int after about 248 days, assuming clock ticks are at 100Hz, as was usual back then and is still quite common.
See e.g. here. The issue has been known about and the mitigation well documented for at least 30 years. Makes you wonder about the monkeys they have coding this stuff. //
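As a sanity check on the two numbers above (simple arithmetic, not from the original thread), a signed 32-bit counter of 100Hz ticks and an unsigned 32-bit millisecond counter wrap after roughly 248.6 and 49.7 days respectively:

#include <stdio.h>
#include <stdint.h>

/* Back-of-the-envelope arithmetic for the two rollover periods discussed
   above: a signed 32-bit counter of 100Hz clock ticks (the old Unix 'lbolt'
   case) and an unsigned 32-bit millisecond counter. */
int main(void)
{
    const double seconds_per_day = 86400.0;

    /* signed 32-bit tick counter, 100 ticks per second */
    double lbolt_days = ((double)INT32_MAX + 1.0) / 100.0 / seconds_per_day;

    /* unsigned 32-bit millisecond counter */
    double millis_days = ((double)UINT32_MAX + 1.0) / 1000.0 / seconds_per_day;

    printf("100Hz signed tick counter wraps after ~%.1f days\n", lbolt_days);   /* ~248.6 */
    printf("32-bit millisecond counter wraps after ~%.2f days\n", millis_days); /* ~49.71 */
    return 0;
}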
bombastic bob (Silver badge)
Re: Millisecond roll-over?
I've run into that problem (32-bit millisecond timer rollover issues) with microcontrollers; it's solved by doing the math correctly:
capturing the tick count:
if((uint32_t)(Ticker() - last_time) >= some_interval)
and
last_time = Ticker(); // for when it crosses the threshold
[ alternatively last_time += some_interval when you want it to be more accurate ]
using a rollover time:
if((int32_t)(Ticker() - schedule_time) >= 0)
and
schedule_time += schedule_interval; // for when it crosses the threshold
(this is how Linux kernel does its scheduled events, internally, as I recall, except it compares to jiffies which are 1/100 of a second if I remember correctly)
(examples in C of course, the programming lingo of choice of the gods!)
Do the math like this and it should work, as long as you use uint32_t data types for the 'Ticker()' function and for the 'schedule_time' or 'last_time' vars.
If you are an IDIOT and don't do unsigned comparisons "similar to what I just demonstrated", you can predict uptime-related problems at about... 49.71 days [assuming milliseconds].
I think I remember a 'millis()' or similarly named function in VxWorks. It's been over a decade since I've worked with it, though. VxWorks itself was pretty robust back then, used in a lot of routers and other devices that "stay on all the time", so its track record is pretty good.
So the most likely scenario is what you suggested - a millisecond timer rolling over (with a 32-bit var storing info) and causing bogus data to accumulate after 49.71 days, which doesn't (for some reason) TRULY manifest itself until about 51 days...
Anyway, good catch.
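Pulling the fragments above together, a minimal self-contained sketch of the wraparound-safe comparison (Ticker(), last_time and some_interval are the placeholder names from the comment; the tick source below is a stand-in for demonstration, not any particular RTOS API):

#include <stdint.h>
#include <stdio.h>

/* Stand-in for a free-running 32-bit millisecond counter, e.g. a hardware
   tick register; it wraps back to 0 after about 49.7 days. Here it simply
   pretends 10ms elapse per call so the demo runs instantly. */
static uint32_t Ticker(void)
{
    static uint32_t fake_ms = 0;
    return fake_ms += 10;
}

int main(void)
{
    const uint32_t some_interval = 1000;   /* 1 second */
    uint32_t last_time = Ticker();

    for (int i = 0; i < 1000; i++) {
        /* Unsigned subtraction is well defined modulo 2^32, so this
           comparison keeps working across the 49.7-day wraparound. */
        if ((uint32_t)(Ticker() - last_time) >= some_interval) {
            printf("interval elapsed\n");
            last_time += some_interval;   /* keeps the schedule from drifting */
        }
    }
    return 0;
}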
Evangelist of lean software and deviser of 9 programming languages and an OS was 89 //
In his work, the languages and tools he created, in his eloquent plea for smaller, more efficient software – even in the projects from which he quit – his influence on the computer industry has been almost beyond measure. The modern software industry has signally failed to learn from him. Although he has left us, his work still has much more to teach.
The C programming language was devised in the early 1970s as a system implementation language for the nascent Unix operating system. Derived from the typeless language BCPL, it evolved a type structure; created on a tiny machine as a tool to improve a meager programming environment, it has become one of the dominant languages of today. This paper studies its evolution.
IBM found themselves in a similar predicament in the 1970s after working on a mainframe-class computer intended to be a telephone switch. The phone switch was eventually abandoned in favor of a general-purpose processor, but not before they stumbled onto the RISC processor that eventually became the IBM 801. //
They found that by eliminating all but a few instructions and running those without a microcode layer, performance gains were far greater than expected: up to three times as fast on comparable hardware. //
stormwyrm says:
January 1, 2024 at 1:56 am
Oddball special-purpose instructions like that are not what makes an architecture CISC though.
Special-purpose instructions are not what makes an architecture RISC or CISC. In all cases these weird instructions operate only on registers and likely take only one processor bus cycle to execute. Contrast this with the MOVSD instruction on x86, which moves the data pointed to by the ESI register to the address in the EDI register and increments both registers to point to the next dword. That is at least three bus cycles: one for the instruction fetch, one to load the data at the address in ESI, and another to store a copy of the data to the address in EDI. This is what is meant by “complex” in CISC. RISC processors, in contrast, have dedicated instructions that do load and store only, so that the majority of instructions run in only one bus cycle. //
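To make the bus-cycle contrast concrete, here is roughly what a REP MOVSD copy does, written out as the separate load, store and pointer-increment steps that a load/store RISC would issue as individual instructions (a hypothetical C illustration, not compiler output):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Copy n dwords from src to dst. On x86 the loop body is a single MOVSD,
   which both moves the data and advances the source/destination registers;
   on a load/store RISC the load, the store and the two increments are
   separate instructions. */
static void copy_dwords(const uint32_t *src, uint32_t *dst, size_t n)
{
    while (n--) {
        uint32_t tmp = *src;  /* load from the source address (ESI) */
        *dst = tmp;           /* store to the destination address (EDI) */
        src++;                /* advance the source pointer */
        dst++;                /* advance the destination pointer */
    }
}

int main(void)
{
    uint32_t a[4] = {1, 2, 3, 4}, b[4] = {0};
    copy_dwords(a, b, 4);
    printf("%u %u %u %u\n", (unsigned)b[0], (unsigned)b[1], (unsigned)b[2], (unsigned)b[3]);
    return 0;
}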
Nicholas Sargeant says:
January 1, 2024 at 4:00 am
Stormwyrm has it correct. When we started with RISC, the main benefit was that we knew how much data to pre-fetch into the pipeline – how wide an instruction was, how long the operands were – so the speed demons could operate at full memory bus capacity. The perceived problem with the brainiac CISC instruction sets was that you had to fetch the first part of the instruction to work out what the operands were, how long they were and where to collect them from. Many clock cycles would pass by to run a single instruction. RISC engines could execute any instruction in one clock cycle. So, the so-called speed demons could out-pace brainiacs, even if you had to occasionally assemble a sequence of RISC instructions to do the same as one CISC. Since it wasn't humans working out the optimal string of RISC instructions, but a compiler, who would it trouble if reading assembler for RISC made so much less sense than reading CISC assembler?
Now, what we failed to comprehend was that CISC engines would get so fast that they could execute a complex instruction in the same, single external clock cycle – when supported by pre-fetch, heavy pipelining, out-of-order execution, branch target caching, register renaming, broadside cache loading and multiple redundant execution units. The only way that RISC could have outpaced CISC was to run massively parallel execution units (since they individually would be much simpler and more compact on the silicon). However, parallel execution was too hard for most compilers to exploit in the general case.
There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton
Long a favorite saying of mine, one for which I couldn't find a satisfactory URL.
Like many good phrases, it's had a host of riffs on it. A couple of them, I feel, are worth adding to the page:
Leon Bambrick @secretGeek
There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors.
9:20 AM · Jan 1, 2010
Mathias Verraes @mathiasverraes
There are only two hard problems in distributed systems: 2. Exactly-once delivery 1. Guaranteed order of messages 2. Exactly-once delivery
2:40 PM · Aug 14, 2015