Well we built some instrumentation around it at work back in the 90s and still use it today. It was ahead of its time. It had hardware loops, a hardware call stack, hardware circular buffer addressing, and a DMA controller. In one instruction, you could do 2 FPU operations and a memory move with a DMA transfer going on in the background. It was an insane architecture. And it could handle 3 separate memory spaces, so even though it’s a 32-bit chip, you could access well over 4 GB of RAM.
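To give a feel for what circular-buffer addressing buys you: the address generator wraps a pointer around a fixed-length buffer automatically on every access. In plain C you pay for that wrap yourself every time, roughly like the sketch below (all the names and sizes here are made up purely for illustration, not taken from our code):

    /* Software stand-in for hardware circular-buffer addressing.
     * On the DSP the address generator does the wrap implicitly on
     * every access; in C you pay for it on every call. */
    #define DELAY_LEN 256                  /* illustrative buffer length */

    static float delay_line[DELAY_LEN];
    static unsigned idx = 0;

    float delay_push(float sample) {
        float oldest = delay_line[idx];    /* value about to be overwritten */
        delay_line[idx] = sample;          /* store the new sample */
        idx = (idx + 1) % DELAY_LEN;       /* manual wrap -- the hardware does this step for you */
        return oldest;
    }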
The best thing about chips of that era, though, is that you could tell ahead of time exactly how long your code would take to execute. You just type the numbers into a spreadsheet and add up the instruction cycle counts. That kind of analysis is hopeless these days, but it informed the design of the instrument. More recently, we’ve been looking at RISC-V for a newer generation, but it’s harder to predict ahead of time how it will perform.
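(To make the spreadsheet point concrete: the analysis really is just arithmetic like the sketch below. The cycle counts, loop count, and 40 MHz clock are made-up illustrative numbers, not figures from the actual part.)

    /* Worst-case timing by hand: sum the per-instruction cycle counts,
     * multiply by the iteration count, then convert to time via the clock. */
    #include <stdio.h>

    int main(void) {
        const double clock_hz = 40e6;            /* hypothetical 40 MHz core clock */
        const int cycles[] = { 1, 1, 1, 2, 1 };  /* cycles per instruction in one loop pass (illustrative) */
        const int iterations = 1024;             /* hardware loop count */

        long total = 0;
        for (size_t i = 0; i < sizeof cycles / sizeof cycles[0]; i++)
            total += cycles[i];
        total *= iterations;

        printf("%ld cycles = %.1f us\n", total, total / clock_hz * 1e6);
        return 0;
    }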
I know how you feel, I once made the Kessler run in under twelve parsecs myself.
Whatcha coding that needs to be so precisely timed? Something nuclear? I heard once that nuclear plants have something called real-time operating systems that allow for that type of timing prediction.
I can’t say too much about it, but we’re in the mining sector.
And yeah, if I had to do it all over again from scratch, I’d definitely be looking at a real-time OS. There just weren’t many options back in the day besides coding it all yourself. Even now, I’d have to benchmark the OS to see what its latency is actually like. We had it down in the microsecond range with our custom OS, but if it’s more like milliseconds with an off-the-shelf OS, for example, that would change the whole ball game.
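If anyone wants to try that benchmark themselves, the quick-and-dirty check on a POSIX system is to request a short sleep over and over and record how late each wakeup actually lands. This is just a rough sketch of the idea, not our benchmark code:

    /* Rough scheduling-jitter probe: request a 1 ms sleep repeatedly
     * and track the worst case of how far past the deadline we wake up. */
    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <time.h>

    #define PERIOD_NS 1000000L   /* 1 ms requested sleep */
    #define SAMPLES   1000

    int main(void) {
        long worst_ns = 0;
        for (int i = 0; i < SAMPLES; i++) {
            struct timespec before, after, req = { 0, PERIOD_NS };
            clock_gettime(CLOCK_MONOTONIC, &before);
            clock_nanosleep(CLOCK_MONOTONIC, 0, &req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &after);

            long late_ns = (after.tv_sec - before.tv_sec) * 1000000000L
                         + (after.tv_nsec - before.tv_nsec) - PERIOD_NS;
            if (late_ns > worst_ns)
                worst_ns = late_ns;
        }
        printf("worst-case wakeup latency: %ld us over %d samples\n",
               worst_ns / 1000, SAMPLES);
        return 0;
    }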
Super interesting read. Thanks for your time!