Finally ARM? Water under the bridge.

Shawn R. Hartsock
2 min read · Dec 2, 2020


In my CS degree, 25+ years ago now, I learned how to create a processor’s design mask, building my own 4-bit CPU and assembly language. We moved on to study CISC and RISC designs and their assembly languages, then built compilers for them, for both real and made-up languages. These were arcane tools even when I learned them, and I sense they are even more so now.

I spent a few months using the Oculus Quest SDK in Unity3D, and 30+ years after I learned software development, I was amazed at how completely a novice today might be shielded from any direct code whatsoever. They could ostensibly accomplish what I would call computer programming, yet type virtually no code themselves. Instead, a suite of tools helps that developer point and click their way to kind-of, sort-of working code.

These layers of abstraction between the software developer of today and the hardware actually help the case for things like ARM. A reduced instruction set offers fewer operations, while a bigger instruction set better matches how a developer thinks in assembly. But when so few developers ever touch assembly, that trade-off hardly seems relevant anymore.

The accumulator design in the x86 architecture gives it a richer instruction set, but many instructions deposit their results in odd, specific registers. The MIPS architecture (not quite ARM, but a cousin) that I studied had fewer instructions and no such register specificity. That made some things easier to program directly and others harder. It also meant that unless you were super-duper disciplined, you might lose track of which registers were being used for what. And … yet none of this seems relevant anymore.
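To make that register specificity concrete, here is a minimal C sketch. The instruction behavior described in the comments is a general illustration of the two styles, not output from any particular compiler or build:

```c
#include <stdint.h>

/* A plain 64-bit multiply. On x86-64, the classic one-operand MUL
 * instruction is accumulator-style: one operand must already be in
 * RAX, and the product always lands in RDX:RAX, whether you like it
 * or not. On a RISC-style ISA such as MIPS32 or ARM64, multiply
 * takes explicit registers (e.g. ARM64: mul x0, x1, x2), so the
 * compiler, or a careful human, can put the result wherever it
 * likes.
 */
uint64_t product(uint64_t a, uint64_t b) {
    return a * b;
}
```

The implicit registers are the “register specificity” described above; the explicit form is the freedom, and the bookkeeping burden, that an ISA like MIPS handed to the programmer.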

The old argument between CISC and RISC is really about the amount of energy used to accomplish each set of tasks versus the ease of comprehension for a developer.

An analogy for a non-programmer…

Imagine a chip as a maze full of water. The water is the electricity in the chip. If one maze is shorter, shallower, or simpler, it will use less water than a bigger, more complex maze.

There’s a whole lot of oversimplification there, but in a nutshell it explains why the Apple M1 can have an astounding battery life compared to its siblings on a richer CISC chip. And with virtualization and bytecode-translation projects (things that hardly matter anymore in the human factors of computer software creation), I don’t see why you couldn’t just create a hypervisor-like product that bridges any missing software gaps.

CISC versus RISC is water under the bridge. If an ARM chip saves power, and you can bridge over any show-stoppers, why wouldn’t you use it?
