
Comment by 13of40

7 years ago

Similar story with me. I got an Amiga 1000 and did a fair bit of assembly coding on it, then later ended up writing some 16-bit x86 assembly for school. Going from sixteen 32-bit registers to suddenly having to make do with AX, BX, CX, and DX (and don't forget they all have slightly different purposes!) was like being brutally shoved back into the 80's.
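For anyone who hasn't written 16-bit x86: those "slightly different purposes" come from instructions that are implicitly tied to particular registers, which is what makes the four general registers feel less general than they look. A rough illustration (assembler syntax here is illustrative, not tied to any particular toolchain):

```asm
; 16-bit x86: several instructions have implicit register operands
mov ax, 1234        ; AX is the implicit accumulator
mov bx, 10
mul bx              ; MUL names no destination: it always computes DX:AX = AX * src
mov cx, 8           ; CX is the implicit counter
again:
    shl ax, 1
    loop again      ; LOOP implicitly decrements CX and jumps while CX != 0
```

On the 68000, by contrast, any of D0–D7 could play any of those roles.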

Well, history has shown that neither RISC nor CISC was actually the better choice, since both models more or less converged on a sort of hybrid design years ago.

  • And what does this have to do with the Amiga?

    • At a simplistic level, the difference between RISC and CISC processor design boils down to having many registers and a reduced set of simple instructions, versus fewer registers and extra specialized instructions.

      What was being described is moving from programming a RISC-like processor to a CISC-like one, and feeling constrained after doing so. It likely does feel more constraining (I don't really remember how I felt about it back when I did it, but I also went in the other direction, and only in the context of classwork), but in the end, most people are programming at a level removed from that anyway.

      There used to be quite a lot of argument about which design was better (IMO mostly fueled by Macs running a RISC processor and Windows running a CISC processor, at least until Apple switched to x86). I find it slightly comical that both designs ended up in a fairly similar place, though (with RISC processors adding extra instructions, and CISC processors adding more registers, even if mostly just logical registers).
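      As a concrete sketch of the trade-off: a classic CISC can operate on memory directly, while a load/store RISC spells the same work out across registers (x86 on one side, roughly MIPS-style on the other; both snippets are illustrative):

      ```asm
      ; CISC (x86): one instruction reads memory, adds, and writes back
      add [counter], 1

      ; RISC (MIPS-style): load/store architecture, so three instructions
      lw   $t0, counter     ; load from memory into a register
      addi $t0, $t0, 1      ; add in registers
      sw   $t0, counter     ; store back to memory
      ```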


And seriously, who thought that having "MOV A, B" mean "A = B" was a good idea?

68000 has it right!
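For anyone unfamiliar with the two conventions: Intel's syntax puts the destination first, while the 68000's puts it last, so the same register copy reads in opposite directions:

```asm
; Intel x86 syntax: destination first
mov ax, bx          ; AX = BX ("MOV A, B" means A = B)

; Motorola 68000 syntax: source first, destination last
move.w d1, d0       ; D0 = D1 (reads left to right: "move D1 into D0")
```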