Comment by whizzter
3 days ago
Was it necessarily a dead end? Consider how Intel and later AMD managed to upgrade and re-invent x86, which up until x64 retained so much of the original x86 instruction encoding and heritage (heck, even x64 retains some of the encoding characteristics).
Had the Amiga retained relevance for longer, and without a push for PowerPC, I don't see a reason why 68k wouldn't have been extended. Heck, the FPGA-based Apollo 68080 would've matched late-1990s Pentium IIs, and FPGAs aren't speed monsters to begin with.
The 68060 is pretty good, to be fair, but it never ended up being widely used, and Motorola definitely saw PPC as the future.
Maybe if these theoretical new 68k Amigas had become a huge market hit they could have taken the arch further and it could have remained competitive, but all the other 68k shops had already pretty much given up or moved on (Apple was going PPC, Sun went SPARC, NeXT gave up on its 68k hardware, Atari was exiting the computer business entirely, etc.), so I don't know that the market would have been there to support development against the vast competition from the huge x86 bastion on one hand and the multitude of RISC newcomers on the other.
Right, and I think that is the junction. Had Motorola, as a chip company, not been enamoured with the new shiny and instead realized that it already had a huge market that just wanted improved performance for its existing software, pushing 68k improvements instead of a new PPC architecture, both Apple and (a better-managed) Commodore could've been competitive with improved 68k designs.
Remember, Intel also barked up the wrong tree with Itanium for 64-bit and didn't really let go until AMD forced its hand with x64.
The argument is that 68k is "CISCier" than x86, the addressing modes in particular, so making a performant modern out-of-order superscalar core for it would be harder than for x86.
Don't agree there, considering x86 has ModRM, size prefixes (16/32 and later 64-bit operand sizes), SIB (with a prefix for 32-bit), segment/selector prefixes, etc.
The biggest place where the 68000 is more complicated is perhaps postincrement, but considering all the cruft 32-bit x86 inherited from the 8086, compared to the "clean" 32-bit variations of the 68000, I'd call it a toss-up at best, leaning toward the 68000 being easier (stuff like PC-relative addressing also exists on the RISC-y ARM arch).
Apart from addressing modes, the sheer number of weird x86 instructions and prefixes has always been the bane of low-power x86.
I believe that. But Commodore could have plunked a cheap 68020 into their machines for backwards compatibility (like how the MSX2 had an MSX1 SoC inside, the PS2 had a PS1 SoC, the PS3 had a PS2 SoC, and so on) and put another "real" socketed CPU in as a co-processor. Or made big-box machines with CPUs on PCI cards, for infinite expansion options. "True" multitasking, perfect for CAD, 3D rendering, and non-linear video editing. It would have been very cool to have an architecture where the UI could be rendered with almost hard real-time guarantees while the heavy processing happened elsewhere.
This is almost exactly what the plan was, until C= went out of business:
https://en.wikipedia.org/wiki/Amiga_Hombre_chipset
It was going to be HP PA-RISC based and have an AGA Amiga SoC, including a 68k core.