Comment by kens

That's an interesting question. Keep in mind that the 8086 was built as a stopgap processor to sell until Intel's iAPX 432 "micro-mainframe" processor was completed. Moreover, the 8086 was designed to be assembly-language compatible with the 8080 (through translation software) so it could take advantage of existing software, and to stay compatible with the 8080's 16-bit addressing while supporting more memory.

Given those constraints, the design of the 8086 makes sense. In hindsight, though, considering that the x86 architecture has lasted for decades, there are a lot of things that could have been done differently. For example, the instruction encoding is a mess and left no easy path for extending the instruction set. Trapping on invalid instructions would have been a good idea. The BCD instructions are not useful nowadays. Treating a register as two overlapping 8-bit registers (AL, AH) makes register renaming difficult in an out-of-order execution system. A flat address space would have been much nicer than segmented memory, as you mention. The distinction between I/O operations and memory operations was inherited from the Datapoint 2200; memory-mapped I/O would have been better. Overall, a more RISC-like architecture would have been good.
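
For the segmentation point, here's a minimal sketch of the 8086's real-mode address calculation, written in C for illustration (the helper name phys_addr is mine, not anything from the chip's documentation):

```c
#include <stdint.h>
#include <stdio.h>

/* 8086 real-mode translation: the 16-bit segment is shifted left
   4 bits and added to the 16-bit offset, yielding a 20-bit physical
   address. Note that many segment:offset pairs alias the same byte. */
static uint32_t phys_addr(uint16_t segment, uint16_t offset)
{
    return (((uint32_t)segment << 4) + offset) & 0xFFFFF; /* wraps at 1 MB */
}

int main(void)
{
    /* Two different segment:offset pairs, one physical byte. */
    printf("%05X\n", phys_addr(0x1234, 0x0010)); /* prints 12350 */
    printf("%05X\n", phys_addr(0x1235, 0x0000)); /* prints 12350 */
    return 0;
}
```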

I can't really fault the 8086 designers for their decisions, since they made sense at the time. But if you could go back in a time machine, you could certainly give them a lot of advice!

As someone who did assembly coding on the 8086/286/386 in the 90s, I found the xH and xL registers quite useful for writing efficient code. Maybe 64-bit mode should have gotten rid of them completely, though, rather than only when a REX prefix is present.
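
To make the overlap concrete, here's a rough C model of AX as two byte halves (this assumes a little-endian host so the union layout matches the 8086 convention; it's a sketch of the semantics, not the hardware):

```c
#include <stdint.h>
#include <stdio.h>

/* AX overlaps AL (low byte) and AH (high byte). A write to AL or AH
   is a partial update of AX, so a later read of AX depends on both
   byte writes -- the property that complicates register renaming. */
typedef union {
    uint16_t x;                   /* AX */
    struct { uint8_t l, h; } b;   /* AL, AH (little-endian layout) */
} reg16;

int main(void)
{
    reg16 a = { .x = 0 };
    a.b.h = 0x12;                 /* like MOV AH, 0x12 */
    a.b.l = 0x34;                 /* like MOV AL, 0x34 */
    printf("AX = %04X\n", a.x);   /* prints AX = 1234 */
    return 0;
}
```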

AAA/AAS/DAA/DAS were used quite a lot by COBOL compilers. These days ASCII and BCD processing doesn't use them, but writing efficient replacement routines takes very fast data paths (the microcode sequencer in the 8086 was pretty slow), large ALUs, and fast multipliers (to divide by constant powers of 10 via reciprocal multiplication).
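
As a sketch of that multiplier trick: dividing by a constant 10 can be done with one multiply by a fixed-point reciprocal plus a shift, which is what compilers emit today (generic C, nothing 8086-specific):

```c
#include <stdint.h>
#include <stdio.h>

/* Divide by the constant 10 without a divide instruction:
   0xCCCCCCCD is ceil(2^35 / 10), so the high bits of the 64-bit
   product are the quotient. Exact for every 32-bit input. */
static uint32_t div10(uint32_t x)
{
    return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
}

int main(void)
{
    printf("%u %u %u\n", div10(0), div10(99), div10(4294967295u));
    /* prints: 0 9 429496729 */
    return 0;
}
```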

I/O ports have always been weird though. :)

> I can't really fault the 8086 designers for their decisions, since they made sense at the time. But if you could go back in a time machine, you could certainly give them a lot of advice!

Thanks for capturing my feeling very precisely! I was indeed wondering what they could have done better with approximately the same number of transistors and the benefit of a time traveler :) And yes, the constraints you mention (8080 compatibility, etc.) do limit their leeway, so maybe we'd have to point the time machine a few years earlier and influence the 8080 first.

  • What's that military adage? Something along the lines of 'always preparing to fight the last war'?

    There are also the needs of the moment. Wasn't the 8086 a 'drop-in' replacement for the 8080, and also (offhand recollection) limited by the number of pins on some of its package options? This was still an era when it was common for even multiple series of computers from a single vendor to have incompatible architectures, requiring at the very least recompiling software, if not whole new programs.