Comment by 15155

2 days ago

> and quite a bit worse than just dealing with an MCU.

Unless you're using some kind of USB DFU mode (which is annoying on assembly lines), SWD-based flashing of an MCU is substantially more complicated than the JTAG sequences that some internal-flash FPGAs use for programming.

These chips are just as easy to program as any ARM MCU, if not easier. Raw SPI NOR flash isn't "easy" to program if you've never done it before, either.

It's mostly the whole "two binaries" problem.

Oh look, the factory screwed up and isn't flashing the MCU this week! Does the board survive?

Oh look, the factory screwed up and isn't flashing the PLD this week! Does the board survive?

Oh look, the factory... wait, what is the factory doing, and why are they putting that sticker on that...

You get the idea. Yes, yes, it is all solvable. I have never claimed it isn't. I am just claiming it is a giant pain in the ass and limits the use of these things. I will bend over backwards to keep a board down to one binary that needs to be loaded.

  • Embed the bitstream into your MCU firmware binary and bitbang the 50-100KB bitstream into the FPGA's SRAM via JTAG from your MCU in all of 10ms. This is <100 lines of Rust (there's a sketch of what that looks like at the end of this thread).

    • Yes, it's solvable. But my whole argument is that the entire experience is death by a thousand cuts. I'm not seeing how "it's possible in 100 lines of Rust" (a language most people don't even use for embedded work) is really countering my argument.

  • I honestly start to wonder how in the world we survived flashing three different binaries for years (a bitstream plus two MCU images) without ever getting a complaint from the production floor.

    I should check my spam folder.
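
For reference, a minimal sketch of that bitbang loader, kept HAL-agnostic via the embedded-hal 1.0 `OutputPin` trait. The TAP state navigation is standard JTAG; the instruction opcode, IR length, and DR bit order are placeholders you'd take from your FPGA's BSDL file or configuration guide, not values for any particular part:

```rust
// Bit-banged JTAG bitstream loader over three GPIOs (TCK/TMS/TDI).
// Device-specific details are marked as placeholders below.
use embedded_hal::digital::OutputPin;

pub struct Jtag<TCK, TMS, TDI> {
    tck: TCK,
    tms: TMS,
    tdi: TDI,
}

impl<TCK: OutputPin, TMS: OutputPin, TDI: OutputPin> Jtag<TCK, TMS, TDI> {
    pub fn new(tck: TCK, tms: TMS, tdi: TDI) -> Self {
        Self { tck, tms, tdi }
    }

    /// One TCK pulse; TMS/TDI are set up before the rising edge.
    fn clock(&mut self, tms: bool, tdi: bool) {
        let _ = if tms { self.tms.set_high() } else { self.tms.set_low() };
        let _ = if tdi { self.tdi.set_high() } else { self.tdi.set_low() };
        let _ = self.tck.set_high();
        let _ = self.tck.set_low();
    }

    /// Force the TAP into Test-Logic-Reset, then move to Run-Test/Idle.
    fn reset(&mut self) {
        for _ in 0..5 {
            self.clock(true, false); // five TMS=1 clocks always reach TLR
        }
        self.clock(false, false); // TLR -> Run-Test/Idle
    }

    /// From Run-Test/Idle, shift `bits` of `value` LSB-first into IR.
    fn shift_ir(&mut self, value: u32, bits: u32) {
        // Idle -> Select-DR -> Select-IR -> Capture-IR -> Shift-IR
        self.clock(true, false);
        self.clock(true, false);
        self.clock(false, false);
        self.clock(false, false);
        for i in 0..bits {
            // TMS=1 on the final bit exits to Exit1-IR
            self.clock(i == bits - 1, (value >> i) & 1 != 0);
        }
        self.clock(true, false);  // Exit1-IR -> Update-IR
        self.clock(false, false); // Update-IR -> Run-Test/Idle
    }

    /// From Run-Test/Idle, stream `data` into DR MSB-first.
    /// Bit order within each byte is device-specific (placeholder here).
    fn shift_dr_bytes(&mut self, data: &[u8]) {
        // Idle -> Select-DR -> Capture-DR -> Shift-DR
        self.clock(true, false);
        self.clock(false, false);
        self.clock(false, false);
        let total = data.len() * 8;
        let mut n = 0;
        for &byte in data {
            for bit in (0..8).rev() {
                n += 1;
                // TMS=1 on the final bit exits to Exit1-DR
                self.clock(n == total, (byte >> bit) & 1 != 0);
            }
        }
        self.clock(true, false);  // Exit1-DR -> Update-DR
        self.clock(false, false); // Update-DR -> Run-Test/Idle
    }

    /// Load a configuration bitstream into the FPGA's SRAM.
    pub fn load_bitstream(&mut self, bitstream: &[u8]) {
        const PROGRAM_INSTR: u32 = 0x46; // placeholder opcode (from BSDL)
        const IR_LEN: u32 = 8;           // placeholder IR length (from BSDL)
        self.reset();
        self.shift_ir(PROGRAM_INSTR, IR_LEN);
        self.shift_dr_bytes(bitstream);
    }
}

// Usage: embed the bitstream in the firmware image and push it out at boot.
// static BITSTREAM: &[u8] = include_bytes!("../fpga/top.bit");
// jtag.load_bitstream(BITSTREAM);
```

Real parts usually also want some Run-Test/Idle clocks or a status readback after configuration; that, like the opcode, comes from the device's configuration documentation.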