Comment by b0a04gl
2 days ago
> “Our method is able to solve over 24 million optimization problems in less than 90 minutes.”
that line is doing heavy lifting. sounds insane until you look closer: they batch out embarrassingly parallel, low-dimensional problems. no live latencies, no network I/O, no grid API jitter. just hammering a static dataset in memory. real markets stall, disconnect, slip on price, queue you up. none of that here. so yeah, 24 million looks cool in the abstract, but under the hood it's just cleanroom compute; feels like they optimised the benchmark more than the actual system
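to make the "embarrassingly parallel, low-dimensional" point concrete, here's a minimal sketch (my own toy setup, not the paper's actual formulation): a million tiny box-constrained 1-D quadratics, all independent and already in memory, solved in one vectorized pass. nothing stops throughput when the problems look like this.

```python
# hypothetical sketch: millions of tiny, independent problems in memory
# are cheap. solve min 0.5*q*x^2 + c*x  s.t.  lo <= x <= hi  (q > 0)
# for a whole batch at once. the unconstrained minimiser is -c/q; clip it.
import numpy as np

def solve_batch(q, c, lo, hi):
    """Vectorized solver for N independent 1-D convex quadratics."""
    return np.clip(-c / q, lo, hi)

rng = np.random.default_rng(0)
n = 1_000_000                    # a million "optimization problems"
q = rng.uniform(0.5, 2.0, n)     # strictly convex curvatures
c = rng.normal(size=n)           # linear terms
x = solve_batch(q, c, lo=-1.0, hi=1.0)
```

no solver loop, no I/O, no latency: one array expression. that's the regime the 24-million figure lives in.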
Even assuming you add all the annoying details that algo-trade execution brings, the algorithm still answers which position to take within a few microseconds, which is exactly what you want if you trade in a limit order book.
true, you want microsecond decisions at the core, no doubt; but that's only half the game. an ideal action in clean memory isn't the same action once it hits fragmented liquidity, stale quotes, partial fills. if the algo doesn't account for execution drift or book pressure post-placement, the microsecond edge fades fast. so yeah, fast compute's necessary but not sufficient without modelling the messy tail end too
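rough back-of-envelope of what i mean by the edge fading (entirely my own toy model, nothing from the paper): the theoretical edge only accrues on the fraction that actually fills, and every filled unit pays slippage.

```python
# toy model (assumed numbers, not from the paper): how a clean-compute
# "edge" erodes under partial fills and slippage.
def realized_edge(raw_edge_bps, fill_ratio, slippage_bps):
    """Edge actually captured, in bps: only the filled fraction
    earns the edge, and every filled unit pays slippage."""
    return fill_ratio * (raw_edge_bps - slippage_bps)

# e.g. a 2 bps theoretical edge, 60% fill, 1.5 bps slippage:
captured = realized_edge(2.0, 0.6, 1.5)  # -> 0.3 bps, most of it gone
```

a 2 bps signal shrinking to 0.3 bps realized is the "messy tail end" doing the damage, and none of that shows up in an in-memory benchmark.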