
Comment by haspok

12 hours ago

To me, the odd part is the comparison of the performance of RPC vs. inline code. You present it as if you found something new and foundational, only possible thanks to AI, when in fact it has nothing to do with AI, and the results should be no surprise to anyone.

Your original architecture was a kludge from the start, a self-inflicted wound. This is probably the craziest part:

> We’d tried a few things over the years - optimizing expressions, output caching, and even embedding V8 directly into Go (to avoid the network hop).

I know hindsight is 20/20 - but still, you made the wrong decision at the start, and then you kept digging the hole deeper and deeper. Hopefully a good lesson for everyone working with microservices.

To end on a more positive note, I think this (porting code to other languages/platforms) is one use case where AI code generation really shines, and it will be of immense value in the future. Great reporting; let's just not confuse code generation with architectural decisions.

Oh, I don't disagree. The original vision and what the product ended up doing are light years apart. Likely, had we known what it would evolve into, we would have decided on a different solution (perhaps not JSONata at all, for example).

Having said that, my opinion is still that the previous solution had valid business merit. Though inefficient, the fact that it was infinitely scalable and the only limit was pure dollar cost is pretty valuable. It enables business stakeholders / managers to objectively quantify the value of the feature (for X dollars we get Y business, scaling linearly). I've worked in many systems where this was not at all the case, and there was a hard limit at some point where the feature simply shut down.