
Comment by ramraj07

6 days ago

[flagged]

I think you mistook my comment. Insofar as it's anything, it was a concession to that use case.

I gave an example below: debugging a microservice-based request flow from a front-end, through various middle layers and services, to a back-end, perhaps triggering other systems along the way. Something similar to what I worked on in 2012 for the UK Olympics.

Unless I'm mistaken (and I'm happy to be), I'm not sure where the LLM is supposed to offer a significant productivity gain here.

Overall, my point is -- indeed -- that we cannot really have good faith conversations in blog posts and comment sections. These are empirical questions which need substantial evidence from both sides -- ideally, videos of a workday.

It's very hard to guess what anyone is really talking about at the level of abstraction that all this hot air is conducted at.

As far as I can tell, the people hyping LLMs the most are juniors, data scientists who do not do software engineering, and people working on greenfield/blank-page apps.

These groups never address the demand from these sceptical senior software engineers -- for obvious reasons.

  • I recently had to do something similar: I pasted the New Relic logs for a trace ID, copy-pasted the Swagger pages for all the microservices involved, and in round two added the code for the service it suspected, and Gemini helped me debug without even putting my full head into the problem. The sketch below shows roughly what I mean.
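
  (A minimal, hypothetical sketch of that kind of prompt assembly: it just concatenates exported trace logs and the relevant Swagger/OpenAPI specs into one prompt to paste into the model. The file paths, service names, and trace ID are invented for illustration and are not the actual systems involved.)

      # Minimal sketch only: assemble a debugging prompt from exported trace logs
      # and service specs. All paths, service names, and the trace ID are hypothetical.
      from pathlib import Path

      TRACE_ID = "4f9c2e"                    # hypothetical trace ID
      TRACE_LOG = Path("trace_4f9c2e.json")  # logs exported for that trace
      SPEC_FILES = [                         # Swagger/OpenAPI specs for the services in the flow
          Path("specs/gateway.yaml"),
          Path("specs/orders.yaml"),
          Path("specs/inventory.yaml"),
      ]

      def read_or_note(path: Path) -> str:
          """Return the file's contents, or a placeholder if it is missing."""
          return path.read_text() if path.exists() else f"[{path} not found]"

      parts = [f"Trace ID: {TRACE_ID}", "=== Trace logs ===", read_or_note(TRACE_LOG)]
      for spec in SPEC_FILES:
          parts += [f"=== {spec.name} ===", read_or_note(spec)]
      parts.append(
          "Given the trace and the specs above, which service most likely caused "
          "the failure, and what should I look at next?"
      )

      # Paste the printed prompt into the LLM; "round two" adds the suspected
      # service's source code in the same way.
      print("\n\n".join(parts))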

Most of my day job is worrying about the correctness of compiler optimizations. LLMs frequently can't even accurately summarize the language manual (especially at the level of detail I need).