Comment by tkiolp4

2 days ago

As I work more with AI, I've come to the conclusion that I have no patience to read AI-generated content, whether it's right or wrong. It just feels like wasted time. There are countless examples: meeting summaries (nobody reads them), auto-generated code (we usually do it for prototypes and PoCs; if it works, we ship it, no reviews. For serious stuff we take care of the code carefully), and so on.

I like AI on the producing side. Not so much on the consuming side.

For me, AI meeting summaries are pretty useful. The only way I can see them not being useful for you is if you're disciplined enough to write down a plan based on the meeting subject.

I tend to agree. Except if it's text generated by me for me.

I don't want you to send me an AI-generated summary of anything, but if I initiated it looking for answers, then it's much more helpful.

  • I'm not doing this much now, but this AI-generated text might be more useful if you use AI to ask questions against it, treating it as a source.

    • Meeting notes are useful in two ways, for me:

      - I'm reviewing the last meeting of a regular meeting cadence to see what we need to discuss.

      - I put it in a lookup (vector store, whatever) so I can do things like "what was the thing customer xyz said they needed to integrate against".

      Those are pretty useful. But I don't usually read the meeting notes in full.

      I think this is probably more broadly true too. AI can generate far more text than we can process, and a text treatise on whatever an AI was prompted to say is pretty useless. But generating text not to present it to the user, but as a cold store of information paired with good retrieval, can be pretty useful (a rough sketch follows after this sub-thread).


    • While in principle that should be great, I don't even slightly trust it as a technique, because you're compounding the points at which the LLM can get things wrong. First you've got the speech-to-text engine, which will introduce errors based on things like people mumbling, or a bird shouting outside the window. That's then fed into a summarising LLM to make the meeting notes, which may latch onto the errors from the speech-to-text engine, or just make up its own new and exciting misinterpretations. Finally you're feeding those into some sort of document store to ask another LLM questions about them, and that LLM too can misinterpret things in interesting ways. It's like playing a game of Chinese whispers with yourself.

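A rough sketch of the "cold store plus retrieval" idea from the sub-thread above. The `NoteStore` class and `embed` helper are illustrative names rather than any particular product's API, and the hashed bag-of-words embedding is only a stand-in so the snippet runs on its own; a real setup would swap in a proper sentence-embedding model and vector store:

```python
# Index meeting-note chunks, then pull back only the few chunks relevant
# to an ad-hoc question instead of rereading the whole notes.
import numpy as np


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding (hashed bag of words); replace with a real model."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v


class NoteStore:
    def __init__(self) -> None:
        self.chunks: list[str] = []           # raw text chunks
        self.vectors: list[np.ndarray] = []   # one embedding per chunk

    def add(self, note_text: str, chunk_size: int = 500) -> None:
        """Split one meeting note into rough fixed-size chunks and index them."""
        for i in range(0, len(note_text), chunk_size):
            chunk = note_text[i:i + chunk_size]
            self.chunks.append(chunk)
            self.vectors.append(embed(chunk))

    def search(self, question: str, k: int = 3) -> list[str]:
        """Return the k chunks most similar to the question."""
        q = embed(question)
        # Vectors are unit-normalised, so the dot product is cosine similarity.
        sims = [float(np.dot(q, v)) for v in self.vectors]
        top = np.argsort(sims)[::-1][:k]
        return [self.chunks[i] for i in top]


# Usage: store.add(meeting_notes), then
# store.search("what did customer xyz say they needed to integrate against?")
# returns a few chunks to read yourself or hand to an LLM as context.
```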

I don't read most of my (work) emails either, but I think the most important part of AI-generated meeting notes is that they're searchable / indexable, on the off chance that you do need to find or refer to something that was mentioned in one of them.

But to be blunt / irreverent, it's the same with Git commit messages or technical documentation; nobody reads them unless they need them, and only the bits that are important to them at that point in time.

  • I can't help but see the irony of complaining, in the comments for a product meant to assist with code review, that people don't read git commit messages or technical documentation.

    You know what really, really helps while doing code review? Good commit messages, and more generally, good commit practices, so that each commit describes a set of changes that make sense together. If you have that, code review becomes much easier: you just step through each commit in turn and see how the code got to where it is now, rather than GitHub's default "here's everything, good luck" view.

    The other thing that helps? Technical documentation that describes why things are as they are, and what we're trying to achieve with a piece of work.

    • Maybe AI doing the rebase, code chunking, and commits would help? That kinda makes sense. Reviewable, mergeable chunks do make the code way faster to merge.

That's fair! If there were a "minimal" mode where you could still access callers, data flows, and dependencies with no AI text, would it be helpful for your reviews?

  • Not parent, but in my opinion the answer here is yes. I agree that there is a real need here and a potentially solid value proposition (which is not the case with a lot of vscode-fork+LLM startups), but the whole point should be to combat the verbosity and featurelessness of LLM-generated code and text. Using an LLM on the backend to discover meaningful connections in the codebase may sometimes be the right call, but the output of that analysis should be some simple visual indication of control flow or dependency, like you mention. At first look, the output in the editor looks more like an expansion than a distillation.

    Unrelated, but I don't know why I expected the website and editor theme to be hay-yellow, or hay-yellow and black, instead of the classic purple on black :)

    • Thanks for the feedback! That makes a lot of sense, and I like the concept of being an extension of a user's own analysis vs hosing them with information.

      Yeah originally I thought of using yellow/brown or yellow/black but for some reason I didn't like the color. Plenty of time to go back though!

Honestly, I feel the same way, and I can't quite put into words why. I guess, if I had to, I'd say it's because I know that not all AI-generated stuff is created equal, and that some people are terrible at prompting or don't even proofread what comes out. So I have this internal barometer that screams "you're likely wasting your time reading this", and I've just learned to avoid it entirely. Which is sad, because clearly a ton of stuff is AI-generated now, so I barely read anything, _especially_ if I see any signals like "it's not just this, it's that".