Comment by delusional
25 days ago
For your context, I'm an AI hater, so understand my assumptions as such.
> The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.
Why is more AI the "obvious" best solution here? If nobody wants to read your release notes, then why write them? And if they're going to slim them down with their AI anyway, then why not leave them terse?
It sounds like you're just handwaving at a problem and saying "that's where the AI would go" when really that problem is much better solved without AI if you put a little more thought into it.
For release notes in particular, I think AI can have value, because release notes are more than a summary: they are a translation from code (and more-or-less accurate commit summaries) into prose.
AI is good at translation, and in this case it can have all the required context.
Plus it can be costly (in time and tokens) either to prompt it yourself or to read the code and all the commit logs.
If your process always goes PR -> main and the PR descriptions + commit messages are properly formatted, generating release notes shouldn't be much more than condensing those into a structured list with subheadings.
This is something LLMs are excellent at.
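The deterministic half of that condensing step can be sketched without an LLM at all; the model's job is then just smoothing the result into prose. A minimal sketch, assuming conventional-commit-style prefixes (the prefixes, headings, and sample subjects here are illustrative, not from this thread):

```python
from collections import OrderedDict

# Map assumed conventional-commit prefixes to release-note subheadings.
SECTIONS = OrderedDict([("feat", "Features"), ("fix", "Fixes")])

def condense(commit_subjects):
    """Group commit subjects under subheadings, like condensed release notes."""
    grouped = {heading: [] for heading in SECTIONS.values()}
    grouped["Other"] = []
    for subject in commit_subjects:
        prefix, _, rest = subject.partition(":")
        heading = SECTIONS.get(prefix.strip(), "Other")
        grouped[heading].append(rest.strip() if rest else subject)
    lines = []
    for heading, items in grouped.items():
        if items:
            lines.append(f"## {heading}")
            lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

print(condense([
    "feat: add dark mode",
    "fix: crash on empty config",
    "chore: bump deps",
]))
```

In practice you would pipe `git log --merges --pretty=%s` (or the PR descriptions) into something like this, then hand the structured list to an LLM to polish.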
What better solution do you have in mind? This scenario is AI being used as a tool to eliminate toil. It’s not replacing human creativity, or anything like that.
If you have a problem with that, then you should also have a problem with computers in general.
But maybe you do have a problem with computers - after all, they regularly eliminate jobs, for example. In that case, AI is only special in its potentially greater effectiveness at doing what computers have always been used to do.
But most of us use computers in various ways even if we have qualms about such things. In practice, the same already applies to AI, and likely will for you too, in future.
It's not eliminating toil, it's externalizing it from the writer to the reader.
If writing something is too tedious for you, at least respect my time as the reader enough to just give me the prompt you used rather than the output.
In a lot of my AI assisted writing, the prompt is an order of magnitude larger than the output.
Prompt: here are 5 websites, 3 articles I wrote, 7 semi-relevant markdown notes, the invitation for the lecture I'm giving, a description of the intended audience, and my personal plan and outline.
Output: draft of a lecture
And then the review, the iteration, feedback loops.
The result is thoroughly a collaboration between me and AI. I am confident that this is getting me past writer blocks, and is helping me build better arcs in my writing and lectures.
The result is also thoroughly what I want to say. If I'm unhappy with parts, then I add more input material, iterate further.
I assure you that I spend hours preparing a 10-minute pitch. With AI.
(This comment was produced without AI.)
The original comment was saying that the AI would be both the writer now and the reader, in future. That's how the toil is eliminated. Instead of reading or searching through a series of release notes, you can just ask questions about what you're specifically looking for.
> If writing something is too tedious for you, at least respect my time as the reader
"If comprehending something is too tedious for you..."
Seriously, don't jump to indignant rhetoric before you're sure you've understood the discussion.
The obviously better solution is either to a) not write those release notes, or b) figure out a release-notes format and process that actually leads to useful notes. Once they are useful, you can decide whether or not to automate - and measure whether the automation is still achieving the goal.
What OP did was "we lacked communication, then created an ineffective process that achieved nothing, so we automated the ineffective process and now pay a third party to run it".
If you pay tokens for release notes that nobody reads, then you may just ... not pay the tokens.
To me, the value I would look to extract from LLMs is turning code changes into concise, user-readable release notes.
If you are coding with the help of LLMs, then release notes are your human-crafted prompt.
Basically, the intent is given as a decision somewhere, and that is human driven.
This is kind of a fundamental issue with release notes. They are broadcasting lots of information, and only a small amount of information is relevant to any particular user (at least in my experience).
If I had a technically capable human assistant, I would have them filter through release notes from a vendor and only give me the relevant information for APIs I use. Having them take care of the boring, menial task so I can focus on more important things seems like a no brainer. So it seems reasonable to me to have an AI do that for me as well.
I read a lot of release notes in my job and the idea that that is some kind of noticeable time sink that needs to be streamlined is bizarre to me. Just read the notes.
If your assistant is technical enough to know which parts apply to you and which do not, they likely don't need you to do the rest of the job either.
An LLM could do this by looking over the full codebase and release notes and producing a shorter summary, but probably at the cost of many tokens today.
Or you could Ctrl-F.