Comment by Wilfred
18 hours ago
If I've understood this interesting workflow correctly, there are two major components.
streamdown: a markdown renderer for the terminal, intended for consuming LLM output. It has affordances that make it easier to run the code snippets: no indentation, easy copying to the clipboard, and fzf access to previous items.
llmehelp: tools to slurp the current tmux pane content (i.e. recent command output) as well as the current zsh prompt (i.e. the command you're currently writing).
I think the idea is that you then bounce between the LLM helping you and a normal shell/editor tmux session. The LLM has context relevant to your work without you having to explicitly give it anything.
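To make the "slurp the pane" idea concrete, here's a minimal sketch of how that kind of context capture could work. This is not the llmehelp implementation; `tmux capture-pane` is a real tmux command, but the `build_prompt` helper and the `llm` CLI name are illustrative assumptions.

```shell
# Assemble an LLM prompt from (1) recent terminal output and
# (2) the command currently being typed. Both arguments are plain strings.
build_prompt() {
  printf 'Recent terminal output:\n%s\n\nCommand being typed: %s\n' "$1" "$2"
}

# In real use, inside tmux + zsh, something like:
#   context=$(tmux capture-pane -p -S -200)   # last 200 lines of this pane
#   build_prompt "$context" "$BUFFER" | llm   # $BUFFER: zsh's edit buffer
# ("llm" here is a stand-in name for whatever CLI talks to the model.)

build_prompt "make: *** [test] Error 1" "git bisect"
```

The point of the sketch is the shape of the workflow: the context comes from what's already on screen, so nothing needs to be pasted by hand.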
Basically.
About 20 years ago I wrote an article, long since disappeared, called "The Great Productivity Mental Experiment", which we can extend now for the AI era:
You've got 3 equally capable, competent programmers with the same task, estimated to take on the order of days.
#1 has no Internet access and only debuggers, source, and on-system documentation
#2 has all that + search engines and the Internet
#3 has everything #2 has + all the SOTA AI tools.
They are all given the same task, and a timer starts.
Who gets to "first run" the fastest? To a 90% success rate? To 99.9%?
The point of the exercise is the answer: "I don't know"
Ergo, there is no clear, objective time saver.
The next question is what would establish a clear victor without having to build a taxonomy of tasks. We're looking for the best-time practice.
The answer is workflow, engagement, and behavior.
Current AI flows will get you to first run faster. But to the 99.9% pass? Without new flows, they can't. This is a phenomenon called automation complacency, and it can make bad workflows very costly.
(The original point of the exercise was that better tools don't fix bad practice: frameworks, linters, the internet, stronger type systems ... these can either solve problems or create larger ones depending on how you use them. There is no silver bullet, as Fred Brooks said in the 1980s.)