Comment by iambateman
10 months ago
> "feed it the text of my mostly-complete blog post, and ask the LLM to pretend to be a cynical Hacker News commenter and write five distinct comments based on the blog post."
It feels weird to write something positive here...given the context...but this is a great idea. ;)
This is the kind of task that, before LLMs, I simply wouldn't have done. Maybe if it was something really important I'd circulate it to a couple of friends for rough feedback, but mostly I'd just let it fly. I think it's pretty revolutionary to be able to get some useful feedback in seconds, with a similar knock-on effect in the pull request review space.
The other thing I find LLMs most useful for is work that is simply unbearably tedious. Literature reviews are the perfect example: sure, I could go read 30-50 journal articles, some of which are relevant, and form an opinion. But my confidence level in letting the AI do it in 90 seconds is reasonable-ish (~60%+) and 60% confidence in 90 seconds is infinitely better than 0% confidence because I just didn't bother.
A lot of the other highly hyped uses for LLMs I personally don't find that compelling. My favorite uses are mostly as a notebook that actually talks back, like the Young Lady's Illustrated Primer from Diamond Age.
> But my confidence level in letting the AI do it in 90 seconds is reasonable-ish (~60%+) and 60% confidence in 90 seconds is infinitely better than 0% confidence because I just didn't bother.
So you got the 30 to 50 articles summarized by the LLM; now how do you know which 60% you can trust and what's hallucinated without reading them? It's hardly usable at all unless you already know what is real and what is not.
Generally, I use it to get background for further research. So you're right, you do have to do further reading, but it's at the second tier: "now that I know roughly what I'm looking for, I have targets for further reading" rather than "how does any of this work, and what are the relevant articles?"