Comment by ManlyBread
6 days ago
I think the whole point of a discussion forum is to talk to other people, so I am in favor of banning AI replies. There's zero value in these posts: anyone can type chatgpt.com into the browser and ask whatever question they want at any time, while getting input from another human being is not always guaranteed.
Yours is something like the 9th of the 10 top-level replies I've read so far that says this, with the 10th saying it in a different way (without suggesting people could have asked the AI themselves). What I find interesting is that everyone agrees and nobody argues about prompt engineering; that is, nobody says it's helpful when a skilled querier shares responses from the system. Apparently the sentiment now is that literally anybody could have done the same without thought.
Whether prompt engineering is a skill is perhaps a different topic; I just found this meta-statistic in the thread interesting to observe.
I do think it would be useful to normalize pasting a link to the full transcript if you’re going to quote an LLM. Both because I do find it useful to examine others’ prompting techniques, and because that gives me the context to gauge the response’s credibility.
What did we used to call it? Google-fu?
This is probably the first time I've seen the term "prompt engineer" mentioned this year. I thought that joke had run its course back in 2023 and was largely forgotten by now.
A silly name, but I’ve definitely watched peers coax sensible results out of braggadocious LLMs… and also watched friends say “make me an app that enters the TPS report data for me” (or “make fully playable Grand Theft Auto, but on Mars”) and be surprised that the result is trash.
Yeah, I guess that exact wording was the meme of 2023, but have you not seen the sentiment that, e.g., developers need to learn to work with LLMs or get left behind? As though it's some skill you need to acquire.
Possibly a distinction needs to be made between raw LLM output, raw Google output (like lmgtfy), or any other tool's raw output on the one hand, and a synthesis of your own conclusions after having used these tools together, on the other.
Obviously cutting and pasting the raw output of a Google or PubMed search would be silly. The same goes for AI-generated summaries and such. But references you find this way can certainly be useful.
And using spell checkers, grammar checkers, style checkers, translation tools, etc. (old-fashioned or AI-enhanced) should be fine if used wisely.
> There's zero value in these posts
Source?
You might be surprised by the tremendous amount of value in AI posts. AI considers context; NI doesn't.