Comment by jjani
10 months ago
> I’ll now cover the opposite case: my peers who see generative models as superior to their own output. I see this most often in professional communication, typically to produce fluff or fix the tone of their original prompts. Every single time, the model obscures the original meaning and adds layers of superfluous nonsense to even the simplest of ideas.
I'm going to call out what I see as the elephant in the room.
This is brand-new technology, and 99% of people are still pretty clueless about how to use it properly. That's completely normal and expected. It's like the early days of the personal computer. Or Geocities, with its <blink> tags and "under construction" images.
Even in those days, incredible things were already possible for those who knew how to achieve them. The end result didn't have to be blinking text and auto-playing music. But for 99% of people, it was.
Similarly, with current LLMs it's already more than possible to use them effectively, without obscuring meaning or adding superfluous nonsense. In ways to which none of the author's criticisms apply. People just don't know how to do it yet. Many never will, just like many never learnt to use a PC beyond Word and Excel. But many others will learn.