Comment by loeg

6 hours ago

AI gets things wrong ("hallucinates") much more often than actual subject matter experts. This is disingenuous.

Presumably the "subject matter expert" will review the output of the LLM, just like a reviewer would. I think it's disingenuous to assume that just because someone used AI they didn't look at or review the output.

  • A serious one, yes.

    But why would a serious person claim that they wrote this without AI when it's obvious they used it?!

    Using any tool is fine, but someone bragging about not having used a tool they actually used should make you suspicious about the amount of care that went into their work.