
Comment by ThalesX

2 years ago

The most significant impact ChatGPT has had on my life is that I have some interns helping me write documentation for several projects. The ChatGPT noise they started introducing has been disruptive to the company and project management. Inaccurate percentages, years, institution abbreviations, etc., etc.

I had to confront them multiple times about using the tool without checking its results, which left me doing the checking myself. Most of the time it's close to the truth, but not quite, and in the field these projects are in, "not quite" doesn't cut it.

I also have project partners I introduced to ChatGPT. They produce a lot of noise but less insight than before they started using this technology. In one recent project, I was involved with 5 partner companies, and 4 of them produced excellent 5 to 10-page reports. One gave me a 100-page buzzword-filled, no-substance report. Guess who used GPT.

The good part is that I'm now pretty good at spotting ChatGPT-written content. I think the technology will evolve, but in its current state I feel there's a lot of noise.

I'm personally horrified that the normal response to this isn't "if I catch you using ChatGPT again, you're fired".

What are you paying people for if not their unique contributions? What do they think they're doing when they farm it out to a tool, other than inviting you to cut out the middleman? How on earth do they expect to become better at their jobs this way? Have they no shame or sense of pride? It's pathetic.

This is entirely orthogonal to the question of whether GPT is intelligent.

  • > How on earth do they expect to become better at their jobs this way? Have they no shame or sense of pride? It's pathetic.

    To some people a job is just a way to make money to fund their hobbies or put food on the table. Sometimes they do not care about their boss or company at all.

    • No, but they should probably care about their own career and skills if they don't want to go hungry.

This is a good reflection on AI-generated content, and really on any computer-assisted content generation. AI has allowed junior professionals to become managers of AI machines, even though very few of them are qualified to do so.

In my line of work, I love automation, but I have to remember to check the final work product of the automation. And I don’t. But my superiors are always checking my work.

  • Nah, it's a strong indicator that people don't know how to write good prompts, or how to co-edit iteratively with the AI.

    • That’s funny. It’s like saying people don’t know how to use good search keywords to get good search results.

      Eventually the product catches up to serve even those who are not good at using it.

I find it very interesting that apparently either you advised your interns to use ChatGPT or they brought their cheating school habits to work, hoping that you'd be as BS-oblivious as their professors.

Any tips for spotting GPT text?

  • One snarky, edgy tactic I read about is for everything human-written to include ethnic/racial slurs here and there. ChatGPT and its ilk would never include such words. See also software license schemes that use similar verboten terms to ensure no corporation could use the code without explicitly violating the license. Simply require [bad word] to be included and you successfully identify as not part of the risk-averse hive mind. At least until something changes.

    • Students or whoever can ask ChatGPT to generate a response, then they can insert their own bad words or whatever in between. This "tactic" would only work if someone is blindly copying and pasting generated responses without proofreading. And even if they are, how do you prove it?

  • While it’s difficult to spot AI-generated content, more steps should be taken in order to…

    • Stupid people will overuse that tool and call everything AI-generated, just like they overuse AI content generation.