
Comment by pibaker

8 hours ago

I tried asking ChatGPT whether Japanese high-speed rail has level crossings, and it correctly identified the line I used as my counterexample (the Yamagata Shinkansen). I think GP is just plainly misinformed in a more boring way.

If you paste the comment it replies to into ChatGPT, it generates almost the exact same answer as that comment. Also, "Finally, ..." and "it's not A, it's B" are good tells.

  • Damn, I tried doing what you did and got a similar response too, down to exact wording like "short answer, long answer" and "conservative maintenance". I will admit I was too quick to dismiss the accusation in my previous reply.

  • > If you paste the comment it replies to into ChatGPT, it generates almost the exact same answer as that comment.

    But would it have generated almost the same comment 4 hours ago, when the comment was posted here?

    A few months ago I posted a comment in a thread about a new law that would not have been needed if a law from many years earlier had not seemingly arbitrarily limited itself to certain cases. I speculated on some reasons why the original law might have been written that way.

    A couple of hours later I asked an LLM (Perplexity) about it, and it gave the same reasons I had guessed. I checked the links it provided so I would have a suitable reference if the topic ever came up again... and it turned out my comment was its source!