Comment by yrds96

5 hours ago

I wonder if this has become such a well-known "benchmark" that models are already being trained on it.

Given the similarity of the sky between the two examples, the overall resemblance, and the fact that the pelican is so well done, there is no doubt that the benchmark is in these models' training data by now

That doesn't make it any less of an achievement given the model size or the time it took to get the results

If anything, it shows there's still much to discover in this field and things to improve upon, which is really interesting to watch unfold

With every model release, Simon shows up with his pelican, and then this comment follows.

Can we stop both? It's so boring

  • I really appreciate you speaking up. This happened yesterday on GPT Image 2 too; I bit my tongue because people would see it as fun-policing, and the same thing happened again today. And it happens on every. single. LLM. release. thread.

    It's disruptive to the commons, it no longer adds anything to our knowledge of a model, and it's gotten way out of hand: people aren't just engaging with the original comment, creating screenfuls to wade through before any on-topic content, they're now posting the thread before it even exists to pattern-match on the engagement they've seen the real thing get. So now we have it twice.

    • No more disruptive than this comment. If you don't like it, downvote and move on. It's on topic and doesn't break the rules. The reason Simon's comment is at the top is that people like it and upvote it.
