Comment by dmd

3 days ago

I think it's somewhat interesting that codex (gpt-5.3-codex xhigh), given the exact same prompt, came up with a very similar result.

https://3e.org/private/self-portrait-plotter.svg

Asked gemini the same question and it produced a similar-ish image: https://manuelmoreale.dev/hn/gemini_1.svg

When I removed the plot part and simply asked to generate an SVG it basically created a fancy version of the Gemini logo: https://manuelmoreale.dev/hn/gemini_2.svg

This is honestly all quite uninteresting to me. The one genuinely interesting part is that the various tools all produce a similar illustration.

  • Is it? They're all generalizing from a pretty similar pool of text, and especially for the idea of a "helpful, harmless, knowledgeable virtual assistant", I think you'd end up in the same latent design space. Encompassing, friendly, radiant.

    Note that the (presumably human) designers of Claude, ChatGPT, Perplexity, and other LLM apps chose a similar style for their icons: a vaguely starburst- or asterisk-shaped pop of lines.

    • > Is it? They're all generalizing from a pretty similar pool of text, and especially for the idea of a "helpful, harmless, knowledgeable virtual assistant", I think you'd end up in the same latent design space. Encompassing, friendly, radiant.

      I'm inclined to agree, but I can't help but notice that the general motif of something like an eight-spoked wheel (always eight!) keeps emerging, across models and attempts.

      Although this is admittedly a small sample size.

      Edit: perhaps the models are influenced by 8-spoked versions of https://en.wikipedia.org/wiki/Dharmachakra in the training data?

    • Sure, I think it's pretty interesting that given the same(ish) unthinkably vast amount of input data and (more or less) random starting weights, you converge on similar results with different models.

      The result is not interesting, of course. But I do find it a little fascinating when multiple chaotic paths converge to the same result.

      These models clearly "think" and behave in different ways, and have different mechanisms under the hood. That they converge tells us something, though I'm not qualified (or interested) to speculate on what that might be.

    • > Is it? They're all generalizing from a pretty similar pool of text, and especially for the idea of a "helpful, harmless, knowledgeable virtual assistant", I think you'd end up in the same latent design space. Encompassing, friendly, radiant.

      Oh yeah, I totally agree with that. What I was referring to is the fact that even though these are different companies trying to build "different" products, the output is very similar, which suggests they're not all that different after all.

    • A few of us can't help but notice all the "AI" companies have gone for buttholes as logos.

AFAIK all of these models have been trained in very similar ways, on very similar corpora. They could be heavily influenced by the same literature.

I wonder if anyone recognizes a close match. The Pale Fire quote below is similar but not quite the same.

Are you crazy, or am I? I scrolled through that blog and I'm left scratching my head at you and your claim.