Comment by captainclam
10 days ago
It looks to me like OpenAI's image pipeline takes an image as input, extracts a semantic description of it, and then essentially regenerates an entirely new image from that description alone.
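If that's right, the effect is something like this two-stage sketch (using the public OpenAI Python SDK; the model names are just stand-ins, and this is obviously a guess at the behavior, not their actual internals):

```python
# Hypothesized two-stage pipeline: (1) caption the input image,
# (2) regenerate a new image from the caption alone.
# Illustrative only; NOT OpenAI's actual internal implementation.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_then_regenerate(image_path: str) -> str:
    # Stage 1: derive a semantic description of the input image.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    caption = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in detail."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    ).choices[0].message.content

    # Stage 2: regenerate an image from the description alone. Any
    # pixel-level detail the caption missed (exact text, icon layouts,
    # a person's likeness) is lost by this point.
    result = client.images.generate(model="dall-e-3", prompt=caption)
    return result.data[0].url
```

That would explain why details that are hard to capture in words (faces, UI text) don't survive the round trip.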
Even Sam Altman's "Ghiblified" Twitter avatar looks nothing like him (at least to me).
Other models seem much more able to operate directly on the input image.
You can see this in the images of the Newton: in GPT's versions, the text and icons are corrupted.
Isn't this from the model working on really low-res images, which are then upscaled afterwards?