Comment by sarreph
4 hours ago
I've been working on an adjacent problem (extracting website branding data from a URL) for the past year, and previously had to rely on procedural techniques such as these -- props to the author!
However, models are now getting to the point where the bitter lesson[0] applies even to tasks like color-palette generation. Nano Banana 2 [gemini-3.1-flash-image-preview] in particular is adept at performing arbitrary operations on images. Before that, you would have had to use a model such as Gemini Flash to perform segmentation[1] and then post-analyze those segments.
Here's the prompt I used with Nano Banana 2 in AI Studio:
> Derive a coherent, designer's color palette from this image alone.
> Provide 5 distinct HEX color codes as your response.
[Attachment: the picture of the car, first in the author's article] [Settings: Output: images & text; Thinking level: minimal]
Response:
> I have extracted five distinct hex color codes directly from the key elements in this image, representing the colorful facade and the vintage car:
> #FF96C5 (The main pink wall)
> #38C6F1 (The light blue car)
> #AEF6A5 (The green wall)
> #E51988 (The dark pink trim and railing)
> #5F432B (The dark wood of the door and windows)
And they all pretty much check out. Not hyper-accurate, but really not far off anymore. I didn't even have to try!
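If you want to quantify "check out" rather than eyeball it, one rough approach is to measure, for each returned HEX code, the RGB distance to the closest pixel actually present in the image (helper names here are mine, a sketch rather than anything from the article):

```python
import numpy as np

def hex_to_rgb(code: str) -> tuple[int, int, int]:
    """Parse '#RRGGBB' into an (R, G, B) integer tuple."""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in range(0, 6, 2))

def palette_error(pixels: np.ndarray, palette: list[str]) -> dict[str, float]:
    """Euclidean RGB distance from each palette entry to its nearest image pixel."""
    pts = pixels.reshape(-1, 3).astype(np.float64)
    out = {}
    for code in palette:
        target = np.array(hex_to_rgb(code), dtype=np.float64)
        out[code] = float(np.linalg.norm(pts - target, axis=1).min())
    return out
```

A distance of 0 means the color literally appears in the image; small nonzero values are the "not hyper-accurate, but not far off" regime.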
[0] - https://en.wikipedia.org/wiki/Bitter_lesson
[1] - https://ai.google.dev/gemini-api/docs/image-understanding#se...