Show HN: isometric.nyc – giant isometric pixel art map of NYC

21 hours ago (cannoneyed.com)

Hey HN! I wanted to share something I built over the last few weeks: isometric.nyc is a massive isometric pixel art map of NYC, built with nano banana and coding agents.

I didn't write a single line of code.

Of course no-code doesn't mean no-engineering. This project took a lot more manual labor than I'd hoped!

I wrote a deep dive on the workflow and some thoughts about the future of AI coding and creativity:

http://cannoneyed.com/projects/isometric-nyc

I just want to say this isn't just amazing -- it's my new favorite map of NYC.

It's genuinely astonishing how much clearer this is than a traditional satellite map -- how it has just the right amount of complexity. I'm looking at areas I've spent a lot of time in, and getting an even better conceptual understanding of the physical layout than I've ever been able to get from satellite (technically airplane) images. This hits the perfect "sweet spot" of detail with clear "cartoon" coloring.

I see a lot of criticism here that this isn't "pixel art", so maybe there's some better term to use. I don't know what to call this precise style -- it's almost pixel art without the pixels? -- but I love it. Serious congratulations.

  • Author here, and to reiterate another reply - all of the critique of "pixel art" is completely fair. Aesthetically and philosophically, what AI does for "pixel art" is very off. And once you see the AI you can't really unsee it.

    But I didn't want to call it a "SimCity" map, though that's really the vibe/inspiration I wanted to capture, because that implies a lot of other things, so I used the term "pixel art" even though I figured it might get a lot of (valid) pushback...

    In general, labels and genres are really hard - "techno" to a deep head in Berlin is very different than "techno" to my cousins. This issue has always been fraught, because context and meaning and technique are all tangled up in these labels which are so important to some but so easily ignored by others. And those questions are even harder in the age of AI where the machine just gobbles up everything.

    But regardless, it was a fun project! And to me at least it's better to just make cool ambitious things in good faith and understand that art by definition is meaningful and therefore makes people feel things from love to anger to disgust to fascination.

  • TBH, the nano banana ones are closer to pixel art than the Qwen Image ones. Much closer.

    This looks like early-2000s 2.5D art, Diablo style.

  • High-res pixel art? Maybe adding an extra zoom level so people can actually see the pixels as big fat squares would help? Of course purists would probably still not call it pixel art, but as you wrote, leaning more into the "pixel art" aspect would hurt the realistic representation.

    Actually, if you only look at (midtown and uptown) Manhattan, it looks more "pixel art"-y because of the 90-degree street grid and most buildings being aligned with it. But the other boroughs don't lend themselves to this style so well. Of course, you could have forced all buildings to have angles in 45° steps, but that would have deviated from "ground truth" a lot.

I was extremely excited until I looked closer and realized how many of these look like ... well, AI. The article is such a good read, and I'd recommend people check it out.

Feels like something is missing... maybe just a pixelation effect over the actual result? Seems like a lot of the images also lack continuity (something they go over in the article)
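
The pixelation effect itself would be cheap to try - here's a generic canvas sketch of the usual nearest-neighbor trick (just the standard technique, not anything from the project):

    // Generic canvas pixelation: downscale with smoothing off, then scale
    // back up, so each sampled block becomes one crisp square "pixel".
    function pixelate(source, factor) {
      const canvas = document.createElement('canvas');
      canvas.width = source.width;
      canvas.height = source.height;
      const ctx = canvas.getContext('2d');
      ctx.imageSmoothingEnabled = false; // nearest-neighbor sampling
      const w = Math.ceil(source.width / factor);
      const h = Math.ceil(source.height / factor);
      // Draw the source small...
      ctx.drawImage(source, 0, 0, w, h);
      // ...then blow the small copy back up over the whole canvas.
      ctx.drawImage(canvas, 0, 0, w, h, 0, 0, source.width, source.height);
      return canvas;
    }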

Overall, such a cool usage of AI that blends Art and AI well.

  • Basically, it's not pixel art at all.

    It's very cool and I don't mind the use of AI at all but I think calling it pixel art is just very misleading. It's closer to a filter but not quite that either.

    • Yup, not pixel art. I wonder if people are not zooming in on it properly? If you zoom in max you see how much strangeness there is.

      It kind of looks like a Google Sketchup render that someone then went and used the Photoshop Clone and Patch tools on in arbitrary ways.

      Doesn’t really look anything like pixel art at all. Because it isn’t.


  • Yeah it leaves a lot to be desired. Once you see the AI it's hard to unsee. I actually had a few other generation styles, more 8-bit like, that probably would have lent themselves better to actual pixel-art processing, but I opted to use this fine-tune and in for a penny, in for a pound, so to speak...

  • For projects like this (that would have been just inconceivably too much work without the help of AI), I'm fine with AI usage.

> What’s possible now that was impossible before?

> I spent a decade as an electronic musician, spending literally thousands of hours dragging little boxes around on a screen. So much of creative work is defined by this kind of tedious grind. ... This isn't creative. It's just a slog. Every creative field - animation, video, software - is full of these tedious tasks. Of course, there’s a case to be made that the very act of doing this manual work is what refines your instincts - but I think it’s more of a “Just So” story than anything else. In the end, the quality of art is defined by the quality of your decisions - how much work you put into something is just a proxy for how much you care and how much you have to say.

Great insights here, thanks for sharing. That opening question really clicked for me.

  • That quote seriously rubs me the wrong way. "Dragging little boxes around" in a DAW is creative; it constitutes the entire process of composing electronic music. You are notating what notes to play, when and for how long they play, what instrument plays them, and any modifications to the default sound of that instrument. Is writing sheet music tedious? Sure, it can be, when the speed of notating by hand can't keep up with the speed your brain is thinking through ideas. But being tedious is not mutually exclusive with being creative, despite the attempt to explicitly contrast them as such, and the solution to the tedium of notating your creativity is not "randomly generate a bunch of notes and instruments that have little relation to the ones you're thinking of".

    This excerpt supposes that generative AI lets you automate the tedious part while keeping "the quality of your decisions", but it doesn't keep your decisions: it generates its own "decisions" from a broad, high-level prompt, and your role is reduced to merely deciding whether or not you like the generated content, which is not creativity.

    • I'd say that deciding "where a transient should go" is creative; manually aligning 15 other tracks over and over again is not (not to mention having to do it in both the DAW and Melodyne)...

      I agree that "push button get image" AI generation is at best a bit cheap, at worst deeply boring. Art is resistance in a medium - but at what point is that resistance just masochism?

      Georges Perec took this idea to the max when he wrote an entire novel without the letter "E" - in French! And then someone had the audacity to translate it to English (e excluded)! Would I ever want to do that? Hell no, but I'm very glad to live in a world where someone else is crazy enough to.

      I've spent my 10,000 hours making "real" art and don't really feel the need to justify myself - but to all of the young people out there who are afraid to play with something new because some grumps on hacker news might get upset:

      It doesn't matter what you make or how you make it. What matters is why you make it and what you have to say.


    • I don't know anything about electronic music or what a DAW is, but his "dragging boxes around" could either be a gross reduction of the process of creating art, or it could genuinely describe just mundane tasks.

      It's like if someone says my job as a SWE is just pressing keys, or looking at screens. I mean, technically that's true, and a lot of what I do daily can certainly be considered mundane. But from an outsider's perspective, both mundane and creative tasks may look identical.

      I play around with image/video gen, using both "single prompt to generate" à la nano banana or sora, and also ComfyUI. Though what I create in ComfyUI often pales in comparison to what Nano or Sora can generate given my hardware constraints, I would consider the stuff I make in ComfyUI more creative than what I make from Sora or Nano, mainly because of how I need to orchestrate my ComfyUI workflow, LoRAs, knobs, fine tuning, ControlNet, etc, not to mention prompt refinement.

      I think creativity in art just boils down to the process required to get there, which I think has always been true. I can shred papers in my office, but when Banksy shredded his painting, it became a work of art because of the circumstances in which it was created.


  • Tedium in art is full of micro decisions. The sum of these decisions makes a subtle but big impact on the result. Skipping these means less expression.

So, wait: this is just based on taking the 40 best/most consistent Nano Banana outputs for a prompt to do pixel-art versions of isometric map tiles? That's all it takes to finetune Qwen to reliably generate tiles in exactly the same style?

Also, does someone have an intuition for how the "masking" process worked here to generate seamless tiles? I sort of grok it but not totally.

  • I think the core idea in "masking" is to provide adjacent pixel art tiles as part of the input when rendering a new tile from photo reference. So part of the input is literal boundary conditions on the output for the new tile.

    Reference image from the article: https://cannoneyed.com/img/projects/isometric-nyc/training_d...

    You have to zoom in, but here the inputs on the left are mixed pixel art / photo textures. The outputs on the right are seamless pixel art.

    Later on he talks about 2x2 squares of four tiles each as input and having trouble automating input selection to avoid seams. So with his 512x512 tiles, he's actually sending in 1024x1024 inputs. You can avoid seams if every new tile can "see" all its already-generated neighbors.

    You get a seam if you generate a new tile next to an old tile but that old tile is not an input to the infill algorithm. The new tile can't see that boundary, and the style will probably not match.
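
    Here's a rough sketch of how that input assembly could look (hypothetical data layout and helper, as I understand the article - not the author's actual pipeline):

        // Compose a 1024x1024 model input from a 2x2 quad of 512x512 tiles.
        // Cells marked `done` already hold generated pixel art (the boundary
        // conditions); the rest hold photo reference to be converted.
        const TILE = 512;

        function buildQuadInput(ctx, maskCtx, quad) {
          for (let row = 0; row < 2; row++) {
            for (let col = 0; col < 2; col++) {
              const cell = quad[row][col];
              // Paint the tile (pixel art or photo) into the input image.
              ctx.drawImage(cell.image, col * TILE, row * TILE, TILE, TILE);
              // Mask convention (assumed): black = frozen boundary pixels the
              // model must match, white = free to repaint.
              maskCtx.fillStyle = cell.done ? 'black' : 'white';
              maskCtx.fillRect(col * TILE, row * TILE, TILE, TILE);
            }
          }
        }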

    • That’s exactly right - the fine-tuned Qwen model was able to generate seamless pixels most of the time, but you can find lots of places around the map where it failed.

      More interestingly, not even the biggest, smartest image models can tell if a seam exists or not (likely due to the way they represent image tokens internally).


  • You can tell the diffusion from space. Sadly, it would normally take years to do it the conventional way, which is still the only correct way.

  • Does anyone have a good reference for finetuning Qwen? This article opened my eyes a bit...

    • The turn-key option is ostris ai-toolkit which has good tutorials on YT and can be run completely locally or via RunPod. Claude Code can set everything up for you (speaking from experience) and can even SSH into RunPod.

Sorry about the hug of death - while I spent an embarrassing amount of money on rented H100s, I couldn't be bothered to spend $5 for Cloudflare Workers... Hope you all enjoy it; it should be back up now.

  • Curious if the H100s were strictly for VRAM capacity? Since this is a batch job and latency doesn't really matter, it seems like you could have run this on 4090s for a fraction of the cost. Unless the model size was the bottleneck, the price to performance ratio on consumer hardware is usually much better for this kind of workload.

Truly amazing work! I only had time to quickly scan your write-up, but I'll try to read it in full - it's very interesting, thank you!

As you say: software engineering doesn’t go away in the age of AI - it just moves up the ladder of abstraction! At least in the mid term :)

> This project is far from perfect, but without generative models, it couldn’t exist. There’s simply no way to do this much work on your own,

Maybe, though a guy did physically carve/sculpt the majority of NYC: https://mymodernmet.com/miniature-model-new-york-minninycity...

Want to thank you for taking the time to write up the process.

I know you'll get flak for the agentic coding, but I think it's really awesome you were able to realize an idea that otherwise would've remained relegated to "you know what'd be cool.." territory. Also, just because the activation energy to execute a project like this is lower doesn't mean the creative ceiling isn't just as high as before.

Not working here, some CORS issue.

Firefox, Ubuntu latest.

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://isometric-nyc-tiles.cannoneyed.com/dzi/tiles_metadat.... (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 429.

Edit: I see now - the error is due to the Cloudflare Worker being rate limited :/ I read the writeup though, pretty cool, especially the insight about tool -> lib -> application.

Great project! Just wanted to talk about the "/ The edit problem" section in the blog post - it's almost entirely false. Inpainting (when you provide a mask) has existed for years, and for good edit models (such as Nano Banana Pro) you can annotate the image, and it'll only edit one part, keeping the rest almost entirely consistent. The same goes for other top editing models like Flux 2, Seedream 4.5 and so on.

Transparency also exists, e.g. GPT Image does it, and Nano Banana Pro should have it supported soon as well.

Author here: Just got out of some meetings at work and see that HN is kicking my cloudflare free plan's butt. Let me get Claude to fix it, hold tight!

  • We should be back online! Thanks for everyone's patience, and big thanks to Claude for helping me debug this and to Cloudflare for immediately turning the website back on after I gave them some money

I was most surprised by the fact that it only took 40 examples for a Qwen finetune to match the style and quality of (interactively tuned) Nano Banana. Certainly the end result does not look like the stock output of open-source image generation models.

I wonder if for almost any bulk inference / generation task, it will generally be dramatically cheaper to (use fancy expensive model to generate examples, perhaps interactively with refinements) -> (fine tune smaller open-source model) -> (run bulk task).
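
To make the shape of that pipeline concrete, a sketch with entirely made-up helper functions (nothing here is a real API):

    // Teacher/student bulk generation, sketched with hypothetical helpers.
    async function bulkGenerate(allInputs) {
      // 1. Curate a small set of high-quality examples with the expensive
      //    frontier model, refining prompts interactively.
      const examples = await curateWithTeacher(allInputs.slice(0, 40));

      // 2. Fine-tune a smaller open-weights model on those examples.
      const student = await fineTuneStudent('qwen-image', examples);

      // 3. Run the cheap student over the full workload.
      return Promise.all(allInputs.map((input) => student.generate(input)));
    }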

  • In my experience image models are very "thirsty" and can often learn the overall style of an image from far fewer examples. Even Qwen is a HUGE model, relatively speaking.

    Interestingly enough, the model could NOT learn how to reliably generate trees or water no matter how much data and/or strategies I threw at it...

    This to me is the big failure mode of fine-tuning - it's practically impossible to understand what will work well and what won't and why

    • I see, yeah - I can see how, if it's matching some parts of the style 100% but failing completely on other parts, it's a huge pain to deal with. I wonder if a bigger model could loop here - like, have GPT 5.2 compare the fine-tune output and the Nano Banana output, notice that trees + water are bad, select more examples to fine-tune on, and then retry. Perhaps noticing that the trees and water are missing or bad is a more human judgement, though.


This is a fantastic piece of work, AI or no!

Feature idea: For those of us who aren't familiar with The City, could you allow clicks on the image to identify specific landmarks (buildings, etc.)? Or is that too computationally intensive? I can identify a few things, but it would sure be nice to know what I'm looking at.

This is an incredible piece of work that I had a blast exploring!

I must admit I spent way too much time finding landmarks I visited when I last holidayed from Australia. Now I'm feeling nostalgic.

Thanks so much for sharing!

I created an account just to say that your map is amazing! It's beautiful and very interesting to look at, because of all the details. Thank you so much for documenting your method of creating this! I've been looking for a way to create a similar map for the island I'm living on, and maybe your method will work for it.

Would be cool if you could adjust the sun's position & see the effect on shadows.

Could even extend it to add a "night mode" too, though that'd require extensive retexturing.

Very impressive result! Are you taking requests for the next ones? SF :D Tokyo :D Paris :D Milan :D Rome :D Sydney :D

Oh man...

  • Really want to do SF next. Maybe the next gen of models will be reliable enough to automate it but this took WAY too much manual labor for a working man. I’ll get the code up soon if people wanna fork it!

    • I'd absolutely love to play with this. One idea I had is to train another model to create bitmaps of sidewalks and roads and add a simulation for pedestrians and cars. A day/night cycle would also be so cool!

  • How about Staten Island next?

    • I second that! I know that Staten Island isn't that well liked by the other boroughs, but the only time I visited New York so far, I stayed at an AirBNB (actually it was a HomeAway) there and have pleasant memories of seeing Manhattan in the distance through the window and using the ferry each morning to get there...

Hey, engineer at Oxen.ai here. Glad the fine-tuning worked well for this project! If anyone has questions on that part of it we would be happy to answer.

We have a blog post on a similar workflow here: https://www.oxen.ai/blog/how-we-cut-inference-costs-from-46k...

On the inference cost and speed: we're actively working on that and have a pretty massive upgrade there coming soon.

  • I didn't know Oxen.ai. I took a look at your docs on how to fine-tune LLMs:

    > Oxen.ai supports datasets in a variety of file formats. The only requirement is that you have a column where each row is a list of messages. Each message is a dictionary with a role and content key. The role can be “system”, “user”, or “assistant”. The content is the message content.

    Oh, so you're forced to use the ChatML multi-turn conversation format.

This is so cool! Please give me a way to share lat/long links with folks so I can show them places that are special to me :)

> This project is far from perfect, but without generative models, it couldn’t exist. There’s simply no way to do this much work on your own

100 people built this in 1964: https://queensmuseum.org/exhibition/panorama-of-the-city-of-...

One person built this in the 21st century: https://gothamist.com/arts-entertainment/truckers-viral-scal...

AI certainly let you do it much faster, but it’s wrong to write off doing something like this by hand as impossible when it has actually been done before. And the models built by hand are the product of genuine human creativity and ingenuity; this is a pixelated satellite image. It’s still a very cool site to play around with, but the framing is terrible.

  • It feels like whether or not it's possible to make a model of New York misses the point. This kind of project could just as easily scale up to an isometric map of the entire world - every city, every forest, every mountain hut - updating continuously as new buildings are built and cities grow. Expensive, yes, for now - but the speed and scale at which such representations can be made and maintained will absolutely be outside the realm of what was previously possible. And not just maps of the world: maps of human knowledge and history, at a level of detail that would entail not tens of thousands of hours but billions or trillions of hours if done by hand.

Amazing work!

Gemini 3.5 Pro reverse-engineered it - if you use the code at the following gist, you can jump to any specific lat lng :-)

https://gist.github.com/gregsadetsky/c4c1a87277063430c26922b...

Also, check out https://cannoneyed.com/isometric-nyc/?debug=true ..!

---

code below (copy & paste into your devtools, change the lat lng on the last line):

    // Three calibration points mapping map pixels to lat/lng.
    const calib = {
      p1: { pixel: { x: 52548, y: 64928 }, geo: { lat: 40.75145020893891, lng: -73.9596826628078 } },
      p2: { pixel: { x: 40262, y: 51982 }, geo: { lat: 40.685498640229675, lng: -73.98074283976926 } },
      p3: { pixel: { x: 45916, y: 67519 }, geo: { lat: 40.757903901085726, lng: -73.98557060196454 } },
    };

    // Solve for the affine transform (lat, lng) -> (pixelX, pixelY) that fits
    // the three calibration points, via Cramer's rule.
    function getAffineTransform() {
      const { p1, p2, p3 } = calib;
      const det =
        p1.geo.lat * (p2.geo.lng - p3.geo.lng) -
        p2.geo.lat * (p1.geo.lng - p3.geo.lng) +
        p3.geo.lat * (p1.geo.lng - p2.geo.lng);
      if (det === 0) {
        console.error('Points are collinear, cannot solve.');
        return null;
      }
      const Ax = (p1.pixel.x * (p2.geo.lng - p3.geo.lng) - p2.pixel.x * (p1.geo.lng - p3.geo.lng) + p3.pixel.x * (p1.geo.lng - p2.geo.lng)) / det;
      const Bx = (p1.geo.lat * (p2.pixel.x - p3.pixel.x) - p2.geo.lat * (p1.pixel.x - p3.pixel.x) + p3.geo.lat * (p1.pixel.x - p2.pixel.x)) / det;
      const Cx = (p1.geo.lat * (p2.geo.lng * p3.pixel.x - p3.geo.lng * p2.pixel.x) - p2.geo.lat * (p1.geo.lng * p3.pixel.x - p3.geo.lng * p1.pixel.x) + p3.geo.lat * (p1.geo.lng * p2.pixel.x - p2.geo.lng * p1.pixel.x)) / det;
      const Ay = (p1.pixel.y * (p2.geo.lng - p3.geo.lng) - p2.pixel.y * (p1.geo.lng - p3.geo.lng) + p3.pixel.y * (p1.geo.lng - p2.geo.lng)) / det;
      const By = (p1.geo.lat * (p2.pixel.y - p3.pixel.y) - p2.geo.lat * (p1.pixel.y - p3.pixel.y) + p3.geo.lat * (p1.pixel.y - p2.pixel.y)) / det;
      const Cy = (p1.geo.lat * (p2.geo.lng * p3.pixel.y - p3.geo.lng * p2.pixel.y) - p2.geo.lat * (p1.geo.lng * p3.pixel.y - p3.geo.lng * p1.pixel.y) + p3.geo.lat * (p1.geo.lng * p2.pixel.y - p2.geo.lng * p1.pixel.y)) / det;
      return { Ax, Bx, Cx, Ay, By, Cy };
    }

    // Convert lat/lng to pixel coordinates, store them as the viewer's saved
    // view state, and reload so the map jumps there.
    function jumpToLatLng(lat, lng) {
      const t = getAffineTransform();
      if (!t) return;
      const x = Math.round(t.Ax * lat + t.Bx * lng + t.Cx);
      const y = Math.round(t.Ay * lat + t.By * lng + t.Cy);
      console.log(`Jumping to Geo: ${lat}, ${lng}`);
      console.log(`Calculated Pixel: ${x}, ${y}`);
      localStorage.setItem('isometric-nyc-view-state', JSON.stringify({ target: [x, y, 0], zoom: 13.95 }));
      window.location.reload();
    }

    jumpToLatLng(40.757903901085726, -73.98557060196454);

This is really wonderful. Thanks for doing it!

I especially appreciated the deep dive on the workflow and challenges. It's the best generally accessible explication I've yet seen of the pros and cons of vibe coding an ambitious personal project with current tooling. It gives a high-level sense of "what it's generally like" with enough detail and examples to be grounded in reality while avoiding slipping into the weeds.

This is delightful. It scratches that SimCity itch while still being recognizably New York. I ended up losing track of time exploring my old neighborhood and landmarks, and it somehow conveys the city's scale and quirks better than a satellite view. Thanks for sharing it.

What an amazing feat of artwork, and such a fine writeup as well. Learnt a ton! Thank you for this.

Wow, this is absolutely amazing! This is the kind of project I am trying to do with agents. Of course I'm really far from this, but you've motivated me to do better!

> I’m not particularly interested in getting mired down in the muck of the morality and economics of it all. I’m really only interested in one question: What’s possible now that was impossible before?

Upvote for the cool thing I haven’t seen before but cancelled out by this sentiment. Oof.

  • I mean this pretty literally though - I'm not particularly interested in these questions. They've been discussed a ton by people way more qualified to discuss them, but personally I feel like it's been pretty much the same conversation on loop for the last 5 years...

    That's not to say they're not very important issues! They are, and I think it's reasonable to have strong opinions here because they cut to the core of how people exist in the world. I was a musician for my entire 20s - trust me that I deeply understand the precarity of art in the age of the internet, and I can deeply sympathize with people dealing with precarity in the age of AI.

    But I also think it's worth being excited about the birth of a fundamentally new way of interacting with computers, and for me, at this phase in my life, that's what I want to write and think about.

    • I appreciate the thoughtful reply. I will try to give you the benefit of the doubt, then, and not extrapolate from your relatively benign feelings about a creative art project any capacity to take up engineering projects that would make the world worse.

      You get your votes back from me.

  • This is basically the inversion of the famous Jurassic Park quote. “Never mind if we should. What if we could?”

Maybe I'm just missing it somewhere but does anyone see what the licence for this map is?

> Slop vs. Art

> If you can push a button and get content, then that content is a commodity. Its value is next to zero.

> Counterintuitively, that’s my biggest reason to be optimistic about AI and creativity. When hard parts become easy, the differentiator becomes love.

Love that. I've been struggling to succinctly put that feeling into words, bravo.

  • I agree this is the interesting part of the project. I was disappointed when I realized this art was AI-generated - I love isometric hand-drawn art and respect the craft. But after reading the creator's description of their thoughtful use of generative AI, I appreciated their result more.

  • Where’s the love here? There are artists who dedicate their lives to creating a single masterwork. This is someone spending a weekend on a “neat idea”.

    • You're inferring that time invested per project is directly proportional to love for the craft, which I disagree with strongly. Taking a weekend to explore a new medium is an act of interest/love in the craft. Is Robert Bateman any less dedicated to a life of art than Michelangelo was? Maybe, maybe not, but I think we can both agree they produced quick sketches, and dedicated their lives to producing incredible art.

      I expect artists will experiment with the new tools and produce incredibly creative works with them, far beyond the quality I can produce by typing in "a pelican riding a bicycle".


This is awesome, thanks for sharing this!

I am especially impressed with the “i didn’t write a single line of code” part, because I was expecting it to be janky or slow on mobile, but it feels blazing fast just zooming around different areas.

And it is very up to date too, as I found a building across the street from me that was finished only last year.

I found a nitpicky error though: in downtown Brooklyn, where Cadman Plaza Park is, your website makes it look like there is a large rectangular body of water (e.g., a pool or a fountain). In reality, there is no water at all; it is just a concrete slab area.

  • The classic "water/concrete" issue! There's probably a lot of those around the map - turns out, it's pretty hard to tell the difference between water and concrete/terrain in a lot of the satellite imagery that the image model was looking at to generate the pixel images!

  • The author had built something like this image viewer before and used an existing library to handle some of the rendering.

Highly recommend reading the blog post too. It helped me see some of the limitations of this approach, and I thought it was useful.

This is one of the coolest things I've ever seen. Massive kudos to you. I am forwarding to all NYers I know. It gives me chills to relive specific places, though I'm far from the city now.

Really fun to fly around, find my old apartment building etc.

It would be neat if you could drag and click to select an area to inpaint. Let's see everyone's new Penn Station designs!

Would guess it'd have to be BYOK but it works pretty well:

https://i.imgur.com/EmbzThl.jpeg

Much better than trying to inpaint directly on Google Earth data

This is awesome, and thanks so much for the deep dive into process!!

One thing I would suggest is to also post-process the pixel art with something like this tool to have it be even sharper. The details fall off as you get closer, but running this over larger patch areas may really drive the pixel art feel.

https://jenissimo.itch.io/unfaker

  • Oh wow this is great, I threw a few of these tricks at it and never really felt like I nailed it... Might have to revisit after playing around with this more

Really slick. Let’s see the AI open it up so we can use it to generate a similar map for anyplace on Earth :)

One thing I learned from this is that my prompts are much less detailed than what the author has been using.

Very cool work and great write up.

"map of NYC" does not include Staten Island. That's how we like it

  • I just couldn't justify spending dozens more hours for Staten Island...

    • What do you estimate the cost would be to have each tile hand drawn by an artist?

      I don't think there are enough artists in the world to achieve this in a reasonable amount of time (1-5 years) and you're probably looking at a $10M cost?

      Part of me wonders if you put a kickstarter together if you could raise the funds to have it hand drawn but no way the very artists you hire wouldn't be tempted to use AI themselves.

You mentioned needing 40k tiles and renting an H100 for $3/hour at 200 tiles/hour, so am I right to assume you spent about 40,000 / 200 = 200 GPU-hours, i.e. 200 × $3 = $600, on running the inference? That also means letting it run for some 25 nights at 8 hours each?

Cool project!

  • Yup back of the napkin is probably about there - also spent a fair bit on the oxen.ai fine-tuning service (worth every penny)... paint ain't free, so to speak

To take it a step further, it would be super cool to somehow figure out the roadway system from the map data, use the buildings as masks over the roads, and have little simulated cars driving around.

  • 100% - I originally wanted to do that but when I realized how much manual work I'd have to do just to get the tiles generated I had to cut back on scope pretty hard.

    I actually have a nice little water shader that renders waves on the water tiles via a "depth mask", but my fine-tunes for generating the shader mask weren't reliable enough and I'd spent far too much time on the project to justify going deeper. Maybe I'll try again when the next generation of smarter, cheaper models get released.

This is very cool, it would be awesome if I could rotate it as well by 90 degree increments to peek at different angles! I loved RCT growing up so this is hitting the nostalgia!

A bit tangential, but I really think the .nyc domain is underappreciated.

SF/Mountain View etc. don't even have one! You get a little piece of the NYC brand just for you!

Very gorgeous and creative! Any plans to attempt other isometric cities? (Like SF, LA, Paris, London)

Amazing. Took forever but I found my building in Brooklyn as well as the nearby dealership, gas station, and public school.

Just curious, about how long did this project take you? I don't see that mentioned in the article.

  • We had our third kid in late November, and I worked sporadically on it over the following two months of paternity leave and holiday... If I had to bet, I'd say I put in well over 200 hours of work on it, the majority of that being manual auditing/driving of the generation process. If any AI model were reliable at checking the generated pixels, I could have automated this process, but they simply aren't there yet, so I had to do a lot more manual work than I'd anticipated.

    All told I probably put in less than 20 hours of actual software engineering work, though, which consisted entirely of writing specs and iterating with various coding agents.

    • > If any AI model were reliable at checking the generated pixels, I could have automated this process, but they simply aren't there yet, so I had to do a lot more manual work than I'd anticipated.

      Since the output is so cool and generally interesting, there might be an opportunity for those forking this to do other cities to deploy a web app to crowd source identifying broken tiles and maybe classifying the error or even providing manual hinting for the next run. It takes a village to make a (sim) city! :-)


Some people reported 429 - otherwise known as HN hug of death.

You probably need to adjust how caching is handled with this.

I see you used Gemini-CLI some but no mention of Antigravity. Surprising for a Googler. Reasons?

  • I used antigravity a bit, but it still feels a bit wonky compared to Cursor. Since this was on my own time, I'm gonna use the stuff that feels best. Though, by the end of the project I wasn't touching an IDE at all.

This is incredible, I love it. It would be great to be able to overlay OSM data on it, for those of us who aren't that familiar with the city and would like to know more about certain areas or places.

> But at the end of the day, the last 10% always takes up 90% of the time and, as always, the difference between good enough and great is the amount of love you put into the work.

Nicely put.

This doesn't really look like pixel art; it looks like you applied a (very sophisticated) Photoshop filter to Google Earth. Everything is a little blurry, and the characteristic sharp edges of handmade pixel art (e.g. [0]) are completely absent.

To me, the appeal of pixel art is that each pixel looks deliberately placed, with clever artistic tricks to circumvent the limitations of the medium. For instance, look at the piano keys here [1]. They deliberately lack the actual groupings of real piano keys (since that wouldn't be feasible to render at this scale), but are asymmetrically spaced in their own way to convey the essence of a keyboard. It's the same sort of cleverness that goes into designing LEGO sets.

None of these clever tricks are apparent in the AI-generated NYC.

On another note, a big appeal of pixel art for me is the sheer amount of manual labor that went into it. Even if AI were capable of rendering pixel art indistinguishable from [0] or [1], I'm not sure I'd be impressed. It would be like watching a humanoid robot compete in the Olympics. Sure, a Boston Dynamics bot from a couple years in the future will probably outrun Usain Bolt and outgymnast Simone Biles, but we watch Bolt and Biles compete because their performance represents a profound confluence of human effort and talent. Likewise, we are extremely impressed by watching human weightlifters throw 200kg over their heads but don't give a second thought to forklifts lifting 2000kg or 20000kg.

OP touches on this in his blog post [2]:

   I spent a decade as an electronic musician, spending literally thousands of hours dragging little boxes around on a screen. So much of creative work is defined by this kind of tedious grind. [...] This isn't creative. It's just a slog. Every creative field - animation, video, software - is full of these tedious tasks. In the end, the quality of art is defined by the quality of your decisions - how much work you put into something is just a proxy for how much you care and how much you have to say.

I would argue that in some cases (e.g. pixel art), the slog is what makes the art both aesthetically appealing (the deliberately placed nature of each pixel is what defines the aesthetic) and awe-inspiring (the slog represents an immense amount of sustained focus).

[0] https://platform.theverge.com/wp-content/uploads/sites/2/cho...

[1] https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fu...

[2] https://cannoneyed.com/projects/isometric-nyc

  • Yeah this is all completely fair and I agree with all of it. Aesthetically and philosophically, what AI does for "pixel art" is very off. And once you see the "AI" you can't really unsee it.

    But I didn't want to call it a "SimCity" map, though that's really the vibe/inspiration I wanted to capture, because that implies other things, so I used the term "pixel art" even though I knew it'd get a lot of (valid) pushback...

    As with all things art, labels are really difficult and the context / meaning / technique is at once completely tied to genre but also completely irrelevant. Think about the label "techno" - the label is deeply meaningful and subtle to some and almost meaningless to others

While impressive on a technical level, I can't help but notice that it just looks...bad? Just a strange blurry mess that only vaguely smells of pixelart.

Makes me feel insane that we're passing this off as art now.

Would it be simple to modify this to make a highly stylized version of NYC instead? Like post apocalyptic NYC or medieval NYC, night time NYC, etc. because then that would have some very interesting applications

  • Simple is relative - it could definitely be done, but until the models get a bit smarter and require less manual hand-holding, it'd be a lot of grindy work.

Holy damn, this map is a dream and the best map of NYC I've ever seen!

It's as if NYC was built in Transport Tycoon Deluxe.

I'll be honest, I've been pretty skeptical about AI and agentic coding for real-life problems and projects. But this one seems like the final straw that'll change my mind.

Thanks for making it, I really enjoy the result (and the educational value of the making-of post)!

This is huge!

At first I thought this was someone working thousands of hours putting this together, and I thought: I wonder if this could be done with AI…

Appreciate that writeup. Very detailed insights into the process. However, those conclusions - the ones about 'unlocking scale' and commodity content having zero value - left me on the fence about whether I 'liked' the project. Where does that leave you and this project? Does it really matter that much that the project couldn't exist without genAI? Maybe it shouldn't exist then at all. As with a lot of the areas AI touches, the problem isn't the tools or use of them exactly, it's the scale. We're not ready for it. We're not ready for the scale of impact the tech has in a multitude of areas. Including the artistic world. The diminished value and loss of opportunities. We're not ready for the impacts of use by bad actors. The scale of output like this, as cool as it is, is out of balance with the loss of a huge chunk of human activity and expression. Sigh.

  • At the risk of rehashing the same conversation over and over again, I think this is true of every technology ever.

    Personally I'm extremely excited about all of the creative domains that this technology unlocks, and also extremely saddened/worried about all of the crafts it makes obsolete (or financially non-viable)...

    • Do you seriously believe this[1] makes any craft obsolete or financially non-viable?

      [1] https://files.catbox.moe/1uphaw.png

      This is a fairly cool and novel application of generative AI[2], but it did not generate pixel art and it's still wildly incoherent slop when you examine it closely. This mostly works because it uses scale to obfuscate the flaws; users are expected to be zoomed out and not looking at the details. But the details are what makes art work. You could not sell a game or an animation like this. This is not replacing anybody.

      [2] It's also wholly unrepresentative of general use-cases. 99.99999999% of generative AI usage does not involve a ton of manual engineering labour fine-tuning a model and doing the things you did to get this set up. Even with all of that effort, what you've produced here is never replacing a commercially viable pixel artist. The rest of the world slapping a prompt into an online generator is even further away from doing that.

  • Does it really matter that much that a sewage treatment plant couldn't exist without automated sensors? Maybe it shouldn't exist then at all.

  • > Where does that leave you and this project? Does it really matter that much that the project couldn't exist without genAI? Maybe it shouldn't exist then at all. As with a lot of the areas AI touches, the problem isn't the tools or use of them exactly, it's the scale. We're not ready for it. We're not ready for the scale of impact the tech has in a multitude of areas. Including the artistic world. The diminished value and loss of opportunities. We're not ready for the impacts of use by bad actors. The scale of output like this, as cool as it is, is out of balance with the loss of a huge chunk of human activity and expression. Sigh.

    If you don’t see these tools as a way for ALL of us to more-intimately reach more of our intended audiences,

    whether as a musician, marketer, small business, whatever,

    then I don’t know if you were really passionate or excited about what you were doing in the first place.