Apple Research unearthed a forgotten AI technique and is using it to generate images

4 days ago (9to5mac.com)

I find it fascinating that Apple-centric media sites are stretching so much to position the company in the AI race. The title is meant to suggest that Apple found something unique that other people missed, when the simplest explanation is that they started working on this a while back (it's a 2021 paper, after all) and just released it.

A more accurate headline would be: Apple starts creating images using four-year-old techniques.

  • It's not even some "forgotten AI technique" (sigh...). It's been a hot topic for the last 5 years, used a lot with variational autoencoders, etc. Such bad journalism.

  • >I find it fascinating that Apple-centric media sites are stretching so much to position the company in the AI race

    Or, you know, they're just posting an article based on an Apple press release about a new technique that falls squarely within their target audience (people reading Apple-centric news) and is a great fit for a currently fashionable technology (AI) that people will show interest in.

    Without giving a fuck about "positioning the company in the AI race". They'd post about Apple's sewers having an issue at their HQ, if that news story were available.

    Besides, when did Apple ever come first in some particular tech race (say, the MP3 player, the smartphone, the store, the tablet, the smartwatch, maybe VR now)? What they typically do is wait for the dust to settle and then sweep up the end-user end of that market.

  • That site's target market is what we know as "Apple fanboys". I'm not one to consider 9to5 serious journalism (nor even worth posting to HN), but even the publications I do consider serious are businesses too, and they need to pander to their markets in order to make money.

  • > I find it fascinating that Apple-centric media sites are stretching so much to position the company in the AI race.

    A glance through the comments shows HNers doing their best too. The mind still boggles as to why this site is so willing to perform mental gymnastics for a corporation.

    • We seriously need an AI to dampen the reality distortion field and bring back common sense. Maybe it could be something people install in their browsers.

Forgotten from like 2021? NVAE[1] was a great paper but maybe four years is long enough to be forgotten in the AI space? shrug

1. NVAE: A Deep Hierarchical Variational Autoencoder https://arxiv.org/pdf/2007.03898

  • Right, it's bizarre to read that someone "unearthed a forgotten AI technique" that you happened to have worked with/on when it was still hot. When did I become a fossil? :D

    Also, if we're being nitpicky, diffusion model inference has been proven equivalent to (and is often used as) a particular normalizing flow, so... shrug

  • They are both variational inference, but a Normalizing Flow (NF) is not a VAE.

    • If you read the paper, you'll find "More Expressive Approximate Posteriors with Normalizing Flows" is in the methods section. The authors are in fact using (inverse) normalizing flows within the context of VAEs.

      The appendix goes on to explain, "We apply simple volume-preserving normalizing flows of the form z′ = z + b(z) to the samples generated by the encoder at each level".
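      For illustration, here is a minimal sketch of that kind of additive, volume-preserving step z′ = z + b(z), where b reads only the half of z it doesn't shift, so the Jacobian is triangular with unit diagonal (log-det = 0). This is a toy in PyTorch under my own assumptions, not the NVAE or Apple code:

        # Toy additive coupling layer (a volume-preserving flow step).
        # Hypothetical sketch; the real models use far richer networks.
        import torch
        import torch.nn as nn

        class AdditiveCoupling(nn.Module):
            def __init__(self, dim: int, hidden: int = 64):
                super().__init__()
                self.half = dim // 2
                # b(.) can be any network; it shifts the second half of z
                # based only on the first half, so volume is preserved.
                self.b = nn.Sequential(
                    nn.Linear(self.half, hidden), nn.ReLU(),
                    nn.Linear(hidden, dim - self.half),
                )

            def forward(self, z):              # z' = z + b(z)
                z1, z2 = z[:, :self.half], z[:, self.half:]
                return torch.cat([z1, z2 + self.b(z1)], dim=1)

            def inverse(self, z):              # exact inverse: subtract b
                z1, z2 = z[:, :self.half], z[:, self.half:]
                return torch.cat([z1, z2 - self.b(z1)], dim=1)

        flow = AdditiveCoupling(dim=8)
        z = torch.randn(4, 8)
        assert torch.allclose(flow.inverse(flow(z)), z, atol=1e-5)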

It’s pretty great that despite having large data centers capable of doing this kind of computation, Apple continues to make things work locally. I think there is a lot of value in being able to hold the entirety of a product in hand.

  • With no company having a clear lead in everyday AI for the non-technical mainstream user, there is only going to be a race to the bottom on subscription and API pricing.

    Local inference doesn't cost the company anything, and it increases the minimum hardware customers need to buy.

  • It's very convenient for Apple to do this: lower spending on costly AI chips, and more excuses to ask customers to buy their latest hardware.

    • Users have to pay for the compute somehow: maybe by paying for models run in data centers, maybe by paying for hardware that's capable enough to run models locally.

Flows make sense here not just for size but because they're fully invertible and deterministic. Imagine running the same generation on three iPhones and getting the same output. It means Apple can ensure the same input gives the same output across devices, chips, and runs, with no weird variance or sampling noise. That's good for caching, testing, user trust, all of that. It fits Apple's whole determinism DNA and makes for more predictable generation at scale.

  • Normalizing flows generate samples by starting from Gaussian noise and passing it through a series of invertible transformations. Diffusion models generate samples by starting from Gaussian noise and running it through an inverse diffusion process.

    To get deterministic results, you fix the seed for your pseudorandom number generator and make sure not to execute any operations that produce different results on different hardware. There's no difference between the approaches in that respect.
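    As a minimal sketch of that point (assuming PyTorch, with a toy orthogonal map standing in for a real flow, nothing from Apple's models): determinism comes from pinning the RNG, not from the model family.

      # Toy demo: a fixed seed makes "noise -> invertible transform"
      # sampling repeat exactly; the orthogonal map is a stand-in flow.
      import torch

      def sample(seed: int) -> torch.Tensor:
          g = torch.Generator().manual_seed(seed)   # pin the PRNG
          z = torch.randn(1, 8, generator=g)        # Gaussian noise
          w = torch.randn(8, 8, generator=g)
          q = torch.matrix_exp(w - w.T)             # exp(skew) is orthogonal,
          return z @ q                              # hence invertible

      print(torch.equal(sample(0), sample(0)))      # True on the same
                                                    # hardware/software stack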

Normalizing flows might be unpopular, but they're definitely not a forgotten technique.

I wonder if it's noticeably faster or slower than the more common approach on the same hardware.

This subject is fascinating and the article is informative, but I wish HN had a button like "flag", but specific to articles that seem written by AI (at least the section "How STARFlow compares with OpenAI's 4o image generator" sounds like it).

  • I had the opposite reaction, it definitely reads like a tech journalist who doesn’t have a great understanding of the tech. AI would’ve written a less clunky (and possibly incorrect) explanation.

  • FWIW, you can always report any HN quality concerns to hn@ycombinator.com and it'll be reviewed promptly and fairly (IMO).

  • It reads like the work of a professional writer who uses a handful of variant sentence structures and conventions to quickly write an article. That’s what professional writers are trained to do.

Apple's AI team keeps going against the bitter lesson by focusing on small on-device models.

Let's see how this turns out in the long term.

  • """The bitter lesson""" is how you get the current swath of massively unprofitable AI companies that are competing with each other over who can lose money faster.

    • I can't tell if you're perpetuating the myth that these companies are losing money on their paid offerings, or just overestimating how much money they lose on their free offerings.

  • They took a simple technique (normalizing flows), instantiated its basic building blocks with the most general neural network architecture known to work well (transformer blocks), and trained models of different sizes on various datasets to see whether it scales. Looks very bitter-lesson-pilled to me.

    That they didn't scale beyond AFHQ (high-quality animal faces: cats, dogs and big cats) at 256×256 is probably not due to an explicit preference for small models at the expense of output resolution, but because this is basic research to test the viability of the approach. If this ever makes it into a product, it'll be a much bigger model trained on more data.

    EDIT: I missed the second paper https://arxiv.org/abs/2506.06276 where they scale up to 1024×1024 with a 3.8-billion-parameter model. It seems to do about as well as diffusion models of similar size.

  • The bitter-er lesson is that distillation from bigger models works pretty damn well. It's great news for the GPU-poor, not so great for the guys training the models we distill from.

  • It's somewhat hard to say how the cards will fall when the cost of "intelligence" is coming down 1000x year over year while compute continues to scale. The bet should probably be made on both sides.

  • Edge compute would be clutch, but Apple feels a decade too early.

    • Maybe for a big LLM, but if they added some GPU cores and an order of magnitude or two more unified memory to their iDevices, or shoehorned M-series SoCs into higher-tier iDevices (especially as their lithography process advances), image generation becomes more viable, no? Also, I thought I read somewhere that Apple wanted to run inference for simpler queries locally and switch to datacenter inference when the request is more complicated.

      If they approach things this way, and transistor progress continues linearly (relative to the last few years), maybe they can ship their first devices that meet these goals in… 2-3 years?