Comment by behnamoh

2 years ago

I hate this "tierification" of products into categories: normal, pro, max, ultra

Apple does this and it's obvious that they do it to use the "decoy effect" when customers want to shop. Why purchase a measly regular iPhone when you can spend a little more and get the Pro version?

But when it comes to AI, this tierification only leads to disappointment: everyone expects the best models from the FAANGO (including OpenAI); no one expects Google or OpenAI to offer shitty models that underperform their flagships when you can literally run Llama 2 and Mistral models that you actually own.

I don't understand -- these are all literally tied directly to performance.

They're tiers of computing power and memory. More performance costs more money to produce. The "nano" can fit on a phone, while the others can't.

Are you really objecting to the existence of different price/performance tiers...? Do you object to McDonald's selling 3 sizes of soft drink? There's nothing "decoy" about any of this.

  • > Do you object to McDonald's selling 3 sizes of soft drink?

    Yes, actually, for different reasons - McDonald’s charges only a tiny bit more for the largest size of drink than they do for the smallest (which is easy because soft drinks are a few cents’ worth of syrup and water, and the rest is profit). That pushes people toward huge drinks, which means more sugar, more caffeine, and more addiction.

    • But you're not objecting to selling 3 sizes. You're just objecting that the prices aren't far enough apart...

No, it’s not just to use the “decoy effect.” They do this to share development costs across a whole product line. Low volume, expensive products are subsidized by high volume, mass market devices. Without these tiers, they’d be unable to differentiate the products and so lose the margins of the high end products (and their entire reason for existing).

Unless you expect Apple to just sell the high end devices at a loss? Or do you want the high end chips to be sold in the mass market devices and for Apple to just eat the R&D costs?

  • > They do this to share development costs across a whole product line. Low volume, expensive products are subsidized by high volume, mass market devices

    Usually it’s the other way around. Mass market products have thin margins and are subsidized by high end / B2B products because the customers for those products have infinitely deep pockets.

    > Or do you want the high end chips to be sold in the mass market devices and for Apple to just eat the R&D costs?

    Literally what Steve Jobs was steadfast about :). One iPhone for everyone. He even insisted that the Plus models carry no extra features.

    • > Usually it’s the other way around. Mass market products have thin margins and are subsidized by high end / B2B products because the customers for those products have infinitely deep pockets.

      That's usually what I've seen, but the M1 MacBook Air came out first and the M1 Pro and Max came out much later.


This isn't "tierification" or even premiumization. That may come later.

Large AI models have tight resource requirements. You physically can't use X billion parameters without roughly X billion bytes of memory.

It makes complete sense to have these 3 "tiers". You have a max capability option, a price-performance scaling option, and an edge compute option.

  • > Large AI models have tight resources requirements. You physically can't use X billion parameters without ~X billion ~bytes of memory.

    Well, X billion parameters times the parameter size in bytes. For base models, weights are generally 32-bit (so 4X bytes), though smaller quantizations are possible and widely used for public models, and I would assume as a cost measure for closed hosted models as well.
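
A quick sketch of that memory estimate (the 7B size and the quantization levels below are illustrative examples, not any vendor's actual specs):

```python
# Approximate memory needed just to hold model weights, ignoring
# activations, KV cache, and runtime overhead.

BYTES_PER_PARAM = {
    "fp32": 4.0,   # full-precision base models
    "fp16": 2.0,   # common inference precision
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization, popular for local models
}

def weight_memory_gib(params_billions: float, dtype: str) -> float:
    """GiB required for the weights alone."""
    return params_billions * 1e9 * BYTES_PER_PARAM[dtype] / 2**30

for dtype in BYTES_PER_PARAM:
    print(f"7B model @ {dtype}: {weight_memory_gib(7, dtype):.1f} GiB")
```

So a 7B model wants ~26 GiB at fp32 but only ~3.3 GiB at 4-bit, which is exactly why quantization is what makes a phone-sized tier feasible at all.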

It has to be this way when current LLMs have orders of magnitude electricity cost differences depending on the output you desire.

Tierification of AI models is not some business strategy, it is a necessary consequence of the reality that AI is massively compute constrained right now. The size of a model is extremely important for inference time and cost. It just doesn't make sense to release one single model when your method will always yield a family of models with increasing size. The customer can choose a model corresponding to their needs.
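
The compute point can be made concrete with the standard back-of-envelope rule that a dense decoder costs roughly 2 FLOPs per parameter per generated token (the model sizes below are made-up examples, not real specs):

```python
# Back-of-envelope inference cost: ~2 FLOPs per parameter per
# generated token for a dense decoder (ignores attention overhead,
# batching, and MoE-style sparsity).

def flops_per_token(params_billions: float) -> float:
    return 2.0 * params_billions * 1e9

# Hypothetical sizes: a ~2B on-device model vs a ~200B flagship.
small = flops_per_token(2)
large = flops_per_token(200)
print(f"flagship costs ~{large / small:.0f}x more compute per token")
```

A 100x difference in raw compute per token is why a single released model can't serve every price point: the same lab's training run naturally yields a family of sizes, each with its own serving cost.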

I think the expensive ones are used when the customer is the user — e.g. ChatGPT Plus (personal) subscription — and the cheap ones when they are not — e.g. customer support service bots.

I'm honestly 100% okay with it as long as it's reasonable and not confusing to customers. (Not saying Apple isn't somewhat guilty of that; I mean, buying a non-Pro iPhone 15 and not being able to view WebM files feels literally fucking insane, and that's apparently how it works, but that's a rant for a different thread.) In cases like this, presumably the idea isn't actually feature-gating; it's scaling up. AI inference costs compute time, and although I have no idea whether the inference runs on special hardware, if it does, I can only presume that scaling that hardware up to meet demand is challenging and very much unlike scaling up e.g. a typical web service.

IMO, tiers can be useful when they make sense and aren't just for artificial market segmentation.

My guess is they're branding it in this way to obfuscate the number of parameters used, which makes sense because more parameters doesn't necessarily mean a better model. It's kind of like the "number of bits" competition in video game consoles back in the 90s.

I think it depends. It's always worth having a small, fast model for some tasks that can run completely offline on a mobile CPU. Maybe not as a chat companion, but for text understanding or indexing all your messages and photos for search, it may be enough.

It's safe to assume there's good reason in this case. Nano runs locally on smartphones. Pro and Ultra will likely be cost and speed.