Comment by ApolloVonZ
6 days ago
Despite all the “Apple is evil” or “Apple is behind” (because they don’t do evil) takes: what they made with the Foundation Models framework is great. The fact that they built a system within the Swift language that lets you specify structured data models (structs) to be used like any other model in a modern programming language, and you actually get back generated data in that format, is great. Unlike a lot of other AIs, where you might get back well-formatted JSON after a carefully crafted request, but you can never be sure and need to implement a bunch of safeguards. Obviously it’s still the beginning, and other tools might do something similar. But as an iOS developer, that makes the usage of AI so much simpler. Especially with the bridge to external AIs that still allows you to map back to the type-safe structured Swift models. I try not to be a hater; every bit of progress, even if slow or underwhelming at first, might lead to improvements everywhere else.
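A minimal sketch of what that looks like with Apple's FoundationModels framework, assuming the WWDC25-era API names (`@Generable`, `@Guide`, `LanguageModelSession`); the `Recipe` type and prompt are illustrative, and running it requires an Apple Intelligence-capable device:

```swift
import FoundationModels

// Illustrative type; @Generable lets the model fill in a typed value directly.
@Generable
struct Recipe {
    @Guide(description: "A short recipe name")
    var name: String
    @Guide(description: "Up to five ingredients")
    var ingredients: [String]
}

let session = LanguageModelSession()

// The response content is a fully typed Recipe, not raw JSON you have to
// validate and parse yourself.
let response = try await session.respond(
    to: "Suggest a quick pasta dish.",
    generating: Recipe.self
)
print(response.content.name)
```

Because the decoder is constrained to the struct's schema, there's no "hope the JSON parses" step on the app side.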
Apple is behind. People forget that Google was shipping mobile-scale transformer-based LLMs in 2019: https://github.com/google-research/bert
By the time Apple has an AI-native product ready, people will already associate it with dehumanization and fascism.
I think it's a great move that Apple is cautious with including a hardcore LLM in everything. They are not that useful to the regular user.
Nobody is forcing "hardcore LLM" features on anyone, besides maybe Microsoft. This is that same cope as "I'm glad Apple Car can't crash and threaten people's lives" despite the fact that... yunno, Apple really wanted to bring it to market.
Siri, sideloading and AI features are all the same way: give people options and nobody will complain.
1 reply →
Guided generation is called "Structured Output" by other providers?
The partially generated content streaming thing is great, though, and I haven't seen it anywhere else.
Sorry if I didn’t use the correct terms. I haven’t caught up on all the terminology, coming from my native language. ;) But yes, I agree: the fact that parts of the output, different properties of the model, can be completed asynchronously by streaming the model’s output is quite unique. Apple/Swift was late with async/await, but putting it all together, it probably plays well with asynchronous and reactive coding.
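A sketch of that streaming flow, assuming FoundationModels' `streamResponse(to:generating:)` API: each snapshot is a partially generated value whose properties are optional until the model has produced them (exact snapshot shape may differ across OS betas; the `Summary` type is illustrative):

```swift
import FoundationModels

@Generable
struct Summary {
    var title: String
    var bulletPoints: [String]
}

let session = LanguageModelSession()

// Each element of the stream is a snapshot of Summary.PartiallyGenerated,
// so the UI can render fields the moment they exist.
let stream = session.streamResponse(
    to: "Summarize the keynote in a title and bullet points.",
    generating: Summary.self
)
for try await partial in stream {
    if let title = partial.title {
        print(title)  // show the title before the bullets finish generating
    }
}
```

This is what makes it pleasant with SwiftUI: you can bind the partial value to a view and watch the fields fill in.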
An issue with this is that model quality can get a lot lower when you force it into a structured form, because it's out of distribution for the model.
(I'm pretty sure this is actually what drove Microsoft Sydney insane.)
Reasoning models can do better at this, because they can write out a good freeform output and then do another pass to transform it.
I have this toy agent I'm writing. I always laugh that I, a human, write code that generates human-readable markdown, feed it to an LLM asking it to produce JSON, then parse that (with code I, or it, wrote) and output it in a consistent human-readable form.
I'm thinking about letting it output freeform text and then using another model to force that into a structured form.
4 replies →
How do you think their implementation works under the hood? I'm almost certain it's also just a variant of "structured outputs", which many inference providers or LLM libraries have long supported.
Huh? Grammar-based sampling has been commonplace for years. It's a basic feature with guaranteed adherence. There is no "carefully crafting" anything, including safeguards.
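To illustrate why adherence is guaranteed: at every decoding step the grammar masks out any token it cannot accept, so the sampler can only pick from legal continuations. A toy, self-contained sketch of the idea (not any real inference library; the "model" here is just a fixed preference list, and the grammar accepts exactly `{`, one digit, `}`):

```swift
// Toy grammar: accepts "{", then a single digit, then "}".
struct ToyGrammar {
    // Set of characters the grammar allows next, given output so far.
    func allowed(after output: String) -> Set<Character> {
        switch output.count {
        case 0: return ["{"]
        case 1: return Set("0123456789")
        case 2: return ["}"]
        default: return []  // complete; nothing more is accepted
        }
    }
}

// Stand-in for constrained sampling: take the highest-preference token
// the grammar permits; everything else is masked out.
func constrainedDecode(preferences: [Character],
                       grammar: ToyGrammar,
                       maxLen: Int = 8) -> String {
    var out = ""
    while out.count < maxLen {
        let mask = grammar.allowed(after: out)
        if mask.isEmpty { break }
        guard let tok = preferences.first(where: { mask.contains($0) }) else { break }
        out.append(tok)
    }
    return out
}

// The "model" would prefer to emit "a", but the mask never lets it.
let result = constrainedDecode(preferences: ["a", "4", "}", "{"],
                               grammar: ToyGrammar())
print(result)  // {4}
```

Real implementations (e.g. GBNF grammars or JSON-schema-constrained decoding) do the same thing over the model's token logits instead of characters, which is why well-formedness never depends on prompt crafting.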