
Comment by gdubs

6 days ago

There's a different thread if you want to wax about Liquid Glass etc. [1], but there are some really interesting new improvements here for Apple developers in Xcode 26.

The new Foundation Models framework around generative language models looks very Swift-y and nice for Apple developers. And it's local and on device. In the Platforms State of the Union they showed some really interesting sample apps using it to generate different itineraries in a travel app.
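
For a sense of how Swift-y it is, here's a minimal sketch pieced together from the session videos. The names are a guess at the API from the demos rather than the definitive shipped surface, but the shape is: open a session, describe the output type, get typed Swift values back.

    import FoundationModels

    // A guess at the API based on the WWDC demos; names may differ in the
    // shipped framework.
    @Generable
    struct Itinerary {
        @Guide(description: "A short title for the trip")
        var title: String

        @Guide(description: "One suggested activity per day")
        var days: [String]
    }

    func planWeekend(in city: String) async throws -> Itinerary {
        let session = LanguageModelSession(
            instructions: "You are a travel assistant. Keep suggestions practical."
        )
        // Guided generation: the response is constrained to the @Generable
        // type, so you get typed Swift values back instead of raw text.
        let response = try await session.respond(
            to: "Plan a relaxed weekend itinerary for \(city).",
            generating: Itinerary.self
        )
        return response.content
    }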

The other big thing is vibe-coding coming natively to Xcode through ChatGPT (and other) model integration. Some things that make this look like a nice quality-of-life improvement for Apple developers are the way it tracks iterative changes with the model so you can roll back easily, and the way it gives the model context about your codebase. It seems to be a big improvement over the previous, very limited GPT integration with Xcode, and the first time Apple developers have a native version of some of the more popular vibe-coding tools.

Their 'drag a napkin sketch into Xcode and get a functional prototype' is pretty wild for someone who grew up writing [myObject retain] in Objective-C.

Are these completely ground-breaking features? I think it's more what Apple has historically done which is to not be first into a space, but to really nail the UX. At least, that's the promise – we'll have to see how these tools perform!

1: https://news.ycombinator.com/item?id=44226612

> And it's local and on device.

Does that explain why you don't have to worry about token usage? The models run locally?

  • > You don’t have to worry about the exact tokens that Foundation Models operates with, the API nicely abstracts that away for you [1]

    I have the same question. Their "Deep dive into the Foundation Models framework" video is nice for seeing code that uses the new `FoundationModels` library, but for a "deep dive" I would like to learn more about tokenization. Hopefully these details are eventually disclosed, unless someone here already knows?

    [1] https://developer.apple.com/videos/play/wwdc2025/301/?time=1...

    • I guess I'd say "mu": from a dev perspective, you shouldn't care about tokens, ever. If your inference framework isn't abstracting that for you, your first task would be to patch it to do so.

      To the parent: yes, this is for local models, so insofar as worrying about tokens implies financial cost, yes.


The direction software engineering is going in with this whole "vibe coding" thing is so depressing to me.

I went into this industry because I grew up fascinated by computers. When I learned how to code, it was about learning how to control these incredible machines. The joy of figuring something out by experimenting is quickly being replaced by just slamming it into some "generative" tool.

I have no idea where things go from here but hopefully there will still be a world where the craft of hand writing code is still valued. I for one will resist the "vibe coding" train for as long as I possibly can.

  • To be meta about it, I would argue that thinking "generatively" is a craft in and of itself. You are setting the conditions for work to grow rather than having top-down control over the entire problem space.

    Where it gets interesting is being pushed into directions that you wouldn't have considered anyway rather than expediting the work you would have already done.

    I can't speak for engineers, but that's how we've been positioning it in our org. It's worth noting that we're finding GenAI less practical in design-land for pushing code or prototyping, but insanely helpful for research and discovery work.

    We've been experimenting with more esoteric prompts to really challenge the models and ourselves.

    Here's a tangible example: Imagine you have an enormous dataset of user-research, both qual and quant, and you have a few ideas of how to synthesize the overall narrative, but are still hitting a wall.

    You can use a prompt like this to really get the team thinking:

    "What empty spaces or absences are crucial here? Amplify these voids until they become the primary focus, not the surrounding substance. Describe how centering nothingness might transform your understanding of everything else. What does the emptiness tell you?"

    or

    "Buildings reveal their true nature when sliced open. That perfect line that exposes all layers at once - from foundation to roof, from public to private, from structure to skin.

    What stories hide between your floors? Cut through your challenge vertically, ruthlessly. Watch how each layer speaks to the others. Notice the hidden chambers, the unexpected connections, the places where different systems touch.

    What would a clean slice through your problem expose?"

    LLMs have completely changed our approach to research and, I would argue, reinvigorated an alternate craftsmanship in the way we study our products and learn from our users.

    Of course the onus is on us to pick apart the responses for any interesting directions that are contextually relevant to the problem we're attempting to solve, but we are still in control of the work.

    Happy to write more about this if folks are interested.

  • Personally I still love the craft of software. But there are times where boilerplate really kills the fun of setting something up, to take one example.

    Or like this week I was sick and didn't have the energy to work in my normal way and it was fun to just tell ChatGPT to build a prototype I had in mind.

    We live in a world of IKEA furniture - yet people still desire handmade furniture, and people still enjoy and take deep satisfaction in making them.

    All this to say I don't blame you for being dismayed. These are fairly earth shattering developments we're living through and if it doesn't cause people to occasionally feel uneasy or even nostalgia for simpler times, then they're not paying attention.

  • I share your frustration. But for better or worse, computer language will eventually be replaced by human language. It's inevitable :(

  • This sounds like a boomer trying to resist using Google in favor of encyclopedias.

    Vibe coding can be whatever you want to make of it. If you want to be prescriptive about your instructions and use it as a glorified autocomplete, then do it. You can also go at it from a high-level point of view. Either way, you still need to code review the AI code as if it was a PR.

    • Is any AI assisted coding === Vibe Coding now?

      Coding with an AI can be whatever one can achieve, but I don't see how vibe coding is related to autocomplete: with autocomplete you type a bit of code that a program (AI or not) completes. In vibe coding you almost don't interact with the editor, except perhaps for copy/paste or some corrections. I'm not even sure about the manual "corrections" part if we take Simon Willison's definition [0], which you're not forced to, obviously; if there are contradictory views I'll be glad to read them.

      0 > If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant

      https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gn...

      (You may also consider rethinking your first paragraph to bring it up to HN standards, because while the content is pertinent, the form sounds like a youngster trying to demo iKungFu on his iPad to Jackie Chan)


    • This sounds like someone who doesn't actually know how to code, doesn't enjoy the craft, and probably only got into the industry because it pays well and not because they actually enjoy it.


    • No, that's what separates vibe coding from the glorified autocomplete. As originally defined, vibe coding doesn't include a final review of the generated code, just a quick spot check, and then moving on to the next prompt.


    • Karpathy's definition of vibe coding as I understood it was just verbally directing an agent based on vibes you got from the running app without actually seeing the code.


    • You can take an augmented approach, a sort of capability fusion, or you can spam regenerate until it works.

I might be wrong, but I guess this will only work on iPhone 16 devices and the iPhone 15 Pro, which drastically limits your user base, so you would still have to use an online API for most apps. I was hoping they would provide a free AI API on their private cloud for other devices, even if it also runs small models.
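
If that's how it ships, presumably you gate the feature on model availability and fall back to a server-side model on older devices. A rough sketch, assuming the SystemLanguageModel availability API shown in the session videos; the remote fallback is a stub standing in for your own backend:

    import FoundationModels

    // Placeholder for whatever server-side model you'd call on unsupported
    // devices; the endpoint and client are yours, not part of the framework.
    func remoteSummarize(_ text: String) async throws -> String {
        // e.g. POST to your own backend or a hosted LLM API.
        return "…"
    }

    func summarize(_ text: String) async throws -> String {
        // Availability covers both hardware support and whether the user
        // has Apple Intelligence enabled.
        if case .available = SystemLanguageModel.default.availability {
            let session = LanguageModelSession()
            let response = try await session.respond(to: "Summarize briefly: \(text)")
            return response.content
        } else {
            return try await remoteSummarize(text)
        }
    }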

  • If you start writing an app now, by the time it's polished enough to release, the iPhone 16 will already be a year-old phone, and there will be plenty of potential customers.

    If your app is worthwhile, and gets popular in a few years, by that time iPhone 16 will be an old phone and a reasonable minimum target.

    Skate to where the puck is going...

    • Developers could be adding a feature utilizing LLMs to their existing app that already has a large user base. This could be a matter of a few weeks from idea to shipping the feature. While competitors use API calls to just "get things done", you are trying to figure out how to serve both iPhone 16 and older users, and potentially Android/web users if your product is also available elsewhere. I don't see how an iPhone 16-only feature helps anyone's product development, especially when the quality still remains to be seen.

    • Basically this: network effects are huge. People will definitely buy hardware if it solves a problem for them; so many people bought BlackBerrys just for BBM.

    • Exactly. It can take at least a couple of years for big/important apps to adopt new iOS and macOS features. By then the iPhone 16 will be quite common.

  • Drastically limits your user base for like 3 years.

    Phones still get replaced often, and the people who don’t replace them are the type of people who won’t spend a lot of money on your app.

If the new foundation models are on device, does that mean they’re limited to information they were trained on up to that point?

Or do they have the ability to reach out to the internet for up-to-the-moment information?

  • In addition to the context you provide, the API lets you programmatically declare tools the model can call to pull in fresh information, as sketched below.
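
    Roughly, a tool is a type with a name, a description, and typed arguments, and the model decides when to call it during a turn. This is a sketch from memory of the WWDC session: the names may not match the shipped API exactly, and the weather lookup is a stand-in.

      import FoundationModels

      struct WeatherTool: Tool {
          let name = "getWeather"
          let description = "Look up the current temperature for a city"

          @Generable
          struct Arguments {
              @Guide(description: "The city to look up")
              var city: String
          }

          // Stand-in for a real lookup (WeatherKit, your own backend, etc.).
          func fetchTemperature(for city: String) async throws -> Int { 21 }

          func call(arguments: Arguments) async throws -> ToolOutput {
              // Invoked by the framework when the model decides it needs
              // fresh data, which is how an on-device model with a fixed
              // training cutoff can still answer up-to-the-moment questions.
              let celsius = try await fetchTemperature(for: arguments.city)
              return ToolOutput("It is currently \(celsius)°C in \(arguments.city).")
          }
      }

      let session = LanguageModelSession(
          tools: [WeatherTool()],
          instructions: "Help the user plan outdoor activities."
      )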