
Comment by Razengan

14 days ago

I did ask the AI first, about some things that I already knew how to do.

It gave me horribly inefficient or long-winded ways of doing it. In the time it took for "prompt tuning" I could have just written the damn code myself. That decreased my confidence in anything else it suggested about things I didn't already know.

Claude still sometimes insists that iOS 26 isn't out yet. sigh.. I suppose I just have to treat it as an occasional alternative to Google/StackOverflow/Reddit for now. No way would I trust it to write an entire class, let alone an app, and still be able to sleep at night (not that I sleep at night, but that's beside the point)

I think I prefer Xcode's built-in local-model approach, where it just offers sane autocompletions based on your existing code, e.g. if you already wrote a `Dog` class it can make a `Cat` class and change `bark()` to `meow()`

You can write the "prompt tuning" down in AGENTS.md and then you only need to do it once. This is why you need to keep working with different models, to get a feel for what they're good at and how you can steer them closer to your style and preferences without having to reiterate from scratch every time.

I personally have a git submodule built specifically for shared instructions like that, it contains the assumptions and defaults for my specific style of project for 3 different programming languages. When I update it on one project, all my projects benefit.

This way I don't need to tell whatever LLM I'm working with to use modernc.org/sqlite for database connections, for example.
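For example, a minimal AGENTS.md fragment along those lines might look like this (the exact wording and defaults below are illustrative, not my actual file):

```markdown
# AGENTS.md — shared assumptions and defaults

## Go projects
- Use modernc.org/sqlite (pure Go, no cgo) for SQLite database connections.
- Wrap errors with fmt.Errorf("context: %w", err); never silently discard them.

## All projects
- Prefer the standard library over third-party dependencies unless told otherwise.
- Match the formatting and naming conventions already present in the repo.
```

Because it lives in a submodule, a `git submodule update --remote` in each project picks up any refinements.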

  • > You can write the "prompt tuning" down in AGENTS.md and then you only need to do it once.

    Yeah, I just mean: I know how to "fix" the AI for things that I already know about.

    But how would I know if it's wrong or right about the stuff I DON'T know?? I'd have to go Google shit anyway to verify it.

    This is me asking ChatGPT 5 about ChatGPT 5: https://i.imgur.com/aT8C3qs.png

    Asking about Nintendo Switch 2: https://i.imgur.com/OqmB9jG.png

    Imagine if AI was somebody's first stop for asking about those things. They'd be led to believe they weren't out when they in fact were!

    • There's your problem right there.

      Don't use it as a knowledge machine, use it as a tool.

      Agentic LLMs are the ones that work. The ones that "use tools in a loop to achieve a goal"[0]. I just asked Claude to "add a release action that releases the project as a binary for every supported Go platform" on one of my GitHub projects. I can see it worked because the binaries appeared as a release. It didn't "hallucinate" anything, nor was it a "stochastic parrot". It applied a well-known pattern to the situation perfectly. (OK, it didn't use a build matrix, but that's just me nitpicking)
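      For what it's worth, the build-matrix version I'd have expected looks roughly like this (a sketch, not the workflow Claude actually generated; the binary name is a placeholder):

      ```yaml
      # Hypothetical release workflow: cross-compile a Go binary per platform.
      name: release
      on:
        push:
          tags: ["v*"]
      jobs:
        build:
          runs-on: ubuntu-latest
          strategy:
            matrix:
              goos: [linux, darwin, windows]
              goarch: [amd64, arm64]
          steps:
            - uses: actions/checkout@v4
            - uses: actions/setup-go@v5
              with:
                go-version: stable
            - name: Build
              env:
                GOOS: ${{ matrix.goos }}
                GOARCH: ${{ matrix.goarch }}
              run: go build -o myapp-$GOOS-$GOARCH .
      ```

      (A real workflow would also want a `.exe` suffix for the Windows build and a step that uploads the artifacts to the release.)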

      In your cases the LLM should have seen that you were asking about current events and used a tool that fetches fresh information. Instead it just defaulted to whatever built-in training data was in its context and failed spectacularly.

      AIs have a branding issue, because AI != AI which isn't AI. There are so many flavours that it's hard to figure out what people are talking about when they say "AI slop is crap" when I can see every day how "AI" makes my life easier by automating away the mundane crap.

      [0] https://simonwillison.net/2025/Sep/18/agents/

> Claude still sometimes insists that iOS 26 isn't out yet.

How would you imagine an AI system working that didn't make mistakes like that?

iOS 26 came out on September 15th.

LLMs aren't omniscient or constantly updated with new knowledge. Which means we have to figure out how to make use of them despite them not having up-to-the-second knowledge of the world.

  • > How would you imagine an AI system working that didn't make mistakes like that?

    I mean, if the user says "Use the latest APIs as of version N" and the AI thinks version N isn't out yet, then it should CHECK on the web first, it's right there, before second-guessing the user. I didn't ask it whether 26 was out or not. I told it.

    Oh, but I guess AIs aren't allowed free use of Google's web search, or to scrape other websites, eh

    > iOS 26 came out on September 15th.

    It was in beta all year and the APIs were publicly available on Apple's docs website. If I told it to use version 26 APIs then it should just use those instead of gaslighting me.

    > LLMs aren't omniscient or constantly updated with new knowledge.

    So we shouldn't use them if we want to make apps with the latest tech? Despite what the AI companies want us to believe.

    You know, on a more general note, I think all AIs should have a toggle between "Do as I say" (Monkey's Paw) and "Do what I mean"

    • Was this Claude Code or Claude.ai or some other tool that used Claude under the hood?

      Different harnesses have different search capabilities.

      If I'm doing something that benefits from search I tend to switch to ChatGPT because I know it has a really good search feature available to it. I don't trust Claude's as much.
