
Comment by hirako2000

5 hours ago

What already feels like old history is that Apple made a generous deal with OpenAI on the premise that its AI could live up to the claims.

Apple engineers spent months trying to prompt-engineer their way there, assuming the prompter was at fault whenever the soon-to-be-AGI system diverged. Some of those instructions ended up circulating as evidence of how naive Apple was at the time. They could be traced from the device's logs, so not much of a leak: "Don't hallucinate," "strictly follow instructions," followed by all sorts of refined predicates, appended as if an LLM could reason.

Then Apple released a paper to warn everyone (well, a few, and to save face) that we were being fooled.

https://ml-site.cdn-apple.com/papers/the-illusion-of-thinkin...

In case you think Apple is a biased anti-AI propagandist, here is a similar, more recent research paper from MIT and co:

https://arxiv.org/html/2603.24755v1

It's really hard to read that first Apple paper in context when it doesn't have a date on it. I know research papers are meant to be timeless artifacts, but when it says things like "Recent generations of frontier language models..." and "frontier LRMs," etc., I'd like to know what they were testing on.

Please put a date on your research papers! I could figure it out roughly by looking at the "last accessed" date on the citations: 2025-05-15.

  • The specific models are specified, e.g. GPT versions, DeepSeek R1, etc. But I agree it's terrible practice not to timestamp a paper alongside the authors.