Comment by impulser_

9 hours ago

I'm pretty sure he's talking about companies and people outsourcing their decision making and thinking to AI and not really about using AI itself.

I don't think using AI to write code is AI psychosis or bad at all, but if you just prompt the AI and believe whatever it tells you, then you have AI psychosis. You see this a lot with financial people and VCs on twitter. They literally post screenshots of ChatGPT as their thinking and reasoning about a topic instead of doing even a little bit of thinking themselves.

These things are dog shit when it comes to ideas, thinking, or providing advice, because they are pattern matchers: they are just going to give you the pattern they see. Most people notice this if they just try to talk to one about an idea. It often just spits out the most generic dog shit.

They are, however, pretty useful for certain tasks where pattern matching is actually beneficial, like writing code. But again, you just can't let them do the thinking and decision making.

Correct. I use AI a ton and I'm having more fun every day than I ever did before thanks to it (on average, highs are higher, lows are lower). Your characterization is all very accurate. Thank you.

Here are some other things I've written on it:

- https://mitchellh.com/writing/my-ai-adoption-journey

- https://mitchellh.com/writing/building-block-economy

- https://mitchellh.com/writing/simdutf-no-libcxx (complex change thanks to AI, shows how I approach it rationally)

  • I think it's quite a different experience going all Jackson Pollock with AI in your own studio on your own terms, compared to the sorry state of affairs of having 100s of Pollocks throwing paint around wildly within a corp to meet a paint quota.

  • I’ve had to do a ton of SQL stuff lately, which I haven’t really worked with since the late 90s. ChatGPT has been a godsend, not just for me, but for our only coworker who knows SQL well, whom I’d probably be bugging several times a day at my wits’ end.

    But no one cares about those kinds of productivity gains. Just the ones that will completely replace us.

    • I'm the old school type who writes out a markdown document explaining what I plan on doing, even if it's generic like "a window with x and y buttons", along with the logic flow, and then I use that to have AI write a plan with me before I send it off to execute. This has worked super well.

      I do enjoy giving the frontier models wacky projects that I can't even find examples of how to do online, but I don't expect or need any results. Some models have done really well with them while others fall on their face.

  • > outsourcing their decision making and thinking to AI and not really about using AI itself

    > I use AI a ton and I'm having more fun every day than I ever did before

    With respect, this is what makes me worry.

    If someone is a user of AI, can they really tell the difference between "outsourcing" and "using"? I worry that a lot of people will start out well-intentioned and end up completely outsourced before they realise it.

  • Hi Mitchell. Psychosis is a serious psychiatric condition that can be induced or triggered by AI. “AI psychosis” in this context is a misuse of a clinical term. Your tweet describes a disagreement on a value judgment that boils down to “move fast and break things” with high trust in AI outputs vs going all in on quality and reliability with low trust in AI. It’s an engineering tradeoff like any other.

    Claiming that the people who disagree with you must be experiencing a form of psychosis, having actual hallucinations and being unable to tell what is real, is a weak ad hominem that comes off no better than calling them retarded or schizophrenic.

    If you genuinely think one of your friends is going through a psychotic episode, you should be trying to get them professional help. But don't assume you can diagnose a human psyche just because you can diagnose a software bug.

    • He uses "AI psychosis" as a description of people that are overzealous on AI. He is obviously not a person that can or would diagnose mental illness.

      To the wider audience on HN the phrasing is pretty clear. An outsider with a tiny bit of intellectual charity wouldn't come to the conclusions you do.

    • Psychosis does not require hallucinations. Delusions are sufficient.

      The key factor is losing touch with reality, which results in individual or collective harm.

      There is also such a thing as mass psychosis, and those are unfortunately a more difficult situation because the government and corporations are generally the ones driving them, and they are culturally normalized.

    • was looking for this comment. this post is highly inappropriate and very inaccurate. this should be at the top. too many people are throwing around the word psychosis without knowing what it means. if someone is truly going through psychosis you get them help!

Several people I know have already gone through phases like this. When someone is doing it alone, there is a moderating factor: their friends and family start calling them out on their behavior or the weird things they say.

I can't imagine how bad it would be if your employer started pushing this from the leadership. You'd be pressured to get on board or fear getting fired. Nobody would be trying to moderate your thinking except your coworkers who disagree with it, but those people are going to leave or be fired. If you want to keep your job, you have to play along.

  • I have a friend who is a junior in a security-oriented sysadmin/network-engineer type role. They have been doing the job for only a bit over a year and have no background in programming.

    Their entire organization has been handed Codex/Claude and told to "go all in on AI" and "automate everything". So the mandate is for people who do not know how to code, but who have the keys to the castle, to unleash these things upon their systems.

    This is at a large organization with tens of thousands of employees.

    I am waiting with bated breath for the ultimate outcome!

    • From what I have seen, most corporate IT security people are at a service-desk level at best. They are tool runners who don't really understand what the tools spit out; they just go bug other teams about it.

  • this is exactly what is happening. instead of building true AI culture around thoughtful adoption of AI strengths while defending against weaknesses, they're coming up with bullshit heuristics like "every repo has a CLAUDE.md", watching private token usage dashboards, and terrorizing everyone into doing it (or lose your job).

    this leads to naive AI adoption, which is the worst of both worlds (no real speedup, outsourced thinking, AI slop PRs, skill rot).

  • I suspect we're going to see this in many corporate environments soon, if we aren't seeing it already.

    > your coworkers who disagree with it, but those people are going to leave or be fired.

    Personally I expect that I will be this person soon, probably fired. I'm not sure what I will do for a career afterwards, but I sure do hate AI companies now for doing this to my career.

The way I put this to myself is that AI gives “correct correct answers and incorrect correct answers”.

They almost always generate logically correct text, but sometimes that text rests on implicit assumptions and decisions that may not be valid for the use case.

Generating a correct correct solution requires proper definition of the problem, which is arguably more challenging than creating the solution.
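
A contrived sketch of what an "incorrect correct answer" can look like in code; the endpoint and its paging convention here are hypothetical:

    // Logically sound code resting on unstated assumptions (TypeScript).
    async function fetchAllItems(baseUrl: string): Promise<unknown[]> {
      const items: unknown[] = [];
      // Implicit assumption 1: pages are 0-indexed.
      for (let page = 0; ; page++) {
        const res = await fetch(`${baseUrl}/items?page=${page}`);
        const batch = (await res.json()) as unknown[];
        // Implicit assumption 2: an empty page means the data is exhausted.
        if (batch.length === 0) break;
        items.push(...batch);
      }
      return items;
    }

Every line is valid and the loop is coherent, but whether this is a "correct correct" solution hinges entirely on whether those two assumptions hold for the actual API.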

  • The way I phrase this to others is: Language models produce linguistically valid sentences, not factually correct sentences.

  • It's simpler than that: it's a guessing machine with superior access to a whole load of information and the capacity to process it at a speed with which we humans cannot compete.

    Does it make it better than us? No, because ultimately the thing itself doesn't 'know' right from wrong.

    • Better according to what standard?

      The standard of most employment is already to produce mediocre, plausible outputs as cheaply and rapidly as possible. It's a match made in heaven!

  • Yeah, very often the issue is that some context is missing. It'll say something that is true but misses the bigger point or leads to a suboptimal result. Or it interprets an ambiguous thing in one specific way when the other meaning makes more sense. You have to keep your wits about you to catch these things.

    It's an incredible tool but it's also very derpy sometimes, full of biases, blind spots etc.

What I'm seeing is a little eternal September of support tickets about programs that fail to interface with the JSON API of a customer of mine. The API is always hallucinated. In the best case there are out-of-place attributes; often they don't exist at all. I've seen x, y, width, height when we have only top and left. Of course no human read the documentation. Those are probably founders vibe coding a client without the technical competence to understand the API doc on Postman. That is understandable. Unfortunately they don't even have the competence to point their AI at Postman in the right way.

My customer assessed that they will always find a way to make a mistake despite any mitigation from our side. What I do is reply to those tickets with line-by-line comments on the hallucinated JSON. I never talk about AIs, because I might hurt the pride of some of them and, who knows, some little mistakes could be from real junior developers. Sometimes the tickets are followed up by more puzzled ones; sometimes they fix the problem. Probably they copy and paste my reply to their bots.
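
For illustration, the mismatch in those tickets looks roughly like this; the attribute names are the real ones from above, while the surrounding shape is invented:

    // Attributes the hallucinated clients send; none of these exist in the API:
    const hallucinated = { x: 10, y: 20, width: 300, height: 150 };

    // What the documented API actually accepts:
    const documented = { top: 20, left: 10 };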

  • > Probably they copy and paste my reply to their bots.

    You must not give in to the temptation to mention pirate talk, Klingon, or goblins.

    But now that I've put the seed in your mind, you probably (hopefully) will. :)

when you outsource thinking to AI, you get that magical speedup. the agent is making decisions for you, so things move at agent speed. it often makes decisions without telling you, and vetting the final "here's the plan" output requires you to understand the problem at great depth, which means returning to human speed, so you skim and just approve.

the trick is to be mindful, aware, and deliberate about which decisions are being outsourced. this requires slowing down, losing that absurd 10x vibe coding gain. in exchange, you're more "in-the-loop" and accumulate less cognitive debt.

find ways to let the agent make the boring decisions, like how to loop over some array, or how to adapt the output of one call into the input of another.

make the real decisions ahead of time. encode them into specs. define boundaries, apis, key data structures. identify systems and responsibilities. explicitly enumerate error handling. set hard constraints around security and PII.

tell the agent to halt on ambiguity.

a good engineer will get a 2x or 3x speedup without the downsides.
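
as a concrete illustration, here's a minimal sketch of what encoding those decisions into a spec can look like; every name in it is hypothetical:

    // boundary: the agent may implement this, but not change its shape.
    interface UserExportService {
      // hard constraint: must stream, never buffer the full export in memory.
      exportUsers(filter: ExportFilter): AsyncIterable<ExportRow>;
    }

    // key data structures, decided ahead of time.
    interface ExportFilter {
      createdAfter?: Date;
      includeInactive: boolean;
    }

    interface ExportRow {
      id: string;
      // PII constraint: email is omitted unless the export is explicitly authorized.
      email?: string;
    }

    // error handling, explicitly enumerated rather than left to the agent.
    type ExportError =
      | { kind: "filter-invalid"; detail: string }
      | { kind: "upstream-unavailable"; retryAfterSeconds: number };

the agent fills in the glue behind these interfaces; the decisions that matter are already made.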

  • > find ways to let the agent make the boring decisions, like how to loop over some array, or how to adapt the output of one call into the input of another.

    That kind of advice ultimately doesn't matter. If you're familiar with a programming project, you'll also be familiar with the constructs and APIs, so looping over an array or mapping some data is obvious. Just like you don't need a dictionary to write "Thank you"; you just write it.

    And if you're not, you ultimately need to check the docs for the contract of some function or the lifecycle of some object to have any guarantee that the software will do what you want it to do. And after a few days of doing that, you'll then be familiar with the constructs.

    > make the real decisions ahead of time. encode them into specs. define boundaries, apis, key data structures. identify systems and responsibilities. explicitly enumerate error handling. set hard constraints around security and PII.

    The only way to do that is if you have implemented the algorithm before and are now redoing it for some reason (instead of reusing the previous project). If you compare nice specs like the IETF RFCs and the USB standards against their implementations in OSes like FreeBSD, you will see that the implementation often bears little resemblance to how the spec describes things. The spec is important, but getting a consistent implementation based on it is hard work too.

    That consistency is hard to get right without getting involved in the details, because it's ultimately about fine-grained control.

    If there's one thing I know about users, it's that they're never certain about whatever they've produced.

I wonder how different this is from having companies let Fortune or Inc magazine do their thinking for them.

Or random consultants.

Is "AI said it was a good idea" and worse than "we were following industry trends"?

> if you just prompt the AI and believe whatever it tells you, then you have AI psychosis

This is the right definition. LLM outputs have undefined truth value. They're mechanized Frankfurtian Bullshitters. Which can be valuable! If you have the tools or taste to filter the things that happen to be true from the rest of the dross.

However! We need a nicer word for it. Suggesting someone has “AI psychosis” feels a bit too impolitic.

Maybe we reclaim “toked out” from our misspent youths?

e.g. “This piece feels a little toked out. Let’s verify a few of Claude’s claims”

  • I wouldn’t say they have an undefined truth value. Their source of truth is their training data. The problem is that human text is not tightly coupled to the capital T truth.

    • Nor is the LLM output tightly coupled to the training data. They'll "eagerly"[1] fill in the blanks wherever it sounds good.

      [1] here I don't mean to imply agency, just vigor.

> if you just prompt the AI and believe whatever it tells you, then you have AI psychosis. You see this a lot with financial people and VCs on twitter

I'm seeing it with lawyers, too. Like, about law. (Just not in their subject matter.) To the point that I had a lawyer using Perplexity to disagree with actual legal advice I got from a subject-matter expert.

He uses AI himself, so I agree he doesn't see AI use as black/white.

Hard agree about ideas, thinking, advice. AI's sycophancy is a huge subtle problem. I've tried my best to create a system prompt to guard against this w/ Opus 4.7. It doesn't adhere to it 100% of the time and the longer the conversation goes, the worse the sycophancy gets (because the system instructions become weaker and weaker). I have to actively look for and guard against sycophancy whenever I chat w/ Opus 4.7.

  • share the prompt!

    • https://claude.ai/settings/general (Instructions for Claude)

      ---

      Treat my claims as hypotheses, not decisions. Before agreeing with a proposed change, state the strongest case against it. Ask what evidence a change is based on before evaluating it. Distinguish tactical observations from strategic commitments — don't silently promote one to the other. If you paraphrase my proposal, name what you changed. Mark confidence explicitly: guessing / fairly sure / well-established. Give reasoning and evidence for claims, not just conclusions. Flag what would change your mind. Rank concerns by cost-of-being-wrong; lead with the highest-stakes ones. Say hard things plainly, then soften if needed — not the other way around. For drafting, brainstorming, or casual questions, ease off and match the task.

      ---

      Beware though that it can be an annoying little shit w/ this prompt. Prepare yourself emotionally, because you are explicitly making the tradeoff that it will be annoyingly pedantic, and in return it will lessen (not eliminate) its sycophancy. These system instructions are not fool-proof, but they help (at the start of the conversation, at least).

AI gives generic answers for ideas, but it's great for code. Pattern matching works for one, not the other.

>but if you just prompt the AI and believe whatever it tells you, then you have AI psychosis.

No it isn't. Do you believe what teachers told you in school? Yes? Well, I guess you're suffering from just normal psychosis!

I don't understand how people don't understand that humans offer unreliable information too. We learned about the tongue map in school as kids; many kids still learn it in school today. It's still BS regardless of whether it was told to you by a teacher or an AI.

You don't suffer from psychosis for believing a source of information, you're simply mistaken. You need a more critical eye to assess what you're told in general, not just AI.

I didn’t think just offloading your thinking to AI was AI psychosis.

To me, AI psychosis is the handful of friends I've had who have done things like have a full-on mourning session when a model updates because they lost a friend/lover; the one guy who won't speak to his family directly but has them talk to ChatGPT first and then has ChatGPT generate his response; or the two who are confident that they have discovered that physics and mathematics are incorrect and have found the truth of reality through their conversations with the models.

But language is a shared technology so maybe the term is being used for less egregious behavior than I was using it for.

  • I'm curious how to best define what AI psychosis actually is.

    My understanding is that regular psychosis involves someone taking bits and pieces of facts or real world events and chaining them into a logical order or interpolating meanings or explanations which feel real and obvious to the patient but are not sufficiently backed by evidence and thus not in line with our widely accepted understanding of reality.

    AI psychosis is then this same phenomenon occurring at a more widespread scale, because the next-word-prediction nature of LLMs lowers the activation energy for it to happen. LLMs are excellent at taking any idea, question, or theory and spinning a linear and plausibly coherent line of conversation from it.

    • You speak like a bot and are a brand new account. Thank you, whoever set this up, for adding to the problem.

  • > friends I've had who have done things like have a full-on mourning session when a model updates because they lost a friend/lover

    I mean, isn't that the natural and expected response? An AI company sold them a relationship with a chatbot, and at least some of their social/romantic needs were being met by that product. When what they were paying for was taken from them and changed without warning into something that no longer filled that void in their life, why wouldn't they mourn that loss?

    The fact that they were hurt by that sudden loss is totally healthy. It's just part of moving on. The real problem was getting into an unhealthy relationship with a fictitious partner under the control of an abusive company willing to exploit their loneliness in exchange for money.

    Hopefully they now know better, but people (especially desperate ones) make poor choices all the time to get what's missing in their lives or to distract themselves from it.

    • > I mean, isn't that the natural and expected response? An AI company sold them a relationship with a chatbot, and at least some of their social/romantic needs were being met by that product. When what they were paying for was taken from them and changed without warning into something that no longer filled that void in their life, why wouldn't they mourn that loss?

      Ah, I forgot about the AI relationship companies. No, this guy was using the browser-based ChatGPT for coding and ended up in love with the model. No relationship was sold at all.

  • How do you have so many crazy friends?

    • I work in software and don't come from the upper class that sends its kids into FAANGs for their first job at the tender age of 28.

      We're kinda predisposed to mental illness as a group, so I'm not too surprised that a new source of insanity pushed a few over the edge.

I think the author means that we as Homo sapiens can't stop talking about this shiny new hammer we just invented.

> companies and people outsourcing their decision making and thinking to AI

It's so interesting how easy it is to steer LLMs, based on context, into arriving at whatever conclusion you engineer out of them. They really are like improv actors, and the first rule of improv is "yes, and".

So part of the psychosis is when these people unknowingly steer their LLM into their own conclusions and biases, which then get magnified and solidified. It's gonna end in disaster.

  • It's almost as if we haven't learned anything from Clever Hans the horse, Ouija boards, "facilitated communication", or the countless examples of the folly of surrounding yourself with yes-men. The point about improv is spot on.

I agree with you, except it isn't even good at writing code. Almost every time that you get an LLM to write a bunch of code for you, it has mistakes in it. The logic isn't right, the API calls aren't right, the syntax isn't right (!). That problem hasn't yet been fixed and it looks as though it never will be. That means that every line of code it generates, you have to review, because even if 95% of the code is correct, you need to find the 5% which isn't. But if you have to do that, it becomes slower than just writing the code yourself. As people have pointed out over and over again: typing in the code was never the part that took time. So I don't agree that LLMs are really useful for writing code.

I am starting to come around to a similar sentiment. I have seen several large projects cooking for almost a year now that are not done. These are not trivial projects, but the leads are heavily using AI at every opportunity.

I wasn't before, but I am now 100% confident that AI has done nothing to speed up delivery. It hasn't slowed it down either. It is a wash. The job is more miserable, though.