Comment by mitchellh

11 hours ago

Correct. I use AI a ton and I'm having more fun every day than I ever did before thanks to it (on average, highs are higher, lows are lower). Your characterization is all very accurate. Thank you.

Here are some other posts I've written on the topic:

- https://mitchellh.com/writing/my-ai-adoption-journey

- https://mitchellh.com/writing/building-block-economy

- https://mitchellh.com/writing/simdutf-no-libcxx (complex change thanks to AI, shows how I approach it rationally)

I think it’s quite a different experience going all Jackson Pollock with AI in your own studio on your own terms, compared to the sorry state of affairs of having 100s of Pollocks throwing paint around wildly within a corp to meet a paint quota.

I’ve had to do a ton of SQL stuff lately, which I haven’t really worked with since the late 90s. ChatGPT has been a godsend, not just for me, but for our only coworker who knows SQL well, whom I’d probably be bugging several times a day at my wits’ end.

But no one cares about those kinds of productivity gains. Just the ones that will completely replace us.

  • I find SQL and data(bases) in general to be LLMs’ Achilles’ heel. Databases are rarely under version control, so the training data only has one half of the knowledge.

    My comments are more in the context of OLAP queries and other non-normalised data often queried via SQL.

    I train non-LLM transformer models on (older and rarer) datasets, and I automate the ingestion of sprawling datasets with hundreds of columns, often in a variety of local languages, with naming conventions adopted over decades and quite a few duplicated columns. The LLMs perform badly here: it’s nigh impossible for me as a user to test in prod, and it’s nearly impossible for the LLM companies to test in training, so RLVR and RLHF can’t fix it.

  • I'm the old-school type who writes out a markdown document explaining what I plan on doing, even if it's generic like "a window with x and y buttons", plus the logic flow, and then uses that to have the AI write a plan with me before I send it off to execute. This has worked super well.

    I do enjoy giving the frontier models wacky projects that I can't even find examples of online, though I don't expect or need any results. Some models have done really well with it, while others fall on their face.

  • I'm always amazed by those comments. Why couldn't you buy a book on SQL[0] and spend a week on it? Or just go over to YouTube for a refresher?

    [0]: Like https://www.oreilly.com/library/view/sql-queries-for/9780134...

    • I'm amazed you think that instead of using an LLM that someone will go buy a book and spend a week learning something that, judging by the fact that they last used it 30 years ago, likely won't be relevant for them soon.

      1 reply →

    • When you have a general idea of what smells bad vs what's okay...why?

      I'd rather get it from the LLM and review

    • This is fine for a moderately sized query. But when your queries start involving 8 joins and 20 fields per table because you're running them on Presto against 5 TB of data, not only is the LLM drastically better at writing them (it doesn't mess up the fields), but you can also ask it to try the query 5 different ways to help you land on the fastest one.

      3 replies →
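The "get it from the LLM and review" workflow above can be made concrete. Here is a minimal sketch (the schema, table names, and both query variants are hypothetical, invented for illustration; nothing here comes from the thread): run each LLM-suggested rewrite of a query against a tiny in-memory fixture and assert the variants agree before trusting any of them on the real warehouse.

```python
import sqlite3

# Hypothetical toy schema. The idea: before trusting an LLM-suggested
# query (or one of its several rewrites) on a large Presto warehouse,
# sanity-check the variants on a small fixture and confirm they agree.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'lin');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 5.0), (12, 2, 7.5);
""")

# Variant A: join, then aggregate per user.
variant_a = conn.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
    ORDER BY u.name
""").fetchall()

# Variant B: correlated subquery -- logically equivalent on this data.
variant_b = conn.execute("""
    SELECT u.name,
           (SELECT SUM(o.total) FROM orders o WHERE o.user_id = u.id)
    FROM users u
    ORDER BY u.name
""").fetchall()

# The variants must agree on the fixture before either goes near prod.
assert variant_a == variant_b == [('ada', 30.0), ('lin', 7.5)]
```

This doesn't prove the queries equivalent in general, but it cheaply catches the "messed up the fields" class of mistakes the commenter describes.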

> outsourcing their decision making and thinking to AI and not really about using AI itself

> I use AI a ton and I'm having more fun every day than I ever did before

With respect, this is what makes me worry.

If someone is a user of AI, can they really tell the difference between "outsourcing" and "using"? I worry that a lot of people will start out well-intentioned and end up completely outsourced before they realise it.

Hi Mitchell. Psychosis is a serious psychiatric condition that can be induced or triggered by AI. “AI psychosis” in this context is a misuse of a clinical term. Your tweet describes a disagreement on a value judgment that boils down to “move fast and break things” with high trust in AI outputs vs going all in on quality and reliability with low trust in AI. It’s an engineering tradeoff like any other.

Claiming that the people who disagree with you must be experiencing a form of psychosis, experiencing actual hallucinations and unable to tell what is real, is a weak ad hominem that comes off no better than calling them retarded or schizophrenic.

If you genuinely think one of your friends is going through a psychotic episode, you should be trying to get them professional help. But don’t assume you can diagnose a human psyche just because you can diagnose a software bug.

  • He uses "AI psychosis" as a description of people who are overzealous about AI. He is obviously not a person who can or would diagnose mental illness.

    To the wider audience on HN the phrasing is pretty clear. An outsider with a tiny bit of intellectual charity wouldn't come to the conclusions you do.

  • Psychosis does not require hallucinations. Delusions are sufficient.

    The key factor is losing touch with reality, which results in individual or collective harm.

    There is also such a thing as mass psychosis, and those are unfortunately a more difficult situation because the government and corporations are generally the ones driving them, and they are culturally normalized.

    • Yes. I was offering examples. Again, having a difference of opinion is not a delusion.

      If he meant mass psychosis, he should have said mass psychosis. And again, since he is not a public health scientist or any flavor of psych professional, he probably shouldn’t make those proclamations. And should probably call for a wellness check instead of posting on social media if he were truly concerned for their health.

      4 replies →

  • was looking for this comment. this post is highly inappropriate and very inaccurate. it should be at the top. too many people are throwing around the word psychosis without knowing what it means. if someone is truly going through psychosis, you get them help!