
Comment by grey-area

1 day ago

The heart of the article is this conclusion, which I think is correct from first-hand experience with these tools and teams trying to use them:

So what good are these tools? Do they have any value whatsoever?

Objectively, it would seem the answer is no.

The main benefit I've gotten from AI, which I see no one talking about, is that it dramatically lessens the mental energy required to work on a side project after a long day of work. I code during the day, so it's hard to find motivation to code at night. It's a lot easier to say "do this", have the AI generate shitty code, then say "you duplicated X function, you overcomplicated Y, you have a bug at Z" and have it fix it. On a good day I get stuff done quicker; on an average day I don't think I do. However, I am getting more done, because it takes a huge chunk out of the mental load and requires significantly less motivation to get something done on my side project. I think that is worth it to me. That said, I am just about to ban my junior engineers from using it at work, because I think it is detrimental to their growth.

  • I agree with the side-project thing, where the code is only incidental to the real project. I recently wanted to organize thousands of photos my family had taken over decades and sprawled across a network drive, and in 5 minutes vibe-coded a script to recursively scan, de-dupe, rename with datetime and hash, and organize by camera from the EXIF data.

    I could have written it myself in a few hours, with the Python standard library docs open on one monitor and the code and debugger on the other, but my project was "organize my photos", not "write a photo organizing app". However, I often do side projects to improve my skills, and using an AI is antithetical to that goal.
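    For anyone curious, the scan/de-dupe/rename core of such a script fits in a few lines of stdlib Python. This is my own illustration, not the parent's actual script; reading the EXIF camera field would need a library such as Pillow, so file modification time stands in for the photo timestamp here:

```python
import hashlib
import os
from datetime import datetime, timezone

def file_digest(path, chunk=1 << 20):
    """SHA-256 of a file, read in chunks so large photos don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def plan_renames(root):
    """Walk `root` and propose date+hash names, skipping duplicate content."""
    seen = {}   # content digest -> original path (first copy wins)
    plan = {}   # original path  -> proposed new filename
    for dirpath, _, names in os.walk(root):
        for name in sorted(names):
            path = os.path.join(dirpath, name)
            digest = file_digest(path)
            if digest in seen:
                continue  # duplicate: keep the first copy only
            seen[digest] = path
            stamp = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
            ext = os.path.splitext(name)[1].lower()
            plan[path] = f"{stamp:%Y%m%d-%H%M%S}-{digest[:8]}{ext}"
    return plan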

    • I've found a lot of utility in this: small throwaway utility apps where I just want to automate some dumb thing once or twice, and the task is just unusual enough that I can't grab something off the shelf.

      I reached for Claude Code to just vibe-code a basic Android UI to drive some REST APIs for an art project, since the existing web UI would have been a PITA to use under the conditions I had. It worked well enough, and I could spend my time finishing other parts of the project. It would not have been worth the time to write the app myself; I would have just suffered with the mobile web UI instead. Would I have distributed that Android app? God no, but it certainly solved the problem I had in that moment.

  • Very much the same for me. I use some LLMs to do small tasks at work where I know they can be useful, it's about 5-10% of my coding work which itself is about 20% of my time.

    Outside of work though it's been great to have LLMs to dive into stuff I don't work with, which would take me months of learning to start from scratch. Mostly programming microcontrollers for toy projects, or helping some artists I know to bring their vision to life.

    It's absurdly fun to get kickstarted into a new domain without having to learn the nitty-gritty first. I do eventually need to learn it; the LLM just shortens the time until the project becomes fun to work on (i.e. it barely works, but does something that can be expanded upon).

I don't think you understand what the word "objectively" means.

  • It's from the post. And I agree, the author has no clue what objectively means.

    Just another old-man-shouting-at-cloud blog post. Your company culture sucks and the juniors need to be managed better. Don't blame the tools.

  • "Objectively" is like "literally": both have come to mean their own opposites (subjectively and figuratively) because of how people actually use them in everyday writing and communication, as opposed to literature.

AI tools absolutely can deliver value for certain users and use cases. The problem is that they're not magic; they're tools with particular capabilities and limitations. A screwdriver isn't a bad tool just because it sucks at opening beer bottles.

  • So what use cases are those?

    It seems to me that the limitations of this particular tool make it suitable only in cases where it doesn't matter if the result is wrong and dangerous as long as it's convincing. This seems to be exclusively various forms of forgery and fraud, e.g. spam, phishing, cheating on homework, falsifying research data, lying about current events, etc.

    • Extracting structured data from unstructured text at runtime. Some models are really good at that and it’s immensely useful for many businesses.
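      The usual shape of that pattern: parse the model's reply and validate it against a schema before anything downstream trusts it. A minimal sketch, with an invented schema and the model call stubbed out by a canned reply:

```python
import json

# Invented example schema: field name -> required Python type.
EXPECTED = {"customer": str, "order_id": str, "quantity": int}

def extract_order(llm_reply: str) -> dict:
    """Parse an LLM's JSON reply and enforce the schema before trusting it."""
    data = json.loads(llm_reply)
    for field, kind in EXPECTED.items():
        if not isinstance(data.get(field), kind):
            raise ValueError(f"bad or missing field: {field}")
    return data

# Stub standing in for a real model call on a free-text email:
reply = '{"customer": "Acme GmbH", "order_id": "PO-4411", "quantity": 250}'
order = extract_order(reply)
```

      The validation step is the point: the model is untrusted input, so malformed or incomplete extractions fail loudly instead of corrupting whatever system consumes them.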


    • > So what use cases are those?

      I think that as software/data people, we tend to underestimate the number of business processes that are repetitive but require natural-language parsing. Supply chain is one example: it basically runs on Excel and email. Traditionally these were almost impossible to automate, because reading free-text emails and updating some system based on them was incredibly hard. LLMs make this much, much easier. This is a big opportunity for lots of companies in ordinary industries (and there's plenty of it in tech too).

      More generally, LLMs are pretty good at document summarisation and question answering, so with some guardrails (proper context, maybe multiple LLM calls involved) this can save people a bunch of time.

      Finally, they can be helpful for broad search queries, but this is much much trickier as you'd need to build decent context offline and use that, which (to put it mildly) is a non-trivial problem.

      In the tech world, they are really helpful for writing one to throw away. If you have a few ideas, you can now spec them out and get sort-of-working code from an LLM, which lowers the bar to getting feedback and seeing if the idea works. You really do have to throw it away, though, which is now much, much cheaper with LLM technology.

      I do think that if we could figure out context management better (which is basically decent internal search for a company) then there's a bunch of useful stuff that could be built, but context management is a really, really hard problem so that's not gonna happen any time soon.

    • I started a new job recently, and used ChatGPT tons to learn how to use the new tools: python, opencv, fastapi. I had questions that were too complex for a web search, which ChatGPT answered very coherently! I found it a very good tool to use alongside web search, documentation, and trawling through Stack Overflow.

    • I personally use it as a starting point for research and for summarizing very long articles.

      I'm a mostly self-taught hobbyist programmer, so take this with a grain of salt, but it's also been great for giving me a small snippet of code to use as a starting point for my projects. I wouldn't just check whatever it generates directly into version control without testing it and figuring out how it works first. It's not a replacement for my coding skills, but an augmentation of them.

I need you to tell me how it was "objectively" of no value to me when I just fed Claude a 40-page Medicare form and asked it to translate it into a print-friendly CSS version that uses Cottle for templating.

What about 20 minutes ago, when I threw a 20-line TypeScript error in and it explained it to me in plain English? What definition of "objective" would that fall under?

Or get this, I'm building off of an existing state machine library and asked it to find any potential performance issues and guess what? It actually did. What universe do you live in where that doesn't have objective value?

Am I going to need to just start sharing my Claude chat history to prove to people who live under a rock that a super-advanced pattern matcher that can compose results can be useful???

Go ahead, ask it to write some regex, and then tell me how "objectively" useless it is.
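For context, here is the kind of regex in question (this one, matching ISO-8601-style dates, is my own illustration, not from any chat):

```python
import re

# Matches dates like 2024-03-17; rejects impossible months/days like 13 or 99
# (though not every invalid combination, e.g. Feb 30 still slips through).
ISO_DATE = re.compile(r"\b\d{4}-(?:0[1-9]|1[0-2])-(?:0[1-9]|[12]\d|3[01])\b")

matches = ISO_DATE.findall("released 2024-03-17, patched 2024-13-99")
# -> ["2024-03-17"]
```

Note the non-capturing `(?:...)` groups, so `findall` returns whole matches rather than tuples of group contents.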

  • >Am I going to need to just start sharing my Claude chat history to prove to people

    I think we'll all need at least 3 of your anecdotes before we change our minds and can blissfully ignore the slow-motion train wreck that we all see heading our way. Sometime later in your life when you're in the hospital hooked up to a machine the firmware of which was vibe-coded and the FDA oversight was ChatGPT-checked, you may have misgivings but try not to be alarmed, that's just the naysayers getting to you.

    • >Sometime later in your life when you're in the hospital hooked up to a machine the firmware of which was vibe-coded and the FDA oversight was ChatGPT-checked, you may have misgivings but try not to be alarmed, that's just the naysayers getting to you.

      This sentence is proof you guys are some of the most absurd people on this planet.