Comment by mattmaroon
10 hours ago
A lot of these companies don't charge small monthly fees. And if you've ever worked with them, you'll know that many of the tools they sell are an exact match for almost nobody's needs.
So what happens is a corporation ends up spending a lot of money on a square tool that it has to hammer into a round hole. They do it because the alternative is worse.
AI coding does not yet let you build anything even mildly complex with no programmers. But it does reduce by an order of magnitude the amount of money you need to spend on programming a solution that would work better.
Another thing AI enables is significantly lower switching costs. A friend of mine owned an in-person and online retailer that was early to the game, having come online in the late 90s. I remember asking him, sometime around 2010 when his store had become very difficult to use, why he didn't switch to a more modern selling platform. The answer was that it would have taken him years to move his inventory from one system to another. Modern AI probably could've done almost all of the work for him.
I can't even imagine what would happen if somebody like Ford wanted to get off of their SAP or Oracle solution. A lot of these products don't withhold access to your data, but they also won't provide it to you in any format that could be used without a ton of work, work that until recently would've required a large number of man-hours.
I have a prime example of this, where my company was able to save $250/user/mo for 3 users by having Claude build a custom tool for updating ancient ('80s-era) proprietary manufacturing files to modern ones. It's not just a converter; it's a GUI with the tools needed to facilitate a quick manual conversion.
There is only one program that offers this capability, but you need to pay for the entire software suite, and the process is painfully convoluted anyway. We went from doing maybe 2-3 files a day to doing 2-3 files an hour.
I have repeated ad nauseam that the magic of LLMs is the ability to build the exact tool you need for the exact job you are doing. No need for the expensive and complex 750k-LOC full-tool-shed software suite.
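As a sketch of the kind of one-off converter an LLM can knock out, here's a minimal fixed-width record parser. The field layout is invented for illustration; the commenter's actual '80s format would obviously need its own spec.

```python
# Hypothetical example: convert a fixed-width, '80s-style record line
# into a modern JSON-friendly structure. The field layout below is
# invented for illustration only.
import json

# (name, start, end) column offsets for the invented fixed-width layout
FIELDS = [
    ("part_no", 0, 8),
    ("rev", 8, 10),
    ("qty", 10, 16),
    ("desc", 16, 40),
]

def parse_record(line: str) -> dict:
    """Slice one fixed-width record into named, stripped fields."""
    record = {name: line[start:end].strip() for name, start, end in FIELDS}
    record["qty"] = int(record["qty"] or 0)  # numeric field
    return record

def convert(lines) -> str:
    """Convert legacy lines to a JSON string a modern system can import."""
    return json.dumps([parse_record(l) for l in lines], indent=2)

if __name__ == "__main__":
    legacy = ["AB-1234 03000042WIDGET, LEFT-HANDED      "]
    print(convert(legacy))
```

The point isn't this particular script; it's that a few hundred lines like it, wrapped in a GUI, replace the relevant sliver of a giant suite.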
Was the custom tool developed by copying how the existing software worked? Copying existing functionality is not always possible, and doesn't capture the real costs.
No, it is incredibly streamlined because it's tailored specifically to achieve this modernization.
The paid program can do it because it can accept these files as an input, and then you can use the general toolset to work toward the same goal. But the program is clunky and convoluted as hell.
To give an example, imagine you had tens of thousands of pictures of people posing, and you needed to change everyone's eye color based on the shirt color they were wearing.
You can do this in Photoshop, but it's a tedious process and you don't need all $250/mo of Photoshop to do it.
Instead, make a program that auto-grabs the shirt color, auto-zooms in on the pupils, shows a side window of where the object detection is registering, and tees up the human worker to quickly shade in the pupils.
Dramatically faster, dramatically cheaper, tuned exactly for the specific task you need to do.
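A minimal sketch of the decision logic such a tool might use. Everything here is invented for illustration; in a real tool the shirt region and pupils would come from an off-the-shelf object-detection model, and this piece only decides which eye color a sampled shirt color maps to.

```python
# Hypothetical sketch of the automated half of the photo-editing tool
# described above. Shirt/pupil detection is assumed to be handled by an
# object-detection model; this code only maps a sampled shirt color
# (an RGB triple) to the eye color the worker should apply.

# Invented business rule: dominant shirt color -> target eye color
COLOR_RULES = {
    "red": "green",
    "blue": "brown",
    "green": "hazel",
}

def classify_shirt(rgb):
    """Crude nearest-primary classification of an (r, g, b) sample."""
    r, g, b = rgb
    channels = {"red": r, "green": g, "blue": b}
    return max(channels, key=channels.get)

def target_eye_color(shirt_rgb):
    """Return the eye color to tee up for the human worker."""
    return COLOR_RULES.get(classify_shirt(shirt_rgb), "unchanged")
```

The crude nearest-primary classifier is exactly the kind of "good enough for this one dataset" shortcut a bespoke tool can take and a general-purpose product can't.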
> It's not just a converter; it's a GUI with the tools needed to facilitate a quick manual conversion.
is this like a meta-joke?
> I have a prime example of this, where my company was able to save $250/user/mo for 3 users by having Claude build a custom tool for updating ancient ('80s-era) proprietary manufacturing files to modern ones.
The funny thing about examples like this is that they mostly show how dumb and inefficient the market is with many things. This has been possible for a long time with, you know, people, just a little more expensive than a Claude subscription, but would have paid for itself many times over through the years.
It's not just a joke, it's a meta-joke! To address the substance of your comment, it's probably an opportunity cost thing. Programmers on staff were likely engaged in what was at least perceived as higher value work, and replacing the $250/mo subscription didn't clear the bar for cost/benefit.
Now with Claude, it's easy to make a quick and dirty tool to do this without derailing other efforts, so it gets done.
The problem with this reasoning is it requires assuming that companies do things for no reason.
However possible it was to do this work in the past, it is now much easier to do it. When something is easier it happens more often.
No one is arguing it was impossible to do before. There's a lot of complexity, management attention, testing, and programmer cost involved in building something in house, such that you need a very obvious ROI before you attempt it, especially since in-house efforts can fail.
Our company just went through an ERP transition and AI of all kinds was 0% helpful for the same reason it’s difficult for humans to execute: little to no documentation and data model mismatches.
Surprising, considering you just listed two primary use cases (exploring codebases/data models + creating documentation).
Exploring a codebase tells you WHAT it's doing, but not WHY. In older codebases you'll often find weird sections of code that solved a problem that may or may not still exist. Like maybe there was an import process that always left three carriage returns at the end of each record, so now you've got some funky "let's remove up to three carriage returns" function that probably isn't needed. But are you 100% sure it's not needed?
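The kind of "funky" legacy function described above might look something like this (invented, of course):

```python
def strip_trailing_crs(record: str, max_crs: int = 3) -> str:
    """Remove up to three trailing carriage returns, presumably left
    behind by a long-dead import process. Nobody is 100% sure this is
    still needed, so nobody dares delete it."""
    removed = 0
    while removed < max_crs and record.endswith("\r"):
        record = record[:-1]
        removed += 1
    return record
```

Nothing about that code tells you whether the import process that made it necessary still exists; that knowledge lives in someone's head, if anywhere.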
Same story with data models, let's say you have the same data (customer contact details) in slightly different formats in 5 different data models. Which one is correct? Why are the others different?
Ultimately someone has to solve this mystery and that often means pulling people together from different parts of the business, so they can eventually reach consensus on how to move forward.
> creating documentation
How is an AI supposed to create documentation, except the most useless box-ticking kind? It only sees the existing implementation, so the best it can do is describe what you can already see (maybe with some stupid guesses added in).
IMHO, if you're going to use AI to "write documentation," that's disposable text and not for distribution. Let the next guy generate his own, and he'll be under no illusions about where the text he's reading came from.
If you're going to write documentation to distribute, you had better type out words from your own damn mind based on your own damn understanding with your own damn hands. Sure, use an LLM to help understand something, but if you personally don't understand, you're in no position to document anything.
I don't find this surprising. Code and data models encode the results of accumulated business decisions, but nothing about the decision-making process or rationale. Most of the time, this information is stored only in people's heads, so any automated tool is necessarily blind.
Please don't feed people LLM generated docs
I worked on a product that had to integrate with Salesforce because virtually all of our customers used it. It must have been a terrible match for their domain, because they had all integrated differently, and all the integrations were bad. There was virtually no consistency from one customer to next in how they used the Salesforce data model. Considering all of these customers were in the same industry and had 90% overlapping data models, I gave up trying to imagine how any of them benefited from it. Each one must have had to pay separately for bespoke integrations to third-party tools (as they did with us) because there was no commonality from one to the next.
One thing that's interesting is that their original Salesforce implementations were so badly done that I could imagine them being done with an LLM. The evergreen stream of work that requires human precision (so far, anyway) is all of the integration work that comes afterwards.
If it is not a small fee, I do wonder: is there still an advantage to having a provider one may sue if something goes wrong? To what extent might liability, and the security vetting that comes with scaled usage, still hedge against AI, in your view?
> So what happens is a corporation ends up spending a lot of money for a square tool [SaaS] that they have to hammer into a circle hole.
You are assuming that corporations have the capability to design the software they need.
There are many benefits to SaaS software, and some significant costs (e.g. integration).
One major benefit of SaaS is domain knowledge, and most people underestimate the complexity of even well-known domains (e.g. accounting).
Companies also underestimate the difficulty of aligning diverging political needs within the business, and they underestimate the expense of distraction on a non-core area that there is no business advantage to becoming competent at. As a vendor sometimes our job was simply to be the least worst solution.
At least that's what I saw.
This is true in many cases and not in many others. Another true one is payments: it's complex AF and no one will sit down and vibe-code it. A CRM? Easy in many cases. Some workflow tool? Easy; they know the exact workflow.
So, sure, some products will go the way of the dodo and some will not.
Why waste your time on something that isn't your core business when, presumably, the SAASes of the world will use the new tech and lower prices as well?
In most cases they can't. The cost they face is sales and marketing. Acquiring a customer costs money. Churn happens.
> But it does reduce by an order of magnitude the amount of money you need to spend on programming a solution that would work better
Could you share any data on this? Are there any case studies you could reference or at least personal experience? One order of magnitude is 10x improvement in cost, right?
I'm not sure it's a perfect example, but at least it's a very realistic example from a company that really doesn't have time and energy for hype or fluff:
We are currently sunsetting our use of Webflow for content management and hosting, and are replacing it with our own solution which Cursor & Claude Opus helped us build in around 10 days:
https://dx-tooling.org/sitebuilder/
https://github.com/dx-tooling/sitebuilder-webapp
Thanks for the link.
So, basically you made a replacement for webflow for your use case in 10 days, right?
I’m not sure the world needed yet another CMS
Oh, but that doesn't matter. SaaS tools aren't bought by the people that have to use them. Entire groups in big companies (HR & co) are delegating the majority of their job to SaaS and all failures are blamed on the people who have to interact with them while they are entirely ancillary to their job.
> Modern AI probably could've done almost all of the work for him.
No way. We're not talking about a standalone AI-created program for a single end user, but an entire integrated e-commerce enterprise system that needs to work at scale and volume. Way harder.
I also have pretty hefty skepticism that AI is going to magically account for the kinds of weird-ass edge cases that one encounters during a large data migration.
Just like with coding, the AI can reach out to a human for clarification on what to do.
It's not that AI is magically going to do it, it's that the human running the migration now has better tools to generate code that does account for those one-off edge cases.
I was interviewing with a company that has built ETL migration, interop, and management tools for the healthcare space, and is just dipping its toes into "Could AI do this for us, or help us?"
Their initial answer/efforts seem to be a qualified but very qualified "Possibly" (hah).
They talked of pattern matching and recognition being a very strong point, but also of the edge cases tripping things up, whether corrupt data or something very obscure.
Somewhat like the study of MRIs and CTs of people who had no cancer diagnosis but would later go on to develop cancer (i.e. they were sick enough that imaging and testing was being ordered but there were no/insufficient markers for a radiologist/oncologist to make the diagnosis, but in short order they did develop those markers). AI was very good at analyzing the data set and with high accuracy saying "this person likely went on to have cancer", but couldn't tell you why or what it found.