Comment by Workaccount2
19 hours ago
The MIT study found 90% of workers were regularly using LLMs.
The gap was that workers were using their own implementation instead of the company's implementation.
The MIT study, as released, also doesn't really provide any support for the 95% failure-rate claim. Until we have more details, we don't know where that number came from:
https://www.linkedin.com/feed/update/urn:li:activity:7365026...
Yeah, from what I understand, 'Chats' and AI coding are areas where they already have market dominance/are a leader and have a good/okay product. It's the other use cases they haven't delivered on, namely other companies using them as a platform to deliver AI apps, which I would imagine was a huge vertical in their pitches to investors and their internal plans.
These third-party apps drive huge token usage with agentic patterns. So losing out on them and being forced to build more internal products tuned to specific use cases is not something they want to build out or explore.
[flagged]
AI coding is mid (okay), yes; my main point is that people use it and it's a good line of business for them right now. They expected bigger breakthroughs, like GPT-2 to 3 to 4, and that's not happening, so they have to lean on the other aspects of the business more.
The fact that it is mid is why they really need all the other lines of business to work, i.e. selling tokens to AI apps that specialize in other mid products, and limiting the snake-oil AI products littering the market and ruining AI's image as the new catch-all solution.
I was a big user of IntelliSense and, more heavily, IntelliJ for most of my career. It truly seemed like magic back then. I recall telling a colleague who preferred Emacs that it felt like having an editor that could read your mind, and joking that my tab key was getting worn out.
Then I discovered LLMs.
If you think IntelliSense is comparable to what LLMs can do, you really, really need to try giving an AI higher-level problems to solve. Throwaway example I gave in a similar thread a few weeks ago: https://news.ycombinator.com/item?id=44892576
I think a big part of simonw's shtick is trying to get people to give LLMs a proper try, and TBH that's what I end up doing a lot too, including right now! The problem is a "proper try" takes dedicated effort, because it's not obvious where the AI will excel or fail for your specific context, and people legitimately don't have enough time for that.
But once you figure it out, it feels like when you first discovered IntelliSense, except you already know IntelliSense, so it's like... IntelliSense raised to the power of IntelliSense.