Comment by DebtDeflation
10 hours ago
That's been true for the last year or two, but it feels like we're at an inflection point. All of the announcements from OpenAI over the last couple of months have been product-focused - Instant Checkout, AgentKit, etc. Anthropic seems 100% focused on Claude Code. We're not hearing as much about AGI/Superintelligence (thank goodness) as we were earlier this year; in fact, the big labs aren't even talking much about their next model releases. The focus has pivoted to building products from existing models (and building massive data centers to support anticipated consumption).
Meta hiring researchers en masse at $100m+ pay packages is fairly new, as of this summer.
I don't know if that's indicative of the market as a whole though. Zuck just seems really gutted they fell behind with Llama 4.
> Meta hiring researchers en masse at $100m+ pay packages is fairly new, as of this summer.
En masse? Wasn't it just a couple of outliers?
en deux
This summer was a lifetime ago.
"en masse" is a stretch
It was a few researchers and they stopped doing that already. In a way, the AI market in H2 2025 is already VERY different from what it was in H1 2025.
A lot of them left within their first days on the job. I guess they saw what they were going to work on and peaced out. No one wants to work on AI slop and mental abuse of children on social media.
I don't understand how an intelligent person could accept a job offer from Facebook in 2025 and not understand what company they just agreed to work for.
Stated without a shred of evidence and getting no pushback. Classic for a nonsense claim about a big tech company HN doesn't like, lol.
> No one wants to work on AI slop and mental abuse of children on social media.
If this was true, we wouldn't have AI slop and mental abuse of children. Since we do, we know your comment is just flat out incorrect.
You must not watch broadcast television (e.g. American football). Anthropic is doing a huge ad blitz, trying to get end customers to use their chatbot.
Anthropic, frankly, needs to in ways the other big names don't.
It gets lost on people in tech-centric fields because Claude's at the forefront of things we care about, but Anthropic is basically unknown among the wider populace.
Last I'd looked, a few months ago, Anthropic's brand awareness was in the mid single digits; OpenAI/ChatGPT was somewhere around 80% for comparison. MS/Copilot and Gemini were somewhere between the two, but closer to OpenAI than Anthropic.
tl;dr - Anthropic has a lot more to gain from awareness campaigns than the other major model providers do.
Anthropic feels like a one-trick pony, as most users don't need or want Anthropic products.
However, I speak with a small subset of our most experienced engineers and they all love Claude Sonnet 4.5. Who knows if this lead will last.
If Claude Code is Anthropic's main focus, why are they not responding to some of the most-commented issues on their GitHub? https://github.com/anthropics/claude-code/issues/3648 has people begging for feedback and saying they're moving to OpenAI; it has been open since July, and there are similar issues with 100+ comments.
Hey, Boris from the Claude Code team here. We try hard to read through every issue, and respond to as many issues as possible. The challenge is we have hundreds of new issues each day, and even after Claude dedupes and triages them, practically we can’t get to all of them immediately.
The specific issue you linked is related to the way Ink works, and the way terminals use ANSI escape codes to control rendering. When building a terminal app there is a tradeoff between (1) visual consistency between what is rendered in the viewport and scrollback, and (2) scrolling and flickering which are sometimes negligible and sometimes a really bad experience. We are actively working on rewriting our rendering code to pick a better point along this tradeoff curve, which will mean better rendering soon. In the meantime, a simple workaround that tends to help is to make the terminal taller.
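To make that tradeoff concrete, here is a rough sketch of the general approach an Ink-style renderer takes (purely illustrative, not our actual code): on each frame, move the cursor back up over the previous frame with ANSI escape codes, erase it, and print the new frame. Once a frame grows taller than the visible terminal, its top rows scroll into scrollback where the cursor can no longer reach or erase them, which is where the inconsistency and flicker come from.

    // Hypothetical illustration, not Claude Code's renderer: repaint the live
    // viewport in place with raw ANSI escape codes, leaving scrollback alone.
    const ESC = "\x1b[";

    function renderFrame(lines: string[], prevHeight: number): number {
      let out = "";
      if (prevHeight > 0) {
        out += `${ESC}${prevHeight}A`; // cursor up to the first line of the previous frame
        out += `${ESC}0J`;             // erase from the cursor to the end of the screen
      }
      out += lines.join("\n") + "\n";
      process.stdout.write(out);
      return lines.length;             // height to rewind past on the next repaint
    }

    // Frames taller than the terminal spill into scrollback, where the cursor
    // can no longer reach them, hence the "make the terminal taller" workaround.
    let height = 0;
    for (let i = 1; i <= 3; i++) {
      height = renderFrame([`frame ${i}`, "working" + ".".repeat(i)], height);
    }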
Please keep the feedback coming!
It's surprising to hear this get chalked up to "it's the way our TUI library works", while e.g. opencode is going to the lowest level and writing their own TUI backend. I get that we can't expect everyone to reinvent the wheel, but it feels symptomatic of something that folks are willing to chalk their issues up to an unfortunate and unavoidable symptom of a library they use, rather than seeing that as unacceptable and going to the lowest level.
CC is one of the best and most innovative pieces of software of the last decade. Anthropic has so much money. No judgment, just curious, do you have someone who’s an expert on terminal rendering on the team? If not, why? If so, why choose a buggy / poorly designed TUI library — or why not fix it upstream?
That issue is the fourth most-reacted issue overall, and the third most-reacted open issue. And the two things above it are feature requests. It seems like you should at the very least have someone pop in to say "working on it" if that's what you're doing, instead of letting it sit there for 4 months?
Thanks for the reply (and for Claude Code!). I've seen improvement on this particular issue already with the last major release, to the extent that it's not a day-to-day issue for me. I realise GitHub issues are not the easiest comms channel, especially with 100s coming in a day, but occasional updates on some of the top 10 most-commented issues could perhaps be manageable and beneficial.
How about giving us the basic UX stuff that all other AI products have? I've been posting this ever since I first tried Claude: Let us:
* Sign in with Apple on the website
* Buy subscriptions from iOS In App Purchases
* Remove our payment info from our account before the inevitable data breach
* Give paying subscribers an easy way to get actual support
As a frequent traveller, I'm not sure whether some of those features are gated by region (some people said they can do some of those things), but if that is true, it still makes the UX worse than the competitors'.
It's entirely possible they don't have the ability in house to resolve it. Based on the report this is a user interface issue. It could just be some strange setting they enabled somewhere. But it's also possible it's the result of some dependency 3 or 4 levels removed from their product. Even worse, it could be the result of interactions between multiple dependencies that are only apparent at runtime.
> It's entirely possible they don't have the ability in house to resolve it.
I've started breathing a little easier about the possibility of AI taking all our software engineering jobs after using Anthropic's dev tools.
If the people making the models and tools that are supposed to take all our jobs can't even fix their own issues in a dependable and expedient manner, then we're probably going to be ok for a bit.
This isn't a slight against Anthropic; I love their products and use them extensively. It's more a recognition that the more difficult aspects of engineering are still quite difficult, in a way that LLMs just don't seem well suited for.
Seems these users are getting it in VS Code, while I am getting the exact same thing when using Claude Code on a Linux server over SSH from Windows Terminal. At this point their app has to be the only thing in common?