Comment by intended
20 hours ago
> and you won't get human support or Claude – even if you are an enterprise paying out of your nose. And there's 0 redressal unless you go viral on social media.
Sadly, this sounds like par for the course in tech. Too often, getting help depends on knowing someone in the right Slack groups.
If you’re paying through the nose, you’d have forward-deployed Anthropic/OpenAI engineers on the premises.
Which is very confusing to me. If you have groundbreaking AI, you can offer groundbreaking support at scale.
You wouldn't build a chatbot for that; imagine how easy it would be to make that thing go off the rails and let anyone reactivate their account. Really, you can't trust it with any business function...
At least, that's the message this sends, in my opinion.
I really wish more people would view these companies with the suspicion they deserve, as they sell the product as safe and comprehensive while refusing/failing to use it the same way themselves.
> If you have groundbreaking AI, you can offer groundbreaking support at scale
You're a funny one, aren't you...
Meet "Fin" Anthropic's "where support questions go to die" so-called-support bot, created by Intercom but powered by Anthropic.
Maybe it's an internal in-joke in the Anthropic offices ... "Fin" in french means "End".
I don't know anyone who has had a positive experience with "Fin" .... or ever spoken to a human at Anthropic support for that matter, even if you ask "Fin" to escalate.
Nope.
Customer support and safety are cost centers. They don’t scale the way software does, and no one’s KPIs are going to improve dramatically if you provide support beyond a point.
AI and LLMs are the cool tech, and the most important thing is to push the frontier. Money spent elsewhere is money not spent on R&D.
It would be hilarious if it weren’t the GDPs of nations being spent on this.