Comment by areoform
21 hours ago
Claude's actually pretty great at this! I used to use Claude a LOT to answer interesting questions (which I'll be writing up!). More generally, Claude is palpably different from most other agents. I'd recommend these models – especially Opus – without qualification.
But there's a process risk here based on their current practices. I'm hoping those practices change so that I can recommend Claude to everyone I know, but as of now, there's existential risk exposure here that's greater than Google's.
Anthropic's automated systems can and will ban you for pretty arbitrary things, and you won't get human support for Claude – even if you're an enterprise paying through the nose. And there's zero redress unless you go viral on social media. Or know someone who knows someone. See: https://x.com/Whizz_ai/status/2051180043355967802 https://x.com/theo/status/2045618854932734260
And I say that as someone who likes how Anthropic has been training Claude and Opus. I just don't think they're prepared to be the trillion dollar company they've become. They are – in a very real way – suffering from success. Which is extremely inconvenient to be on the receiving end of when you're on a deadline.
Before AI, shipping code to production was a two-person task: one person writes the code, another reviews it. Now that AI writes the code, the developer who was supposed to write it only has to review it – because they're still responsible for the code they ship.
Code review has become unbearable because, before AI, developers were reviewing code as they wrote it in the first place. Granted, that was never perfect, which is why a second person reviewing the code was (is?) a best practice. But effectively there was always some level of code review happening as developers wrote code.
I fear it is far more boring to review financial and medical documents written entirely by AI than to write (and simultaneously review) them yourself. And far more dangerous to ship mistakes there than in most software.
> the developer that was supposed to write the code, only has to review it.
But more often than not that developer ends up reviewing far more lines of code due to the typical verbosity of an LLM.
100%... that's why I say code review became unbearable!
I am/was writing up an interesting hypothesis with Claude's help. But I redid the most important parts of the data pipeline manually – as in, I went in and cmd-C + cmd-V'ed the data by hand to create a reference, and I'm randomly spot-checking 33% of the larger records.
The analysis itself; I'm doing it by hand.
Why not have the developer write the code, the AI review it, and then get a final signoff from another human?
Far too often people think productivity is the point. Maybe the developer's understanding of the product IS the product?
You're not engineering black boxes, you're engineering legible boxes.
Isn’t there a code review agent?
Most workflows use a sub agent to review the code or an agent from a different company.
For example, Codex can review code written by Claude, etc.
/s?
Pretty great at what? I work in the insurance industry, specifically Medicare. All I see is salespeople and other managers slopping out AI dashboards from spreadsheets galore. Not only is it terrible for protecting PHI/PII, it also doesn't do things like RBAC very well. Now, instead of preventing a person from externally sharing a file, I have to make sure they didn't egress it to Supabase or some other platform.
Here are some of the horrible things I've seen. A frontend dashboard with PHI/PII deployed via Vercel/Next because AI told them how to get their site online. The login is hardcoded into the frontend, so anyone with Inspect can find the password.
Another "fixed" dashboard deployed the same way. This time they added firebase auth so they got sign in with Google added with only logging into our domain. Wait how would they be able to create a token for our domain? They didn't the frontend just blocks domains from calling firebase.auth but firebase doesn't care. So simply calling the function in the console lets me login with any gmail account....
They were also showing me their RBAC with Firebase. Again, they don't have access to our Organization/Directory/Groups, so I wondered how they did this... wouldn't you know, it's a hardcoded list of approved users. You can literally call firebase.auth and sign in anonymously – again, only the frontend checks the email addresses. So now that I have a Firebase auth, all the backend Firebase functions just check that you've auth'd, and I can make any request I want to the backend. The frontend simply won't show it to me.
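To make the flaw concrete, here's a minimal sketch (not the actual Firebase SDK or the dashboards described; names and the domain are hypothetical) contrasting a backend handler that only checks that *some* auth token exists with one that enforces the allowlist server-side, which is roughly the class of bug being described:

```typescript
// Simplified stand-in for a Firebase callable function's auth context.
type AuthContext = { uid: string; email?: string } | null;

// Broken handler: mirrors "the backend just checks that you've auth'd".
// Any signed-in user – including an anonymous one – gets the data.
function getRecordsBroken(auth: AuthContext): string {
  if (auth === null) throw new Error("unauthenticated");
  return "PHI records";
}

// Fixed handler: the email/domain check lives on the server, so it
// can't be bypassed by calling the backend directly from the console.
const APPROVED_DOMAIN = "example-insurer.com"; // hypothetical
function getRecordsFixed(auth: AuthContext): string {
  if (auth === null) throw new Error("unauthenticated");
  if (!auth.email || !auth.email.endsWith("@" + APPROVED_DOMAIN)) {
    throw new Error("forbidden");
  }
  return "PHI records";
}

// An attacker who signed in anonymously has a uid but no email.
const anonUser: AuthContext = { uid: "anon-123" };

console.log(getRecordsBroken(anonUser)); // leaks data to the anonymous caller

let blocked = false;
try {
  getRecordsFixed(anonUser);
} catch {
  blocked = true;
}
console.log(blocked); // the fixed handler rejects the same caller
```

The point of the sketch: any check that only runs in the frontend is advisory, because the attacker controls the frontend; authorization has to be re-verified in the function itself (or in security rules).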
I could go on and on about the stupidity levels I'm facing but I don't feel like crashing out.
All I can say is this tool is only useful if you already know how to implement these things correctly. Does it save me time? Sure, but I have to call out its mistakes and explain why not to do things. Honestly, I feel like Claude is good for people who like to gamble: when it gets it right it feels great, but I don't want to roll the dice 30 times to get it correct.
> and you won't get human support for Claude – even if you're an enterprise paying through the nose. And there's zero redress unless you go viral on social media.
Sadly, this sounds like par for the course in tech. Too many messages and requests for help go nowhere unless you know someone in the right Slack groups.
If you're paying through the nose, you'd have forward-deployed Anthropic/OpenAI engineers on premises.
Which is very confusing to me. If you have groundbreaking AI, you can offer groundbreaking support at scale.
You wouldn't build a chatbot for that – imagine how easy it would be to make that thing go off the rails and allow anyone to reactivate their account. Really, you can't trust it with any business function...
At least, that's really the message this sends in my opinion
> If you have groundbreaking AI, you can offer groundbreaking support at scale
You're a funny one, aren't you...
Meet "Fin" Anthropic's "where support questions go to die" so-called-support bot, created by Intercom but powered by Anthropic.
Maybe it's an in-joke at the Anthropic offices... "fin" in French means "end".
I don't know anyone who has had a positive experience with "Fin"... or who has ever spoken to a human at Anthropic support, for that matter, even after asking "Fin" to escalate.
Nope.
Customer support and safety are cost centers. They don't scale like software does, and no one's KPIs improve dramatically if you provide support beyond a point.
AI and LLMs are the cool tech, and the most important thing is to push the frontier. Money spent elsewhere is money not spent on R&D.
It would be hilarious if it wasn’t the GDPs of nations being spent on this.
They aren't even close to a $1T company; they're valued at under $400B, and that's at something like a 20x–30x revenue multiple. They can probably raise money at a higher valuation, but that value is based on hype, not revenue.
https://www.businessinsider.com/anthropic-trillion-dollar-va...
Check the secondaries market ;-)
FOMO/hype, not revenue. Google's AI business is a profitable business model, vertically integrated from training to inference. Their AI business did not add $1T to their market cap, despite their much more advantageous position. A $1T valuation for Anthropic makes absolutely no sense.
It also makes no sense to me that there are people qualified to participate in these secondary markets who are that stupid, but here we are.