Comment by tencentshill

20 hours ago

I don't trust these AI-only companies to be overnight experts in properly handling medical, financial and insurance data. They have no business providing these tools, unless they want to take all the risk too.

I think a lot of people are misunderstanding the typical workload of people in Financial Services. They aren't using Claude to transfer money; they're just building a LOT of slideshows and fancy Excel docs on made-up numbers to try to sell mergers and new financing options/types of loans. Most programmers would just consider this "sales".

  • That’s a gross overgeneralization. Some of the insurance data here suggests use of AI to make underwriting decisions. There are several states with regulations that could potentially pull these agent solutions into their regulatory oversight if used by the industry to effect insurance outcomes.

    • The Odd Lots podcast had an interesting snippet about a financial institution that uses AI to make loan decisions. The guest said that they only use it on applicants who were rejected in the traditional sequence, and then use AI to accept them if possible. That way there's an articulable reason for a rejection, but they use the non-deterministic AI to let an extra person through – since the laws about loans are mostly about not discriminating against people, companies are (generally) welcome to accept whoever.

      1 reply →
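The decision flow described in that podcast anecdote can be sketched as below. This is a hedged illustration only – the thresholds, field names, and the stand-in "model" are all hypothetical, not anything the institution disclosed:

```python
import random

def rules_decision(applicant: dict) -> bool:
    """Deterministic rules engine: every rejection maps to a documented rule."""
    return applicant["credit_score"] >= 680 and applicant["dti"] <= 0.36

def ai_second_look(applicant: dict, rng: random.Random) -> bool:
    """Stand-in for the non-deterministic model (a real system would call an ML service)."""
    return rng.random() < 0.2

def decide(applicant: dict, rng: random.Random) -> tuple[bool, str]:
    if rules_decision(applicant):
        return True, "approved by rules"
    # Only already-rejected applicants reach the model, so any final
    # rejection still has an articulable, rule-based reason.
    if ai_second_look(applicant, rng):
        return True, "approved on AI second look"
    return False, "rejected: failed credit_score/dti rules"
```

The key property is that the model can only flip rejections into approvals, never the reverse, so every "no" still traces back to a documented rule.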

  • > They aren't using Claude to transfer money, they're just [...]

    It might be lower stakes, but isn't that still a juicy target for data-exfiltration attacks?

    In other words, imagine if one of your direct competitors was watching everything your employee read while making spreadsheets and slideshows.

    • Yes, corporate espionage may be alive and well, but would Claude on their Microsoft/Amazon/Google cloud be any different from documents stored on that same cloud?

      3 replies →

The only reason they are doing it is because there are regulations for people but not for machines.

  • Can't wait for Claude to submit fake tax records for me so that I can commit fraud legally.

    • This is my litmus test.

      If AI is really as wondrous as everybody says, why didn't all the employees of all the AI companies simply type "Claude, file my taxes for me" as a prompt and walk away?

      1 reply →

  • My experience has been quite the opposite. Some bank processes remain oral traditions about clicking Excel filters by hand, because any code would have to be extensively documented and tested.

I would recommend not using these if you are not willing to absorb the risk.

Luckily there is still a significant market for the services.

  • Some human always gets to be the certified fall guy for non-compliance. Maybe the legal agent can help structure the company so that it's an ignorant lower-level accountant and not the CFO.

    Currently we don't know the risk, so it is kind of hard to absorb.

> properly handling

Why, they can sell user data to other brokers. Experts indeed! But not in insurance or finance, of course.

Claude's actually pretty great at this! I used to use Claude A LOT to answer interesting questions (which I'll be writing up!). More generally, Claude is palpably different from most other agents. I'd recommend these models – especially Opus – without qualification.

But there's a process risk here based on their current practices. I'm hoping those practices change so that I can recommend Claude to everyone I know, but as of now, there's existential risk exposure here that's greater than Google's.

Anthropic's automated systems can and will ban you for pretty arbitrary things, and you won't get human support or Claude – even if you are an enterprise paying through the nose. And there's zero redress unless you go viral on social media. Or know someone who knows someone. See: https://x.com/Whizz_ai/status/2051180043355967802 https://x.com/theo/status/2045618854932734260

And I say that as someone who likes how Anthropic has been training Claude and Opus. I just don't think they're prepared to be the trillion-dollar company they've become. They are – in a very real way – suffering from success. Which is extremely inconvenient to be on the receiving end of when you're on a deadline.

  • Before AI, shipping code to production used to be a two-person task: one writes the code, another reviews it. Now with AI writing the code, the developer who was supposed to write it only has to review it. And this is because they are responsible for the code they ship.

    Code review has become unbearable, because before AI, developers were reviewing code as they wrote it in the first place. Granted, it was never perfect, which is why a second person reviewing code was (is?) a best practice. But effectively there was always some level of code review happening as developers wrote code.

    I fear it is way more boring to review financial and medical documents completely written by AI than it is to write (and simultaneously review) them yourself. And way more dangerous to ship mistakes than in most software.

    • > the developer that was supposed to write the code, only has to review it.

      But more often than not that developer ends up reviewing far more lines of code due to the typical verbosity of an LLM.

      1 reply →

    • I am/was writing up an interesting hypothesis with Claude's help. But I redid the most important parts of the data pipeline manually. As in, I went in and Cmd-C + Cmd-V'ed the data by hand to create a reference, and I'm randomly spot-checking 33% of the larger records.

      The analysis itself; I'm doing it by hand.

    • Why not have the developer write the code, then have the AI review it, and finally get a signoff from another human?

      Far too often people think productivity is the point. Maybe the point is that the developer's understanding of the product IS the product?

      You're not engineering black boxes, you're engineering legible boxes.
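The spot-check workflow a couple of comments up (build a reference by hand, then randomly verify a third of the larger records) can be sketched roughly as below. The function and field names are hypothetical, not from the commenter's actual pipeline:

```python
import random

def spot_check(pipeline: dict, reference: dict, frac: float = 0.33, seed: int = 7) -> list:
    """Draw a reproducible random sample of record ids and return the ones
    where the pipeline output disagrees with the hand-built reference."""
    ids = sorted(reference)
    rng = random.Random(seed)  # fixed seed so the same sample can be re-audited
    sample = rng.sample(ids, k=max(1, round(len(ids) * frac)))
    return sorted(i for i in sample if pipeline.get(i) != reference[i])
```

Seeding the sampler is the important design choice: the audit is random with respect to the data, but anyone can rerun it and land on the same records.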

  • Pretty great at what? I work in the insurance industry, specifically Medicare. All I see is salespeople and other managers slopping out AI dashboards off spreadsheets galore. Not only is it terrible for protecting PHI/PII, it also doesn't do things like RBAC very well either. Now instead of preventing a person from externally sharing a file, I have to make sure they didn't egress the file to Supabase or some other platform.

    Here are some of the horrible things I've seen. A frontend dashboard with PHI/PII deployed via Vercel/Next because AI told them how to get their site online. The login is hardcoded into the frontend, so anyone with Inspect can find the password.

    Another "fixed" dashboard deployed the same way. This time they added firebase auth so they got sign in with Google added with only logging into our domain. Wait how would they be able to create a token for our domain? They didn't the frontend just blocks domains from calling firebase.auth but firebase doesn't care. So simply calling the function in the console lets me login with any gmail account....

    They were also showing me their RBAC with Firebase. Again, they don't have access to our Organization/Directory/Groups, so I wondered how they did this... wouldn't you guess, it's a hardcoded list of approved users. You can literally call firebase.auth and sign in anonymously. Again, only the frontend checks the email addresses. So now that I have Firebase auth, all the backend Firebase functions just check that you have auth'd, so I can make any request I want to the backend. The frontend simply won't show me the code.

    I could go on and on about the stupidity levels I'm facing but I don't feel like crashing out.

    All I can say is this tool is only useful if you already know how to correctly implement these things. Does it save me time? Sure, but I have to call out its mistakes and explain why not to do things. Honestly, I feel like Claude is good for people who like to gamble. When it gets it right it feels great, but I don't want to roll the dice 30 times to get it correct.
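The Firebase failures in the comment above share one root cause: authorization enforced only in the client. A minimal sketch of that difference – in plain Python rather than Firebase, with all names hypothetical – comparing the broken backend against a server-side check:

```python
# Server-side allowlist; in the broken setup this list only existed in the UI.
APPROVED_USERS = {"alice@ourdomain.com"}

def frontend_only_backend(request: dict) -> int:
    """Broken: mirrors the described setup -- the backend only verifies that
    *some* authenticated session exists and trusts the UI's allowlist."""
    return 200 if request.get("auth_token") else 401

def server_side_backend(request: dict) -> int:
    """Fixed: the backend re-checks the allowlist itself, so an anonymous or
    arbitrary-Gmail session is rejected regardless of what the UI hides."""
    if not request.get("auth_token"):
        return 401
    if request.get("email") not in APPROVED_USERS:
        return 403
    return 200
```

Anything the frontend "blocks" (domain restrictions, email allowlists, hidden pages) is just a suggestion to an attacker with a dev console; the check has to happen on the server.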

  • > and you won't get human support or Claude – even if you are an enterprise paying through the nose. And there's zero redress unless you go viral on social media.

    Sadly, this sounds like par for the course when it comes to tech. Too many messages and requests for help depend on knowing someone in the right Slack groups.