
Comment by adunsulag

7 months ago

I read your comment and yet I see tons of startups putting AI directly in the path of healthcare diagnosis, healthcare clinical decision support systems, and healthcare workflow automations. Very few are paying any attention to the 2-10% of safety problems when the AI probability goes off the correct path.

I wish more people would not do this, but from what I'm seeing, business execs are rushing full throttle toward the goldmine of 'productivity gains'. I'm hoping the legal system will find a case that puts some paranoia back into the ecosystem before AI gets too entrenched in all of these critical systems.

As has been belabored, these AIs are just models, which also means they are only software. Would you be so fire-and-brimstone if startups were using software on healthcare diagnostic data?

> Very few are paying any attention to the 2-10% of safety problems when the AI probability goes off the correct path.

This isn't how it works. The model takes a less common, but still correct, path.

If anything, I agree with other commenters that model training curation may become necessary to truly make a generalized model that is also ethical. But I think the generalized model is kind of like an "everything app": a jack of all trades, master of none.

  • > these AIs are just models, which also means they are only software.

    Other software is much less of a black box, far more predictable, and many of its execution paths have been tested. That difference is the whole point of the AI safety concerns!