Comment by MadcapJake

7 months ago

As has been belabored, these AIs are just models, which also means they are only software. Would you be so fire-and-brimstone if startups were using software on healthcare diagnostic data?

> Very few are paying any attention to the 2-10% of safety problems when the AI probability goes off the correct path.

That isn't how it works. The model takes a less common, but still valid, path.

If anything, I agree with other commenters that curating the training data may become necessary to build a generalized model that is also ethical. But I think a generalized model is kind of like an "everything app": a jack of all trades, master of none.

> these AIs are just models, which also means they are only software.

Other software is much less of a black box, far more predictable, and most of its code paths have been tested. That difference is the whole point of the AI safety concerns!