
Comment by mdp2021

2 months ago

> a systemic issue

It will remain a suggestion of a systemic issue until it is clear that, architecturally, all checks are implemented and mandated.

It is clear it is not a systemic issue, given we have examples of models that handle these cases.

I don't even know what you mean with "architecturally all checks are implemented and mandated". It suggests you may think these models work very differently to how they actually work.

  • > given we have examples of models that handle

    The suggestions come from the failures, not from the success stories.

    > what you mean with "architecturally all checks are implemented and mandated"

    That NN-models have an explicit module which works as a conscious mind and does lucid ostensive reasoning ("pointing at things"), reliably respected in their conclusions. That module must be stress-tested and proven as reliable. Results-based success stories alone are not enough. (A minimal sketch of the idea follows at the end of this comment.)

    > you may think these models work very differently to how they actually work

    I am interested in how they should work.
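
    To make concrete what "implemented and mandated" could mean: a minimal, purely hypothetical sketch (all names are invented for illustration; no real model exposes such a module) in which a conclusion is released only if an explicit verification step approves the evidence it points at:

        from dataclasses import dataclass

        @dataclass
        class Conclusion:
            claim: str
            cited_evidence: list[str]  # the "pointing at things" part

        def verify(conclusion: Conclusion, source_text: str) -> bool:
            # Hypothetical check: every piece of cited evidence must
            # actually appear in the source the model reasoned over.
            return all(ev in source_text for ev in conclusion.cited_evidence)

        def answer(generate, source_text: str, max_attempts: int = 3) -> Conclusion:
            # The check is mandated: an unverified conclusion is never returned.
            for _ in range(max_attempts):
                conclusion = generate(source_text)
                if verify(conclusion, source_text):
                    return conclusion
            raise RuntimeError("no conclusion passed verification")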

    • > The suggestions come from the failures, not from the success stories.

      That thinking is flawed. The successes conclusively prove that the issue isn't systemic, because there is a solution.

      > That NN-models have an explicit module which works as a conscious mind and does lucid ostensive reasoning ("pointing at things"), reliably respected in their conclusions.

      Well, this isn't how LLMs work.

      > That module must be stress-tested and proven as reliable. Results-based success stories alone are not enough.

      Humans aren't reliable. You're setting the bar at a level well beyond what is necessary, and almost certainly beyond what is possible.

      > I am interested in how they should work.

      We don't know how they should work, because we don't know what the optimal organisation is.

  • > It suggests you may think these models work very differently to how they actually work.

    It suggests to me the opposite: that he thinks there can be no solution that doesn't involve externally policing the system (which it quite clearly needs in order to solve other problems with trusting the output).

    • Given that newer/bigger models handle it, we have a solution that doesn't require "externally policing the system", so that is clearly not the case.