Comment by vidarh

2 months ago

It doesn't work this time because there are plenty of models, including GPT-5 Thinking, that can handle this correctly, so it is clearly not a systemic issue that can't be trained out of them.

> a systemic issue

It will remain a suggestion of a systemic issue until it is clear that, architecturally, all checks are implemented and mandated.

  • It is clear it is not, given we have examples of models that handle these cases.

    I don't even know what you mean with "architecturally all checks are implemented and mandated". It suggests you may think these models work very differently to how they actually work.

    • > given we have examples of models that handle

      The suggestions come from the failures, not from the success stories.

      > what you mean with "architecturally all checks are implemented and mandated"

      That NN models have an explicit module which works as a conscious mind and reliably performs lucid ostensive reasoning ("pointing at things") that is respected in their conclusions. That module must be stress-tested and proven reliable. Results based only on success stories are not enough.

      > you may think these models work very differently to how they actually work

      I am interested in how they should work.

    • > It suggests you may think these models work very differently to how they actually work.

      It suggests to me the opposite: that he thinks there can be no solution that doesn't involve externally policing the system (which it quite clearly needs, in order to solve other problems with trusting its output).
