Comment by drillsteps5
3 hours ago
>Why did NYC release it in the first place? Did they not QA it?
How do you QA a black-box, non-deterministic system? I'm not being facetious, seriously asking.
EDIT: Formatting
The same way you test any system: you find a sample of test subjects, have them interact with the system, and then evaluate those interactions. No system is guaranteed never to fail; it's all about the degree of effectiveness and resilience.
The thing is (and maybe this is what the parent meant by non-determinism, in which case I agree it's a problem), in this brave new technological use case the space of possible interactions dwarfs anything machines have dealt with before. And it seems inevitable that the space of possible misunderstandings that can arise during these interactions will balloon similarly, simply because of the radically different nature of our AI interlocutor compared to what (actually, who) we're used to interacting with in this world of representation and human life situations.
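To make "sample interactions and evaluate them" concrete, here is a minimal sketch of such a harness. Everything in it (the ask_chatbot stub, the questions, the keyword check) is a made-up placeholder, not any particular team's setup:

```python
# Minimal sketch of an evaluation harness; ask_chatbot, the test
# questions, and the keyword check are all made-up placeholders.

def ask_chatbot(question: str) -> str:
    # Stand-in for the real call to the deployed bot; returning a canned
    # string just lets the harness run end to end.
    return "You may need a permit; check with the relevant city agency."

# Hand-written cases: a question plus terms a correct answer must mention.
TEST_CASES = [
    ("Do I need a permit to put tables on the sidewalk?", ["permit"]),
    ("Can my landlord refuse a housing voucher?", ["illegal"]),
]

RUNS_PER_QUESTION = 5  # repeat each question because output varies run to run

def evaluate() -> float:
    passed = total = 0
    for question, required_terms in TEST_CASES:
        for _ in range(RUNS_PER_QUESTION):
            answer = ask_chatbot(question).lower()
            passed += all(term in answer for term in required_terms)
            total += 1
    return passed / total

if __name__ == "__main__":
    # Keyword matching is crude; real evaluations tend to use rubrics or
    # human graders, but the sample-and-score loop has the same shape.
    print(f"pass rate: {evaluate():.0%}")
```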
Does knowing the system architecture not help you with defining things like happy path vs edge case testing? I guess it's much less applicable for overall system testing, but in "normal" systems you test components separately before you test the whole thing, which is not the case with LLMs.
By "non-deterministic" I meant that it can give you different output for the same input. Ask the same question, get a different answer every time, some of which can be accurate, some... not so much. Especially if you ask the same question in the same dialog (so question is the same but the context is not so the answer will be different).
EDIT: More interestingly, I find an issue, what do I even DO? If it's not related to integrations or your underlying data, the black box just gave nonsensical output. What would I do to resolve it?
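A rough sketch of what that looks like from the outside, with a hypothetical ask() wrapper standing in for the real model call (the canned answers and random choice are only there so the sketch runs):

```python
from collections import Counter
import random

# Hypothetical wrapper around whatever model the bot uses; with a sampling
# temperature above zero, the same prompt can yield different completions.
def ask(question: str, history: list[str] | None = None) -> str:
    # Placeholder so the sketch runs: pretend the sampler picked an answer.
    answers = [
        "Yes, you need a permit.",
        "No permit is required.",
        "It depends on the location.",
    ]
    return random.choice(answers)

QUESTION = "Do I need a permit to put tables on the sidewalk?"

# Same question, fresh dialog each time: answers still vary run to run.
fresh = Counter(ask(QUESTION) for _ in range(10))

# Same question asked mid-conversation: the prompt now also includes the
# earlier turns, so the effective input (and hence the answer) shifts again.
history = ["I run a coffee shop in Brooklyn.", "Great, how can I help?"]
in_context = Counter(ask(QUESTION, history=history) for _ in range(10))

print("fresh dialog:", fresh.most_common())
print("within a dialog:", in_context.most_common())
```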
> radically different nature of our AI interlocutor
It's the training data that matters. Your "AI interlocutor" is nothing more than a lossy compression algorithm.
Yet it won't be easy not to anthropomorphize it, expecting it to just know what we mean, as any human would. And most of the time it will, but once in a while it will betray its unthinking nature, taking the user by surprise.
Most AI chatbots do not rely on their training data so much as on the data passed to them through RAG (retrieval-augmented generation). In that sense they are not compressing the data, just searching it and rewording it for you.
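For illustration, a toy version of that loop, assuming a hypothetical retrieve()/generate() pair and a made-up two-document corpus; the point is only that the answer is built from retrieved text rather than from whatever the model memorized in training:

```python
# Toy retrieval-augmented generation (RAG) loop; retrieve() and generate()
# are stand-ins, and the document snippets are made up for illustration.

DOCUMENTS = [
    "Sidewalk cafes require a permit from the city.",
    "Stoop sales on your own property generally do not require a permit.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    # Toy scoring: rank documents by word overlap with the question.
    words = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

def generate(prompt: str) -> str:
    # Placeholder for the actual LLM call.
    return f"(model answer, grounded in)\n{prompt}"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOCUMENTS))
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )
    return generate(prompt)

print(answer("Do I need a permit for a sidewalk cafe?"))
```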