Comment by bluefirebrand
1 year ago
> Maybe they should also include explanations of assumptions in the output.
I think you're giving these systems a lot more "reasoning" credit than they deserve. As far as I know, they don't make assumptions; they just apply a series of weighted probabilities and produce output. They also can't explain why they chose the weights, because they didn't choose them; they were programmed with them.
That depends entirely on how the limits are imposed. For example, one way of imposing them that definitely does allow you to generate explanations is how GPT imposes additional limits on DALL-E output: it generates a DALL-E prompt from the GPT prompt, with the extra limitations coming from the GPT system prompt. If you need or want explainability, you very much can build scaffolding around the image generation that adjusts the output in ways you can explain.
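To make the scaffolding idea concrete, here's a rough Python sketch. The two `call_*` functions are hypothetical placeholders for whatever chat and image APIs you use, and the system prompt text is invented for illustration; the point is just that the rewritten prompt and the stated limits become artifacts you can surface as an explanation.

```python
# Sketch of explainable scaffolding around an image generator, assuming a
# hypothetical chat model and image model behind the two stub functions.

SYSTEM_PROMPT = (
    "Rewrite the user's image request into an image-generation prompt, "
    "applying these limits: no real people's likenesses, no brand logos. "
    "Then write a line containing only '---', followed by a list of which "
    "limits you applied and why."
)

def call_chat_model(system: str, user: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError

def call_image_model(prompt: str) -> bytes:
    """Hypothetical stand-in for an image-generation API call."""
    raise NotImplementedError

def generate_with_explanation(user_request: str) -> dict:
    # The chat model produces both the constrained image prompt and a
    # human-readable note about which limits it applied.
    rewritten = call_chat_model(SYSTEM_PROMPT, user_request)
    image_prompt, _, explanation = rewritten.partition("\n---\n")
    image = call_image_model(image_prompt.strip())
    return {
        "image": image,
        "image_prompt": image_prompt.strip(),  # what was actually sent
        "explanation": explanation.strip(),    # why it differs from the request
    }
```

The explanation here isn't the model introspecting its own weights; it's the scaffolding recording which externally imposed limits changed the prompt, which is exactly the kind of explanation you can actually stand behind.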