Comment by fainpul
12 hours ago
This is ridiculous. I doubt this would work with a general AI, but it surely cannot work with LLMs, which understand exactly nothing about human behaviour.
They may not understand it, but they may well be able to reproduce aspects of the feedback and comments given on similar pieces of software.
I agree that the approach shouldn’t be used unsupervised, but I can imagine it yielding valuable insights for improving the product before real users ever interact with it.
> reproduce aspects of feedback and comments on similar pieces of software
But this is completely worthless, or even misleading. There is zero value in this kind of "feedback": it will produce nonsense that sounds believable. You need to talk to real users of your software.
Letting the LLM generate use cases is probably not a good idea. But writing out common use cases yourself, having a bot crawl your app, and then recording how many steps it took to accomplish each goal is a good idea I hadn't thought of before. You could even set it up as a CI check, so that a new feature that adds steps to a specific flow has to be a very conscious decision. In a large application this could be a very useful feature.
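The CI-check part of that idea can be sketched quite simply: commit a step budget per flow, and fail the build when a recorded run exceeds it. A minimal sketch in Python, where the flow names, step lists, and budgets are all hypothetical (in practice the recorded steps would come from the crawler driving the real app):

```python
# Hypothetical step budgets, committed alongside the code and only
# raised deliberately when a team accepts a longer flow.
STEP_BUDGETS = {
    "checkout": 5,
    "password_reset": 4,
}

def check_flow(flow: str, recorded_steps: list[str]) -> tuple[bool, str]:
    """Compare a recorded flow against its budget; return (ok, message)."""
    budget = STEP_BUDGETS.get(flow)
    if budget is None:
        return False, f"{flow}: no step budget defined"
    if len(recorded_steps) > budget:
        return False, (f"{flow}: took {len(recorded_steps)} steps, "
                       f"budget is {budget}")
    return True, f"{flow}: {len(recorded_steps)}/{budget} steps"

# Example: a checkout flow that has regressed to 6 steps fails the check.
ok, msg = check_flow("checkout",
                     ["open_cart", "login", "enter_address",
                      "enter_payment", "confirm_terms", "place_order"])
```

In CI, a non-ok result would simply exit non-zero, forcing whoever added the extra step to either shorten the flow or explicitly bump the budget in the same change.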