Comment by justin_dash
8 hours ago
So at this point I think it's pretty obvious that RLHFing LLMs to follow instructions causes this.
I'm interested in a loop of ["criticize this code harshly" -> "now implement those changes" -> open new chat, repeat]: if we could graph objective code quality versus iterations, what would that graph look like? I tried it a couple of times but ran out of Claude usage.
Also, I'd be curious how those results change depending on how complete a set of specs you give it.
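For concreteness, the loop I mean looks roughly like this (a minimal sketch; `ask_llm` and `score_code` are hypothetical stand-ins for a real chat API and whatever "objective quality" metric you pick, with dummy behavior here so it runs):

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call.
    Each call is a fresh chat, as in the loop above.
    Dummy behavior: return everything after the instruction line."""
    return prompt.split("\n", 1)[1]

def score_code(code: str) -> int:
    """Stand-in for an objective metric (lint score, test pass rate, ...).
    Here: fewer TODO markers counts as better, purely illustrative."""
    return -code.count("TODO")

def critique_loop(code: str, iterations: int = 5):
    """Harsh critique -> implement the changes, each in a fresh chat,
    recording the quality score at every step so it can be graphed."""
    scores = [score_code(code)]
    for _ in range(iterations):
        critique = ask_llm(f"Criticize this code harshly:\n{code}")
        code = ask_llm(f"Now implement those changes:\n{critique}")
        scores.append(score_code(code))
    return code, scores
```

Plotting `scores` against iteration number is the graph I have in mind.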
In my experience, prompting LLMs to be critical leads them to imagine issues, or to bikeshed.
I've noticed that when I ask it to find something to improve in a project, certain frivolous topics come up regularly. I now treat their appearance as a sign that there is nothing meaningful left to improve.