Comment by galaxyLogic

3 days ago

I think what AI "should" be good at is writing code that passes unit tests written by me, the human.

AI cannot know what we want it to write unless we tell it exactly what we want, by writing some unit tests and telling it we want code that passes them.
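
As a rough sketch, that "tests as specification" idea could be as simple as a small pytest file handed to the model with the instruction "implement slugify() so these pass". The module, function name, and expected behavior below are invented for illustration, not taken from any real project:

    # tests/test_slugify.py -- hypothetical spec written by the human first.
    # slugify() does not exist yet; the LLM is asked to implement it in mymodule.
    from mymodule import slugify

    def test_lowercases_and_hyphenates():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_repeated_separators():
        assert slugify("  too   many   spaces ") == "too-many-spaces"

    def test_leaves_existing_slugs_alone():
        assert slugify("already-a-slug") == "already-a-slug"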

But is any LLM able to do that?

You can write the tests first, give the AI some guidance, and tell it to do the implementation. I usually go the other direction, though: I tell the LLM to stub the tests out and let me fill in the details.
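
In that second workflow, the LLM might stub out something like the skeleton below (names are again hypothetical); the human then replaces each TODO with concrete inputs and expected outputs before asking for the implementation:

    # Hypothetical skeleton an LLM might stub out; each TODO is where the
    # human pins down the behavior they actually want before implementation.
    from mymodule import slugify  # module under test, not written yet

    def test_basic_phrase():
        ...  # TODO(human): e.g. assert slugify("Hello, World!") == "hello-world"

    def test_whitespace_handling():
        ...  # TODO(human): decide how runs of spaces should collapse

    def test_non_ascii_input():
        ...  # TODO(human): decide whether accents are stripped or transliterated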