Comment by ChicagoDave

1 year ago

I’ve been trying to get all the LLMs to do the same thing with the same lack of success.

I keep thinking there could be a way to iteratively train an LLM with declarative prompts, but as the article points out, it's a chicken-and-egg problem: the LLM can't provide a response unless it already knows the answer.

However, I believe this barrier will eventually be overcome. Just not anytime soon.