Comment by danpalmer

19 days ago

Yeah sure. The problem we had was that we had some "facts" to base the game on, but when the LLM generated multiple-choice, choose-your-own-adventure style options, they would end up being leading questions towards the facts. The LLM knows what's behind the door, so an option might have been "check for the thing behind the door", and now the user knows something is there, because why else would it have asked?

Instead we put all the facts in a RAG database. Now when we ask the LLM to generate options it does so not knowing the actual answer, so they can't really be leading questions. We then take the user input, use RAG to get relevant facts, and then "reveal" those facts to the LLM in subsequent prompts.
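A minimal sketch of that split, with hypothetical names throughout (we never published the code, so this is just the shape of the idea): the option-generation prompt never sees the fact store, and facts are retrieved only after the player commits to an action. Retrieval here is toy word-overlap scoring standing in for a real embedding/vector-store lookup.

```python
# Facts live outside the LLM's prompt until "revealed" by retrieval.
FACTS = [
    "A rusty key is hidden under the floorboard in the cellar.",
    "The door behind the bookshelf leads to the attic.",
    "The innkeeper is secretly a smuggler.",
]

def retrieve(player_action: str, facts: list[str], k: int = 1) -> list[str]:
    """Toy retrieval: rank facts by word overlap with the player's action.
    A real version would embed both and query a vector store."""
    action_words = set(player_action.lower().split())
    scored = sorted(
        facts,
        key=lambda f: len(action_words & set(f.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_option_prompt(scene: str) -> str:
    # Crucially, no facts appear here, so the options can't be leading.
    return f"Scene: {scene}\nGenerate three plausible actions for the player."

def build_resolution_prompt(scene: str, action: str, facts: list[str]) -> str:
    # Retrieved facts are revealed only after the player has acted.
    revealed = "\n".join(f"- {f}" for f in facts)
    return (
        f"Scene: {scene}\n"
        f"Player action: {action}\n"
        f"Relevant facts (revealed now):\n{revealed}\n"
        "Narrate the outcome."
    )

scene = "You stand in a dusty cellar."
option_prompt = build_option_prompt(scene)          # fact-free
action = "search the floorboard in the cellar"      # player's choice
resolution_prompt = build_resolution_prompt(
    scene, action, retrieve(action, FACTS)
)
```

Both prompt strings would then go to whatever LLM API you're using; the point is only which prompt gets to see which facts.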

Honestly we still didn't nail the gameplay or anything. It was pretty janky, but it was 2 days, a bunch of learning, and probably only 300 lines of Python in the end, so I don't want to overstate what we did. However, this one detail stuck with me.