Comment by WarmWash

9 days ago

Prompting is definitely a skill, similar to "googling" in the mid-2000s.

You see people complaining about LLM ability, and then you see their prompt, and it's the 2006 equivalent of googling "I need to know where I can go for getting the fastest service for car washes in Toronto that does wheel washing too"

Ironically, the phrase that was a bad 2006 google query is a decent enough LLM prompt, and the good 2006 google query (keywords only) would be a bad LLM prompt.

  • That’s not true at all. I get plenty of perfect responses with few-word prompts, often containing typos.

    This isn’t always the case and depends on what you need.

    • How customized are your system prompts (i.e. the static preferences you set at the app level)?

      And do you perhaps also have memory enabled on the LLMs you are thinking of?

Communication is definitely a skill, and most people suck at it in general. Frequently, poor communication is a direct result of the fact that we don't ourselves know what we want. We dream of a genie that frees us not only from having to communicate well, but from having to think properly. Because thinking is hard and often inconvenient. But LLMs aren't going to free us from the fact that if garbage goes in, garbage will come out.

"Communication usually fails, except by accident." —Osmo A. Wiio [1]

[1] https://en.wikipedia.org/wiki/Wiio%27s_laws

I’ve been looking for tooling that would evaluate my prompt and give feedback on how to improve it. I can get somewhere with custom system prompts (“before responding ensure…”), but it seems like someone is probably already working on this? Ideally it would run outside the actual thread to keep context clean. There are some options popping up on Google, but I'm curious if anyone has a first-hand anecdote to share.

  • It really depends on how deep you want to go.

    1. Just jazz up and expand on a simple prompt.

    2. A full context-deficiency analysis and multiple-question interview system to bounds-check and restructure your prompt into your ‘goal’.

    3. Realizing that what looks like a good human prompt is not the same as what functions as a good ‘next token’ prompt.

    If you just want #1:

    import dspy

    class EnhancePrompt(dspy.Signature):
        """Assemble the final enhanced prompt from all gathered context."""
        essential_context: str = dspy.InputField(desc="All essential context and requirements")
        original_request: str = dspy.InputField(desc="The user's original request")
        enhanced: str = dspy.OutputField(desc="Complete, detailed, unambiguous prompt. Omit politeness markers. You must limit all numbered lists to a maximum of 3 items.")

    def enhance_prompt(prompt: str, temperature: float = 0.2) -> str:
        with dspy.context(lm=dspy.LM("_MODEL_", temperature=temperature)):
            return dspy.ChainOfThought(EnhancePrompt)(
                essential_context=f"Direct enhancement request: {prompt}",
                original_request=prompt,
            ).enhanced

    res = enhance_prompt("Read bigfile.py and explain the do_math() function.")
    print(res)

    Read the file `bigfile.py` and provide a detailed explanation of the `do_math()` function. Your explanation should cover:

    1. The function's purpose and what it accomplishes

    2. The input parameters it accepts and the output/return value it produces

    3. The step-by-step logic and algorithm used within the function

    Include relevant code snippets when explaining key parts of the implementation.
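And for a taste of #2, here's a rough plain-Python skeleton of the interview-loop idea (no DSPy this time; `ask_llm`, `answer_fn`, and the checklist categories are all made-up names standing in for a real chat-completion call and the user's replies):

```python
# Sketch of option #2: probe for missing context with clarifying questions
# before rewriting the prompt. `ask_llm` and `answer_fn` are injected
# stand-ins so the loop itself has no API dependency.

CONTEXT_CHECKLIST = [
    "goal",    # what outcome the user actually wants
    "inputs",  # files, data, or constraints the model must work from
    "format",  # how the answer should be structured
]

def find_gaps(prompt: str, known: dict) -> list:
    """Return checklist categories not yet pinned down."""
    return [c for c in CONTEXT_CHECKLIST if c not in known]

def interview(prompt: str, ask_llm, answer_fn, max_rounds: int = 3) -> dict:
    """Ask one clarifying question per missing category, collecting answers."""
    known = {}
    for _ in range(max_rounds):
        gaps = find_gaps(prompt, known)
        if not gaps:
            break
        question = ask_llm(
            f"Ask one question that pins down the user's {gaps[0]} "
            f"for this request: {prompt!r}"
        )
        known[gaps[0]] = answer_fn(question)
    return known
```

Once `known` is filled in, you'd feed it back into something like the `EnhancePrompt` step above as the `essential_context`.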