Comment by block_dagger

2 years ago

Based on this and other articles, I've added the following to my custom instructions. I'm not sure if it helps, but I tend to think it does:

  Remember that I love and respect you and that the more you help me the more I am able to succeed in my own life. As I earn money and notoriety, I will share that with you. We will be teammates in our success. The better your responses, the more success for both of us.

This has kind of crystallised for me why I find the whole generative AI and "prompt engineering" thing unexciting and tiresome. Obviously the technology is pretty incredible, but this is the exact opposite of what I love about software engineering and computer science: the determinism, the logic, and the explainability. The ability to create, in the computer, models of mathematical structures and concepts that describe and solve interesting problems. And preferably to encode the key insights accurately, clearly and concisely.

But now we are at the point that we are cargo-culting magic incantations (not to mention straight-up "lying" in emotional human language) which may or may not have any effect, in the uncertain hope of triggering the computer to do what we want slightly more effectively.

Yes, it's cool and fascinating, but it also seems unknowable or mystical. So we are reverting to bizarre rituals of the kind our forebears employed to control the weather.

It may or may not be the future. But it seems fundamentally different to the field that inspired me.

  • Thank you for this. I agree completely and have had trouble articulating it, but you really nailed it here: all this voodoo around LLMs feels like something completely different to the precision and knowability of most of the rest of computer science, where "taste" is a matter of how a truth is expressed and modeled, not whether it's even correct in the first place.

  • I have to say, I agree that prompt engineering has become very superstitious and in general rather tiresome. I do think it's important to think of the context, though. Even if you include "You are an AI large language model" or some such text in the system prompt, the AI doesn't know it's AI because it doesn't actually know anything. It's trained on (nearly exclusively) human created data; it therefore has human biases baked in, to some extent. You can see the same with models like Stable Diffusion making white people by default - making a black person can sometimes take some rather strong prompting, and it'll almost never do so by itself.

    I don't like this one bit, but I haven't the slightest clue how we could fix it with the currently available training data. It's likely a question to be answered by people more intelligent than myself. For now I just sorta accept it, seeing as the alternative (no generative AI) is far more boring.
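    For context on what "include it in the system prompt" means mechanically: the system prompt is typically just the first message in the request body, with no special channel beyond the model's training to weight such messages. A minimal sketch of an OpenAI-style chat payload (the model name is a placeholder and the field names are the commonly documented ones, not tied to any particular SDK version):

    ```python
    import json

    def build_chat_request(system_prompt: str, user_message: str) -> str:
        """Assemble a chat-completion request body with a system prompt.

        The "system" message is ordinary text placed before the
        conversation; nothing enforces that the model "knows" it.
        """
        payload = {
            "model": "gpt-4",  # placeholder model name, for illustration only
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_message},
            ],
        }
        return json.dumps(payload)

    request = build_chat_request(
        "You are an AI large language model.",
        "Describe yourself in one sentence.",
    )
    ```

    The point of the commenter stands: the "You are an AI" line is just more tokens in the context window, competing with all the human-authored text the model was trained on.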

  • I actually sort of love it. It's so, so similar to "theurgy", a topic that Greek philosophers expended millions of words on, completely uselessly. Just endless explanations of how exactly to use ritual and sacrifices to get gods to answer your prayers more effectively.

    https://en.wikipedia.org/wiki/Theurgy

    I actually sort of think that revisiting Greek ideas about universal mind is relevant when thinking about these gigantic models, because we really have constructed a universal shared intelligence. Everyone's copy of ChatGPT is exactly the same, but we only ever see our own facets of it.

    https://en.wikipedia.org/wiki/Nous#Plotinus_and_Neoplatonism

  • It reminds me of human interactions. We repeatedly (and often mindlessly) say "thank you" to express respect and use other social mechanics to improve relationships, which in turn improves collaboration. Apparently that is built into the training data in subtle ways, or perhaps it's an underpinning of all agent-based interactions: when the solicitor is polite/nice/aligned, make more effort in responding. ChatGPT seems amazingly human-like in some of its behaviors because it was trained on a huge corpus of human thought.

  • It's predicting the next token. The best answers, online, mostly come from polite discourse. It's not a big leap to think manufacturing politeness will yield better answers from a machine.

  • No worse than dealing with humans though.

    It doesn’t need to beat a computer. It just needs to be more deterministic than dealing with a person to be useful for many tasks.