Comment by joquarky

25 days ago

I think the "AI isn't doing anything" crowd has some kind of vocabulary or language barrier that prevents them from refining their prompting methods into something that works for them.

I find that the more precise I am in my prompts, the more precise the response. But that requires that I use vocabulary that I wouldn't use in a human conversation.

Same here. I created a prompt-enhancer GPT and a prose-enhancer GPT, and I tend to chain all my prompts through them; then I use an extension to remove markdown and replace Unicode characters, and then add tabs and proper formatting to produce a final version of all my prompts. This tends to result in prompts that perform 20-25% better for all difficult or multi-part tasks.
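
Roughly, the chain looks something like this. This is only a minimal sketch, assuming the OpenAI Python SDK, with a system prompt standing in for each custom enhancer GPT and a bit of regex standing in for the browser extension; the model name and helper functions are placeholders, not the actual setup:

```python
# Sketch of the prompt-cleanup chain described above (assumptions: OpenAI
# Python SDK, system prompts in place of the custom GPTs, regex in place
# of the extension).
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def enhance(text: str, role_instructions: str) -> str:
    """Run text through one 'enhancer' step (stand-in for a custom GPT)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder choice; any chat model works
        messages=[
            {"role": "system", "content": role_instructions},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

def strip_markdown(text: str) -> str:
    """Crude markdown removal: drop emphasis, heading, and code markers."""
    return re.sub(r"[*_`#>]+", "", text)

def normalize_unicode(text: str) -> str:
    """Replace common 'smart' punctuation with ASCII equivalents."""
    replacements = {"\u2018": "'", "\u2019": "'", "\u201c": '"',
                    "\u201d": '"', "\u2013": "-", "\u2014": "-"}
    for src, dst in replacements.items():
        text = text.replace(src, dst)
    return text

def prepare_prompt(raw: str) -> str:
    """Chain: prompt enhancer -> prose enhancer -> cleanup -> indentation."""
    text = enhance(raw, "Rewrite this prompt to be maximally precise.")
    text = enhance(text, "Polish the prose without changing the meaning.")
    text = normalize_unicode(strip_markdown(text))
    # Indent each line so the structure survives pasting into a chat box.
    return "\n".join("\t" + line for line in text.splitlines())

print(prepare_prompt("summarize the attached report and list action items"))
```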

Logically this makes sense: the probabilities for the next tokens the model produces follow the pattern it observes in the initial input. If your prose reflects what individuals with higher intelligence tend to write, the model will continue in that register in its response, and vice versa.
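
A toy way to see that conditioning in action (a sketch assuming the Hugging Face transformers library with GPT-2 as a small stand-in model; the example prompts are mine): the top next-token candidates shift with the register of the input.

```python
# Toy illustration: next-token probabilities are conditioned on the prompt.
# Assumptions (not from the original comment): Hugging Face transformers,
# GPT-2 as a stand-in model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most likely next tokens and their probabilities."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # logits for the next position
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(int(i)), round(float(p), 3))
            for i, p in zip(top.indices, top.values)]

# Same topic, different registers: the continuation distributions differ.
print(top_next_tokens("yo so basically the deal with black holes is"))
print(top_next_tokens("The observational evidence concerning black holes indicates"))
```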

I’m curious to see what happens as the share of training data that is synthetic, model-generated text tips the scales so that markdown itself reflects higher-intelligence inputs - I wonder when that will occur.