Comment by behnamoh

10 months ago

> I never use ChatGPT.com or other normal-person frontends for accessing LLMs because they are harder to control. Instead, I typically access the backend UIs provided by each LLM service, which serve as a light wrapper over the API functionality which also makes it easy to port to code if necessary.

Yes, I also often use the "studio" of each LLM for better results, because in my experience OpenAI "nerfs" models in the ChatGPT UI: models keep forgetting things (probably a limited context length set by OpenAI to reduce costs), and the model is generally less chatty (again, probably to reduce their costs), etc. But I've noticed Gemini 2.5 Pro is the same in the studio and the Gemini app.

> Any modern LLM interface that does not let you explicitly set a system prompt is most likely using their own system prompt which you can’t control: for example, when ChatGPT.com had an issue where...

ChatGPT does let you set a system prompt, but the Claude UI doesn't (one of its many, many UI shortcomings which Anthropic has never addressed).

That said, I've found system prompts less and less useful with newer models. I can simply preface my own prompt with the instructions and the model follows them very well.
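To make that concrete, here's a minimal sketch in the OpenAI-style chat message format of the two approaches: a real system prompt versus prefacing the user message with the same instructions. No API call is made; the instruction and question strings are just placeholders I made up.

```python
instructions = "Answer in plain English. Keep it under 100 words."
question = "Why does the sky look blue?"

# 1) A proper system prompt, as the "studio"/API-style UIs expose it:
with_system = [
    {"role": "system", "content": instructions},
    {"role": "user", "content": question},
]

# 2) Prefacing the user prompt with the same instructions,
#    which newer models tend to follow just as well:
prefaced = [
    {"role": "user", "content": f"{instructions}\n\n{question}"},
]
```

Either list would be passed as the `messages` argument of a chat completion request; the only difference is where the instructions live.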

> Specifying specific constraints for the generated text such as “keep it to no more than 30 words” or “never use the word ‘delve’” tends to be more effective in the system prompt than putting them in the user prompt as you would with ChatGPT.com.

I get that LLMs have only a vague idea of how much text 30 words is, but they never do a good job at these constraints for me.