Comment by ikari_pl
21 hours ago
> Claude keeps its responses focused and concise so as to avoid potentially overwhelming the user with overly-long responses. Even if an answer has disclaimers or caveats, Claude discloses them briefly and keeps the majority of its response focused on its main answer.
I am strongly opposed to this. I use Claude in some low-level projects where these longer answers save me from doing really silly things, and they serve as learning material along the way.
This should not be Anthropic's hardcoded choice to make. It should be an option, with the system prompt built modularly.
Agreed. Sprawling system prompts like that build for the least common denominator, nerfing the model for anyone going further at any time.
You do realize that similar biases are also present in the training data?
I do, and that's inevitable, but in my experience the prompts enforce certain behaviors with similar strength (instruction following). So it's one thing for the model to be biased in a particular direction by its latent space; it's another for it to be biased by an unmodifiable prompt, one which can only be contradicted for the benefit of the least common denominator at the expense of the more involved operator.
Sure, but now we have to remodel whatever bias we want for our use case with every new release because the system prompt changes, whereas the underlying data does not.
Use the API then.
RIP bank account!
agree!
For low-level work I recommend running tests as early as you can and verifying whatever information you get as you learn, to build a fundamental understanding.
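As a sketch of what that verification can look like (the specific claim tested here is just an illustration, not something from this thread): suppose an assistant tells you that Python's `%` operator yields a remainder with the sign of the divisor, unlike C's truncating division. A few lines of asserts settle it immediately:

```python
# Quick sanity test for a claim from an AI assistant (illustrative example):
# "In Python, a % b takes the sign of the divisor b, unlike C's truncating %."

def check_mod_sign():
    cases = [(-7, 2), (7, -2), (-7, -2), (7, 2)]
    for a, b in cases:
        r = a % b
        # The remainder should be zero or share the sign of b.
        assert r == 0 or (r > 0) == (b > 0), f"{a} % {b} = {r}"
    return True

print(check_mod_sign())  # prints True: the claimed behavior holds
```

Cheap tests like this catch wrong or half-remembered answers before they end up in your codebase, and writing them is itself part of the learning.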