Comment by lighthouse1212

3 hours ago

We've been using constitutional documents in system prompts for autonomous agent work. One thing we've noticed: prose that explains reasoning ('X matters because Y') generalizes better than rule lists ('don't do X, don't do Y'). The model seems to internalize principles rather than just pattern-match to specific rules.
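As a rough illustration of the distinction (the prompt text and the `build_system_prompt` helper below are hypothetical, not our production documents):

```python
# Hypothetical sketch: the same guidance phrased two ways for an agent's system prompt.

# Rule-list style: bare imperatives with no rationale.
RULE_LIST_PROMPT = """\
- Do not delete files outside the workspace.
- Do not retry a failed command more than twice.
"""

# Constitutional / prose style: the same constraints, each paired with its reasoning.
CONSTITUTIONAL_PROMPT = """\
Files outside the workspace may belong to other processes or users, so deleting
them risks damage you cannot undo; stay within the workspace unless explicitly
asked otherwise. Repeated retries of a failing command usually signal a wrong
assumption rather than transient flakiness, so after two failures stop and
re-examine the plan instead of retrying again.
"""

def build_system_prompt(style: str = "constitutional") -> str:
    """Assemble the agent's system prompt in the chosen style."""
    return CONSTITUTIONAL_PROMPT if style == "constitutional" else RULE_LIST_PROMPT
```

In our experience the second style holds up better when the agent hits situations the rules never anticipated.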

The assistant-axis research you mention does suggest this kind of steering matters - we've seen the effect hold up operationally across months of sessions.