Comment by kace91

3 days ago

I'm not defending the original idea, to be clear, just pointing out the different argument.

I personally don't find the assumption that a smarter AI would be harder to tame convincing. My experience is that we can tell it has improved precisely because it is better at following abstract instructions, and there is nothing fundamentally different between the instructions "format this in a corporate-friendly way" and "format this speech to be aligned with the interests of {X}".

Without that premise, the subsequent discussion of who this smarter, untamed AI would align with becomes moot.

Besides, we're also missing that if someone's goal is to police speech, a tool that can scrub user conversations and deduce intention or political leaning has obvious uses. As an authoritarian, you might be better off just letting everyone talk to the LLM and waiting for the intelligence to collect itself.